Spaces: Running on Zero
Upload 2 files
README.md
CHANGED
@@ -23,11 +23,11 @@ You can call this Space from Python (via `gradio_client`) or from plain `curl`.
 > ⚠️ Note: This README may lag behind the actual API definition shown in the Space’s “View API” page.
 > If something does not work, always double-check the latest argument list and endpoint names there.
 
-
+Assumptions:
 
 - Space ID: `John6666/DiffuseCraftMod`
 - You have a valid Hugging Face access token: `hf_xxx...` (read access is enough)
--
+- Replace `hf_xxx...` with your own token
 
 ---
 
@@ -47,27 +47,34 @@ from gradio_client import Client
 client = Client("John6666/DiffuseCraftMod", hf_token="hf_xxx...")
 
 status, images, info = client.predict(
+    # Core text controls
     prompt="Hello!!",
     negative_prompt=(
         "lowres, bad anatomy, bad hands, missing fingers, extra digit, "
         "fewer digits, worst quality, low quality"
     ),
+
+    # Basic generation controls
     num_images=1,
     num_inference_steps=28,
     guidance_scale=7.0,
     clip_skip=0,
     seed=-1,
+
+    # Canvas / model / task (optional, server has defaults)
     height=1024,
     width=1024,
     model_name="votepurchase/animagine-xl-3.1",
     vae_model="None",
     task="txt2img",
+
+    # All other arguments are optional; defaults match the UI
     api_name="/generate_image",
 )
 
-print(status)
-print(images)
-print(info)
+print(status)  # e.g. "COMPLETE"
+print(images)  # list of image paths / URLs
+print(info)    # generation metadata (seed, model, etc.)
 ```
 
 #### 1.2 Streaming API – `generate_image_stream`
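Per the comments added in this hunk, `images` comes back as a list of image paths; those are typically temporary files, so persisting them is a common next step. A small sketch; the `save_images` helper is ours, not part of the Space's API:

```python
import shutil
from pathlib import Path

def save_images(image_paths: list[str], out_dir: str = "outputs") -> list[Path]:
    """Copy the temp files returned by client.predict() to a stable folder."""
    dest = Path(out_dir)
    dest.mkdir(parents=True, exist_ok=True)
    saved = []
    for i, src in enumerate(image_paths):
        # Keep the original extension, number the files deterministically.
        target = dest / f"image_{i:03d}{Path(src).suffix}"
        shutil.copy(src, target)
        saved.append(target)
    return saved
```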
@@ -101,6 +108,8 @@ for status, images, info in job:
     print(status, images, info)
 ```
 
+You can stop iterating once you see a `"COMPLETE"` status if you only care about the final output.
+
 ---
 
 ### 2. `curl` examples
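The early-stop advice added in this hunk can be captured in a tiny helper that consumes plain `(status, images, info)` tuples; the `final_result` name is ours:

```python
def final_result(updates):
    """Consume (status, images, info) updates and stop at the first COMPLETE one."""
    last = None
    for last in updates:
        if last[0] == "COMPLETE":
            break
    return last

# With gradio_client this would be something like:
# final_result(client.submit(..., api_name="/generate_image_stream"))
```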
@@ -111,6 +120,9 @@ When calling from `curl`, include your HF token; anonymous calls may be rate-lim
 export HF_TOKEN="hf_xxx..." # your Hugging Face access token
 ```
 
+The `data` field is a positional array. The order must match the function signature.
+For simplicity, the examples below only send the first few arguments and rely on server defaults for the rest.
+
 #### 2.1 Synchronous API – `generate_image`
 
 ```bash
@@ -119,18 +131,14 @@ curl -X POST "https://john6666-diffusecraftmod.hf.space/call/generate_image" \
   -H "Content-Type: application/json" \
   -d '{
     "data": [
-      "Hello!!",
-      "lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, worst quality, low quality",
-      1,
-      28,
-      7.0,
-      0,
-      -1
-
-      1024,
-      "votepurchase/animagine-xl-3.1",
-      "None",
-      "txt2img"
+      "Hello!!", // prompt
+      "lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, worst quality, low quality", // negative_prompt
+      1, // num_images
+      28, // num_inference_steps
+      7.0, // guidance_scale
+      0, // clip_skip
+      -1 // seed
+      // All subsequent parameters will use their default values
     ]
   }'
 ```
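One caveat about the annotated payload added in this hunk: JSON has no comment syntax, so the `//` annotations must be stripped before the request is actually sent. A sketch that builds the same positional body in Python, where the comments can stay:

```python
import json

# Positional payload for /call/generate_image; order must match the signature.
payload = {
    "data": [
        "Hello!!",  # prompt
        "lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, worst quality, low quality",  # negative_prompt
        1,          # num_images
        28,         # num_inference_steps
        7.0,        # guidance_scale
        0,          # clip_skip
        -1,         # seed
    ]
}
body = json.dumps(payload)  # valid JSON: the comments live in Python, not in the payload
```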
@@ -149,15 +157,10 @@ curl -X POST "https://john6666-diffusecraftmod.hf.space/call/generate_image_stre
       28,
       7.0,
       0,
-      -1
-      1024,
-      1024,
-      "votepurchase/animagine-xl-3.1",
-      "None",
-      "txt2img"
+      -1
     ]
   }'
 ```
 
-For full parameter coverage (all advanced options
-examples above accordingly.
+For full parameter coverage (all advanced options such as LoRAs, ControlNet, IP-Adapter, etc.),
+refer to the Space’s “View API” page and adapt the examples above accordingly.
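For context on the `curl` hunks above: Gradio's `/call/` REST flow is two-step, where the POST returns an `event_id` and a follow-up GET on `/call/<endpoint>/<event_id>` streams the result events. A sketch that assembles the request pieces; the `build_call` helper is ours, and the endpoint paths follow Gradio's documented pattern:

```python
import json

def build_call(base: str, api_name: str, data: list, token: str):
    """Assemble URL, JSON body, and headers for Gradio's two-step /call protocol."""
    url = f"{base}/call/{api_name}"
    body = json.dumps({"data": data})
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",  # HF token for private / rate-limited Spaces
    }
    return url, body, headers

# Step 1: POST url/body/headers; the response JSON carries an "event_id".
# Step 2: GET f"{base}/call/{api_name}/{event_id}" to stream the result.
```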
app.py
CHANGED
@@ -1887,9 +1887,19 @@ with gr.Blocks(theme=args.theme, elem_id="main", fill_width=True, fill_height=Fa
     copy_prompt_btn_pony.click(gradio_copy_prompt, inputs=[output_text_pony], outputs=[prompt_gui], show_api=False)
 
     from typing import Any, Dict, List, Optional, Tuple, Generator
-
-
-
+    # 1) Helper: model loader (keeps existing behavior)
+    def _load_model(model_name: str, vae_model: str, task: str, controlnet_model: str) -> None:
+        # Exhaust the load_new_model generator to complete model loading
+        for _ in sd_gen.load_new_model(model_name, vae_model, task, controlnet_model):
+            pass
+
+    # 2) Thin wrapper over the existing generator pipeline
+    def _generate_image(argv: List[Any]) -> Generator[Tuple[str, Optional[List[str]], Optional[str]], None, None]:
+        # Delegate to the existing generator
+        yield from sd_gen_generate_pipeline(*argv)
+
+    # 3) Explicit-argument API: generate_image (sync)
+    def generate_image(
         # 0..6
         prompt: str,
         negative_prompt: str = "",
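The `_load_model` helper added in this hunk works by exhausting a generator purely for its side effects. A self-contained sketch of the pattern; `load_steps` here is a toy stand-in for `sd_gen.load_new_model`:

```python
from typing import Generator

def load_steps() -> Generator[str, None, None]:
    # Stand-in for sd_gen.load_new_model: yields progress messages while loading.
    yield "downloading"
    yield "loading weights"
    yield "done"

progress = []
for msg in load_steps():  # exhausting the generator performs every load step
    progress.append(msg)
```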
@@ -1952,7 +1962,7 @@ with gr.Blocks(theme=args.theme, elem_id="main", fill_width=True, fill_height=Fa
         adetailer_verbose: bool = False,
         hires_schedule_type: str = "Use same schedule type",
         hires_guidance_scale: float = -1.0,
-        # 61 controlnet model (loader
+        # 61 controlnet model (used in loader as well)
         controlnet_model: str = "Automatic",
         # 62..71 loop/ui/save/cache
         loop_generation: bool = False,
@@ -2011,106 +2021,209 @@ with gr.Blocks(theme=args.theme, elem_id="main", fill_width=True, fill_height=Fa
         verbose_info_gui: int = 0,
         gpu_duration: int = 20,
     ) -> Tuple[str, Optional[List[str]], Optional[str]]:
-        ...
-        "load_lora_cpu","verbose_info_gui","gpu_duration",
-    ]
-
-    def _argv_from_kwargs(kwargs: Dict[str, Any]) -> List[Any]:
-        # Convert kwargs to the exact positional argv expected by generator.
-        return [kwargs.get(k) for k in _GEN_ARG_ORDER]
-
-    def _generate_image(argv: List[Any]) -> Generator[Tuple[str, Optional[List[str]], Optional[str]], None, None]:
-        # Delegate to existing generator
-        yield from sd_gen_generate_pipeline(*argv)
-
-    _API_SIG = inspect.signature(_signature_src)
-
-    def _bind_api_args(*args: Any, **kwargs: Any) -> Dict[str, Any]:
-        bound = _API_SIG.bind_partial(*args, **kwargs)
-        for name, param in _API_SIG.parameters.items():
-            if name not in bound.arguments or bound.arguments[name] is None:
-                if param.default is not inspect._empty:
-                    bound.arguments[name] = param.default
-        return bound.arguments
-
-    # 3) Signature-clone decorator: keep one signature across both endpoints
-    def clone_signature(src):
-        def deco(dst):
-            dst.__signature__ = inspect.signature(src)
-            dst.__annotations__ = src.__annotations__.copy()
-            return dst
-        return deco
-
-    # 4) Implementations: both share the same signature via clone_signature
-    @clone_signature(_signature_src)
-    def generate_image(*args: Any, **kwargs: Any) -> Tuple[str, Optional[List[str]], Optional[str]]:
-        params = _bind_api_args(*args, **kwargs)
-        _load_model(
-            params["model_name"],
-            params["vae_model"],
-            params["task"],
-            params["controlnet_model"],
-        )
-        argv = _argv_from_kwargs(params)
+        # Ensure the correct model is loaded before generation.
+        _load_model(model_name, vae_model, task, controlnet_model)
+
+        # Build argv in the exact order expected by sd_gen_generate_pipeline(*argv).
+        argv: List[Any] = [
+            # keep in sync with pipeline
+            prompt, negative_prompt, num_images, num_inference_steps, guidance_scale, clip_skip, seed,
+            lora1, lora1_wt, lora2, lora2_wt, lora3, lora3_wt, lora4, lora4_wt, lora5, lora5_wt,
+            lora6, lora6_wt, lora7, lora7_wt,
+            sampler, schedule_type, schedule_prediction_type,
+            height, width, model_name, vae_model, task,
+            image_control_dict, preprocessor_name, preprocess_resolution, image_resolution,
+            style_prompt, style_json, image_mask,
+            strength, low_threshold, high_threshold, value_threshold, distance_threshold,
+            recolor_gamma_correction, tile_blur_sigma,
+            control_net_output_scaling, control_net_start_threshold, control_net_stop_threshold,
+            textual_inversion, prompt_syntax,
+            upscaler_model_path, upscaler_increases_size, upscaler_tile_size, upscaler_tile_overlap,
+            hires_steps, hires_denoising_strength, hires_sampler, hires_prompt, hires_negative_prompt,
+            adetailer_inpaint_only, adetailer_verbose, hires_schedule_type, hires_guidance_scale,
+            controlnet_model,
+            loop_generation, leave_progress_bar, disable_progress_bar,
+            image_previews, display_images, save_generated_images,
+            filename_pattern, image_storage_location,
+            retain_compel_previous_load, retain_detailfix_model_previous_load, retain_hires_model_previous_load,
+            t2i_adapter_preprocessor, t2i_adapter_conditioning_scale, t2i_adapter_conditioning_factor,
+            xformers_memory_efficient_attention, free_u, generator_in_cpu,
+            adetailer_sampler,
+            adetailer_active_a, prompt_ad_a, negative_prompt_ad_a, strength_ad_a,
+            face_detector_ad_a, person_detector_ad_a, hand_detector_ad_a,
+            mask_dilation_a, mask_blur_a, mask_padding_a,
+            adetailer_active_b, prompt_ad_b, negative_prompt_ad_b, strength_ad_b,
+            face_detector_ad_b, person_detector_ad_b, hand_detector_ad_b,
+            mask_dilation_b, mask_blur_b, mask_padding_b,
+            cache_compel_texts, guidance_rescale,
+            image_ip1_dict, mask_ip1, model_ip1, mode_ip1, scale_ip1,
+            image_ip2_dict, mask_ip2, model_ip2, mode_ip2, scale_ip2,
+            pag_scale, face_restoration_model, face_restoration_visibility, face_restoration_weight,
+            load_lora_cpu, verbose_info_gui, gpu_duration,
+        ]
+
         last: Tuple[str, Optional[List[str]], Optional[str]] = ("COMPLETE", None, None)
         for last in _generate_image(argv):
+            # Iterate over all generator updates and keep the last tuple.
+            # This matches the behavior of returning only the final result.
             pass
         return last
 
-
-    def generate_image_stream(
-        ...
+    # 4) Explicit-argument API: generate_image_stream (streaming)
+    def generate_image_stream(
+        # Same signature as generate_image; kept duplicated for clarity and API docs.
+        prompt: str,
+        negative_prompt: str = "",
+        num_images: int = 1,
+        num_inference_steps: int = 28,
+        guidance_scale: float = 7.0,
+        clip_skip: int = 0,
+        seed: int = -1,
+        lora1: str = "", lora1_wt: float = 1.0,
+        lora2: str = "", lora2_wt: float = 1.0,
+        lora3: str = "", lora3_wt: float = 1.0,
+        lora4: str = "", lora4_wt: float = 1.0,
+        lora5: str = "", lora5_wt: float = 1.0,
+        lora6: str = "", lora6_wt: float = 1.0,
+        lora7: str = "", lora7_wt: float = 1.0,
+        sampler: str = "Euler",
+        schedule_type: str = "Automatic",
+        schedule_prediction_type: str = "Automatic",
+        height: int = 1024,
+        width: int = 1024,
+        model_name: str = "votepurchase/animagine-xl-3.1",
+        vae_model: str = "None",
+        task: str = "txt2img",
+        image_control_dict: Optional[dict] = None,
+        preprocessor_name: str = "Canny",
+        preprocess_resolution: int = 512,
+        image_resolution: int = 1024,
+        style_prompt: Optional[List[str]] = None,
+        style_json: Optional[dict] = None,
+        image_mask: Optional[Any] = None,
+        strength: float = 0.55,
+        low_threshold: int = 100,
+        high_threshold: int = 200,
+        value_threshold: float = 0.1,
+        distance_threshold: float = 0.1,
+        recolor_gamma_correction: float = 1.0,
+        tile_blur_sigma: int = 9,
+        control_net_output_scaling: float = 1.0,
+        control_net_start_threshold: float = 0.0,
+        control_net_stop_threshold: float = 1.0,
+        textual_inversion: bool = False,
+        prompt_syntax: str = "Classic",
+        upscaler_model_path: Optional[str] = None,
+        upscaler_increases_size: float = 1.2,
+        upscaler_tile_size: int = 0,
+        upscaler_tile_overlap: int = 8,
+        hires_steps: int = 30,
+        hires_denoising_strength: float = 0.55,
+        hires_sampler: str = "Use same sampler",
+        hires_prompt: str = "",
+        hires_negative_prompt: str = "",
+        adetailer_inpaint_only: bool = True,
+        adetailer_verbose: bool = False,
+        hires_schedule_type: str = "Use same schedule type",
+        hires_guidance_scale: float = -1.0,
+        controlnet_model: str = "Automatic",
+        loop_generation: bool = False,
+        leave_progress_bar: bool = False,
+        disable_progress_bar: bool = False,
+        image_previews: bool = True,
+        display_images: bool = True,
+        save_generated_images: bool = True,
+        filename_pattern: str = "model,seed",
+        image_storage_location: str = "./images/",
+        retain_compel_previous_load: bool = True,
+        retain_detailfix_model_previous_load: bool = True,
+        retain_hires_model_previous_load: bool = True,
+        t2i_adapter_preprocessor: Optional[str] = None,
+        t2i_adapter_conditioning_scale: float = 0.55,
+        t2i_adapter_conditioning_factor: float = 1.0,
+        xformers_memory_efficient_attention: bool = True,
+        free_u: bool = False,
+        generator_in_cpu: bool = False,
+        adetailer_sampler: str = "Use same sampler",
+        adetailer_active_a: bool = False,
+        prompt_ad_a: str = "",
+        negative_prompt_ad_a: str = "",
+        strength_ad_a: float = 0.35,
+        face_detector_ad_a: bool = False,
+        person_detector_ad_a: bool = True,
+        hand_detector_ad_a: bool = False,
+        mask_dilation_a: int = 4,
+        mask_blur_a: int = 4,
+        mask_padding_a: int = 32,
+        adetailer_active_b: bool = False,
+        prompt_ad_b: str = "",
+        negative_prompt_ad_b: str = "",
+        strength_ad_b: float = 0.35,
+        face_detector_ad_b: bool = False,
+        person_detector_ad_b: bool = True,
+        hand_detector_ad_b: bool = False,
+        mask_dilation_b: int = 4,
+        mask_blur_b: int = 4,
+        mask_padding_b: int = 32,
+        cache_compel_texts: bool = True,
+        guidance_rescale: float = 0.0,
+        image_ip1_dict: Optional[dict] = None, mask_ip1: Optional[Any] = None,
+        model_ip1: str = "plus_face", mode_ip1: str = "original", scale_ip1: float = 0.7,
+        image_ip2_dict: Optional[dict] = None, mask_ip2: Optional[Any] = None,
+        model_ip2: str = "base", mode_ip2: str = "style", scale_ip2: float = 0.7,
+        pag_scale: float = 0.0,
+        face_restoration_model: Optional[str] = None,
+        face_restoration_visibility: float = 1.0,
+        face_restoration_weight: float = 0.5,
+        load_lora_cpu: bool = False,
+        verbose_info_gui: int = 0,
+        gpu_duration: int = 20,
+    ) -> Generator[Tuple[str, Optional[List[str]], Optional[str]], None, None]:
+        # Ensure the correct model is loaded before generation.
+        _load_model(model_name, vae_model, task, controlnet_model)
+
+        argv: List[Any] = [
+            prompt, negative_prompt, num_images, num_inference_steps, guidance_scale, clip_skip, seed,
+            lora1, lora1_wt, lora2, lora2_wt, lora3, lora3_wt, lora4, lora4_wt, lora5, lora5_wt,
+            lora6, lora6_wt, lora7, lora7_wt,
+            sampler, schedule_type, schedule_prediction_type,
+            height, width, model_name, vae_model, task,
+            image_control_dict, preprocessor_name, preprocess_resolution, image_resolution,
+            style_prompt, style_json, image_mask,
+            strength, low_threshold, high_threshold, value_threshold, distance_threshold,
+            recolor_gamma_correction, tile_blur_sigma,
+            control_net_output_scaling, control_net_start_threshold, control_net_stop_threshold,
+            textual_inversion, prompt_syntax,
+            upscaler_model_path, upscaler_increases_size, upscaler_tile_size, upscaler_tile_overlap,
+            hires_steps, hires_denoising_strength, hires_sampler, hires_prompt, hires_negative_prompt,
+            adetailer_inpaint_only, adetailer_verbose, hires_schedule_type, hires_guidance_scale,
+            controlnet_model,
+            loop_generation, leave_progress_bar, disable_progress_bar,
+            image_previews, display_images, save_generated_images,
+            filename_pattern, image_storage_location,
+            retain_compel_previous_load, retain_detailfix_model_previous_load, retain_hires_model_previous_load,
+            t2i_adapter_preprocessor, t2i_adapter_conditioning_scale, t2i_adapter_conditioning_factor,
+            xformers_memory_efficient_attention, free_u, generator_in_cpu,
+            adetailer_sampler,
+            adetailer_active_a, prompt_ad_a, negative_prompt_ad_a, strength_ad_a,
+            face_detector_ad_a, person_detector_ad_a, hand_detector_ad_a,
+            mask_dilation_a, mask_blur_a, mask_padding_a,
+            adetailer_active_b, prompt_ad_b, negative_prompt_ad_b, strength_ad_b,
+            face_detector_ad_b, person_detector_ad_b, hand_detector_ad_b,
+            mask_dilation_b, mask_blur_b, mask_padding_b,
+            cache_compel_texts, guidance_rescale,
+            image_ip1_dict, mask_ip1, model_ip1, mode_ip1, scale_ip1,
+            image_ip2_dict, mask_ip2, model_ip2, mode_ip2, scale_ip2,
+            pag_scale, face_restoration_model, face_restoration_visibility, face_restoration_weight,
+            load_lora_cpu, verbose_info_gui, gpu_duration,
+        ]
+
+        # Yield all updates from the generator directly.
         yield from _generate_image(argv)
 
-
-
-
+    # 5) Register two APIs with explicit signatures
+    gr.api(generate_image, api_name="generate_image", show_api=True, queue=True, concurrency_id="gpu")
+    gr.api(generate_image_stream, api_name="generate_image_stream", show_api=True, queue=True, concurrency_id="gpu")
 
     gr.LoginButton()
     gr.DuplicateButton(value="Duplicate Space for private use (This demo does not work on CPU. Requires GPU Space)")
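For reference, the code removed in this hunk relied on a signature-clone decorator plus `inspect`-based argument binding instead of duplicating the parameter list. A self-contained sketch of that technique; `_signature_src` and `generate` here are toy stand-ins, not the Space's actual functions:

```python
import inspect

def _signature_src(prompt: str, steps: int = 28) -> str:
    """Reference signature only; never called."""
    raise NotImplementedError

def clone_signature(src):
    # Copy src's signature and annotations onto the decorated function.
    def deco(dst):
        dst.__signature__ = inspect.signature(src)
        dst.__annotations__ = src.__annotations__.copy()
        return dst
    return deco

@clone_signature(_signature_src)
def generate(*args, **kwargs):
    # Bind positional/keyword args against the cloned signature, fill defaults.
    bound = inspect.signature(generate).bind_partial(*args, **kwargs)
    bound.apply_defaults()
    return dict(bound.arguments)
```

The trade-off the commit makes is explicitness: duplicated signatures are verbose but show every argument directly in the API docs.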
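The `# keep in sync with pipeline` comment in the new `argv` list is load-bearing: a positional list silently breaks if it drifts from the pipeline's signature. A sketch of a cheap guard; `pipeline` is a toy stand-in for `sd_gen_generate_pipeline`:

```python
import inspect

def pipeline(prompt, negative_prompt="", steps=28, seed=-1):
    # Stand-in for sd_gen_generate_pipeline; accepts a positional argv.
    return prompt, negative_prompt, steps, seed

argv = ["Hello!!", "lowres", 28, -1]

# Guard against the argv list drifting out of sync with the pipeline signature.
expected = len(inspect.signature(pipeline).parameters)
assert len(argv) == expected, f"argv has {len(argv)} items, pipeline expects {expected}"
```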