Compare commits

...

47 Commits
v2.5.37 ... cf

Author SHA1 Message Date
7417c1af48 changelog 2023-06-03 10:02:34 +05:30
dd95df8f02 Refactor the default model download code, remove check_models.py, and stop checking legacy paths, since those are already migrated during initialization; download CodeFormer's model only when it is used for the first time 2023-06-02 16:34:29 +05:30
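The download-on-first-use check is visible in the model_manager.py diff below (`download_if_necessary`). A condensed sketch of that logic follows; the directory path is illustrative, and the real helper additionally skips the download when any other model of that type already exists:

```
# Sketch of the lazy-download check, condensed from the
# download_if_necessary() helper in the model_manager.py diff below.
import os

from sdkit.models import download_model, get_model_info_from_db
from sdkit.utils import hash_file_quick

MODELS_DIR = os.path.abspath(os.path.join("..", "models"))  # illustrative

def download_if_necessary(model_type: str, file_name: str, model_id: str):
    model_path = os.path.join(MODELS_DIR, model_type, file_name)
    expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]

    exists = os.path.exists(model_path)
    is_corrupt = exists and hash_file_quick(model_path) != expected_hash

    # Download only when the file is missing or fails the quick-hash check,
    # so CodeFormer's weights are fetched the first time they're requested.
    if is_corrupt or not exists:
        download_model(model_type, model_id, download_base_dir=MODELS_DIR)

# e.g., as resolve_model_paths() does before CodeFormer is first used:
# download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0")
```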
0860e35d17 sdkit 1.0.101 - CodeFormer as an option to improve faces 2023-06-01 16:50:01 +05:30
32c4f10626 Merge pull request #1274 from patriceac/beta
Support for CodeFormer face restoration
2023-06-01 15:28:25 +05:30
3e90eafafb Merge branch 'cf' into beta 2023-06-01 15:27:37 +05:30
16fcb4ed79 Merge pull request #1314 from JeLuF/dndgan
Fix GFPGAN settings import
2023-05-29 15:46:28 +05:30
9be48b3fc5 Merge pull request #1317 from ogmaresca/fix-metadata-SyntaxWarning
Fix SyntaxWarning on startup
2023-05-29 10:15:29 +05:30
7830ec7ca2 Fix SyntaxWarning on startup
Fixes
```
/ssd2/easydiffusion/ui/easydiffusion/utils/save_utils.py:222: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if task_data.use_upscale is not "latent_upscaler" and "latent_upscaler_steps" in metadata:
  ```
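The warning is well founded: `is`/`is not` compare object identity, not value, so testing a string against a literal only works by accident of string interning. A quick illustration:

```
a = "latent_upscaler"
b = "".join(["latent_", "upscaler"])  # same value, built at runtime

print(a == b)   # True  - the values match
print(a is b)   # False in CPython - b is a distinct object, identity differs

# So `task_data.use_upscale is not "latent_upscaler"` can be True even
# when the strings are equal; `!=` is the correct comparison.
```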
2023-05-28 14:39:36 -04:00
0ebf9df207 Merge pull request #1316 from JeLuF/fix1312
Fix #1312 - invert model A and B ratio in merge
2023-05-28 17:33:26 +05:30
40682405cc Merge pull request #1309 from ogmaresca/add-tiling-to-metadata
Add tiling and latent upscaler steps to metadata
2023-05-28 17:33:00 +05:30
9fdd482811 Merge pull request #1311 from patriceac/patch-3
Fix regression in restore task to UI flow
2023-05-28 17:32:26 +05:30
7202ffba6e Fix #1312 - invert model A and B ratio in merge 2023-05-28 02:36:56 +02:00
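For context, the sdkit documentation quoted in the merge.js diff at the end of this compare defines `ratio` as the weight of the *second* model. Assuming the UI's `alpha` expresses model A's share (which is what the fix implies), the correct value to pass is `1 - alpha`. A scalar stand-in for the tensor merge:

```
# sdkit: "ratio - the ratio of the second model. 1 means only the
# second model will be used." (quoted in the merge.js diff below)
def merge(a: float, b: float, ratio: float) -> float:
    return (1 - ratio) * a + ratio * b  # scalar stand-in for checkpoint tensors

a, b = 1.0, 0.0  # one weight from model A, one from model B
alpha = 0.75     # intended share of model A

print(merge(a, b, alpha))      # 0.25 -> A contributes only 25%: inverted
print(merge(a, b, 1 - alpha))  # 0.75 -> A contributes 75%, as intended
```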
30dcc7477f Fix GFPGAN settings import
The word "None", which many txt metadata files contain as the value of the GFPGAN field, should not be treated as a model name.
If the value is "None", disable the checkbox.
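The rule, sketched in Python for brevity (the real fix is JavaScript, in the `use_face_correction` setUI handler of the dnd.js diff further down):

```
def parse_gfpgan_field(value):
    # Illustrative sketch only: the literal string "None", written by many
    # older metadata files, means "no model" - not a model named "None".
    if value is None or value == "None":
        return None  # caller unchecks and disables the face-correction field
    return value     # anything else is resolved as a model name
```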
2023-05-28 01:43:58 +02:00
6826435046 Fix restore task to UI flow
Fixes a regression introduced by https://github.com/cmdr2/stable-diffusion-ui/pull/1304
2023-05-27 00:26:25 -07:00
69d937e0b1 Add tiling and latent upscaler steps to metadata
Also fix txt metadata labels when also embedding metadata
2023-05-26 19:51:30 -04:00
edd92b724f UniPC TU 2 isn't working with diffusers either 2023-05-26 19:47:00 +05:30
0990d8fc4d Merge pull request #1304 from JeLuF/dndfix
Remove warning when reusing settings - Fixes #1290
2023-05-26 15:24:56 +05:30
1da35e89f6 Capitalization 2023-05-25 18:38:40 +05:30
d818107953 Remove warning when reusing settings - Fixes #1290 2023-05-25 13:36:45 +02:00
b3f65c0b3c changelog 2023-05-25 15:52:05 +05:30
59c322dc3b Show seamless tiling only in diffusers mode 2023-05-25 15:41:41 +05:30
096f9ad3a6 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-05-25 15:39:02 +05:30
5c8965b3ab changelog 2023-05-25 15:38:49 +05:30
090f8f6070 Merge pull request #1300 from JeLuF/tile
Add seamless tiling support
2023-05-25 15:38:15 +05:30
5f4fc63645 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-05-25 15:37:37 +05:30
a0b3b5af53 sdkit 1.0.98 - seamless tiling 2023-05-25 15:36:27 +05:30
351dd97500 Merge pull request #1303 from cmdr2/main
Main
2023-05-25 15:05:05 +05:30
b511000441 Merge pull request #1302 from cmdr2/beta
Beta
2023-05-25 14:57:51 +05:30
523131de79 Merge pull request #1298 from JeLuF/confix
Fix confirmation dialog
2023-05-25 07:19:21 +05:30
9dfa300083 Add seamless tiling support 2023-05-25 00:16:14 +02:00
3ea74af76d Fix confirmation dialog
When the confirmation function was split into two halves, the closure was lost
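What "losing the closure" means here, sketched in Python for brevity (the real code is JavaScript; see the `confirm(msg, title, fn)` helper and the slimmed-down `shiftOrConfirm` in the main.js diff near the end of this compare): once the dialog half is split out, it can no longer see the caller's variables unless the callback is passed in explicitly.

```
def show_dialog(message, on_yes):  # hypothetical stand-in for $.confirm
    on_yes()

# Inline version: the yes-handler is a closure over fn and event.
def shift_or_confirm(event, prompt, fn):
    show_dialog(prompt, lambda: fn(event))  # closure carries fn and event

# Split version: the generic half has no access to the caller's fn or
# event, so the callback must travel with the call - which is exactly
# what the confirm(msg, title, fn) helper below does.
def confirm(message, title, on_yes):
    show_dialog(message, on_yes)
```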
2023-05-24 19:29:54 +02:00
3d7e16cfd9 changelog 2023-05-24 16:29:58 +05:30
db265309a5 Show an explanation for why the CPU toggle is disabled; utility functions for alert() and confirm() that match the ED theme; code formatting 2023-05-24 16:24:29 +05:30
8554b0eab2 Better reporting of model load errors - sends the report to the browser UI during the next image rendering task 2023-05-24 16:02:53 +05:30
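The mechanism, condensed from the model_manager.py and task_manager.py diffs below: load errors are stored per model type as strings (storing the Exception object can leak memory), cleared on the next successful load, and re-raised by `fail_if_models_did_not_load()` when the next render task starts, so the message reaches the browser.

```
# Condensed sketch of the error-tracking flow in the diffs below.
from sdkit.models import load_model

model_load_errors = {}  # lives on the render context in the real code

def load_with_error_tracking(context, model_type):
    try:
        load_model(context, model_type)
        model_load_errors.pop(model_type, None)  # clear any stale error
    except Exception as e:
        # store only str(e); keeping the Exception itself can leak memory
        model_load_errors[model_type] = str(e)

def fail_if_models_did_not_load():
    for model_type, err in model_load_errors.items():
        # raised inside the render thread, so the next task reports it to the UI
        raise Exception(f"Could not load the {model_type} model! Reason: {err}")
```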
f641e6e69d Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-05-24 15:40:26 +05:30
30c07eab6b Cleaner reporting of errors in the UI; Suggest increasing the page size if that's the error 2023-05-24 15:30:55 +05:30
eba83386c1 make a note about a flood fill library 2023-05-24 10:08:00 +05:30
d3334f9dfa Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-05-23 16:55:52 +05:30
a87dca1ef4 changelog 2023-05-23 16:55:42 +05:30
2bab4341a3 Add 'Latent Upscaler' as an option in the upscaling dropdown 2023-05-23 16:53:53 +05:30
01fb2fde47 Merge pull request #1293 from JeLuF/edready
Add 'ED is ready, go to localhost:9000' msg to log
2023-05-23 15:15:08 +05:30
0127714929 Add 'ED is ready, go to localhost:9000' msg to log
Sometimes the browser window does not open (especially on Linux and Mac).
Show a prominent message in the log so that users don't wait for hours.
2023-05-22 21:19:31 +02:00
a5a1d33589 Fix face restoration model selection 2023-05-21 18:32:48 -07:00
a25364732b Support for CodeFormer
Depends on https://github.com/easydiffusion/sdkit/pull/34.
2023-05-17 02:04:20 -07:00
0adaf6c0a0 Merge branch 'beta' of https://github.com/patriceac/stable-diffusion-ui into beta 2023-05-16 18:20:46 -07:00
654749de40 Revert "Toast notifications for ED"
This reverts commit dde51c0cef.
2023-04-29 19:26:11 -07:00
dde51c0cef Toast notifications for ED
Adding support for toast notifications for use in Core and user plugins.
2023-04-29 19:25:10 -07:00
22 changed files with 466 additions and 263 deletions

View File

@@ -22,6 +22,10 @@
Our focus remains on an easy installation experience and an easy user interface, while still being quite powerful in terms of features and speed.
### Detailed changelog
* 2.5.40 - 3 Jun 2023 - Added CodeFormer as another option for fixing faces and eyes. CodeFormer tends to perform better than GFPGAN for many images. Thanks @patriceac for the implementation, and for contacting the CodeFormer team (who were supportive of it being integrated into Easy Diffusion).
* 2.5.39 - 25 May 2023 - (beta-only) Seamless Tiling - make seamlessly tiled images, e.g. rock and grass textures. Thanks @JeLuf.
* 2.5.38 - 24 May 2023 - Better reporting of errors, and show an explanation if the user cannot disable the "Use CPU" setting.
* 2.5.38 - 23 May 2023 - Add Latent Upscaler as another option for upscaling images. Thanks @JeLuf for the implementation of the Latent Upscaler model.
* 2.5.37 - 19 May 2023 - (beta-only) Two more samplers: DDPM and DEIS. Also disables the samplers that aren't working yet in the Diffusers version. Thanks @ogmaresca.
* 2.5.37 - 19 May 2023 - (beta-only) Support CLIP-Skip. You can set this option under the models dropdown. Thanks @JeLuf.
* 2.5.37 - 19 May 2023 - (beta-only) More VRAM optimizations for all modes in diffusers. The VRAM usage for diffusers in "low" and "balanced" should now be equal or less than the non-diffusers version. Performs softmax in half precision, like sdkit does.

View File

@@ -1,101 +0,0 @@
# this script runs inside the legacy "stable-diffusion" folder
from sdkit.models import download_model, get_model_info_from_db
from sdkit.utils import hash_file_quick
import os
import shutil
from glob import glob
import traceback
models_base_dir = os.path.abspath(os.path.join("..", "models"))
models_to_check = {
"stable-diffusion": [
{"file_name": "sd-v1-4.ckpt", "model_id": "1.4"},
],
"gfpgan": [
{"file_name": "GFPGANv1.4.pth", "model_id": "1.4"},
],
"realesrgan": [
{"file_name": "RealESRGAN_x4plus.pth", "model_id": "x4plus"},
{"file_name": "RealESRGAN_x4plus_anime_6B.pth", "model_id": "x4plus_anime_6"},
],
"vae": [
{"file_name": "vae-ft-mse-840000-ema-pruned.ckpt", "model_id": "vae-ft-mse-840000-ema-pruned"},
],
}
MODEL_EXTENSIONS = { # copied from easydiffusion/model_manager.py
"stable-diffusion": [".ckpt", ".safetensors"],
"vae": [".vae.pt", ".ckpt", ".safetensors"],
"hypernetwork": [".pt", ".safetensors"],
"gfpgan": [".pth"],
"realesrgan": [".pth"],
"lora": [".ckpt", ".safetensors"],
}
def download_if_necessary(model_type: str, file_name: str, model_id: str):
model_path = os.path.join(models_base_dir, model_type, file_name)
expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]
other_models_exist = any_model_exists(model_type)
known_model_exists = os.path.exists(model_path)
known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
if known_model_is_corrupt or (not other_models_exist and not known_model_exists):
print("> download", model_type, model_id)
download_model(model_type, model_id, download_base_dir=models_base_dir)
def init():
migrate_legacy_model_location()
for model_type, models in models_to_check.items():
for model in models:
try:
download_if_necessary(model_type, model["file_name"], model["model_id"])
except:
traceback.print_exc()
fail(model_type)
print(model_type, "model(s) found.")
### utilities
def any_model_exists(model_type: str) -> bool:
extensions = MODEL_EXTENSIONS.get(model_type, [])
for ext in extensions:
if any(glob(f"{models_base_dir}/{model_type}/**/*{ext}", recursive=True)):
return True
return False
def migrate_legacy_model_location():
'Move the models inside the legacy "stable-diffusion" folder, to their respective folders'
for model_type, models in models_to_check.items():
for model in models:
file_name = model["file_name"]
if os.path.exists(file_name):
dest_dir = os.path.join(models_base_dir, model_type)
os.makedirs(dest_dir, exist_ok=True)
shutil.move(file_name, os.path.join(dest_dir, file_name))
def fail(model_name):
print(
f"""Error downloading the {model_name} model. Sorry about that, please try to:
1. Run this installer again.
2. If that doesn't fix it, please try to download the file manually. The address to download from, and the destination to save to are printed above this message.
3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB
4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues
Thanks!"""
)
exit(1)
### start
init()

View File

@@ -18,7 +18,7 @@ os_name = platform.system()
modules_to_check = {
"torch": ("1.11.0", "1.13.1", "2.0.0"),
"torchvision": ("0.12.0", "0.14.1", "0.15.1"),
"sdkit": "1.0.97",
"sdkit": "1.0.101",
"stable-diffusion-sdkit": "2.1.4",
"rich": "12.6.0",
"uvicorn": "0.19.0",

View File

@@ -79,13 +79,6 @@ call WHERE uvicorn > .tmp
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
)
@rem Download the required models
call python ..\scripts\check_models.py
if "%ERRORLEVEL%" NEQ "0" (
pause
exit /b
)
@>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo sd_weights_downloaded >> ..\scripts\install_status.txt

View File

@@ -51,12 +51,6 @@ if ! command -v uvicorn &> /dev/null; then
fail "UI packages not found!"
fi
# Download the required models
if ! python ../scripts/check_models.py; then
read -p "Press any key to continue"
exit 1
fi
if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then
echo sd_weights_downloaded >> ../scripts/install_status.txt
echo sd_install_complete >> ../scripts/install_status.txt

View File

@@ -10,6 +10,8 @@ import warnings
from easydiffusion import task_manager
from easydiffusion.utils import log
from rich.logging import RichHandler
from rich.console import Console
from rich.panel import Panel
from sdkit.utils import log as sdkit_log # hack, so we can overwrite the log config
# Remove all handlers associated with the root logger object.
@@ -88,8 +90,8 @@ def init():
os.makedirs(USER_SERVER_PLUGINS_DIR, exist_ok=True)
# https://pytorch.org/docs/stable/storage.html
warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated')
warnings.filterwarnings("ignore", category=UserWarning, message="TypedStorage is deprecated")
load_server_plugins()
update_render_threads()
@@ -213,11 +215,48 @@ def open_browser():
ui = config.get("ui", {})
net = config.get("net", {})
port = net.get("listen_port", 9000)
if ui.get("open_browser_on_start", True):
import webbrowser
webbrowser.open(f"http://localhost:{port}")
Console().print(
Panel(
"\n"
+ "[white]Easy Diffusion is ready to serve requests.\n\n"
+ "A new browser tab should have been opened by now.\n"
+ f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
title="Easy Diffusion is ready",
style="bold yellow on blue",
)
)
def fail_and_die(fail_type: str, data: str):
suggestions = [
"Run this installer again.",
"If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB",
"If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues",
]
if fail_type == "model_download":
fail_label = f"Error downloading the {data} model"
suggestions.insert(
1,
"If that doesn't fix it, please try to download the file manually. The address to download from, and the destination to save to are printed above this message.",
)
else:
fail_label = "Error while installing Easy Diffusion"
msg = [f"{fail_label}. Sorry about that, please try to:"]
for i, suggestion in enumerate(suggestions):
msg.append(f"{i+1}. {suggestion}")
msg.append("Thanks!")
print("\n".join(msg))
exit(1)
def get_image_modifiers():
modifiers_json_path = os.path.join(SD_UI_DIR, "modifiers.json")

View File

@@ -1,10 +1,14 @@
import os
import shutil
from glob import glob
import traceback
from easydiffusion import app
from easydiffusion.types import TaskData
from easydiffusion.utils import log
from sdkit import Context
from sdkit.models import load_model, scan_model, unload_model
from sdkit.models import load_model, scan_model, unload_model, download_model, get_model_info_from_db
from sdkit.utils import hash_file_quick
KNOWN_MODEL_TYPES = [
"stable-diffusion",
@@ -13,6 +17,7 @@ KNOWN_MODEL_TYPES = [
"gfpgan",
"realesrgan",
"lora",
"codeformer",
]
MODEL_EXTENSIONS = {
"stable-diffusion": [".ckpt", ".safetensors"],
@@ -21,14 +26,22 @@ MODEL_EXTENSIONS = {
"gfpgan": [".pth"],
"realesrgan": [".pth"],
"lora": [".ckpt", ".safetensors"],
"codeformer": [".pth"],
}
DEFAULT_MODELS = {
"stable-diffusion": [ # needed to support the legacy installations
"custom-model", # only one custom model file was supported initially, creatively named 'custom-model'
"sd-v1-4", # Default fallback.
"stable-diffusion": [
{"file_name": "sd-v1-4.ckpt", "model_id": "1.4"},
],
"gfpgan": [
{"file_name": "GFPGANv1.4.pth", "model_id": "1.4"},
],
"realesrgan": [
{"file_name": "RealESRGAN_x4plus.pth", "model_id": "x4plus"},
{"file_name": "RealESRGAN_x4plus_anime_6B.pth", "model_id": "x4plus_anime_6"},
],
"vae": [
{"file_name": "vae-ft-mse-840000-ema-pruned.ckpt", "model_id": "vae-ft-mse-840000-ema-pruned"},
],
"gfpgan": ["GFPGANv1.3"],
"realesrgan": ["RealESRGAN_x4plus"],
}
MODELS_TO_LOAD_ON_START = ["stable-diffusion", "vae", "hypernetwork", "lora"]
@@ -37,6 +50,8 @@ known_models = {}
def init():
make_model_folders()
migrate_legacy_model_location() # if necessary
download_default_models_if_necessary()
getModels() # run this once, to cache the picklescan results
@@ -53,15 +68,21 @@ def load_default_models(context: Context):
scan_model=context.model_paths[model_type] != None
and not context.model_paths[model_type].endswith(".safetensors"),
)
if model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
log.exception(e)
del context.model_paths[model_type]
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
def unload_all(context: Context):
for model_type in KNOWN_MODEL_TYPES:
unload_model(context, model_type)
if model_type in context.model_load_errors:
del context.model_load_errors[model_type]
def resolve_model_to_use(model_name: str = None, model_type: str = None):
@@ -69,7 +90,7 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None):
default_models = DEFAULT_MODELS.get(model_type, [])
config = app.getConfig()
model_dirs = [os.path.join(app.MODELS_DIR, model_type), app.SD_DIR]
model_dir = os.path.join(app.MODELS_DIR, model_type)
if not model_name: # When None try user configured model.
# config = getConfig()
if "model" in config and model_type in config["model"]:
@@ -77,42 +98,41 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None):
if model_name:
# Check models directory
models_dir_path = os.path.join(app.MODELS_DIR, model_type, model_name)
model_path = os.path.join(model_dir, model_name)
if os.path.exists(model_path):
return model_path
for model_extension in model_extensions:
if os.path.exists(models_dir_path + model_extension):
return models_dir_path + model_extension
if os.path.exists(model_path + model_extension):
return model_path + model_extension
if os.path.exists(model_name + model_extension):
return os.path.abspath(model_name + model_extension)
# Default locations
if model_name in default_models:
default_model_path = os.path.join(app.SD_DIR, model_name)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
return default_model_path + model_extension
# Can't find requested model, check the default paths.
for default_model in default_models:
for model_dir in model_dirs:
default_model_path = os.path.join(model_dir, default_model)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
if model_name is not None:
log.warn(
f"Could not find the configured custom model {model_name}{model_extension}. Using the default one: {default_model_path}{model_extension}"
)
return default_model_path + model_extension
if model_type == "stable-diffusion":
for default_model in default_models:
default_model_path = os.path.join(model_dir, default_model["file_name"])
if os.path.exists(default_model_path):
if model_name is not None:
log.warn(
f"Could not find the configured custom model {model_name}. Using the default one: {default_model_path}"
)
return default_model_path
return None
def reload_models_if_necessary(context: Context, task_data: TaskData):
face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else ""
upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else ""
model_paths_in_req = {
"stable-diffusion": task_data.use_stable_diffusion_model,
"vae": task_data.use_vae_model,
"hypernetwork": task_data.use_hypernetwork_model,
"gfpgan": task_data.use_face_correction,
"realesrgan": task_data.use_upscale,
"codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None,
"gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None,
"realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None,
"latent_upscaler": True if "latent_upscaler" in upscale_lower else None,
"nsfw_checker": True if task_data.block_nsfw else None,
"lora": task_data.use_lora_model,
}
@@ -122,6 +142,11 @@ def reload_models_if_necessary(context: Context, task_data: TaskData):
if context.model_paths.get(model_type) != path
}
if task_data.codeformer_upscale_faces and "realesrgan" not in models_to_reload.keys():
models_to_reload["realesrgan"] = resolve_model_to_use(
DEFAULT_MODELS["realesrgan"][0]["file_name"], "realesrgan"
)
if set_vram_optimizations(context) or set_clip_skip(context, task_data): # reload SD
models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"]
@@ -129,7 +154,14 @@ def reload_models_if_necessary(context: Context, task_data: TaskData):
context.model_paths[model_type] = model_path_in_req
action_fn = unload_model if context.model_paths[model_type] is None else load_model
action_fn(context, model_type, scan_model=False) # we've scanned them already
try:
action_fn(context, model_type, scan_model=False) # we've scanned them already
if model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.exception(e)
if action_fn == load_model:
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
def resolve_model_paths(task_data: TaskData):
@@ -141,11 +173,49 @@ def resolve_model_paths(task_data: TaskData):
task_data.use_lora_model = resolve_model_to_use(task_data.use_lora_model, model_type="lora")
if task_data.use_face_correction:
task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, "gfpgan")
if task_data.use_upscale:
if "gfpgan" in task_data.use_face_correction.lower():
model_type = "gfpgan"
elif "codeformer" in task_data.use_face_correction.lower():
model_type = "codeformer"
download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0")
task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, model_type)
if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan")
def fail_if_models_did_not_load(context: Context):
for model_type in KNOWN_MODEL_TYPES:
if model_type in context.model_load_errors:
e = context.model_load_errors[model_type]
raise Exception(f"Could not load the {model_type} model! Reason: " + e)
def download_default_models_if_necessary():
for model_type, models in DEFAULT_MODELS.items():
for model in models:
try:
download_if_necessary(model_type, model["file_name"], model["model_id"])
except:
traceback.print_exc()
app.fail_and_die(fail_type="model_download", data=model_type)
print(model_type, "model(s) found.")
def download_if_necessary(model_type: str, file_name: str, model_id: str):
model_path = os.path.join(app.MODELS_DIR, model_type, file_name)
expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]
other_models_exist = any_model_exists(model_type)
known_model_exists = os.path.exists(model_path)
known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
if known_model_is_corrupt or (not other_models_exist and not known_model_exists):
print("> download", model_type, model_id)
download_model(model_type, model_id, download_base_dir=app.MODELS_DIR)
def set_vram_optimizations(context: Context):
config = app.getConfig()
vram_usage_level = config.get("vram_usage_level", "balanced")
@@ -157,6 +227,26 @@ def set_vram_optimizations(context: Context):
return False
def migrate_legacy_model_location():
'Move the models inside the legacy "stable-diffusion" folder, to their respective folders'
for model_type, models in DEFAULT_MODELS.items():
for model in models:
file_name = model["file_name"]
legacy_path = os.path.join(app.SD_DIR, file_name)
if os.path.exists(legacy_path):
shutil.move(legacy_path, os.path.join(app.MODELS_DIR, model_type, file_name))
def any_model_exists(model_type: str) -> bool:
extensions = MODEL_EXTENSIONS.get(model_type, [])
for ext in extensions:
if any(glob(f"{app.MODELS_DIR}/{model_type}/**/*{ext}", recursive=True)):
return True
return False
def set_clip_skip(context: Context, task_data: TaskData):
clip_skip = task_data.clip_skip
@@ -214,17 +304,12 @@ def is_malicious_model(file_path):
def getModels():
models = {
"active": {
"stable-diffusion": "sd-v1-4",
"vae": "",
"hypernetwork": "",
"lora": "",
},
"options": {
"stable-diffusion": ["sd-v1-4"],
"vae": [],
"hypernetwork": [],
"lora": [],
"codeformer": ["codeformer"],
},
}
@@ -285,9 +370,4 @@ def getModels():
if models_scanned > 0:
log.info(f"[green]Scanned {models_scanned} models. Nothing infected[/]")
# legacy
custom_weight_path = os.path.join(app.SD_DIR, "custom-model.ckpt")
if os.path.exists(custom_weight_path):
models["options"]["stable-diffusion"].append("custom-model")
return models

View File

@@ -33,6 +33,8 @@ def init(device):
context.stop_processing = False
context.temp_images = {}
context.partial_x_samples = None
context.model_load_errors = {}
context.enable_codeformer = True
from easydiffusion import app
@@ -95,7 +97,7 @@ def make_images_internal(
task_data.stream_image_progress_interval,
)
gc(context)
filtered_images = filter_images(task_data, images, user_stopped)
filtered_images = filter_images(req, task_data, images, user_stopped)
if task_data.save_to_disk_path is not None:
save_images_to_disk(images, filtered_images, req, task_data)
@@ -151,22 +153,40 @@ def generate_images_internal(
return images, user_stopped
def filter_images(task_data: TaskData, images: list, user_stopped):
def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped):
if user_stopped:
return images
filters_to_apply = []
filter_params = {}
if task_data.block_nsfw:
filters_to_apply.append("nsfw_checker")
if task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower():
filters_to_apply.append("codeformer")
filter_params["upscale_faces"] = task_data.codeformer_upscale_faces
elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
filters_to_apply.append("gfpgan")
if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
filters_to_apply.append("realesrgan")
if task_data.use_upscale:
if "realesrgan" in task_data.use_upscale.lower():
filters_to_apply.append("realesrgan")
elif task_data.use_upscale == "latent_upscaler":
filters_to_apply.append("latent_upscaler")
filter_params["latent_upscaler_options"] = {
"prompt": req.prompt,
"negative_prompt": req.negative_prompt,
"seed": req.seed,
"num_inference_steps": task_data.latent_upscaler_steps,
"guidance_scale": 0,
}
filter_params["scale"] = task_data.upscale_amount
if len(filters_to_apply) == 0:
return images
return apply_filters(context, filters_to_apply, images, scale=task_data.upscale_amount)
return apply_filters(context, filters_to_apply, images, **filter_params)
def construct_response(images: list, seeds: list, task_data: TaskData, base_seed: int):

View File

@@ -336,6 +336,7 @@ def thread_render(device):
current_state = ServerStates.LoadingModel
model_manager.resolve_model_paths(task.task_data)
model_manager.reload_models_if_necessary(renderer.context, task.task_data)
model_manager.fail_if_models_did_not_load(renderer.context)
current_state = ServerStates.Rendering
task.response = renderer.make_images(

View File

@@ -23,6 +23,7 @@ class GenerateImageRequest(BaseModel):
sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
hypernetwork_strength: float = 0
lora_alpha: float = 0
tiling: str = "none" # "none", "x", "y", "xy"
class TaskData(BaseModel):
@@ -32,8 +33,9 @@ class TaskData(BaseModel):
vram_usage_level: str = "balanced" # or "low" or "medium"
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" or "latent_upscaler"
upscale_amount: int = 4 # or 2
latent_upscaler_steps: int = 10
use_stable_diffusion_model: str = "sd-v1-4"
# use_stable_diffusion_config: str = "v1-inference"
use_vae_model: str = None
@@ -49,6 +51,7 @@ class TaskData(BaseModel):
stream_image_progress: bool = False
stream_image_progress_interval: int = 5
clip_skip: bool = False
codeformer_upscale_faces: bool = False
class MergeRequest(BaseModel):

View File

@@ -30,9 +30,11 @@ TASK_TEXT_MAPPING = {
"lora_alpha": "LoRA Strength",
"use_hypernetwork_model": "Hypernetwork model",
"hypernetwork_strength": "Hypernetwork Strength",
"tiling": "Seamless Tiling",
"use_face_correction": "Use Face Correction",
"use_upscale": "Use Upscaling",
"upscale_amount": "Upscale By",
"latent_upscaler_steps": "Latent Upscaler Steps"
}
time_placeholders = {
@@ -169,21 +171,23 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR
output_quality=task_data.output_quality,
output_lossless=task_data.output_lossless,
)
if task_data.metadata_output_format.lower() in ["json", "txt", "embed"]:
save_dicts(
metadata_entries,
save_dir_path,
file_name=make_filter_filename,
output_format=task_data.metadata_output_format,
file_format=task_data.output_format,
)
if task_data.metadata_output_format:
for metadata_output_format in task_data.metadata_output_format.split(","):
if metadata_output_format.lower() in ["json", "txt", "embed"]:
save_dicts(
metadata_entries,
save_dir_path,
file_name=make_filter_filename,
output_format=task_data.metadata_output_format,
file_format=task_data.output_format,
)
def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
metadata = get_printable_request(req, task_data)
# if text, format it in the text format expected by the UI
is_txt_format = task_data.metadata_output_format.lower() == "txt"
is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",")
if is_txt_format:
metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING}
@@ -215,10 +219,12 @@ def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
del metadata["hypernetwork_strength"]
if task_data.use_lora_model is None and "lora_alpha" in metadata:
del metadata["lora_alpha"]
if task_data.use_upscale != "latent_upscaler" and "latent_upscaler_steps" in metadata:
del metadata["latent_upscaler_steps"]
app_config = app.getConfig()
if not app_config.get("test_diffusers", False):
for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip"] if x in metadata):
for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata):
del metadata[key]
return metadata

View File

@@ -30,7 +30,7 @@
<h1>
<img id="logo_img" src="/media/images/icon-512x512.png" >
Easy Diffusion
<small>v2.5.37 <span id="updateBranchLabel"></span></small>
<small>v2.5.40 <span id="updateBranchLabel"></span></small>
</h1>
</div>
<div id="server-status">
@@ -167,7 +167,7 @@
<option value="unipc_snr" class="k_diffusion-only">UniPC SNR</option>
<option value="unipc_tu">UniPC TU</option>
<option value="unipc_snr_2" class="k_diffusion-only">UniPC SNR 2</option>
<option value="unipc_tu_2">UniPC TU 2</option>
<option value="unipc_tu_2" class="k_diffusion-only">UniPC TU 2</option>
<option value="unipc_tq" class="k_diffusion-only">UniPC TQ</option>
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
@@ -236,6 +236,15 @@
<td><label for="hypernetwork_strength_slider">Hypernetwork Strength:</label></td>
<td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td>
</tr>
<tr id="tiling_container" class="pl-5"><td><label for="tiling">Seamless Tiling:</label></td><td>
<select id="tiling" name="tiling">
<option value="none" selected>None</option>
<option value="x">Horizontal</option>
<option value="y">Vertical</option>
<option value="xy">Both</option>
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Seamless-Tiling" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Seamless Tiling</span></i></a>
</td></tr>
<tr class="pl-5"><td><label for="output_format">Output Format:</label></td><td>
<select id="output_format" name="output_format">
<option value="jpeg" selected>jpeg</option>
@@ -254,18 +263,27 @@
<div><ul>
<li><b class="settings-subheader">Render Settings</b></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, slower images)</small></label></li>
<li class="pl-5"><input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes</label> <div style="display:inline-block;"><input id="gfpgan_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" /></div></li>
<li class="pl-5">
<input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes</label> <div style="display:inline-block;"><input id="gfpgan_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" /></div>
<div id="codeformer_settings" class="displayNone sub-settings">
<input id="codeformer_upscale_faces" name="codeformer_upscale_faces" type="checkbox"><label for="codeformer_upscale_faces">Upscale Faces <small>(improves the resolution of faces)</small></label>
</div>
</li>
<li class="pl-5">
<input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Scale up by</label>
<select id="upscale_amount" name="upscale_amount">
<option value="2">2x</option>
<option value="4" selected>4x</option>
<option id="upscale_amount_2x" value="2">2x</option>
<option id="upscale_amount_4x" value="4" selected>4x</option>
</select>
with
<select id="upscale_model" name="upscale_model">
<option value="RealESRGAN_x4plus" selected>RealESRGAN_x4plus</option>
<option value="RealESRGAN_x4plus_anime_6B">RealESRGAN_x4plus_anime_6B</option>
<option value="latent_upscaler">Latent Upscaler 2x</option>
</select>
<div id="latent_upscaler_settings" class="displayNone sub-settings">
<label for="latent_upscaler_steps_slider">Upscaling Steps:</label> <input id="latent_upscaler_steps_slider" name="latent_upscaler_steps_slider" class="editor-slider" value="10" type="range" min="1" max="50"> <input id="latent_upscaler_steps" name="latent_upscaler_steps" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)">
</div>
</li>
<li class="pl-5"><input id="show_only_filtered_image" name="show_only_filtered_image" type="checkbox" checked> <label for="show_only_filtered_image">Show only the corrected/upscaled image</label></li>
</ul></div>

View File

@@ -1303,6 +1303,12 @@ body.wait-pause {
display:none !important;
}
.sub-settings {
padding-top: 3pt;
padding-bottom: 3pt;
padding-left: 5pt;
}
/* TOAST NOTIFICATIONS */
.toast-notification {
position: fixed;
@@ -1316,7 +1322,7 @@ body.wait-pause {
box-shadow: 0 0 10px rgba(0, 0, 0, 0.5);
z-index: 9999;
animation: slideInRight 0.5s ease forwards;
transition: bottom 0.5s ease; // Add a transition to smoothly reposition the toasts
transition: bottom 0.5s ease; /* Add a transition to smoothly reposition the toasts */
}
.toast-notification-error {

View File

@@ -25,6 +25,7 @@ const SETTINGS_IDS_LIST = [
"prompt_strength",
"hypernetwork_strength",
"lora_alpha",
"tiling",
"output_format",
"output_quality",
"output_lossless",
@@ -34,6 +35,7 @@ const SETTINGS_IDS_LIST = [
"gfpgan_model",
"use_upscale",
"upscale_amount",
"latent_upscaler_steps",
"block_nsfw",
"show_only_filtered_image",
"upscale_model",

View File

@@ -79,6 +79,7 @@ const TASK_MAPPING = {
if (!widthField.value) {
widthField.value = oldVal
}
widthField.dispatchEvent(new Event("change"))
},
readUI: () => parseInt(widthField.value),
parse: (val) => parseInt(val),
@@ -91,6 +92,7 @@ const TASK_MAPPING = {
if (!heightField.value) {
heightField.value = oldVal
}
heightField.dispatchEvent(new Event("change"))
},
readUI: () => parseInt(heightField.value),
parse: (val) => parseInt(val),
@@ -172,16 +174,22 @@ const TASK_MAPPING = {
name: "Use Face Correction",
setUI: (use_face_correction) => {
const oldVal = gfpganModelField.value
gfpganModelField.value = getModelPath(use_face_correction, [".pth"])
if (gfpganModelField.value) {
// Is a valid value for the field.
useFaceCorrectionField.checked = true
gfpganModelField.disabled = false
} else {
// Not a valid value, restore the old value and disable the filter.
console.log("use face correction", use_face_correction)
if (use_face_correction == null || use_face_correction == "None") {
gfpganModelField.disabled = true
gfpganModelField.value = oldVal
useFaceCorrectionField.checked = false
} else {
gfpganModelField.value = getModelPath(use_face_correction, [".pth"])
if (gfpganModelField.value) {
// Is a valid value for the field.
useFaceCorrectionField.checked = true
gfpganModelField.disabled = false
} else {
// Not a valid value, restore the old value and disable the filter.
gfpganModelField.disabled = true
gfpganModelField.value = oldVal
useFaceCorrectionField.checked = false
}
}
//useFaceCorrectionField.checked = parseBoolean(use_face_correction)
@@ -218,6 +226,14 @@ const TASK_MAPPING = {
readUI: () => upscaleAmountField.value,
parse: (val) => val,
},
latent_upscaler_steps: {
name: "Latent Upscaler Steps",
setUI: (latent_upscaler_steps) => {
latentUpscalerStepsField.value = latent_upscaler_steps
},
readUI: () => latentUpscalerStepsField.value,
parse: (val) => val,
},
sampler_name: {
name: "Sampler",
setUI: (sampler_name) => {
@@ -249,6 +265,14 @@ const TASK_MAPPING = {
readUI: () => clip_skip.checked,
parse: (val) => Boolean(val),
},
tiling: {
name: "Tiling",
setUI: (val) => {
tilingField.value = val
},
readUI: () => tilingField.value,
parse: (val) => val,
},
use_vae_model: {
name: "VAE model",
setUI: (use_vae_model) => {
@@ -411,6 +435,7 @@ function restoreTaskToUI(task, fieldsToSkip) {
if (!("original_prompt" in task.reqBody)) {
promptField.value = task.reqBody.prompt
}
promptField.dispatchEvent(new Event("input"))
// properly reset checkboxes
if (!("use_face_correction" in task.reqBody)) {

View File

@@ -789,9 +789,10 @@
use_hypernetwork_model: "string",
hypernetwork_strength: "number",
output_lossless: "boolean",
tiling: "string",
}
// Higer values will result in...
// Higher values will result in...
// pytorch_lightning/utilities/seed.py:60: UserWarning: X is not in bounds, numpy accepts from 0 to 4294967295
const MAX_SEED_VALUE = 4294967295

View File

@@ -834,6 +834,7 @@ function pixelCompare(int1, int2) {
}
// adapted from https://ben.akrin.com/canvas_fill/fill_04.html
// May 2023 - look at using a library instead of custom code: https://github.com/shaneosullivan/example-canvas-fill
function flood_fill(editor, the_canvas_context, x, y, color) {
pixel_stack = [{ x: x, y: y }]
pixels = the_canvas_context.getImageData(0, 0, editor.width, editor.height)

View File

@@ -18,6 +18,11 @@ const taskConfigSetup = {
visible: ({ reqBody }) => reqBody?.clip_skip,
value: ({ reqBody }) => "yes",
},
tiling: {
label: "Tiling",
visible: ({ reqBody }) => reqBody?.tiling != "none",
value: ({ reqBody }) => reqBody?.tiling,
},
use_vae_model: {
label: "VAE",
visible: ({ reqBody }) => reqBody?.use_vae_model !== undefined && reqBody?.use_vae_model.trim() !== "",
@@ -82,12 +87,16 @@ let promptStrengthField = document.querySelector("#prompt_strength")
let samplerField = document.querySelector("#sampler_name")
let samplerSelectionContainer = document.querySelector("#samplerSelection")
let useFaceCorrectionField = document.querySelector("#use_face_correction")
let gfpganModelField = new ModelDropdown(document.querySelector("#gfpgan_model"), "gfpgan")
let gfpganModelField = new ModelDropdown(document.querySelector("#gfpgan_model"), ["codeformer", "gfpgan"])
let useUpscalingField = document.querySelector("#use_upscale")
let upscaleModelField = document.querySelector("#upscale_model")
let upscaleAmountField = document.querySelector("#upscale_amount")
let latentUpscalerSettings = document.querySelector("#latent_upscaler_settings")
let latentUpscalerStepsSlider = document.querySelector("#latent_upscaler_steps_slider")
let latentUpscalerStepsField = document.querySelector("#latent_upscaler_steps")
let stableDiffusionModelField = new ModelDropdown(document.querySelector("#stable_diffusion_model"), "stable-diffusion")
let clipSkipField = document.querySelector("#clip_skip")
let tilingField = document.querySelector("#tiling")
let vaeModelField = new ModelDropdown(document.querySelector("#vae_model"), "vae", "None")
let hypernetworkModelField = new ModelDropdown(document.querySelector("#hypernetwork_model"), "hypernetwork", "None")
let hypernetworkStrengthSlider = document.querySelector("#hypernetwork_strength_slider")
@@ -239,7 +248,7 @@ function setServerStatus(event) {
break
}
if (SD.serverState.devices) {
document.dispatchEvent(new CustomEvent("system_info_update", { detail: SD.serverState.devices}))
document.dispatchEvent(new CustomEvent("system_info_update", { detail: SD.serverState.devices }))
}
}
@@ -258,20 +267,13 @@ function shiftOrConfirm(e, prompt, fn) {
if (e.shiftKey || !confirmDangerousActionsField.checked) {
fn(e)
} else {
$.confirm({
theme: "modern",
title: prompt,
useBootstrap: false,
animateFromElement: false,
content:
'<small>Tip: To skip this dialog, use shift-click or disable the "Confirm dangerous actions" setting in the Settings tab.</small>',
buttons: {
yes: () => {
fn(e)
},
cancel: () => {},
},
})
confirm(
'<small>Tip: To skip this dialog, use shift-click or disable the "Confirm dangerous actions" setting in the Settings tab.</small>',
prompt,
() => {
fn(e)
}
)
}
}
@@ -293,6 +295,7 @@ function logError(msg, res, outputMsg) {
logMsg(msg, "error", outputMsg)
console.log("request error", res)
console.trace()
setStatus("request", "error", "error")
}
@@ -784,11 +787,6 @@ function getTaskUpdater(task, reqBody, outputContainer) {
}
msg += "</pre>"
logError(msg, event, outputMsg)
} else {
let msg = `Unexpected Read Error:<br/><pre>Error:${
this.exception
}<br/>EventInfo: ${JSON.stringify(event, undefined, 4)}</pre>`
logError(msg, event, outputMsg)
}
break
}
@@ -885,15 +883,15 @@ function onTaskCompleted(task, reqBody, instance, outputContainer, stepUpdate) {
1. If you have set an initial image, please try reducing its dimension to ${MAX_INIT_IMAGE_DIMENSION}x${MAX_INIT_IMAGE_DIMENSION} or smaller.<br/>
2. Try picking a lower level in the '<em>GPU Memory Usage</em>' setting (in the '<em>Settings</em>' tab).<br/>
3. Try generating a smaller image.<br/>`
} else if (msg.toLowerCase().includes("DefaultCPUAllocator: not enough memory")) {
} else if (msg.includes("DefaultCPUAllocator: not enough memory")) {
msg += `<br/><br/>
Reason: Your computer is running out of system RAM!
<br/>
<br/><br/>
<b>Suggestions</b>:
<br/>
1. Try closing unnecessary programs and browser tabs.<br/>
2. If that doesn't help, please increase your computer's virtual memory by following these steps for
<a href="https://www.ibm.com/docs/en/opw/8.2.0?topic=tuning-optional-increasing-paging-file-size-windows-computers" target="_blank">Windows</a>, or
<a href="https://www.ibm.com/docs/en/opw/8.2.0?topic=tuning-optional-increasing-paging-file-size-windows-computers" target="_blank">Windows</a> or
<a href="https://linuxhint.com/increase-swap-space-linux/" target="_blank">Linux</a>.<br/>
3. Try restarting your computer.<br/>`
}
@@ -1231,6 +1229,7 @@ function getCurrentUserRequest() {
//render_device: undefined, // Set device affinity. Prefer this device, but wont activate.
use_stable_diffusion_model: stableDiffusionModelField.value,
clip_skip: clipSkipField.checked,
tiling: tilingField.value,
use_vae_model: vaeModelField.value,
stream_progress_updates: true,
stream_image_progress: numOutputsTotal > 50 ? false : streamImageProgressField.checked,
@@ -1264,10 +1263,18 @@ function getCurrentUserRequest() {
}
if (useFaceCorrectionField.checked) {
newTask.reqBody.use_face_correction = gfpganModelField.value
if (gfpganModelField.value.includes("codeformer")) {
newTask.reqBody.codeformer_upscale_faces = document.querySelector("#codeformer_upscale_faces").checked
}
}
if (useUpscalingField.checked) {
newTask.reqBody.use_upscale = upscaleModelField.value
newTask.reqBody.upscale_amount = upscaleAmountField.value
if (upscaleModelField.value === "latent_upscaler") {
newTask.reqBody.upscale_amount = "2"
newTask.reqBody.latent_upscaler_steps = latentUpscalerStepsField.value
}
}
if (hypernetworkModelField.value) {
newTask.reqBody.use_hypernetwork_model = hypernetworkModelField.value
@@ -1573,15 +1580,44 @@ metadataOutputFormatField.disabled = !saveToDiskField.checked
gfpganModelField.disabled = !useFaceCorrectionField.checked
useFaceCorrectionField.addEventListener("change", function(e) {
gfpganModelField.disabled = !this.checked
onFixFaceModelChange()
})
function onFixFaceModelChange() {
let codeformerSettings = document.querySelector("#codeformer_settings")
if (gfpganModelField.value === "codeformer" && !gfpganModelField.disabled) {
codeformerSettings.classList.remove("displayNone")
} else {
codeformerSettings.classList.add("displayNone")
}
}
gfpganModelField.addEventListener("change", onFixFaceModelChange)
onFixFaceModelChange()
upscaleModelField.disabled = !useUpscalingField.checked
upscaleAmountField.disabled = !useUpscalingField.checked
useUpscalingField.addEventListener("change", function(e) {
upscaleModelField.disabled = !this.checked
upscaleAmountField.disabled = !this.checked
onUpscaleModelChange()
})
function onUpscaleModelChange() {
let upscale4x = document.querySelector("#upscale_amount_4x")
if (upscaleModelField.value === "latent_upscaler" && !upscaleModelField.disabled) {
upscale4x.disabled = true
upscaleAmountField.value = "2"
latentUpscalerSettings.classList.remove("displayNone")
} else {
upscale4x.disabled = false
latentUpscalerSettings.classList.add("displayNone")
}
}
upscaleModelField.addEventListener("change", onUpscaleModelChange)
onUpscaleModelChange()
makeImageBtn.addEventListener("click", makeImage)
document.onkeydown = function(e) {
@@ -1591,6 +1627,27 @@ document.onkeydown = function(e) {
}
}
/********************* Latent Upscaler Steps **************************/
function updateLatentUpscalerSteps() {
latentUpscalerStepsField.value = latentUpscalerStepsSlider.value
latentUpscalerStepsField.dispatchEvent(new Event("change"))
}
function updateLatentUpscalerStepsSlider() {
if (latentUpscalerStepsField.value < 1) {
latentUpscalerStepsField.value = 1
} else if (latentUpscalerStepsField.value > 50) {
latentUpscalerStepsField.value = 50
}
latentUpscalerStepsSlider.value = latentUpscalerStepsField.value
latentUpscalerStepsSlider.dispatchEvent(new Event("change"))
}
latentUpscalerStepsSlider.addEventListener("input", updateLatentUpscalerSteps)
latentUpscalerStepsField.addEventListener("input", updateLatentUpscalerStepsSlider)
updateLatentUpscalerSteps()
/********************* Guidance **************************/
function updateGuidanceScale() {
guidanceScaleField.value = guidanceScaleSlider.value / 10

View File

@@ -191,7 +191,8 @@ var PARAMETERS = [
id: "listen_port",
type: ParameterType.custom,
label: "Network port",
note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'. Please restart the program after changing this.",
note:
"Port that this server listens to. The '9000' part in 'http://localhost:9000'. Please restart the program after changing this.",
icon: "fa-anchor",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">`
@@ -395,15 +396,17 @@ async function getAppConfig() {
if (!testDiffusersEnabled) {
document.querySelector("#lora_model_container").style.display = "none"
document.querySelector("#lora_alpha_container").style.display = "none"
document.querySelector("#tiling_container").style.display = "none"
document.querySelectorAll("#sampler_name option.diffusers-only").forEach(option => {
document.querySelectorAll("#sampler_name option.diffusers-only").forEach((option) => {
option.style.display = "none"
})
} else {
document.querySelector("#lora_model_container").style.display = ""
document.querySelector("#lora_alpha_container").style.display = loraModelField.value ? "" : "none"
document.querySelector("#tiling_container").style.display = ""
document.querySelectorAll("#sampler_name option.k_diffusion-only").forEach(option => {
document.querySelectorAll("#sampler_name option.k_diffusion-only").forEach((option) => {
option.disabled = true
})
document.querySelector("#clip_skip_config").classList.remove("displayNone")
@@ -568,6 +571,16 @@ async function getSystemInfo() {
if (allDeviceIds.length === 0) {
useCPUField.checked = true
useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
getParameterSettingsEntry("use_cpu").addEventListener("click", function() {
alert(
"Sorry, we could not find a compatible graphics card! Easy Diffusion supports graphics cards with minimum 2 GB of RAM. " +
"Only NVIDIA cards are supported on Windows. NVIDIA and AMD cards are supported on Linux.<br/><br/>" +
"If you have a compatible graphics card, please try updating to the latest drivers.<br/><br/>" +
"Only the CPU can be used for generating images, without a compatible graphics card.",
"No compatible graphics card found!"
)
})
}
autoPickGPUsField.checked = devices["config"] === "auto"
@@ -586,7 +599,7 @@ async function getSystemInfo() {
$("#use_gpus").val(activeDeviceIds)
}
document.dispatchEvent(new CustomEvent("system_info_update", { detail: devices}))
document.dispatchEvent(new CustomEvent("system_info_update", { detail: devices }))
setHostInfo(res["hosts"])
let force = false
if (res["enforce_output_dir"] !== undefined) {

View File

@@ -90,7 +90,12 @@ class ModelDropdown {
if (modelsOptions !== undefined) {
// reuse models from cache (only useful for plugins, which are loaded after models)
this.inputModels = modelsOptions[this.modelKey]
this.inputModels = []
let modelKeys = Array.isArray(this.modelKey) ? this.modelKey : [this.modelKey]
for (let i = 0; i < modelKeys.length; i++) {
let key = modelKeys[i]
this.inputModels.push(...modelsOptions[key])
}
this.populateModels()
}
document.addEventListener(
@@ -98,6 +103,12 @@ class ModelDropdown {
this.bind(function(e) {
// reload the models
this.inputModels = modelsOptions[this.modelKey]
this.inputModels = []
let modelKeys = Array.isArray(this.modelKey) ? this.modelKey : [this.modelKey]
for (let i = 0; i < modelKeys.length; i++) {
let key = modelKeys[i]
this.inputModels.push(...modelsOptions[key])
}
this.populateModels()
}, this)
)

View File

@@ -843,57 +843,83 @@ function createTab(request) {
/* TOAST NOTIFICATIONS */
function showToast(message, duration = 5000, error = false) {
const toast = document.createElement("div");
toast.classList.add("toast-notification");
const toast = document.createElement("div")
toast.classList.add("toast-notification")
if (error === true) {
toast.classList.add("toast-notification-error");
toast.classList.add("toast-notification-error")
}
toast.innerHTML = message;
document.body.appendChild(toast);
toast.innerHTML = message
document.body.appendChild(toast)
// Set the position of the toast on the screen
const toastCount = document.querySelectorAll(".toast-notification").length;
const toastHeight = toast.offsetHeight;
const toastCount = document.querySelectorAll(".toast-notification").length
const toastHeight = toast.offsetHeight
const previousToastsHeight = Array.from(document.querySelectorAll(".toast-notification"))
.slice(0, -1) // exclude current toast
.reduce((totalHeight, toast) => totalHeight + toast.offsetHeight + 10, 0); // add 10 pixels for spacing
toast.style.bottom = `${10 + previousToastsHeight}px`;
toast.style.right = "10px";
.reduce((totalHeight, toast) => totalHeight + toast.offsetHeight + 10, 0) // add 10 pixels for spacing
toast.style.bottom = `${10 + previousToastsHeight}px`
toast.style.right = "10px"
// Delay the removal of the toast until animation has completed
const removeToast = () => {
toast.classList.add("hide");
toast.classList.add("hide")
const removeTimeoutId = setTimeout(() => {
toast.remove();
toast.remove()
// Adjust the position of remaining toasts
const remainingToasts = document.querySelectorAll(".toast-notification");
const removedToastBottom = toast.getBoundingClientRect().bottom;
const remainingToasts = document.querySelectorAll(".toast-notification")
const removedToastBottom = toast.getBoundingClientRect().bottom
remainingToasts.forEach((toast) => {
if (toast.getBoundingClientRect().bottom < removedToastBottom) {
toast.classList.add("slide-down");
toast.classList.add("slide-down")
}
});
})
// Wait for the slide-down animation to complete
setTimeout(() => {
// Remove the slide-down class after the animation has completed
const slidingToasts = document.querySelectorAll(".slide-down");
const slidingToasts = document.querySelectorAll(".slide-down")
slidingToasts.forEach((toast) => {
toast.classList.remove("slide-down");
});
toast.classList.remove("slide-down")
})
// Adjust the position of remaining toasts again, in case there are multiple toasts being removed at once
const remainingToastsDown = document.querySelectorAll(".toast-notification");
let heightSoFar = 0;
const remainingToastsDown = document.querySelectorAll(".toast-notification")
let heightSoFar = 0
remainingToastsDown.forEach((toast) => {
toast.style.bottom = `${10 + heightSoFar}px`;
heightSoFar += toast.offsetHeight + 10; // add 10 pixels for spacing
});
}, 0); // The duration of the slide-down animation (in milliseconds)
}, 500);
};
toast.style.bottom = `${10 + heightSoFar}px`
heightSoFar += toast.offsetHeight + 10 // add 10 pixels for spacing
})
}, 0) // The duration of the slide-down animation (in milliseconds)
}, 500)
}
// Remove the toast after specified duration
setTimeout(removeToast, duration);
setTimeout(removeToast, duration)
}
function alert(msg, title) {
title = title || ""
$.alert({
theme: "modern",
title: title,
useBootstrap: false,
animateFromElement: false,
content: msg,
})
}
function confirm(msg, title, fn) {
title = title || ""
$.confirm({
theme: "modern",
title: title,
useBootstrap: false,
animateFromElement: false,
content: msg,
buttons: {
yes: fn,
cancel: () => {},
},
})
}

View File

@@ -403,16 +403,19 @@
// Batch main loop
for (let i = 0; i < iterations; i++) {
let alpha = (start + i * step) / 100
switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep":
alpha = smoothstep(alpha)
break
case "SmootherStep":
alpha = smootherstep(alpha)
break
case "SmoothestStep":
alpha = smootheststep(alpha)
break
if (isTabActive(tabSettingsBatch)) {
switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep":
alpha = smoothstep(alpha)
break
case "SmootherStep":
alpha = smootherstep(alpha)
break
case "SmoothestStep":
alpha = smootheststep(alpha)
break
}
}
addLogMessage(`merging batch job ${i + 1}/${iterations}, alpha = ${alpha.toFixed(5)}...`)
@@ -420,7 +423,8 @@
request["out_path"] += "-" + alpha.toFixed(5) + "." + document.querySelector("#merge-format").value
addLogMessage(`&nbsp;&nbsp;filename: ${request["out_path"]}`)
request["ratio"] = alpha
// sdkit documentation: "ratio - the ratio of the second model. 1 means only the second model will be used."
request["ratio"] = 1-alpha
let res = await fetch("/model/merge", {
method: "POST",
headers: { "Content-Type": "application/json" },