Merge branch 'cf' into beta

commit 3e90eafafb
cmdr2, 2023-06-01 15:27:37 +05:30 (committed by GitHub)
GPG Key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
26 changed files with 524 additions and 193 deletions

View File

@@ -4,6 +4,7 @@
 ### Major Changes
 - **Nearly twice as fast** - significantly faster speed of image generation. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast
 - **Mac M1/M2 support** - Experimental support for Mac M1/M2. Thanks @michaelgallacher, @JeLuf and vishae.
+- **AMD support for Linux** - Experimental support for AMD GPUs on Linux. Thanks @DianaNites and @JeLuf.
 - **Full support for Stable Diffusion 2.1 (including CPU)** - supports loading v1.4 or v2.0 or v2.1 models seamlessly. No need to enable "Test SD2", and no need to add `sd2_` to your SD 2.0 model file names. Works on CPU as well.
 - **Memory optimized Stable Diffusion 2.1** - you can now use Stable Diffusion 2.1 models, with the same low VRAM optimizations that we've always had for SD 1.4. Please note, the SD 2.0 and 2.1 models require more GPU and System RAM, as compared to the SD 1.4 and 1.5 models.
 - **11 new samplers!** - explore the new samplers, some of which can generate great images in less than 10 inference steps! We've added the Karras and UniPC samplers. Thanks @Schorny for the UniPC samplers.
@@ -21,6 +22,16 @@
 Our focus continues to remain on an easy installation experience, and an easy user-interface. While still remaining pretty powerful, in terms of features and speed.
 ### Detailed changelog
+* 2.5.39 - 25 May 2023 - (beta-only) Seamless Tiling - make seamlessly tiled images, e.g. rock and grass textures. Thanks @JeLuf.
+* 2.5.38 - 24 May 2023 - Better reporting of errors, and show an explanation if the user cannot disable the "Use CPU" setting.
+* 2.5.38 - 23 May 2023 - Add Latent Upscaler as another option for upscaling images. Thanks @JeLuf for the implementation of the Latent Upscaler model.
+* 2.5.37 - 19 May 2023 - (beta-only) Two more samplers: DDPM and DEIS. Also disables the samplers that aren't working yet in the Diffusers version. Thanks @ogmaresca.
+* 2.5.37 - 19 May 2023 - (beta-only) Support CLIP-Skip. You can set this option under the models dropdown. Thanks @JeLuf.
+* 2.5.37 - 19 May 2023 - (beta-only) More VRAM optimizations for all modes in diffusers. The VRAM usage for diffusers in "low" and "balanced" should now be equal or less than the non-diffusers version. Performs softmax in half precision, like sdkit does.
+* 2.5.36 - 16 May 2023 - (beta-only) More VRAM optimizations for "balanced" VRAM usage mode.
+* 2.5.36 - 11 May 2023 - (beta-only) More VRAM optimizations for "low" VRAM usage mode.
+* 2.5.36 - 10 May 2023 - (beta-only) Bug fix for "meta" error when using a LoRA in 'low' VRAM usage mode.
+* 2.5.35 - 8 May 2023 - Allow dragging a zoomed-in image (after opening an image with the "expand" button). Thanks @ogmaresca.
 * 2.5.35 - 3 May 2023 - (beta-only) First round of VRAM Optimizations for the "Test Diffusers" version. This change significantly reduces the amount of VRAM used by the diffusers version during image generation. The VRAM usage is still not equal to the "non-diffusers" version, but more optimizations are coming soon.
 * 2.5.34 - 22 Apr 2023 - Don't start the browser in an incognito new profile (on Windows). Thanks @JeLuf.
 * 2.5.33 - 21 Apr 2023 - Install PyTorch 2.0 on new installations (on Windows and Linux).
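
For orientation, the beta features called out above (CLIP Skip, seamless tiling, the DDPM/DEIS samplers and the Latent Upscaler) all surface as plain request fields in the `TaskData`/`GenerateImageRequest` changes later in this commit. A rough sketch of a render request that exercises them; the field names are taken from the diffs below, while the overall payload shape and the example values are assumptions, not an exact API reference:

# Hypothetical request sketch - field names from this commit's GenerateImageRequest/TaskData diffs.
render_request = {
    "prompt": "a seamless rock texture",
    "seed": 42,
    "sampler_name": "ddpm",              # one of the two new diffusers-only samplers (ddpm, deis)
    "clip_skip": True,                   # new TaskData field, defaults to False
    "tiling": "xy",                      # new GenerateImageRequest field: "none", "x", "y" or "xy"
    "use_upscale": "latent_upscaler",    # new upscaler option alongside the RealESRGAN models
    "upscale_amount": 2,                 # the latent upscaler works at 2x
    "latent_upscaler_steps": 10,         # new TaskData field, defaults to 10
}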

PRIVACY.md (new file, 9 lines)
View File

@@ -0,0 +1,9 @@
+// Placeholder until a more formal and legal-sounding privacy policy document is written. But the information below is true.
+This is a summary of whether Easy Diffusion uses your data or tracks you:
+* The short answer is - Easy Diffusion does *not* use your data, and does *not* track you.
+* Easy Diffusion does not send your prompts or usage or analytics to anyone. There is no tracking. We don't even know how many people use Easy Diffusion, let alone their prompts.
+* Easy Diffusion fetches updates to the code whenever it starts up. It does this by contacting GitHub directly, via SSL (secure connection). Only your computer, GitHub and [this repository](https://github.com/cmdr2/stable-diffusion-ui) are involved, and no third party is involved. Some countries intercept SSL connections; that's not something we can do much about. GitHub does *not* share statistics (even with me) about how many people fetched code updates.
+* Easy Diffusion fetches the models from huggingface.co and github.com, if they don't exist on your PC. For example, if the safety checker (NSFW) model doesn't exist, it'll try to download it.
+* Easy Diffusion fetches code packages from pypi.org, which is the standard hosting service for all Python projects. That's where packages installed via `pip install` are stored.
+* Occasionally, antivirus software is known to *incorrectly* flag and delete some model files, which will result in Easy Diffusion re-downloading `pytorch_model.bin`. This *incorrect deletion* affects other Stable Diffusion UIs as well, like Invoke AI - https://itch.io/post/7509488

View File

@@ -17,9 +17,11 @@ Click the download button for your operating system:
 </p>
 **Hardware requirements:**
-- **Windows:** NVIDIA graphics card, or run on your CPU
-- **Linux:** NVIDIA or AMD graphics card, or run on your CPU
-- **Mac:** M1 or M2, or run on your CPU
+- **Windows:** NVIDIA graphics card (minimum 2 GB RAM), or run on your CPU.
+- **Linux:** NVIDIA or AMD graphics card (minimum 2 GB RAM), or run on your CPU.
+- **Mac:** M1 or M2, or run on your CPU.
+- Minimum 8 GB of system RAM.
+- At least 25 GB of space on the hard disk.
 The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance.
@@ -58,7 +60,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
 ### Image generation
 - **Supports**: "*Text to Image*" and "*Image to Image*".
-- **19 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`.
+- **21 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `ddpm`, `deis`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`.
 - **In-Painting**: Specify areas of your image to paint into.
 - **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program.
 - **Face Correction (GFPGAN)**
@@ -84,7 +86,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
 ### Performance and security
 - **Fast**: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB.
-- **Low Memory Usage**: Create 512x512 images with less than 3 GB of GPU RAM, and 768x768 images with less than 4 GB of GPU RAM!
+- **Low Memory Usage**: Create 512x512 images with less than 2 GB of GPU RAM, and 768x768 images with less than 3 GB of GPU RAM!
 - **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
 - **Multi-GPU support**: Automatically spreads your tasks across multiple GPUs (if available), for faster performance!
 - **Auto scan for malicious models**: Uses picklescan to prevent malicious models.
@@ -113,14 +115,6 @@ Useful for judging (and stopping) an image quickly, without waiting for it to fi
 ![Screenshot of task queue](https://user-images.githubusercontent.com/844287/217043984-0b35f73b-1318-47cb-9eed-a2a91b430490.png)
-# System Requirements
-1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
-2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
-3. Minimum 8 GB of RAM and 25GB of disk space.
-You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.
 ----
 # How to use?

View File

@@ -18,7 +18,7 @@ os_name = platform.system()
 modules_to_check = {
     "torch": ("1.11.0", "1.13.1", "2.0.0"),
     "torchvision": ("0.12.0", "0.14.1", "0.15.1"),
-    "sdkit": "1.0.87",
+    "sdkit": "1.0.98",
     "stable-diffusion-sdkit": "2.1.4",
     "rich": "12.6.0",
     "uvicorn": "0.19.0",
@@ -130,10 +130,13 @@ def include_cuda_versions(module_versions: tuple) -> tuple:
 def is_amd_on_linux():
     if os_name == "Linux":
+        try:
-        with open("/proc/bus/pci/devices", "r") as f:
-            device_info = f.read()
-            if "amdgpu" in device_info and "nvidia" not in device_info:
-                return True
+            with open("/proc/bus/pci/devices", "r") as f:
+                device_info = f.read()
+                if "amdgpu" in device_info and "nvidia" not in device_info:
+                    return True
+        except:
+            return False

     return False
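
Read as a whole, the second hunk wraps the AMD-GPU probe in a try/except so that machines where /proc/bus/pci/devices is missing or unreadable simply report "not AMD" instead of crashing the check. A standalone sketch of the patched helper, reconstructed from the hunk above (the surrounding script is assumed):

import platform

os_name = platform.system()

def is_amd_on_linux():
    """Best-effort check for an AMD GPU (and no NVIDIA GPU) on Linux."""
    if os_name == "Linux":
        try:
            with open("/proc/bus/pci/devices", "r") as f:
                device_info = f.read()
                if "amdgpu" in device_info and "nvidia" not in device_info:
                    return True
        except:
            # /proc may be absent or unreadable (containers, unusual kernels) - treat as "not AMD"
            return False
    return False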

View File

@@ -1,5 +1,6 @@
 import os
 import argparse
+import sys

 # The config file is in the same directory as this script
 config_directory = os.path.dirname(__file__)
@@ -21,16 +22,16 @@ if os.path.isfile(config_yaml):
         try:
             config = yaml.safe_load(configfile)
         except Exception as e:
-            print(e)
-            exit()
+            print(e, file=sys.stderr)
+            config = {}
 elif os.path.isfile(config_json):
     import json

     with open(config_json, 'r') as configfile:
         try:
             config = json.load(configfile)
         except Exception as e:
-            print(e)
-            exit()
+            print(e, file=sys.stderr)
+            config = {}
 else:
     config = {}
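
The net effect of this hunk: a malformed config.yaml or config.json no longer aborts the script; the parse error goes to stderr and execution continues with an empty config. A self-contained sketch of that fallback pattern, assuming the same file layout as the script above (the load_config wrapper itself is illustrative, not the script's actual structure):

import json
import os
import sys

import yaml  # PyYAML, as used by the script above

def load_config(config_directory):
    """Return the parsed config, or {} if the file is missing or malformed."""
    config_yaml = os.path.join(config_directory, "config.yaml")
    config_json = os.path.join(config_directory, "config.json")
    try:
        if os.path.isfile(config_yaml):
            with open(config_yaml, "r") as configfile:
                return yaml.safe_load(configfile) or {}
        if os.path.isfile(config_json):
            with open(config_json, "r") as configfile:
                return json.load(configfile)
    except Exception as e:
        print(e, file=sys.stderr)  # report the parse error, but keep going
    return {}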

View File

@@ -10,6 +10,8 @@ import warnings
 from easydiffusion import task_manager
 from easydiffusion.utils import log
 from rich.logging import RichHandler
+from rich.console import Console
+from rich.panel import Panel
 from sdkit.utils import log as sdkit_log  # hack, so we can overwrite the log config

 # Remove all handlers associated with the root logger object.
@@ -213,11 +215,19 @@ def open_browser():
     ui = config.get("ui", {})
     net = config.get("net", {})
     port = net.get("listen_port", 9000)

     if ui.get("open_browser_on_start", True):
         import webbrowser

         webbrowser.open(f"http://localhost:{port}")

+    Console().print(Panel(
+        "\n" +
+        "[white]Easy Diffusion is ready to serve requests.\n\n" +
+        "A new browser tab should have been opened by now.\n" +
+        f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
+        title="Easy Diffusion is ready", style="bold yellow on blue"))
+

 def get_image_modifiers():
     modifiers_json_path = os.path.join(SD_UI_DIR, "modifiers.json")
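
To preview the new startup banner without running the whole server, the Rich calls from the hunk above can be exercised on their own. A minimal standalone sketch; the port value is the server's default from the diff, everything else is copied from the added lines:

from rich.console import Console
from rich.panel import Panel

port = 9000  # default "listen_port" used by open_browser()
Console().print(Panel(
    "\n[white]Easy Diffusion is ready to serve requests.\n\n"
    "A new browser tab should have been opened by now.\n"
    f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
    title="Easy Diffusion is ready",
    style="bold yellow on blue",
))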

View File

@@ -98,8 +98,8 @@ def auto_pick_devices(currently_active_devices):
             continue

         mem_free, mem_total = torch.cuda.mem_get_info(device)
-        mem_free /= float(10 ** 9)
-        mem_total /= float(10 ** 9)
+        mem_free /= float(10**9)
+        mem_total /= float(10**9)
         device_name = torch.cuda.get_device_name(device)
         log.debug(
             f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
@@ -165,6 +165,7 @@ def needs_to_force_full_precision(context):
         and (
             " 1660" in device_name
             or " 1650" in device_name
+            or " 1630" in device_name
             or " t400" in device_name
             or " t550" in device_name
             or " t600" in device_name
@@ -181,7 +182,7 @@ def get_max_vram_usage_level(device):
     else:
         return "high"

-    mem_total /= float(10 ** 9)
+    mem_total /= float(10**9)
     if mem_total < 4.5:
         return "low"
     elif mem_total < 6.5:
@@ -223,10 +224,10 @@ def is_device_compatible(device):
     # Memory check
     try:
         _, mem_total = torch.cuda.mem_get_info(device)
-        mem_total /= float(10 ** 9)
-        if mem_total < 3.0:
+        mem_total /= float(10**9)
+        if mem_total < 1.9:
             if is_device_compatible.history.get(device) == None:
-                log.warn(f"GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion")
+                log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion")
                 is_device_compatible.history[device] = 1
             return False
     except RuntimeError as e:
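
The last hunk lowers the hard VRAM cutoff from 3.0 GB to 1.9 GB (reported to users as "2 GB", matching the README change earlier in this commit). A small standalone sketch of the same check, assuming a CUDA-enabled PyTorch install; torch.cuda.mem_get_info returns (free_bytes, total_bytes):

import torch

def has_enough_vram(device="cuda:0", min_gb=1.9):
    """Mirror of the compatibility cutoff introduced above (illustrative helper, not the app's API)."""
    if not torch.cuda.is_available():
        return False  # CPU-only setups are handled by a separate code path
    _, mem_total = torch.cuda.mem_get_info(device)
    mem_total /= float(10**9)  # bytes -> GB, same divisor as the diff
    return mem_total >= min_gb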

View File

@@ -53,15 +53,21 @@ def load_default_models(context: Context):
                 scan_model=context.model_paths[model_type] != None
                 and not context.model_paths[model_type].endswith(".safetensors"),
             )
+            if model_type in context.model_load_errors:
+                del context.model_load_errors[model_type]
         except Exception as e:
             log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
             log.exception(e)
             del context.model_paths[model_type]
+
+            context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks


 def unload_all(context: Context):
     for model_type in KNOWN_MODEL_TYPES:
         unload_model(context, model_type)
+        if model_type in context.model_load_errors:
+            del context.model_load_errors[model_type]


 def resolve_model_to_use(model_name: str = None, model_type: str = None):
@@ -107,19 +113,17 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None):
 def reload_models_if_necessary(context: Context, task_data: TaskData):
-    if hasattr(task_data, 'use_face_correction') and task_data.use_face_correction:
-        face_correction_model = "codeformer" if "codeformer" in task_data.use_face_correction.lower() else "gfpgan"
-        face_correction_value = task_data.use_face_correction
-    else:
-        face_correction_model = "gfpgan"
-        face_correction_value = None
+    face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else ""
+    upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else ""

     model_paths_in_req = {
         "stable-diffusion": task_data.use_stable_diffusion_model,
         "vae": task_data.use_vae_model,
         "hypernetwork": task_data.use_hypernetwork_model,
-        face_correction_model: face_correction_value,
-        "realesrgan": task_data.use_upscale,
+        "codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None,
+        "gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None,
+        "realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None,
+        "latent_upscaler": True if "latent_upscaler" in upscale_lower else None,
         "nsfw_checker": True if task_data.block_nsfw else None,
         "lora": task_data.use_lora_model,
     }
@@ -129,14 +133,21 @@ def reload_models_if_necessary(context: Context, task_data: TaskData):
         if context.model_paths.get(model_type) != path
     }

-    if set_vram_optimizations(context):  # reload SD
+    if set_vram_optimizations(context) or set_clip_skip(context, task_data):  # reload SD
         models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"]

     for model_type, model_path_in_req in models_to_reload.items():
         context.model_paths[model_type] = model_path_in_req

         action_fn = unload_model if context.model_paths[model_type] is None else load_model
-        action_fn(context, model_type, scan_model=False)  # we've scanned them already
+        try:
+            action_fn(context, model_type, scan_model=False)  # we've scanned them already
+            if model_type in context.model_load_errors:
+                del context.model_load_errors[model_type]
+        except Exception as e:
+            log.exception(e)
+            if action_fn == load_model:
+                context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks


 def resolve_model_paths(task_data: TaskData):
@@ -149,10 +160,18 @@ def resolve_model_paths(task_data: TaskData):
     if task_data.use_face_correction:
         task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, "gfpgan")
-    if task_data.use_upscale:
+    if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
         task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan")


+def fail_if_models_did_not_load(context: Context):
+    for model_type in KNOWN_MODEL_TYPES:
+        if model_type in context.model_load_errors:
+            e = context.model_load_errors[model_type]
+            raise Exception(f"Could not load the {model_type} model! Reason: " + e)
+            # concat 'e', don't use in format string (injection attack)
+
+
 def set_vram_optimizations(context: Context):
     config = app.getConfig()
     vram_usage_level = config.get("vram_usage_level", "balanced")
@@ -164,6 +183,16 @@ def set_vram_optimizations(context: Context):
     return False


+def set_clip_skip(context: Context, task_data: TaskData):
+    clip_skip = task_data.clip_skip
+
+    if clip_skip != context.clip_skip:
+        context.clip_skip = clip_skip
+        return True
+
+    return False
+
+
 def make_model_folders():
     for model_type in KNOWN_MODEL_TYPES:
         model_dir_path = os.path.join(app.MODELS_DIR, model_type)
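
Taken together, these hunks add a small error registry to the render context: failed loads are recorded as plain strings in context.model_load_errors, cleared again on a successful load or an unload, and checked just before rendering by fail_if_models_did_not_load(). A condensed sketch of that pattern; Context, the load function and KNOWN_MODEL_TYPES are simplified stand-ins for the real easydiffusion/sdkit objects:

KNOWN_MODEL_TYPES = ["stable-diffusion", "vae", "hypernetwork", "gfpgan", "realesrgan"]

def try_load(context, model_type, load_fn):
    try:
        load_fn(context, model_type)
        context.model_load_errors.pop(model_type, None)  # clear any stale error for this model type
    except Exception as e:
        # store only the message - keeping the whole Exception alive can leak memory
        context.model_load_errors[model_type] = str(e)

def fail_if_models_did_not_load(context):
    for model_type in KNOWN_MODEL_TYPES:
        if model_type in context.model_load_errors:
            e = context.model_load_errors[model_type]
            # concatenate 'e' rather than interpolating it, to avoid format-string injection
            raise Exception(f"Could not load the {model_type} model! Reason: " + e)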

View File

@@ -33,6 +33,7 @@ def init(device):
     context.stop_processing = False
     context.temp_images = {}
     context.partial_x_samples = None
+    context.model_load_errors = {}

     from easydiffusion import app
@@ -72,7 +73,7 @@ def make_images(

 def print_task_info(req: GenerateImageRequest, task_data: TaskData):
-    req_str = pprint.pformat(get_printable_request(req)).replace("[", "\[")
+    req_str = pprint.pformat(get_printable_request(req, task_data)).replace("[", "\[")
     task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
     log.info(f"request: {req_str}")
     log.info(f"task data: {task_str}")
@@ -95,7 +96,7 @@ def make_images_internal(
         task_data.stream_image_progress_interval,
     )

     gc(context)

-    filtered_images = filter_images(task_data, images, user_stopped)
+    filtered_images = filter_images(req, task_data, images, user_stopped)

     if task_data.save_to_disk_path is not None:
         save_images_to_disk(images, filtered_images, req, task_data)
@@ -151,24 +152,38 @@ def generate_images_internal(
     return images, user_stopped


-def filter_images(task_data: TaskData, images: list, user_stopped):
+def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped):
     if user_stopped:
         return images

     filters_to_apply = []
+    filter_params = {}
     if task_data.block_nsfw:
         filters_to_apply.append("nsfw_checker")
     if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower():
         filters_to_apply.append("codeformer")
     elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
         filters_to_apply.append("gfpgan")
-    if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
-        filters_to_apply.append("realesrgan")
+    if task_data.use_upscale:
+        if "realesrgan" in task_data.use_upscale.lower():
+            filters_to_apply.append("realesrgan")
+        elif task_data.use_upscale == "latent_upscaler":
+            filters_to_apply.append("latent_upscaler")
+
+            filter_params["latent_upscaler_options"] = {
+                "prompt": req.prompt,
+                "negative_prompt": req.negative_prompt,
+                "seed": req.seed,
+                "num_inference_steps": task_data.latent_upscaler_steps,
+                "guidance_scale": 0,
+            }
+
+        filter_params["scale"] = task_data.upscale_amount
+
     if len(filters_to_apply) == 0:
         return images

-    return apply_filters(context, filters_to_apply, images, scale=task_data.upscale_amount)
+    return apply_filters(context, filters_to_apply, images, **filter_params)


 def construct_response(images: list, seeds: list, task_data: TaskData, base_seed: int):
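
The reworked filter_images() above now forwards upscaler-specific keyword arguments to sdkit's apply_filters (the **filter_params call on the last changed line). A reduced sketch of just that decision, with the request/task objects replaced by plain dicts for illustration; only the option names come from the diff, everything else is assumed:

def build_upscale_filters(task, req):
    """Return (filters_to_apply, filter_params) for the upscaling step, per the logic above."""
    filters, params = [], {}
    upscaler = (task.get("use_upscale") or "").lower()
    if "realesrgan" in upscaler:
        filters.append("realesrgan")
    elif upscaler == "latent_upscaler":
        filters.append("latent_upscaler")
        params["latent_upscaler_options"] = {
            "prompt": req.get("prompt"),
            "negative_prompt": req.get("negative_prompt"),
            "seed": req.get("seed"),
            "num_inference_steps": task.get("latent_upscaler_steps", 10),
            "guidance_scale": 0,
        }
    if upscaler:
        params["scale"] = task.get("upscale_amount", 4)
    return filters, params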

View File

@@ -336,6 +336,7 @@ def thread_render(device):
             current_state = ServerStates.LoadingModel
             model_manager.resolve_model_paths(task.task_data)
             model_manager.reload_models_if_necessary(renderer.context, task.task_data)
+            model_manager.fail_if_models_did_not_load(renderer.context)

             current_state = ServerStates.Rendering
             task.response = renderer.make_images(

View File

@@ -23,6 +23,7 @@ class GenerateImageRequest(BaseModel):
     sampler_name: str = None  # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
     hypernetwork_strength: float = 0
     lora_alpha: float = 0
+    tiling: str = "none"  # "none", "x", "y", "xy"


 class TaskData(BaseModel):
@@ -32,8 +33,9 @@ class TaskData(BaseModel):
     vram_usage_level: str = "balanced"  # or "low" or "medium"

     use_face_correction: str = None  # or "GFPGANv1.3"
-    use_upscale: str = None  # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
+    use_upscale: str = None  # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" or "latent_upscaler"
     upscale_amount: int = 4  # or 2
+    latent_upscaler_steps: int = 10
     use_stable_diffusion_model: str = "sd-v1-4"
     # use_stable_diffusion_config: str = "v1-inference"
     use_vae_model: str = None
@@ -48,6 +50,7 @@ class TaskData(BaseModel):
     metadata_output_format: str = "txt"  # or "json"
     stream_image_progress: bool = False
     stream_image_progress_interval: int = 5
+    clip_skip: bool = False


 class MergeRequest(BaseModel):

View File

@@ -15,23 +15,26 @@ img_number_regex = re.compile("([0-9]{5,})")
 # keep in sync with `ui/media/js/dnd.js`
 TASK_TEXT_MAPPING = {
     "prompt": "Prompt",
+    "negative_prompt": "Negative Prompt",
+    "seed": "Seed",
+    "use_stable_diffusion_model": "Stable Diffusion model",
+    "clip_skip": "Clip Skip",
+    "use_vae_model": "VAE model",
+    "sampler_name": "Sampler",
     "width": "Width",
     "height": "Height",
-    "seed": "Seed",
     "num_inference_steps": "Steps",
     "guidance_scale": "Guidance Scale",
     "prompt_strength": "Prompt Strength",
+    "use_lora_model": "LoRA model",
+    "lora_alpha": "LoRA Strength",
+    "use_hypernetwork_model": "Hypernetwork model",
+    "hypernetwork_strength": "Hypernetwork Strength",
+    "tiling": "Seamless Tiling",
     "use_face_correction": "Use Face Correction",
     "use_upscale": "Use Upscaling",
     "upscale_amount": "Upscale By",
-    "sampler_name": "Sampler",
-    "negative_prompt": "Negative Prompt",
-    "use_stable_diffusion_model": "Stable Diffusion model",
-    "use_vae_model": "VAE model",
-    "use_hypernetwork_model": "Hypernetwork model",
-    "hypernetwork_strength": "Hypernetwork Strength",
-    "use_lora_model": "LoRA model",
-    "lora_alpha": "LoRA Strength",
+    "latent_upscaler_steps": "Latent Upscaler Steps",
 }

 time_placeholders = {
@@ -168,7 +171,9 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData):
                 output_quality=task_data.output_quality,
                 output_lossless=task_data.output_lossless,
             )
-        if task_data.metadata_output_format.lower() in ["json", "txt", "embed"]:
+        if task_data.metadata_output_format:
+            for metadata_output_format in task_data.metadata_output_format.split(","):
+                if metadata_output_format.lower() in ["json", "txt", "embed"]:
                     save_dicts(
                         metadata_entries,
                         save_dir_path,
@@ -179,30 +184,10 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData):

 def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
-    metadata = get_printable_request(req)
-    metadata.update(
-        {
-            "use_stable_diffusion_model": task_data.use_stable_diffusion_model,
-            "use_vae_model": task_data.use_vae_model,
-            "use_hypernetwork_model": task_data.use_hypernetwork_model,
-            "use_lora_model": task_data.use_lora_model,
-            "use_face_correction": task_data.use_face_correction,
-            "use_upscale": task_data.use_upscale,
-        }
-    )
-    if metadata["use_upscale"] is not None:
-        metadata["upscale_amount"] = task_data.upscale_amount
-    if task_data.use_hypernetwork_model is None:
-        del metadata["hypernetwork_strength"]
-    if task_data.use_lora_model is None:
-        if "lora_alpha" in metadata:
-            del metadata["lora_alpha"]
-    app_config = app.getConfig()
-    if not app_config.get("test_diffusers", False) and "use_lora_model" in metadata:
-        del metadata["use_lora_model"]
+    metadata = get_printable_request(req, task_data)

     # if text, format it in the text format expected by the UI
-    is_txt_format = task_data.metadata_output_format.lower() == "txt"
+    is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",")
     if is_txt_format:
         metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING}
@@ -213,12 +198,35 @@ def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
     return entries


-def get_printable_request(req: GenerateImageRequest):
-    metadata = req.dict()
-    del metadata["init_image"]
-    del metadata["init_image_mask"]
-    if req.init_image is None:
+def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
+    req_metadata = req.dict()
+    task_data_metadata = task_data.dict()
+
+    # Save the metadata in the order defined in TASK_TEXT_MAPPING
+    metadata = {}
+    for key in TASK_TEXT_MAPPING.keys():
+        if key in req_metadata:
+            metadata[key] = req_metadata[key]
+        elif key in task_data_metadata:
+            metadata[key] = task_data_metadata[key]
+
+    # Clean up the metadata
+    if req.init_image is None and "prompt_strength" in metadata:
         del metadata["prompt_strength"]
+    if task_data.use_upscale is None and "upscale_amount" in metadata:
+        del metadata["upscale_amount"]
+    if task_data.use_hypernetwork_model is None and "hypernetwork_strength" in metadata:
+        del metadata["hypernetwork_strength"]
+    if task_data.use_lora_model is None and "lora_alpha" in metadata:
+        del metadata["lora_alpha"]
+    if task_data.use_upscale != "latent_upscaler" and "latent_upscaler_steps" in metadata:
+        del metadata["latent_upscaler_steps"]
+
+    app_config = app.getConfig()
+    if not app_config.get("test_diffusers", False):
+        for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata):
+            del metadata[key]

     return metadata
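
Two behavioural changes are worth noting here: metadata_output_format is now treated as a comma-separated list (so a value like "txt,json" writes both formats), and the saved keys follow the order defined in TASK_TEXT_MAPPING rather than the raw request order. A tiny illustration of the new format handling; the setting value and the print stand in for the real save_dicts call:

# Illustrative only - mirrors the split(",") loop added above.
metadata_output_format = "txt,json"   # hypothetical user setting
for fmt in metadata_output_format.split(","):
    fmt = fmt.strip().lower()
    if fmt in ["json", "txt", "embed"]:
        print(f"would save metadata in the {fmt} format")  # the real code calls save_dicts(...) here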

View File

@@ -30,7 +30,7 @@
             <h1>
                 <img id="logo_img" src="/media/images/icon-512x512.png" >
                 Easy Diffusion
-                <small>v2.5.35 <span id="updateBranchLabel"></span></small>
+                <small>v2.5.39 <span id="updateBranchLabel"></span></small>
             </h1>
         </div>
         <div id="server-status">
@@ -135,10 +135,13 @@
                     <button id="reload-models" class="secondaryButton reloadModels"><i class='fa-solid fa-rotate'></i></button>
                     <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about custom models</span></i></a>
                 </td></tr>
-                <!-- <tr id="modelConfigSelection" class="pl-5"><td><label for="model_config">Model Config:</label></td><td>
-                    <select id="model_config" name="model_config">
-                    </select>
-                </td></tr> -->
+                <tr class="pl-5 displayNone" id="clip_skip_config">
+                    <td><label for="clip_skip">Clip Skip:</label></td>
+                    <td>
+                        <input id="clip_skip" name="clip_skip" type="checkbox">
+                        <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Clip-Skip" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Clip Skip</span></i></a>
+                    </td>
+                </tr>
                 <tr class="pl-5"><td><label for="vae_model">Custom VAE:</label></td><td>
                     <input id="vae_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
                     <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about VAEs</span></i></a>
@@ -154,16 +157,18 @@
                     <option value="dpm2_a">DPM2 Ancestral</option>
                     <option value="lms">LMS</option>
                     <option value="dpm_solver_stability">DPM Solver (Stability AI)</option>
-                    <option value="dpmpp_2s_a">DPM++ 2s Ancestral (Karras)</option>
+                    <option value="dpmpp_2s_a" class="k_diffusion-only">DPM++ 2s Ancestral (Karras)</option>
                     <option value="dpmpp_2m">DPM++ 2m (Karras)</option>
-                    <option value="dpmpp_sde">DPM++ SDE (Karras)</option>
-                    <option value="dpm_fast">DPM Fast (Karras)</option>
-                    <option value="dpm_adaptive">DPM Adaptive (Karras)</option>
-                    <option value="unipc_snr">UniPC SNR</option>
+                    <option value="dpmpp_sde" class="k_diffusion-only">DPM++ SDE (Karras)</option>
+                    <option value="dpm_fast" class="k_diffusion-only">DPM Fast (Karras)</option>
+                    <option value="dpm_adaptive" class="k_diffusion-only">DPM Adaptive (Karras)</option>
+                    <option value="ddpm" class="diffusers-only">DDPM</option>
+                    <option value="deis" class="diffusers-only">DEIS</option>
+                    <option value="unipc_snr" class="k_diffusion-only">UniPC SNR</option>
                     <option value="unipc_tu">UniPC TU</option>
-                    <option value="unipc_snr_2">UniPC SNR 2</option>
-                    <option value="unipc_tu_2">UniPC TU 2</option>
-                    <option value="unipc_tq">UniPC TQ</option>
+                    <option value="unipc_snr_2" class="k_diffusion-only">UniPC SNR 2</option>
+                    <option value="unipc_tu_2" class="k_diffusion-only">UniPC TU 2</option>
+                    <option value="unipc_tq" class="k_diffusion-only">UniPC TQ</option>
                 </select>
                 <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
             </td></tr>
@@ -231,6 +236,15 @@
                     <td><label for="hypernetwork_strength_slider">Hypernetwork Strength:</label></td>
                     <td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td>
                 </tr>
+                <tr id="tiling_container" class="pl-5"><td><label for="tiling">Seamless Tiling:</label></td><td>
+                    <select id="tiling" name="tiling">
+                        <option value="none" selected>None</option>
+                        <option value="x">Horizontal</option>
+                        <option value="y">Vertical</option>
+                        <option value="xy">Both</option>
+                    </select>
+                    <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Seamless-Tiling" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Seamless Tiling</span></i></a>
+                </td></tr>
                 <tr class="pl-5"><td><label for="output_format">Output Format:</label></td><td>
                     <select id="output_format" name="output_format">
                         <option value="jpeg" selected>jpeg</option>
@@ -253,14 +267,18 @@
                 <li class="pl-5">
                     <input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Scale up by</label>
                     <select id="upscale_amount" name="upscale_amount">
-                        <option value="2">2x</option>
-                        <option value="4" selected>4x</option>
+                        <option id="upscale_amount_2x" value="2">2x</option>
+                        <option id="upscale_amount_4x" value="4" selected>4x</option>
                     </select>
                     with
                     <select id="upscale_model" name="upscale_model">
                         <option value="RealESRGAN_x4plus" selected>RealESRGAN_x4plus</option>
                         <option value="RealESRGAN_x4plus_anime_6B">RealESRGAN_x4plus_anime_6B</option>
+                        <option value="latent_upscaler">Latent Upscaler 2x</option>
                     </select>
+                    <div id="latent_upscaler_settings" class="displayNone">
+                        <label for="latent_upscaler_steps_slider">Upscaling Steps:</label></td><td> <input id="latent_upscaler_steps_slider" name="latent_upscaler_steps_slider" class="editor-slider" value="10" type="range" min="1" max="50"> <input id="latent_upscaler_steps" name="latent_upscaler_steps" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)">
+                    </div>
                 </li>
                 <li class="pl-5"><input id="show_only_filtered_image" name="show_only_filtered_image" type="checkbox" checked> <label for="show_only_filtered_image">Show only the corrected/upscaled image</label></li>
             </ul></div>

View File

@@ -70,6 +70,14 @@
     max-height: calc(100vh - (var(--popup-padding) * 2) - 4px);
 }

+#viewFullSizeImgModal img:not(.natural-zoom) {
+    cursor: grab;
+}
+
+#viewFullSizeImgModal .grabbing img:not(.natural-zoom) {
+    cursor: grabbing;
+}
+
 #viewFullSizeImgModal .content > div::-webkit-scrollbar-track, #viewFullSizeImgModal .content > div::-webkit-scrollbar-corner {
     background: rgba(0, 0, 0, .5)
 }

View File

@@ -1303,6 +1303,12 @@ body.wait-pause {
     display:none !important;
 }

+#latent_upscaler_settings {
+    padding-top: 3pt;
+    padding-bottom: 3pt;
+    padding-left: 5pt;
+}
+
 /* TOAST NOTIFICATIONS */
 .toast-notification {
     position: fixed;

View File

@@ -13,6 +13,7 @@ const SETTINGS_IDS_LIST = [
     "num_outputs_total",
     "num_outputs_parallel",
     "stable_diffusion_model",
+    "clip_skip",
     "vae_model",
     "hypernetwork_model",
     "lora_model",
@@ -24,6 +25,7 @@ const SETTINGS_IDS_LIST = [
     "prompt_strength",
     "hypernetwork_strength",
     "lora_alpha",
+    "tiling",
     "output_format",
     "output_quality",
     "output_lossless",
@@ -33,6 +35,7 @@ const SETTINGS_IDS_LIST = [
     "gfpgan_model",
     "use_upscale",
     "upscale_amount",
+    "latent_upscaler_steps",
     "block_nsfw",
     "show_only_filtered_image",
     "upscale_model",
@@ -168,6 +171,22 @@ function loadSettings() {
         }
         })
        CURRENTLY_LOADING_SETTINGS = false
+    } else if (localStorage.length < 2) {
+        // localStorage is too short for OldSettings
+        // So this is likely the first time Easy Diffusion is running.
+        // Initialize vram_usage_level based on the available VRAM
+        function initGPUProfile(event) {
+            if ( "detail" in event
+                && "active" in event.detail
+                && "cuda:0" in event.detail.active
+                && event.detail.active["cuda:0"].mem_total < 4.5 )
+            {
+                vramUsageLevelField.value = "low"
+                vramUsageLevelField.dispatchEvent(new Event("change"))
+            }
+            document.removeEventListener("system_info_update", initGPUProfile)
+        }
+        document.addEventListener("system_info_update", initGPUProfile)
     } else {
         CURRENTLY_LOADING_SETTINGS = true
         tryLoadOldSettings()

View File

@@ -37,6 +37,7 @@ function parseBoolean(stringValue) {
     }
 }

+// keep in sync with `ui/easydiffusion/utils/save_utils.py`
 const TASK_MAPPING = {
     prompt: {
         name: "Prompt",
@@ -78,6 +79,7 @@ const TASK_MAPPING = {
             if (!widthField.value) {
                 widthField.value = oldVal
             }
+            widthField.dispatchEvent(new Event("change"))
         },
         readUI: () => parseInt(widthField.value),
         parse: (val) => parseInt(val),
@@ -90,6 +92,7 @@ const TASK_MAPPING = {
             if (!heightField.value) {
                 heightField.value = oldVal
             }
+            heightField.dispatchEvent(new Event("change"))
         },
         readUI: () => parseInt(heightField.value),
         parse: (val) => parseInt(val),
@@ -171,6 +174,11 @@ const TASK_MAPPING = {
         name: "Use Face Correction",
         setUI: (use_face_correction) => {
             const oldVal = gfpganModelField.value
+            console.log("use face correction", use_face_correction)
+            if (use_face_correction == null || use_face_correction == "None") {
+                gfpganModelField.disabled = true
+                useFaceCorrectionField.checked = false
+            } else {
                 gfpganModelField.value = getModelPath(use_face_correction, [".pth"])
                 if (gfpganModelField.value) {
                     // Is a valid value for the field.
@@ -182,6 +190,7 @@ const TASK_MAPPING = {
                     gfpganModelField.value = oldVal
                     useFaceCorrectionField.checked = false
                 }
+            }

             //useFaceCorrectionField.checked = parseBoolean(use_face_correction)
         },
@@ -217,6 +226,14 @@ const TASK_MAPPING = {
         readUI: () => upscaleAmountField.value,
         parse: (val) => val,
     },
+    latent_upscaler_steps: {
+        name: "Latent Upscaler Steps",
+        setUI: (latent_upscaler_steps) => {
+            latentUpscalerStepsField.value = latent_upscaler_steps
+        },
+        readUI: () => latentUpscalerStepsField.value,
+        parse: (val) => val,
+    },
     sampler_name: {
         name: "Sampler",
         setUI: (sampler_name) => {
@@ -240,6 +257,22 @@ const TASK_MAPPING = {
         readUI: () => stableDiffusionModelField.value,
         parse: (val) => val,
     },
+    clip_skip: {
+        name: "Clip Skip",
+        setUI: (value) => {
+            clip_skip.checked = value
+        },
+        readUI: () => clip_skip.checked,
+        parse: (val) => Boolean(val),
+    },
+    tiling: {
+        name: "Tiling",
+        setUI: (val) => {
+            tilingField.value = val
+        },
+        readUI: () => tilingField.value,
+        parse: (val) => val,
+    },
     use_vae_model: {
         name: "VAE model",
         setUI: (use_vae_model) => {
@@ -402,6 +435,7 @@ function restoreTaskToUI(task, fieldsToSkip) {
     if (!("original_prompt" in task.reqBody)) {
         promptField.value = task.reqBody.prompt
     }
+    promptField.dispatchEvent(new Event("input"))

     // properly reset checkboxes
     if (!("use_face_correction" in task.reqBody)) {

View File

@@ -750,6 +750,7 @@
         sampler_name: "string",
         use_stable_diffusion_model: "string",
+        clip_skip: "boolean",
         num_inference_steps: "number",
         guidance_scale: "number",
@@ -763,6 +764,7 @@
     const TASK_DEFAULTS = {
         sampler_name: "plms",
         use_stable_diffusion_model: "sd-v1-4",
+        clip_skip: false,
         num_inference_steps: 50,
         guidance_scale: 7.5,
         negative_prompt: "",
@@ -787,9 +789,10 @@
         use_hypernetwork_model: "string",
         hypernetwork_strength: "number",
         output_lossless: "boolean",
+        tiling: "string",
     }

-    // Higer values will result in...
+    // Higher values will result in...
     // pytorch_lightning/utilities/seed.py:60: UserWarning: X is not in bounds, numpy accepts from 0 to 4294967295
     const MAX_SEED_VALUE = 4294967295

View File

@@ -834,6 +834,7 @@ function pixelCompare(int1, int2) {
 }

 // adapted from https://ben.akrin.com/canvas_fill/fill_04.html
+// May 2023 - look at using a library instead of custom code: https://github.com/shaneosullivan/example-canvas-fill
 function flood_fill(editor, the_canvas_context, x, y, color) {
     pixel_stack = [{ x: x, y: y }]
     pixels = the_canvas_context.getImageData(0, 0, editor.width, editor.height)

View File

@@ -63,15 +63,73 @@ const imageModal = (function() {
             setZoomLevel(imageContainer.querySelector("img")?.classList?.contains("natural-zoom"))
     )

-    const state = {
+    const initialState = () => ({
         previous: undefined,
         next: undefined,
-    }
+
+        start: {
+            x: 0,
+            y: 0,
+        },
+
+        scroll: {
+            x: 0,
+            y: 0,
+        },
+    })
+    const state = initialState()
+
+    // Allow grabbing the image to scroll
+    const stopGrabbing = (e) => {
+        if(imageContainer.classList.contains("grabbing")) {
+            imageContainer.classList.remove("grabbing")
+            e?.preventDefault()
+            console.log(`stopGrabbing()`, e)
+        }
+    }
+
+    const addImageGrabbing = (image) => {
+        image?.addEventListener('mousedown', (e) => {
+            if (!image.classList.contains("natural-zoom")) {
+                e.stopPropagation()
+                e.stopImmediatePropagation()
+                e.preventDefault()
+
+                imageContainer.classList.add("grabbing")
+                state.start.x = e.pageX - imageContainer.offsetLeft
+                state.scroll.x = imageContainer.scrollLeft
+                state.start.y = e.pageY - imageContainer.offsetTop
+                state.scroll.y = imageContainer.scrollTop
+            }
+        })
+        image?.addEventListener('mouseup', stopGrabbing)
+        image?.addEventListener('mouseleave', stopGrabbing)
+        image?.addEventListener('mousemove', (e) => {
+            if(imageContainer.classList.contains("grabbing")) {
+                e.stopPropagation()
+                e.stopImmediatePropagation()
+                e.preventDefault()
+
+                // Might need to increase this multiplier based on the image size to window size ratio
+                // The default 1:1 is pretty slow
+                const multiplier = 1.0
+
+                const deltaX = e.pageX - imageContainer.offsetLeft - state.start.x
+                imageContainer.scrollLeft = state.scroll.x - (deltaX * multiplier)
+                const deltaY = e.pageY - imageContainer.offsetTop - state.start.y
+                imageContainer.scrollTop = state.scroll.y - (deltaY * multiplier)
+            }
+        })
+    }

     const clear = () => {
         imageContainer.innerHTML = ""

-        Object.keys(state).forEach((key) => delete state[key])
+        Object.entries(initialState()).forEach(([key, value]) => state[key] = value)
+
+        stopGrabbing()
     }

     const close = () => {
@@ -95,6 +153,7 @@ const imageModal = (function() {
         const src = typeof options === "string" ? options : options.src
         const imgElem = createElement("img", { src }, "natural-zoom")

+        addImageGrabbing(imgElem)
         imageContainer.appendChild(imgElem)
         modalElem.classList.add("active")
         document.body.style.overflow = "hidden"

View File

@@ -246,7 +246,7 @@ function refreshInactiveTags(inactiveTags) {
     overlays.forEach((i) => {
         let modifierName = i.parentElement.getElementsByClassName("modifier-card-label")[0].getElementsByTagName("p")[0]
             .dataset.fullName
-        if (inactiveTags?.find((element) => element === modifierName) !== undefined) {
+        if (inactiveTags?.find((element) => trimModifiers(element) === modifierName) !== undefined) {
             i.parentElement.classList.add("modifier-toggle-inactive")
         }
     })

View File

@@ -13,6 +13,16 @@ const taskConfigSetup = {
         num_inference_steps: "Inference Steps",
         guidance_scale: "Guidance Scale",
         use_stable_diffusion_model: "Model",
+        clip_skip: {
+            label: "Clip Skip",
+            visible: ({ reqBody }) => reqBody?.clip_skip,
+            value: ({ reqBody }) => "yes",
+        },
+        tiling: {
+            label: "Tiling",
+            visible: ({ reqBody }) => reqBody?.tiling != "none",
+            value: ({ reqBody }) => reqBody?.tiling,
+        },
         use_vae_model: {
             label: "VAE",
             visible: ({ reqBody }) => reqBody?.use_vae_model !== undefined && reqBody?.use_vae_model.trim() !== "",
@@ -81,7 +91,12 @@ let gfpganModelField = new ModelDropdown(document.querySelector("#gfpgan_model")
 let useUpscalingField = document.querySelector("#use_upscale")
 let upscaleModelField = document.querySelector("#upscale_model")
 let upscaleAmountField = document.querySelector("#upscale_amount")
+let latentUpscalerSettings = document.querySelector("#latent_upscaler_settings")
+let latentUpscalerStepsSlider = document.querySelector("#latent_upscaler_steps_slider")
+let latentUpscalerStepsField = document.querySelector("#latent_upscaler_steps")
 let stableDiffusionModelField = new ModelDropdown(document.querySelector("#stable_diffusion_model"), "stable-diffusion")
+let clipSkipField = document.querySelector("#clip_skip")
+let tilingField = document.querySelector("#tiling")
 let vaeModelField = new ModelDropdown(document.querySelector("#vae_model"), "vae", "None")
 let hypernetworkModelField = new ModelDropdown(document.querySelector("#hypernetwork_model"), "hypernetwork", "None")
 let hypernetworkStrengthSlider = document.querySelector("#hypernetwork_strength_slider")
@@ -233,7 +248,7 @@ function setServerStatus(event) {
             break
     }
     if (SD.serverState.devices) {
-        setDeviceInfo(SD.serverState.devices)
+        document.dispatchEvent(new CustomEvent("system_info_update", { detail: SD.serverState.devices }))
     }
 }
@@ -252,20 +267,11 @@ function shiftOrConfirm(e, prompt, fn) {
     if (e.shiftKey || !confirmDangerousActionsField.checked) {
         fn(e)
     } else {
-        $.confirm({
-            theme: "modern",
-            title: prompt,
-            useBootstrap: false,
-            animateFromElement: false,
-            content:
-                '<small>Tip: To skip this dialog, use shift-click or disable the "Confirm dangerous actions" setting in the Settings tab.</small>',
-            buttons: {
-                yes: () => {
-                    fn(e)
-                },
-                cancel: () => {},
-            },
-        })
+        confirm(
+            '<small>Tip: To skip this dialog, use shift-click or disable the "Confirm dangerous actions" setting in the Settings tab.</small>',
+            prompt,
+            () => { fn(e) }
+        )
     }
 }
@@ -287,6 +293,7 @@ function logError(msg, res, outputMsg) {
     logMsg(msg, "error", outputMsg)

     console.log("request error", res)
+    console.trace()
     setStatus("request", "error", "error")
 }
@ -778,11 +785,6 @@ function getTaskUpdater(task, reqBody, outputContainer) {
} }
msg += "</pre>" msg += "</pre>"
logError(msg, event, outputMsg) logError(msg, event, outputMsg)
} else {
let msg = `Unexpected Read Error:<br/><pre>Error:${
this.exception
}<br/>EventInfo: ${JSON.stringify(event, undefined, 4)}</pre>`
logError(msg, event, outputMsg)
} }
break break
} }
@ -879,15 +881,15 @@ function onTaskCompleted(task, reqBody, instance, outputContainer, stepUpdate) {
1. If you have set an initial image, please try reducing its dimension to ${MAX_INIT_IMAGE_DIMENSION}x${MAX_INIT_IMAGE_DIMENSION} or smaller.<br/> 1. If you have set an initial image, please try reducing its dimension to ${MAX_INIT_IMAGE_DIMENSION}x${MAX_INIT_IMAGE_DIMENSION} or smaller.<br/>
2. Try picking a lower level in the '<em>GPU Memory Usage</em>' setting (in the '<em>Settings</em>' tab).<br/> 2. Try picking a lower level in the '<em>GPU Memory Usage</em>' setting (in the '<em>Settings</em>' tab).<br/>
3. Try generating a smaller image.<br/>` 3. Try generating a smaller image.<br/>`
} else if (msg.toLowerCase().includes("DefaultCPUAllocator: not enough memory")) { } else if (msg.includes("DefaultCPUAllocator: not enough memory")) {
msg += `<br/><br/> msg += `<br/><br/>
Reason: Your computer is running out of system RAM! Reason: Your computer is running out of system RAM!
<br/> <br/><br/>
<b>Suggestions</b>: <b>Suggestions</b>:
<br/> <br/>
1. Try closing unnecessary programs and browser tabs.<br/> 1. Try closing unnecessary programs and browser tabs.<br/>
2. If that doesn't help, please increase your computer's virtual memory by following these steps for 2. If that doesn't help, please increase your computer's virtual memory by following these steps for
<a href="https://www.ibm.com/docs/en/opw/8.2.0?topic=tuning-optional-increasing-paging-file-size-windows-computers" target="_blank">Windows</a>, or <a href="https://www.ibm.com/docs/en/opw/8.2.0?topic=tuning-optional-increasing-paging-file-size-windows-computers" target="_blank">Windows</a> or
<a href="https://linuxhint.com/increase-swap-space-linux/" target="_blank">Linux</a>.<br/> <a href="https://linuxhint.com/increase-swap-space-linux/" target="_blank">Linux</a>.<br/>
3. Try restarting your computer.<br/>` 3. Try restarting your computer.<br/>`
} }
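The out-of-memory check above also fixes a subtle bug: the old code lower-cased the message and then searched for a mixed-case needle, which can never match. A small self-contained illustration (the sample message is made up):

```js
const sample = "RuntimeError: DefaultCPUAllocator: not enough memory"

// Old check: lower-cased haystack vs. mixed-case needle, always false
sample.toLowerCase().includes("DefaultCPUAllocator: not enough memory") // false

// New check compares the raw message; a case-insensitive variant would lower both sides
sample.includes("DefaultCPUAllocator: not enough memory")               // true
sample.toLowerCase().includes("defaultcpuallocator: not enough memory") // true
```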
@ -1224,6 +1226,8 @@ function getCurrentUserRequest() {
sampler_name: samplerField.value, sampler_name: samplerField.value,
//render_device: undefined, // Set device affinity. Prefer this device, but wont activate. //render_device: undefined, // Set device affinity. Prefer this device, but wont activate.
use_stable_diffusion_model: stableDiffusionModelField.value, use_stable_diffusion_model: stableDiffusionModelField.value,
clip_skip: clipSkipField.checked,
tiling: tilingField.value,
use_vae_model: vaeModelField.value, use_vae_model: vaeModelField.value,
stream_progress_updates: true, stream_progress_updates: true,
stream_image_progress: numOutputsTotal > 50 ? false : streamImageProgressField.checked, stream_image_progress: numOutputsTotal > 50 ? false : streamImageProgressField.checked,
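With the two new fields, the request body built here now carries the CLIP-skip flag and the tiling mode next to the model name. An illustrative slice of the payload (values are examples, not defaults):

```js
// Example excerpt of reqBody as assembled in getCurrentUserRequest()
const reqBody = {
    use_stable_diffusion_model: "sd-v1-4",
    clip_skip: true,  // state of the new "Clip Skip" checkbox
    tiling: "xy",     // the Seamless Tiling selection ("none" when tiling is off)
    use_vae_model: "",
    stream_progress_updates: true,
}
```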
@ -1261,6 +1265,10 @@ function getCurrentUserRequest() {
if (useUpscalingField.checked) { if (useUpscalingField.checked) {
newTask.reqBody.use_upscale = upscaleModelField.value newTask.reqBody.use_upscale = upscaleModelField.value
newTask.reqBody.upscale_amount = upscaleAmountField.value newTask.reqBody.upscale_amount = upscaleAmountField.value
if (upscaleModelField.value === "latent_upscaler") {
newTask.reqBody.upscale_amount = "2"
newTask.reqBody.latent_upscaler_steps = latentUpscalerStepsField.value
}
} }
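When the Latent Upscaler is selected, the scale is pinned to 2x and the new steps value is forwarded. Illustrative result of the branch above (`newTask` stubbed out for the example):

```js
const newTask = { reqBody: {} } // stand-in for the task object built in this function
newTask.reqBody.use_upscale = "latent_upscaler"
newTask.reqBody.upscale_amount = "2"         // pinned to 2x; the 4x option is disabled for this model
newTask.reqBody.latent_upscaler_steps = "10" // from the new steps slider (example value)
```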
if (hypernetworkModelField.value) { if (hypernetworkModelField.value) {
newTask.reqBody.use_hypernetwork_model = hypernetworkModelField.value newTask.reqBody.use_hypernetwork_model = hypernetworkModelField.value
@ -1575,6 +1583,20 @@ useUpscalingField.addEventListener("change", function(e) {
upscaleAmountField.disabled = !this.checked upscaleAmountField.disabled = !this.checked
}) })
function onUpscaleModelChange() {
let upscale4x = document.querySelector("#upscale_amount_4x")
if (upscaleModelField.value === "latent_upscaler") {
upscale4x.disabled = true
upscaleAmountField.value = "2"
latentUpscalerSettings.classList.remove("displayNone")
} else {
upscale4x.disabled = false
latentUpscalerSettings.classList.add("displayNone")
}
}
upscaleModelField.addEventListener("change", onUpscaleModelChange)
onUpscaleModelChange()
makeImageBtn.addEventListener("click", makeImage) makeImageBtn.addEventListener("click", makeImage)
document.onkeydown = function(e) { document.onkeydown = function(e) {
@ -1584,6 +1606,27 @@ document.onkeydown = function(e) {
} }
} }
/********************* Latent Upscaler Steps **************************/
function updateLatentUpscalerSteps() {
latentUpscalerStepsField.value = latentUpscalerStepsSlider.value
latentUpscalerStepsField.dispatchEvent(new Event("change"))
}
function updateLatentUpscalerStepsSlider() {
if (latentUpscalerStepsField.value < 1) {
latentUpscalerStepsField.value = 1
} else if (latentUpscalerStepsField.value > 50) {
latentUpscalerStepsField.value = 50
}
latentUpscalerStepsSlider.value = latentUpscalerStepsField.value
latentUpscalerStepsSlider.dispatchEvent(new Event("change"))
}
latentUpscalerStepsSlider.addEventListener("input", updateLatentUpscalerSteps)
latentUpscalerStepsField.addEventListener("input", updateLatentUpscalerStepsSlider)
updateLatentUpscalerSteps()
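The slider and the number field are kept in sync in both directions, with typed values clamped to the slider's 1–50 range. The same wiring, written as a reusable helper (a sketch, not code from this commit):

```js
// Generic two-way binding between a range slider and a numeric text field,
// clamping typed values to [min, max]; mirrors the latent-upscaler-steps wiring above.
function bindSliderAndField(slider, field, min, max) {
    slider.addEventListener("input", () => {
        field.value = slider.value
        field.dispatchEvent(new Event("change"))
    })
    field.addEventListener("input", () => {
        const clamped = Math.min(max, Math.max(min, parseInt(field.value, 10) || min))
        field.value = clamped
        slider.value = clamped
        slider.dispatchEvent(new Event("change"))
    })
    field.value = slider.value // initialize the field from the slider's default
}

// bindSliderAndField(latentUpscalerStepsSlider, latentUpscalerStepsField, 1, 50)
```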
/********************* Guidance **************************/ /********************* Guidance **************************/
function updateGuidanceScale() { function updateGuidanceScale() {
guidanceScaleField.value = guidanceScaleSlider.value / 10 guidanceScaleField.value = guidanceScaleSlider.value / 10

View File

@ -181,8 +181,8 @@ var PARAMETERS = [
{ {
id: "listen_to_network", id: "listen_to_network",
type: ParameterType.checkbox, type: ParameterType.checkbox,
label: "Make Stable Diffusion available on your network. Please restart the program after changing this.", label: "Make Stable Diffusion available on your network",
note: "Other devices on your network can access this web page", note: "Other devices on your network can access this web page. Please restart the program after changing this.",
icon: "fa-network-wired", icon: "fa-network-wired",
default: true, default: true,
saveInAppConfig: true, saveInAppConfig: true,
@ -191,7 +191,8 @@ var PARAMETERS = [
id: "listen_port", id: "listen_port",
type: ParameterType.custom, type: ParameterType.custom,
label: "Network port", label: "Network port",
note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'. Please restart the program after changing this.", note:
"Port that this server listens to. The '9000' part in 'http://localhost:9000'. Please restart the program after changing this.",
icon: "fa-anchor", icon: "fa-anchor",
render: (parameter) => { render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">` return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">`
@ -388,15 +389,27 @@ async function getAppConfig() {
if (config.net && config.net.listen_port !== undefined) { if (config.net && config.net.listen_port !== undefined) {
listenPortField.value = config.net.listen_port listenPortField.value = config.net.listen_port
} }
if (config.test_diffusers === undefined || config.update_branch === "main") {
testDiffusers.checked = false const testDiffusersEnabled = config.test_diffusers && config.update_branch !== "main"
testDiffusers.checked = testDiffusersEnabled
if (!testDiffusersEnabled) {
document.querySelector("#lora_model_container").style.display = "none" document.querySelector("#lora_model_container").style.display = "none"
document.querySelector("#lora_alpha_container").style.display = "none" document.querySelector("#lora_alpha_container").style.display = "none"
document.querySelector("#tiling_container").style.display = "none"
document.querySelectorAll("#sampler_name option.diffusers-only").forEach((option) => {
option.style.display = "none"
})
} else { } else {
testDiffusers.checked = config.test_diffusers && config.update_branch !== "main" document.querySelector("#lora_model_container").style.display = ""
document.querySelector("#lora_model_container").style.display = testDiffusers.checked ? "" : "none" document.querySelector("#lora_alpha_container").style.display = loraModelField.value ? "" : "none"
document.querySelector("#lora_alpha_container").style.display = document.querySelector("#tiling_container").style.display = ""
testDiffusers.checked && loraModelField.value !== "" ? "" : "none"
document.querySelectorAll("#sampler_name option.k_diffusion-only").forEach((option) => {
option.disabled = true
})
document.querySelector("#clip_skip_config").classList.remove("displayNone")
} }
console.log("get config status response", config) console.log("get config status response", config)
@ -558,6 +571,16 @@ async function getSystemInfo() {
if (allDeviceIds.length === 0) { if (allDeviceIds.length === 0) {
useCPUField.checked = true useCPUField.checked = true
useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
getParameterSettingsEntry("use_cpu").addEventListener("click", function() {
alert(
"Sorry, we could not find a compatible graphics card! Easy Diffusion supports graphics cards with minimum 2 GB of RAM. " +
"Only NVIDIA cards are supported on Windows. NVIDIA and AMD cards are supported on Linux.<br/><br/>" +
"If you have a compatible graphics card, please try updating to the latest drivers.<br/><br/>" +
"Only the CPU can be used for generating images, without a compatible graphics card.",
"No compatible graphics card found!"
)
})
} }
autoPickGPUsField.checked = devices["config"] === "auto" autoPickGPUsField.checked = devices["config"] === "auto"
@ -576,7 +599,7 @@ async function getSystemInfo() {
$("#use_gpus").val(activeDeviceIds) $("#use_gpus").val(activeDeviceIds)
} }
setDeviceInfo(devices) document.dispatchEvent(new CustomEvent("system_info_update", { detail: devices }))
setHostInfo(res["hosts"]) setHostInfo(res["hosts"])
let force = false let force = false
if (res["enforce_output_dir"] !== undefined) { if (res["enforce_output_dir"] !== undefined) {
@ -647,3 +670,5 @@ saveSettingsBtn.addEventListener("click", function() {
saveSettingsBtn.classList.add("active") saveSettingsBtn.classList.add("active")
Promise.all([savePromise, asyncDelay(300)]).then(() => saveSettingsBtn.classList.remove("active")) Promise.all([savePromise, asyncDelay(300)]).then(() => saveSettingsBtn.classList.remove("active"))
}) })
document.addEventListener("system_info_update", (e) => setDeviceInfo(e.detail))

View File

@ -843,57 +843,83 @@ function createTab(request) {
/* TOAST NOTIFICATIONS */ /* TOAST NOTIFICATIONS */
function showToast(message, duration = 5000, error = false) { function showToast(message, duration = 5000, error = false) {
const toast = document.createElement("div"); const toast = document.createElement("div")
toast.classList.add("toast-notification"); toast.classList.add("toast-notification")
if (error === true) { if (error === true) {
toast.classList.add("toast-notification-error"); toast.classList.add("toast-notification-error")
} }
toast.innerHTML = message; toast.innerHTML = message
document.body.appendChild(toast); document.body.appendChild(toast)
// Set the position of the toast on the screen // Set the position of the toast on the screen
const toastCount = document.querySelectorAll(".toast-notification").length; const toastCount = document.querySelectorAll(".toast-notification").length
const toastHeight = toast.offsetHeight; const toastHeight = toast.offsetHeight
const previousToastsHeight = Array.from(document.querySelectorAll(".toast-notification")) const previousToastsHeight = Array.from(document.querySelectorAll(".toast-notification"))
.slice(0, -1) // exclude current toast .slice(0, -1) // exclude current toast
.reduce((totalHeight, toast) => totalHeight + toast.offsetHeight + 10, 0); // add 10 pixels for spacing .reduce((totalHeight, toast) => totalHeight + toast.offsetHeight + 10, 0) // add 10 pixels for spacing
toast.style.bottom = `${10 + previousToastsHeight}px`; toast.style.bottom = `${10 + previousToastsHeight}px`
toast.style.right = "10px"; toast.style.right = "10px"
// Delay the removal of the toast until animation has completed // Delay the removal of the toast until animation has completed
const removeToast = () => { const removeToast = () => {
toast.classList.add("hide"); toast.classList.add("hide")
const removeTimeoutId = setTimeout(() => { const removeTimeoutId = setTimeout(() => {
toast.remove(); toast.remove()
// Adjust the position of remaining toasts // Adjust the position of remaining toasts
const remainingToasts = document.querySelectorAll(".toast-notification"); const remainingToasts = document.querySelectorAll(".toast-notification")
const removedToastBottom = toast.getBoundingClientRect().bottom; const removedToastBottom = toast.getBoundingClientRect().bottom
remainingToasts.forEach((toast) => { remainingToasts.forEach((toast) => {
if (toast.getBoundingClientRect().bottom < removedToastBottom) { if (toast.getBoundingClientRect().bottom < removedToastBottom) {
toast.classList.add("slide-down"); toast.classList.add("slide-down")
} }
}); })
// Wait for the slide-down animation to complete // Wait for the slide-down animation to complete
setTimeout(() => { setTimeout(() => {
// Remove the slide-down class after the animation has completed // Remove the slide-down class after the animation has completed
const slidingToasts = document.querySelectorAll(".slide-down"); const slidingToasts = document.querySelectorAll(".slide-down")
slidingToasts.forEach((toast) => { slidingToasts.forEach((toast) => {
toast.classList.remove("slide-down"); toast.classList.remove("slide-down")
}); })
// Adjust the position of remaining toasts again, in case there are multiple toasts being removed at once // Adjust the position of remaining toasts again, in case there are multiple toasts being removed at once
const remainingToastsDown = document.querySelectorAll(".toast-notification"); const remainingToastsDown = document.querySelectorAll(".toast-notification")
let heightSoFar = 0; let heightSoFar = 0
remainingToastsDown.forEach((toast) => { remainingToastsDown.forEach((toast) => {
toast.style.bottom = `${10 + heightSoFar}px`; toast.style.bottom = `${10 + heightSoFar}px`
heightSoFar += toast.offsetHeight + 10; // add 10 pixels for spacing heightSoFar += toast.offsetHeight + 10 // add 10 pixels for spacing
}); })
}, 0); // The duration of the slide-down animation (in milliseconds) }, 0) // The duration of the slide-down animation (in milliseconds)
}, 500); }, 500)
}; }
// Remove the toast after specified duration // Remove the toast after specified duration
setTimeout(removeToast, duration); setTimeout(removeToast, duration)
}
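Aside from the formatting change (dropping semicolons), `showToast` is unchanged: each new toast is positioned above the ones already on screen by summing their heights plus 10px of spacing. Typical calls look like this (messages are illustrative):

```js
showToast("Settings saved")                          // default: 5 s, info styling
showToast("Could not reach the server", 8000, true)  // longer duration, error styling
```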
function alert(msg, title) {
title = title || ""
$.alert({
theme: "modern",
title: title,
useBootstrap: false,
animateFromElement: false,
content: msg,
})
}
function confirm(msg, title, fn) {
title = title || ""
$.confirm({
theme: "modern",
title: title,
useBootstrap: false,
animateFromElement: false,
content: msg,
buttons: {
yes: fn,
cancel: () => {},
},
})
} }
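The new `alert()` and `confirm()` helpers keep the jquery-confirm specifics (the `modern` theme, `useBootstrap: false`) in one place; callers such as `shiftOrConfirm` and the no-compatible-GPU handler only pass content, an optional title and, for `confirm`, a yes-callback. Example calls (the callback name is a placeholder):

```js
alert("Could not load the model file.<br/>Please check the models folder.", "Model missing")

confirm(
    "This will stop all running tasks.",
    "Stop everything?",
    () => { stopAllTasks() } // stopAllTasks is a placeholder for whatever should run on "yes"
)
```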

View File

@ -23,7 +23,7 @@
img.addEventListener( img.addEventListener(
"load", "load",
function() { function() {
img.closest(".imageTaskContainer").scrollIntoView() img?.closest(".imageTaskContainer").scrollIntoView()
}, },
{ once: true } { once: true }
) )
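Note that the optional chaining above only guards the first call: if `img` were null, `closest()` is skipped but `.scrollIntoView()` would still be invoked on `undefined`, and `closest()` itself can return null when the image is no longer inside a task container. A fully guarded variant (a suggestion, not what this commit ships) would be:

```js
img?.closest(".imageTaskContainer")?.scrollIntoView()
```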

View File

@ -403,6 +403,8 @@
// Batch main loop // Batch main loop
for (let i = 0; i < iterations; i++) { for (let i = 0; i < iterations; i++) {
let alpha = (start + i * step) / 100 let alpha = (start + i * step) / 100
if (isTabActive(tabSettingsBatch)) {
switch (document.querySelector("#merge-interpolation").value) { switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep": case "SmoothStep":
alpha = smoothstep(alpha) alpha = smoothstep(alpha)
@ -414,13 +416,15 @@
alpha = smootheststep(alpha) alpha = smootheststep(alpha)
break break
} }
}
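Wrapping the interpolation switch in `isTabActive(tabSettingsBatch)` means the easing selection (which lives in the Batch tab) is only applied when that tab is active; otherwise the linear alpha is kept. For reference, the standard forms of the curves named above (the project's own helpers are defined outside this hunk):

```js
const smoothstep = (x) => x * x * (3 - 2 * x)                   // 3x^2 - 2x^3
const smootherstep = (x) => x * x * x * (x * (6 * x - 15) + 10) // 6x^5 - 15x^4 + 10x^3
```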
addLogMessage(`merging batch job ${i + 1}/${iterations}, alpha = ${alpha.toFixed(5)}...`) addLogMessage(`merging batch job ${i + 1}/${iterations}, alpha = ${alpha.toFixed(5)}...`)
request["out_path"] = document.querySelector("#merge-filename").value request["out_path"] = document.querySelector("#merge-filename").value
request["out_path"] += "-" + alpha.toFixed(5) + "." + document.querySelector("#merge-format").value request["out_path"] += "-" + alpha.toFixed(5) + "." + document.querySelector("#merge-format").value
addLogMessage(`&nbsp;&nbsp;filename: ${request["out_path"]}`) addLogMessage(`&nbsp;&nbsp;filename: ${request["out_path"]}`)
request["ratio"] = alpha // sdkit documentation: "ratio - the ratio of the second model. 1 means only the second model will be used."
request["ratio"] = 1-alpha
let res = await fetch("/model/merge", { let res = await fetch("/model/merge", {
method: "POST", method: "POST",
headers: { "Content-Type": "application/json" }, headers: { "Content-Type": "application/json" },