diff --git a/CHANGES.md b/CHANGES.md index 9fe2cff0..2e45c279 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -4,6 +4,7 @@ ### Major Changes - **Nearly twice as fast** - significantly faster speed of image generation. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast - **Mac M1/M2 support** - Experimental support for Mac M1/M2. Thanks @michaelgallacher, @JeLuf and vishae. +- **AMD support for Linux** - Experimental support for AMD GPUs on Linux. Thanks @DianaNites and @JeLuf. - **Full support for Stable Diffusion 2.1 (including CPU)** - supports loading v1.4 or v2.0 or v2.1 models seamlessly. No need to enable "Test SD2", and no need to add `sd2_` to your SD 2.0 model file names. Works on CPU as well. - **Memory optimized Stable Diffusion 2.1** - you can now use Stable Diffusion 2.1 models, with the same low VRAM optimizations that we've always had for SD 1.4. Please note, the SD 2.0 and 2.1 models require more GPU and System RAM, as compared to the SD 1.4 and 1.5 models. - **11 new samplers!** - explore the new samplers, some of which can generate great images in less than 10 inference steps! We've added the Karras and UniPC samplers. Thanks @Schorny for the UniPC samplers. @@ -21,6 +22,16 @@ Our focus remains on an easy installation experience and an easy user interface, while still being pretty powerful in terms of features and speed. ### Detailed changelog +* 2.5.39 - 25 May 2023 - (beta-only) Seamless Tiling - make seamlessly tiled images, e.g. rock and grass textures. Thanks @JeLuf. +* 2.5.38 - 24 May 2023 - Better error reporting, and an explanation shown if the user cannot disable the "Use CPU" setting. +* 2.5.38 - 23 May 2023 - Add Latent Upscaler as another option for upscaling images. Thanks @JeLuf for the implementation of the Latent Upscaler model. +* 2.5.37 - 19 May 2023 - (beta-only) Two more samplers: DDPM and DEIS. Also disables the samplers that aren't working yet in the Diffusers version. Thanks @ogmaresca. +* 2.5.37 - 19 May 2023 - (beta-only) Support CLIP-Skip. You can set this option under the models dropdown. Thanks @JeLuf. +* 2.5.37 - 19 May 2023 - (beta-only) More VRAM optimizations for all modes in diffusers. The VRAM usage for diffusers in "low" and "balanced" should now be equal to or less than the non-diffusers version. Performs softmax in half precision, like sdkit does. +* 2.5.36 - 16 May 2023 - (beta-only) More VRAM optimizations for "balanced" VRAM usage mode. +* 2.5.36 - 11 May 2023 - (beta-only) More VRAM optimizations for "low" VRAM usage mode. +* 2.5.36 - 10 May 2023 - (beta-only) Bug fix for "meta" error when using a LoRA in 'low' VRAM usage mode. +* 2.5.35 - 8 May 2023 - Allow dragging a zoomed-in image (after opening an image with the "expand" button). Thanks @ogmaresca. * 2.5.35 - 3 May 2023 - (beta-only) First round of VRAM Optimizations for the "Test Diffusers" version. This change significantly reduces the amount of VRAM used by the diffusers version during image generation. The VRAM usage is still not equal to the "non-diffusers" version, but more optimizations are coming soon. * 2.5.34 - 22 Apr 2023 - Don't start the browser in an incognito new profile (on Windows). Thanks @JeLuf. * 2.5.33 - 21 Apr 2023 - Install PyTorch 2.0 on new installations (on Windows and Linux).
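Editor's note: to make the new beta options above concrete, here is a minimal sketch of how they can appear in a render request body. The field names and defaults are taken from the `ui/easydiffusion/types.py` and `ui/media/js/main.js` changes later in this patch; the prompt and the specific values chosen are purely illustrative.

```js
// Hypothetical request body exercising the new fields (illustrative values only;
// names and defaults come from TaskData / GenerateImageRequest in this patch).
const reqBody = {
    prompt: "seamless rock texture",
    sampler_name: "ddpm", // one of the two newly added samplers (ddpm, deis)
    tiling: "xy", // Seamless Tiling: "none", "x", "y" or "xy"
    clip_skip: true, // the new CLIP-Skip toggle (defaults to false)
    use_upscale: "latent_upscaler", // the new Latent Upscaler option
    upscale_amount: "2", // the UI pins the latent upscaler to 2x
    latent_upscaler_steps: 10, // default per TaskData (clamped to 1..50 in the UI)
}
```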
diff --git a/PRIVACY.md b/PRIVACY.md new file mode 100644 index 00000000..6c997997 --- /dev/null +++ b/PRIVACY.md @@ -0,0 +1,9 @@ +// Placeholder until a more formal and legal-sounding privacy policy document is written, but the information below is true. + +This is a summary of whether Easy Diffusion uses your data or tracks you: +* The short answer is - Easy Diffusion does *not* use your data, and does *not* track you. +* Easy Diffusion does not send your prompts, usage data, or analytics to anyone. There is no tracking. We don't even know how many people use Easy Diffusion, let alone their prompts. +* Easy Diffusion fetches updates to the code whenever it starts up. It does this by contacting GitHub directly, via SSL (secure connection). Only your computer and GitHub and [this repository](https://github.com/cmdr2/stable-diffusion-ui) are involved, and no third party is involved. Some countries intercept SSL connections; that's not something we can do much about. GitHub does *not* share statistics (even with me) about how many people fetched code updates. +* Easy Diffusion fetches the models from huggingface.co and github.com if they don't exist on your PC. For example, if the safety checker (NSFW) model doesn't exist, it'll try to download it. +* Easy Diffusion fetches code packages from pypi.org, which is the standard hosting service for all Python projects. That's where packages installed via `pip install` are stored. +* Occasionally, antivirus software is known to *incorrectly* flag and delete some model files, which will result in Easy Diffusion re-downloading `pytorch_model.bin`. This *incorrect deletion* affects other Stable Diffusion UIs as well, like Invoke AI - https://itch.io/post/7509488 diff --git a/README.md b/README.md index 3cb0bf8e..6a629e57 100644 --- a/README.md +++ b/README.md @@ -17,9 +17,11 @@ Click the download button for your operating system:

**Hardware requirements:** -- **Windows:** NVIDIA graphics card, or run on your CPU -- **Linux:** NVIDIA or AMD graphics card, or run on your CPU -- **Mac:** M1 or M2, or run on your CPU +- **Windows:** NVIDIA graphics card (minimum 2 GB of VRAM), or run on your CPU. +- **Linux:** NVIDIA or AMD graphics card (minimum 2 GB of VRAM), or run on your CPU. +- **Mac:** M1 or M2, or run on your CPU. +- Minimum 8 GB of system RAM. +- At least 25 GB of space on the hard disk. The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance. @@ -58,7 +60,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages. ### Image generation - **Supports**: "*Text to Image*" and "*Image to Image*". -- **19 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`. +- **21 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `ddpm`, `deis`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`. - **In-Painting**: Specify areas of your image to paint into. - **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program. - **Face Correction (GFPGAN)** @@ -84,7 +86,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages. ### Performance and security - **Fast**: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB. -- **Low Memory Usage**: Create 512x512 images with less than 3 GB of GPU RAM, and 768x768 images with less than 4 GB of GPU RAM! +- **Low Memory Usage**: Create 512x512 images with less than 2 GB of GPU RAM, and 768x768 images with less than 3 GB of GPU RAM! - **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU. - **Multi-GPU support**: Automatically spreads your tasks across multiple GPUs (if available), for faster performance! - **Auto scan for malicious models**: Uses picklescan to prevent malicious models. @@ -113,14 +115,6 @@ Useful for judging (and stopping) an image quickly, without waiting for it to fi ![Screenshot of task queue](https://user-images.githubusercontent.com/844287/217043984-0b35f73b-1318-47cb-9eed-a2a91b430490.png) - -# System Requirements -1. Windows 10/11, or Linux. Experimental support for Mac is coming soon. -2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode". -3. Minimum 8 GB of RAM and 25GB of disk space. -You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed. - ---- # How to use?
diff --git a/scripts/check_modules.py b/scripts/check_modules.py index 031f7d66..3686ca00 100644 --- a/scripts/check_modules.py +++ b/scripts/check_modules.py @@ -18,7 +18,7 @@ os_name = platform.system() modules_to_check = { "torch": ("1.11.0", "1.13.1", "2.0.0"), "torchvision": ("0.12.0", "0.14.1", "0.15.1"), - "sdkit": "1.0.87", + "sdkit": "1.0.98", "stable-diffusion-sdkit": "2.1.4", "rich": "12.6.0", "uvicorn": "0.19.0", @@ -130,10 +130,13 @@ def include_cuda_versions(module_versions: tuple) -> tuple: def is_amd_on_linux(): if os_name == "Linux": + try: + with open("/proc/bus/pci/devices", "r") as f: + device_info = f.read() + if "amdgpu" in device_info and "nvidia" not in device_info: + return True + except OSError: + return False - with open("/proc/bus/pci/devices", "r") as f: - device_info = f.read() - if "amdgpu" in device_info and "nvidia" not in device_info: - return True return False diff --git a/scripts/get_config.py b/scripts/get_config.py index 02523364..9cdfb2fe 100644 --- a/scripts/get_config.py +++ b/scripts/get_config.py @@ -1,5 +1,6 @@ import os import argparse +import sys # The config file is in the same directory as this script config_directory = os.path.dirname(__file__) @@ -21,16 +22,16 @@ if os.path.isfile(config_yaml): try: config = yaml.safe_load(configfile) except Exception as e: - print(e) - exit() + print(e, file=sys.stderr) + config = {} elif os.path.isfile(config_json): import json with open(config_json, 'r') as configfile: try: config = json.load(configfile) except Exception as e: - print(e) - exit() + print(e, file=sys.stderr) + config = {} else: config = {} diff --git a/ui/easydiffusion/app.py b/ui/easydiffusion/app.py index b6318f01..3064e151 100644 --- a/ui/easydiffusion/app.py +++ b/ui/easydiffusion/app.py @@ -10,6 +10,8 @@ import warnings from easydiffusion import task_manager from easydiffusion.utils import log from rich.logging import RichHandler +from rich.console import Console +from rich.panel import Panel from sdkit.utils import log as sdkit_log # hack, so we can overwrite the log config # Remove all handlers associated with the root logger object.
@@ -213,11 +215,19 @@ def open_browser(): ui = config.get("ui", {}) net = config.get("net", {}) port = net.get("listen_port", 9000) + if ui.get("open_browser_on_start", True): import webbrowser webbrowser.open(f"http://localhost:{port}") + Console().print(Panel( + "\n" + + "[white]Easy Diffusion is ready to serve requests.\n\n" + + "A new browser tab should have been opened by now.\n" + + f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n", + title="Easy Diffusion is ready", style="bold yellow on blue")) + def get_image_modifiers(): modifiers_json_path = os.path.join(SD_UI_DIR, "modifiers.json") diff --git a/ui/easydiffusion/device_manager.py b/ui/easydiffusion/device_manager.py index 59c07ea3..dc705927 100644 --- a/ui/easydiffusion/device_manager.py +++ b/ui/easydiffusion/device_manager.py @@ -98,8 +98,8 @@ def auto_pick_devices(currently_active_devices): continue mem_free, mem_total = torch.cuda.mem_get_info(device) - mem_free /= float(10 ** 9) - mem_total /= float(10 ** 9) + mem_free /= float(10**9) + mem_total /= float(10**9) device_name = torch.cuda.get_device_name(device) log.debug( f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb" @@ -165,6 +165,7 @@ def needs_to_force_full_precision(context): and ( " 1660" in device_name or " 1650" in device_name + or " 1630" in device_name or " t400" in device_name or " t550" in device_name or " t600" in device_name @@ -181,7 +182,7 @@ def get_max_vram_usage_level(device): else: return "high" - mem_total /= float(10 ** 9) + mem_total /= float(10**9) if mem_total < 4.5: return "low" elif mem_total < 6.5: @@ -223,10 +224,10 @@ def is_device_compatible(device): # Memory check try: _, mem_total = torch.cuda.mem_get_info(device) - mem_total /= float(10 ** 9) - if mem_total < 3.0: + mem_total /= float(10**9) + if mem_total < 1.9: if is_device_compatible.history.get(device) == None: - log.warn(f"GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion") + log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion") is_device_compatible.history[device] = 1 return False except RuntimeError as e: diff --git a/ui/easydiffusion/model_manager.py b/ui/easydiffusion/model_manager.py index a0b2489a..2a8b57fd 100644 --- a/ui/easydiffusion/model_manager.py +++ b/ui/easydiffusion/model_manager.py @@ -53,15 +53,21 @@ def load_default_models(context: Context): scan_model=context.model_paths[model_type] != None and not context.model_paths[model_type].endswith(".safetensors"), ) + if model_type in context.model_load_errors: + del context.model_load_errors[model_type] except Exception as e: log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]") log.exception(e) del context.model_paths[model_type] + context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks + def unload_all(context: Context): for model_type in KNOWN_MODEL_TYPES: unload_model(context, model_type) + if model_type in context.model_load_errors: + del context.model_load_errors[model_type] def resolve_model_to_use(model_name: str = None, model_type: str = None): @@ -107,19 +113,17 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None): def reload_models_if_necessary(context: Context, task_data: TaskData): - if hasattr(task_data, 'use_face_correction') and task_data.use_face_correction: - face_correction_model = "codeformer" if 
"codeformer" in task_data.use_face_correction.lower() else "gfpgan" - face_correction_value = task_data.use_face_correction - else: - face_correction_model = "gfpgan" - face_correction_value = None + face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else "" + upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else "" model_paths_in_req = { "stable-diffusion": task_data.use_stable_diffusion_model, "vae": task_data.use_vae_model, "hypernetwork": task_data.use_hypernetwork_model, - face_correction_model: face_correction_value, - "realesrgan": task_data.use_upscale, + "codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None, + "gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None, + "realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None, + "latent_upscaler": True if "latent_upscaler" in upscale_lower else None, "nsfw_checker": True if task_data.block_nsfw else None, "lora": task_data.use_lora_model, } @@ -129,14 +133,21 @@ def reload_models_if_necessary(context: Context, task_data: TaskData): if context.model_paths.get(model_type) != path } - if set_vram_optimizations(context): # reload SD + if set_vram_optimizations(context) or set_clip_skip(context, task_data): # reload SD models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"] for model_type, model_path_in_req in models_to_reload.items(): context.model_paths[model_type] = model_path_in_req action_fn = unload_model if context.model_paths[model_type] is None else load_model - action_fn(context, model_type, scan_model=False) # we've scanned them already + try: + action_fn(context, model_type, scan_model=False) # we've scanned them already + if model_type in context.model_load_errors: + del context.model_load_errors[model_type] + except Exception as e: + log.exception(e) + if action_fn == load_model: + context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks def resolve_model_paths(task_data: TaskData): @@ -149,10 +160,18 @@ def resolve_model_paths(task_data: TaskData): if task_data.use_face_correction: task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, "gfpgan") - if task_data.use_upscale: + if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower(): task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan") +def fail_if_models_did_not_load(context: Context): + for model_type in KNOWN_MODEL_TYPES: + if model_type in context.model_load_errors: + e = context.model_load_errors[model_type] + raise Exception(f"Could not load the {model_type} model! 
Reason: " + e) + # concat 'e', don't use in format string (injection attack) + + def set_vram_optimizations(context: Context): config = app.getConfig() vram_usage_level = config.get("vram_usage_level", "balanced") @@ -164,6 +183,16 @@ def set_vram_optimizations(context: Context): return False +def set_clip_skip(context: Context, task_data: TaskData): + clip_skip = task_data.clip_skip + + if clip_skip != context.clip_skip: + context.clip_skip = clip_skip + return True + + return False + + def make_model_folders(): for model_type in KNOWN_MODEL_TYPES: model_dir_path = os.path.join(app.MODELS_DIR, model_type) diff --git a/ui/easydiffusion/renderer.py b/ui/easydiffusion/renderer.py index e1176c8b..1ebd05ec 100644 --- a/ui/easydiffusion/renderer.py +++ b/ui/easydiffusion/renderer.py @@ -33,6 +33,7 @@ def init(device): context.stop_processing = False context.temp_images = {} context.partial_x_samples = None + context.model_load_errors = {} from easydiffusion import app @@ -72,7 +73,7 @@ def make_images( def print_task_info(req: GenerateImageRequest, task_data: TaskData): - req_str = pprint.pformat(get_printable_request(req)).replace("[", "\[") + req_str = pprint.pformat(get_printable_request(req, task_data)).replace("[", "\[") task_str = pprint.pformat(task_data.dict()).replace("[", "\[") log.info(f"request: {req_str}") log.info(f"task data: {task_str}") @@ -95,7 +96,7 @@ def make_images_internal( task_data.stream_image_progress_interval, ) gc(context) - filtered_images = filter_images(task_data, images, user_stopped) + filtered_images = filter_images(req, task_data, images, user_stopped) if task_data.save_to_disk_path is not None: save_images_to_disk(images, filtered_images, req, task_data) @@ -151,24 +152,38 @@ def generate_images_internal( return images, user_stopped -def filter_images(task_data: TaskData, images: list, user_stopped): +def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped): if user_stopped: return images filters_to_apply = [] + filter_params = {} if task_data.block_nsfw: filters_to_apply.append("nsfw_checker") if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower(): filters_to_apply.append("codeformer") elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower(): filters_to_apply.append("gfpgan") - if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower(): - filters_to_apply.append("realesrgan") + if task_data.use_upscale: + if "realesrgan" in task_data.use_upscale.lower(): + filters_to_apply.append("realesrgan") + elif task_data.use_upscale == "latent_upscaler": + filters_to_apply.append("latent_upscaler") + + filter_params["latent_upscaler_options"] = { + "prompt": req.prompt, + "negative_prompt": req.negative_prompt, + "seed": req.seed, + "num_inference_steps": task_data.latent_upscaler_steps, + "guidance_scale": 0, + } + + filter_params["scale"] = task_data.upscale_amount if len(filters_to_apply) == 0: return images - return apply_filters(context, filters_to_apply, images, scale=task_data.upscale_amount) + return apply_filters(context, filters_to_apply, images, **filter_params) def construct_response(images: list, seeds: list, task_data: TaskData, base_seed: int): diff --git a/ui/easydiffusion/task_manager.py b/ui/easydiffusion/task_manager.py index c11acbec..a91cd9c6 100644 --- a/ui/easydiffusion/task_manager.py +++ b/ui/easydiffusion/task_manager.py @@ -336,6 +336,7 @@ def thread_render(device): current_state = ServerStates.LoadingModel 
model_manager.resolve_model_paths(task.task_data) model_manager.reload_models_if_necessary(renderer.context, task.task_data) + model_manager.fail_if_models_did_not_load(renderer.context) current_state = ServerStates.Rendering task.response = renderer.make_images( diff --git a/ui/easydiffusion/types.py b/ui/easydiffusion/types.py index 7462355f..e4426714 100644 --- a/ui/easydiffusion/types.py +++ b/ui/easydiffusion/types.py @@ -23,6 +23,7 @@ class GenerateImageRequest(BaseModel): sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms" hypernetwork_strength: float = 0 lora_alpha: float = 0 + tiling: str = "none" # "none", "x", "y", "xy" class TaskData(BaseModel): @@ -32,8 +33,9 @@ class TaskData(BaseModel): vram_usage_level: str = "balanced" # or "low" or "high" use_face_correction: str = None # or "GFPGANv1.3" - use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" + use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" or "latent_upscaler" upscale_amount: int = 4 # or 2 + latent_upscaler_steps: int = 10 use_stable_diffusion_model: str = "sd-v1-4" # use_stable_diffusion_config: str = "v1-inference" use_vae_model: str = None @@ -48,6 +50,7 @@ class TaskData(BaseModel): metadata_output_format: str = "txt" # or "json" stream_image_progress: bool = False stream_image_progress_interval: int = 5 + clip_skip: bool = False class MergeRequest(BaseModel): diff --git a/ui/easydiffusion/utils/save_utils.py b/ui/easydiffusion/utils/save_utils.py index a7043f27..ff2906a6 100644 --- a/ui/easydiffusion/utils/save_utils.py +++ b/ui/easydiffusion/utils/save_utils.py @@ -15,23 +15,26 @@ img_number_regex = re.compile("([0-9]{5,})") # keep in sync with `ui/media/js/dnd.js` TASK_TEXT_MAPPING = { "prompt": "Prompt", + "negative_prompt": "Negative Prompt", + "seed": "Seed", + "use_stable_diffusion_model": "Stable Diffusion model", + "clip_skip": "Clip Skip", + "use_vae_model": "VAE model", + "sampler_name": "Sampler", "width": "Width", "height": "Height", - "seed": "Seed", "num_inference_steps": "Steps", "guidance_scale": "Guidance Scale", "prompt_strength": "Prompt Strength", + "use_lora_model": "LoRA model", + "lora_alpha": "LoRA Strength", + "use_hypernetwork_model": "Hypernetwork model", + "hypernetwork_strength": "Hypernetwork Strength", + "tiling": "Seamless Tiling", "use_face_correction": "Use Face Correction", "use_upscale": "Use Upscaling", "upscale_amount": "Upscale By", - "sampler_name": "Sampler", - "negative_prompt": "Negative Prompt", - "use_stable_diffusion_model": "Stable Diffusion model", - "use_vae_model": "VAE model", - "use_hypernetwork_model": "Hypernetwork model", - "hypernetwork_strength": "Hypernetwork Strength", - "use_lora_model": "LoRA model", - "lora_alpha": "LoRA Strength", + "latent_upscaler_steps": "Latent Upscaler Steps" } time_placeholders = { @@ -168,41 +171,23 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR output_quality=task_data.output_quality, output_lossless=task_data.output_lossless, ) - if task_data.metadata_output_format.lower() in ["json", "txt", "embed"]: - save_dicts( - metadata_entries, - save_dir_path, - file_name=make_filter_filename, - output_format=task_data.metadata_output_format, - file_format=task_data.output_format, - ) + if task_data.metadata_output_format: + for metadata_output_format in task_data.metadata_output_format.split(","): + if metadata_output_format.lower() in ["json", "txt", "embed"]: + save_dicts( + metadata_entries,
+ save_dir_path, + file_name=make_filter_filename, + output_format=metadata_output_format, + file_format=task_data.output_format, + ) def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData): - metadata = get_printable_request(req) - metadata.update( - { - "use_stable_diffusion_model": task_data.use_stable_diffusion_model, - "use_vae_model": task_data.use_vae_model, - "use_hypernetwork_model": task_data.use_hypernetwork_model, - "use_lora_model": task_data.use_lora_model, - "use_face_correction": task_data.use_face_correction, - "use_upscale": task_data.use_upscale, - } - ) - if metadata["use_upscale"] is not None: - metadata["upscale_amount"] = task_data.upscale_amount - if task_data.use_hypernetwork_model is None: - del metadata["hypernetwork_strength"] - if task_data.use_lora_model is None: - if "lora_alpha" in metadata: - del metadata["lora_alpha"] - app_config = app.getConfig() - if not app_config.get("test_diffusers", False) and "use_lora_model" in metadata: - del metadata["use_lora_model"] + metadata = get_printable_request(req, task_data) # if text, format it in the text format expected by the UI - is_txt_format = task_data.metadata_output_format.lower() == "txt" + is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",") if is_txt_format: metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING} @@ -213,12 +198,35 @@ def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskD return entries -def get_printable_request(req: GenerateImageRequest): - metadata = req.dict() - del metadata["init_image"] - del metadata["init_image_mask"] - if req.init_image is None: +def get_printable_request(req: GenerateImageRequest, task_data: TaskData): + req_metadata = req.dict() + task_data_metadata = task_data.dict() + + # Save the metadata in the order defined in TASK_TEXT_MAPPING + metadata = {} + for key in TASK_TEXT_MAPPING.keys(): + if key in req_metadata: + metadata[key] = req_metadata[key] + elif key in task_data_metadata: + metadata[key] = task_data_metadata[key] + + # Clean up the metadata + if req.init_image is None and "prompt_strength" in metadata: del metadata["prompt_strength"] + if task_data.use_upscale is None and "upscale_amount" in metadata: + del metadata["upscale_amount"] + if task_data.use_hypernetwork_model is None and "hypernetwork_strength" in metadata: + del metadata["hypernetwork_strength"] + if task_data.use_lora_model is None and "lora_alpha" in metadata: + del metadata["lora_alpha"] + if task_data.use_upscale != "latent_upscaler" and "latent_upscaler_steps" in metadata: + del metadata["latent_upscaler_steps"] + + app_config = app.getConfig() + if not app_config.get("test_diffusers", False): + for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata): + del metadata[key] + return metadata diff --git a/ui/index.html b/ui/index.html index be522689..21ec2550 100644 --- a/ui/index.html +++ b/ui/index.html @@ -30,7 +30,7 @@

Easy Diffusion - v2.5.35 + v2.5.39

@@ -135,10 +135,13 @@ [HTML markup lost in extraction — this hunk adds the "Clip Skip" setting, with a "Click to learn more about Clip Skip" link, between the model row ("Click to learn more about custom models") and the VAE row ("Click to learn more about VAEs")]
@@ -154,16 +157,18 @@ [HTML markup lost in extraction — this hunk reworks the sampler dropdown options ("Click to learn more about samplers")]
@@ -231,6 +236,15 @@ [HTML markup lost in extraction — this hunk adds the "Seamless Tiling" dropdown ("Click to learn more about Seamless Tiling")]
diff --git a/ui/media/css/image-modal.css b/ui/media/css/image-modal.css index 1001807c..64096003 100644 --- a/ui/media/css/image-modal.css +++ b/ui/media/css/image-modal.css @@ -70,6 +70,14 @@ max-height: calc(100vh - (var(--popup-padding) * 2) - 4px); } +#viewFullSizeImgModal img:not(.natural-zoom) { + cursor: grab; +} + +#viewFullSizeImgModal .grabbing img:not(.natural-zoom) { + cursor: grabbing; +} + #viewFullSizeImgModal .content > div::-webkit-scrollbar-track, #viewFullSizeImgModal .content > div::-webkit-scrollbar-corner { background: rgba(0, 0, 0, .5) } diff --git a/ui/media/css/main.css b/ui/media/css/main.css index ba513237..8f4f49fa 100644 --- a/ui/media/css/main.css +++ b/ui/media/css/main.css @@ -1303,6 +1303,12 @@ body.wait-pause { display:none !important; } +#latent_upscaler_settings { + padding-top: 3pt; + padding-bottom: 3pt; + padding-left: 5pt; +} + /* TOAST NOTIFICATIONS */ .toast-notification { position: fixed; diff --git a/ui/media/js/auto-save.js b/ui/media/js/auto-save.js index 1e536247..bbcbf9a5 100644 --- a/ui/media/js/auto-save.js +++ b/ui/media/js/auto-save.js @@ -13,6 +13,7 @@ const SETTINGS_IDS_LIST = [ "num_outputs_total", "num_outputs_parallel", "stable_diffusion_model", + "clip_skip", "vae_model", "hypernetwork_model", "lora_model", @@ -24,6 +25,7 @@ const SETTINGS_IDS_LIST = [ "prompt_strength", "hypernetwork_strength", "lora_alpha", + "tiling", "output_format", "output_quality", "output_lossless", @@ -33,6 +35,7 @@ const SETTINGS_IDS_LIST = [ "gfpgan_model", "use_upscale", "upscale_amount", + "latent_upscaler_steps", "block_nsfw", "show_only_filtered_image", "upscale_model", @@ -168,6 +171,22 @@ function loadSettings() { } }) CURRENTLY_LOADING_SETTINGS = false + } else if (localStorage.length < 2) { + // localStorage is too short for OldSettings + // So this is likely the first time Easy Diffusion is running. + // Initialize vram_usage_level based on the available VRAM + function initGPUProfile(event) { + if ( "detail" in event + && "active" in event.detail + && "cuda:0" in event.detail.active + && event.detail.active["cuda:0"].mem_total < 4.5 ) + { + vramUsageLevelField.value = "low" + vramUsageLevelField.dispatchEvent(new Event("change")) + } + document.removeEventListener("system_info_update", initGPUProfile) + } + document.addEventListener("system_info_update", initGPUProfile) } else { CURRENTLY_LOADING_SETTINGS = true tryLoadOldSettings() } diff --git a/ui/media/js/dnd.js b/ui/media/js/dnd.js index 548b06ad..4e50b638 100644 --- a/ui/media/js/dnd.js +++ b/ui/media/js/dnd.js @@ -37,6 +37,7 @@ function parseBoolean(stringValue) { } } +// keep in sync with `ui/easydiffusion/utils/save_utils.py` const TASK_MAPPING = { prompt: { name: "Prompt", @@ -78,6 +79,7 @@ const TASK_MAPPING = { if (!widthField.value) { widthField.value = oldVal } + widthField.dispatchEvent(new Event("change")) }, readUI: () => parseInt(widthField.value), parse: (val) => parseInt(val), @@ -90,6 +92,7 @@ const TASK_MAPPING = { if (!heightField.value) { heightField.value = oldVal } + heightField.dispatchEvent(new Event("change")) }, readUI: () => parseInt(heightField.value), parse: (val) => parseInt(val), @@ -171,16 +174,22 @@ const TASK_MAPPING = { name: "Use Face Correction", setUI: (use_face_correction) => { const oldVal = gfpganModelField.value - gfpganModelField.value = getModelPath(use_face_correction, [".pth"]) - if (gfpganModelField.value) { - // Is a valid value for the field.
- useFaceCorrectionField.checked = true - gfpganModelField.disabled = false - } else { - // Not a valid value, restore the old value and disable the filter. + if (use_face_correction == null || use_face_correction == "None") { gfpganModelField.disabled = true - gfpganModelField.value = oldVal useFaceCorrectionField.checked = false + } else { + gfpganModelField.value = getModelPath(use_face_correction, [".pth"]) + if (gfpganModelField.value) { + // Is a valid value for the field. + useFaceCorrectionField.checked = true + gfpganModelField.disabled = false + } else { + // Not a valid value, restore the old value and disable the filter. + gfpganModelField.disabled = true + gfpganModelField.value = oldVal + useFaceCorrectionField.checked = false + } } //useFaceCorrectionField.checked = parseBoolean(use_face_correction) @@ -217,6 +226,14 @@ const TASK_MAPPING = { readUI: () => upscaleAmountField.value, parse: (val) => val, }, + latent_upscaler_steps: { + name: "Latent Upscaler Steps", + setUI: (latent_upscaler_steps) => { + latentUpscalerStepsField.value = latent_upscaler_steps + }, + readUI: () => latentUpscalerStepsField.value, + parse: (val) => val, + }, sampler_name: { name: "Sampler", setUI: (sampler_name) => { @@ -240,6 +257,22 @@ const TASK_MAPPING = { readUI: () => stableDiffusionModelField.value, parse: (val) => val, }, + clip_skip: { + name: "Clip Skip", + setUI: (value) => { + clip_skip.checked = value + }, + readUI: () => clip_skip.checked, + parse: (val) => Boolean(val), + }, + tiling: { + name: "Tiling", + setUI: (val) => { + tilingField.value = val + }, + readUI: () => tilingField.value, + parse: (val) => val, + }, use_vae_model: { name: "VAE model", setUI: (use_vae_model) => { @@ -402,6 +435,7 @@ function restoreTaskToUI(task, fieldsToSkip) { if (!("original_prompt" in task.reqBody)) { promptField.value = task.reqBody.prompt } + promptField.dispatchEvent(new Event("input")) // properly reset checkboxes if (!("use_face_correction" in task.reqBody)) { diff --git a/ui/media/js/engine.js b/ui/media/js/engine.js index f396d951..e60409f1 100644 --- a/ui/media/js/engine.js +++ b/ui/media/js/engine.js @@ -750,6 +750,7 @@ sampler_name: "string", use_stable_diffusion_model: "string", + clip_skip: "boolean", num_inference_steps: "number", guidance_scale: "number", @@ -763,6 +764,7 @@ const TASK_DEFAULTS = { sampler_name: "plms", use_stable_diffusion_model: "sd-v1-4", + clip_skip: false, num_inference_steps: 50, guidance_scale: 7.5, negative_prompt: "", @@ -787,9 +789,10 @@ use_hypernetwork_model: "string", hypernetwork_strength: "number", output_lossless: "boolean", + tiling: "string", } - // Higer values will result in... + // Higher values will result in...
// pytorch_lightning/utilities/seed.py:60: UserWarning: X is not in bounds, numpy accepts from 0 to 4294967295 const MAX_SEED_VALUE = 4294967295 diff --git a/ui/media/js/image-editor.js b/ui/media/js/image-editor.js index af19daeb..e7de9f2b 100644 --- a/ui/media/js/image-editor.js +++ b/ui/media/js/image-editor.js @@ -834,6 +834,7 @@ function pixelCompare(int1, int2) { } // adapted from https://ben.akrin.com/canvas_fill/fill_04.html +// May 2023 - look at using a library instead of custom code: https://github.com/shaneosullivan/example-canvas-fill function flood_fill(editor, the_canvas_context, x, y, color) { pixel_stack = [{ x: x, y: y }] pixels = the_canvas_context.getImageData(0, 0, editor.width, editor.height) diff --git a/ui/media/js/image-modal.js b/ui/media/js/image-modal.js index 3a97c4d8..28c1eaf2 100644 --- a/ui/media/js/image-modal.js +++ b/ui/media/js/image-modal.js @@ -63,15 +63,73 @@ const imageModal = (function() { setZoomLevel(imageContainer.querySelector("img")?.classList?.contains("natural-zoom")) ) - const state = { + const initialState = () => ({ previous: undefined, next: undefined, + + start: { + x: 0, + y: 0, + }, + + scroll: { + x: 0, + y: 0, + }, + }) + + const state = initialState() + + // Allow grabbing the image to scroll + const stopGrabbing = (e) => { + if (imageContainer.classList.contains("grabbing")) { + imageContainer.classList.remove("grabbing") + e?.preventDefault() + } + } + + const addImageGrabbing = (image) => { + image?.addEventListener('mousedown', (e) => { + if (!image.classList.contains("natural-zoom")) { + e.stopPropagation() + e.stopImmediatePropagation() + e.preventDefault() + + imageContainer.classList.add("grabbing") + state.start.x = e.pageX - imageContainer.offsetLeft + state.scroll.x = imageContainer.scrollLeft + state.start.y = e.pageY - imageContainer.offsetTop + state.scroll.y = imageContainer.scrollTop + } + }) + + image?.addEventListener('mouseup', stopGrabbing) + image?.addEventListener('mouseleave', stopGrabbing) + image?.addEventListener('mousemove', (e) => { + if (imageContainer.classList.contains("grabbing")) { + e.stopPropagation() + e.stopImmediatePropagation() + e.preventDefault() + + // Might need to increase this multiplier based on the image size to window size ratio + // The default 1:1 is pretty slow + const multiplier = 1.0 + + const deltaX = e.pageX - imageContainer.offsetLeft - state.start.x + imageContainer.scrollLeft = state.scroll.x - (deltaX * multiplier) + const deltaY = e.pageY - imageContainer.offsetTop - state.start.y + imageContainer.scrollTop = state.scroll.y - (deltaY * multiplier) + } + }) } const clear = () => { imageContainer.innerHTML = "" - Object.keys(state).forEach((key) => delete state[key]) + Object.entries(initialState()).forEach(([key, value]) => state[key] = value) + + stopGrabbing() } const close = () => { @@ -95,6 +153,7 @@ const imageModal = (function() { const src = typeof options === "string" ?
options : options.src const imgElem = createElement("img", { src }, "natural-zoom") + addImageGrabbing(imgElem) imageContainer.appendChild(imgElem) modalElem.classList.add("active") document.body.style.overflow = "hidden" diff --git a/ui/media/js/image-modifiers.js b/ui/media/js/image-modifiers.js index fd4ecaf1..69f31ab1 100644 --- a/ui/media/js/image-modifiers.js +++ b/ui/media/js/image-modifiers.js @@ -246,7 +246,7 @@ function refreshInactiveTags(inactiveTags) { overlays.forEach((i) => { let modifierName = i.parentElement.getElementsByClassName("modifier-card-label")[0].getElementsByTagName("p")[0] .dataset.fullName - if (inactiveTags?.find((element) => element === modifierName) !== undefined) { + if (inactiveTags?.find((element) => trimModifiers(element) === modifierName) !== undefined) { i.parentElement.classList.add("modifier-toggle-inactive") } }) diff --git a/ui/media/js/main.js b/ui/media/js/main.js index a54f6ecb..8628732b 100644 --- a/ui/media/js/main.js +++ b/ui/media/js/main.js @@ -13,6 +13,16 @@ const taskConfigSetup = { num_inference_steps: "Inference Steps", guidance_scale: "Guidance Scale", use_stable_diffusion_model: "Model", + clip_skip: { + label: "Clip Skip", + visible: ({ reqBody }) => reqBody?.clip_skip, + value: ({ reqBody }) => "yes", + }, + tiling: { + label: "Tiling", + visible: ({ reqBody }) => reqBody?.tiling != "none", + value: ({ reqBody }) => reqBody?.tiling, + }, use_vae_model: { label: "VAE", visible: ({ reqBody }) => reqBody?.use_vae_model !== undefined && reqBody?.use_vae_model.trim() !== "", @@ -81,7 +91,12 @@ let gfpganModelField = new ModelDropdown(document.querySelector("#gfpgan_model") let useUpscalingField = document.querySelector("#use_upscale") let upscaleModelField = document.querySelector("#upscale_model") let upscaleAmountField = document.querySelector("#upscale_amount") +let latentUpscalerSettings = document.querySelector("#latent_upscaler_settings") +let latentUpscalerStepsSlider = document.querySelector("#latent_upscaler_steps_slider") +let latentUpscalerStepsField = document.querySelector("#latent_upscaler_steps") let stableDiffusionModelField = new ModelDropdown(document.querySelector("#stable_diffusion_model"), "stable-diffusion") +let clipSkipField = document.querySelector("#clip_skip") +let tilingField = document.querySelector("#tiling") let vaeModelField = new ModelDropdown(document.querySelector("#vae_model"), "vae", "None") let hypernetworkModelField = new ModelDropdown(document.querySelector("#hypernetwork_model"), "hypernetwork", "None") let hypernetworkStrengthSlider = document.querySelector("#hypernetwork_strength_slider") @@ -233,7 +248,7 @@ function setServerStatus(event) { break } if (SD.serverState.devices) { - setDeviceInfo(SD.serverState.devices) + document.dispatchEvent(new CustomEvent("system_info_update", { detail: SD.serverState.devices })) } } @@ -252,20 +267,11 @@ function shiftOrConfirm(e, prompt, fn) { if (e.shiftKey || !confirmDangerousActionsField.checked) { fn(e) } else { - $.confirm({ - theme: "modern", - title: prompt, - useBootstrap: false, - animateFromElement: false, - content: - 'Tip: To skip this dialog, use shift-click or disable the "Confirm dangerous actions" setting in the Settings tab.', - buttons: { - yes: () => { - fn(e) - }, - cancel: () => {}, - }, - }) + confirm( + 'Tip: To skip this dialog, use shift-click or disable the "Confirm dangerous actions" setting in the Settings tab.', + prompt, + () => { fn(e) } + ) } } @@ -287,6 +293,7 @@ function logError(msg, res, outputMsg) { logMsg(msg, "error", 
outputMsg) console.log("request error", res) + console.trace() setStatus("request", "error", "error") } @@ -778,11 +785,6 @@ function getTaskUpdater(task, reqBody, outputContainer) { } msg += "" logError(msg, event, outputMsg) - } else { - let msg = `Unexpected Read Error:
    Error:${
    -                            this.exception
    -                        }
    EventInfo: ${JSON.stringify(event, undefined, 4)}
    ` - logError(msg, event, outputMsg) } break } @@ -879,15 +881,15 @@ function onTaskCompleted(task, reqBody, instance, outputContainer, stepUpdate) { 1. If you have set an initial image, please try reducing its dimension to ${MAX_INIT_IMAGE_DIMENSION}x${MAX_INIT_IMAGE_DIMENSION} or smaller.
    2. Try picking a lower level in the 'GPU Memory Usage' setting (in the 'Settings' tab).
    3. Try generating a smaller image.
    ` - } else if (msg.toLowerCase().includes("DefaultCPUAllocator: not enough memory")) { + } else if (msg.includes("DefaultCPUAllocator: not enough memory")) { msg += `

    Reason: Your computer is running out of system RAM! -
    +

    Suggestions:
    1. Try closing unnecessary programs and browser tabs.
    2. If that doesn't help, please increase your computer's virtual memory by following these steps for - Windows, or + Windows or Linux.
    3. Try restarting your computer.
    ` } @@ -1224,6 +1226,8 @@ function getCurrentUserRequest() { sampler_name: samplerField.value, //render_device: undefined, // Set device affinity. Prefer this device, but wont activate. use_stable_diffusion_model: stableDiffusionModelField.value, + clip_skip: clipSkipField.checked, + tiling: tilingField.value, use_vae_model: vaeModelField.value, stream_progress_updates: true, stream_image_progress: numOutputsTotal > 50 ? false : streamImageProgressField.checked, @@ -1261,6 +1265,10 @@ function getCurrentUserRequest() { if (useUpscalingField.checked) { newTask.reqBody.use_upscale = upscaleModelField.value newTask.reqBody.upscale_amount = upscaleAmountField.value + if (upscaleModelField.value === "latent_upscaler") { + newTask.reqBody.upscale_amount = "2" + newTask.reqBody.latent_upscaler_steps = latentUpscalerStepsField.value + } } if (hypernetworkModelField.value) { newTask.reqBody.use_hypernetwork_model = hypernetworkModelField.value @@ -1575,6 +1583,20 @@ useUpscalingField.addEventListener("change", function(e) { upscaleAmountField.disabled = !this.checked }) +function onUpscaleModelChange() { + let upscale4x = document.querySelector("#upscale_amount_4x") + if (upscaleModelField.value === "latent_upscaler") { + upscale4x.disabled = true + upscaleAmountField.value = "2" + latentUpscalerSettings.classList.remove("displayNone") + } else { + upscale4x.disabled = false + latentUpscalerSettings.classList.add("displayNone") + } +} +upscaleModelField.addEventListener("change", onUpscaleModelChange) +onUpscaleModelChange() + makeImageBtn.addEventListener("click", makeImage) document.onkeydown = function(e) { @@ -1584,6 +1606,27 @@ document.onkeydown = function(e) { } } +/********************* Latent Upscaler Steps **************************/ +function updateLatentUpscalerSteps() { + latentUpscalerStepsField.value = latentUpscalerStepsSlider.value + latentUpscalerStepsField.dispatchEvent(new Event("change")) +} + +function updateLatentUpscalerStepsSlider() { + if (latentUpscalerStepsField.value < 1) { + latentUpscalerStepsField.value = 1 + } else if (latentUpscalerStepsField.value > 50) { + latentUpscalerStepsField.value = 50 + } + + latentUpscalerStepsSlider.value = latentUpscalerStepsField.value + latentUpscalerStepsSlider.dispatchEvent(new Event("change")) +} + +latentUpscalerStepsSlider.addEventListener("input", updateLatentUpscalerSteps) +latentUpscalerStepsField.addEventListener("input", updateLatentUpscalerStepsSlider) +updateLatentUpscalerSteps() + /********************* Guidance **************************/ function updateGuidanceScale() { guidanceScaleField.value = guidanceScaleSlider.value / 10 diff --git a/ui/media/js/parameters.js b/ui/media/js/parameters.js index 75abecd7..373de58d 100644 --- a/ui/media/js/parameters.js +++ b/ui/media/js/parameters.js @@ -181,8 +181,8 @@ var PARAMETERS = [ { id: "listen_to_network", type: ParameterType.checkbox, - label: "Make Stable Diffusion available on your network. Please restart the program after changing this.", - note: "Other devices on your network can access this web page", + label: "Make Stable Diffusion available on your network", + note: "Other devices on your network can access this web page. Please restart the program after changing this.", icon: "fa-network-wired", default: true, saveInAppConfig: true, @@ -191,7 +191,8 @@ var PARAMETERS = [ id: "listen_port", type: ParameterType.custom, label: "Network port", - note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'. 
Please restart the program after changing this.", + note: + "Port that this server listens to. The '9000' part in 'http://localhost:9000'. Please restart the program after changing this.", icon: "fa-anchor", render: (parameter) => { return `` } @@ -388,15 +389,27 @@ async function getAppConfig() { if (config.net && config.net.listen_port !== undefined) { listenPortField.value = config.net.listen_port } - if (config.test_diffusers === undefined || config.update_branch === "main") { - testDiffusers.checked = false + + const testDiffusersEnabled = config.test_diffusers && config.update_branch !== "main" + testDiffusers.checked = testDiffusersEnabled + + if (!testDiffusersEnabled) { document.querySelector("#lora_model_container").style.display = "none" document.querySelector("#lora_alpha_container").style.display = "none" + document.querySelector("#tiling_container").style.display = "none" + + document.querySelectorAll("#sampler_name option.diffusers-only").forEach((option) => { + option.style.display = "none" + }) } else { - testDiffusers.checked = config.test_diffusers && config.update_branch !== "main" - document.querySelector("#lora_model_container").style.display = testDiffusers.checked ? "" : "none" - document.querySelector("#lora_alpha_container").style.display = - testDiffusers.checked && loraModelField.value !== "" ? "" : "none" + document.querySelector("#lora_model_container").style.display = "" + document.querySelector("#lora_alpha_container").style.display = loraModelField.value ? "" : "none" + document.querySelector("#tiling_container").style.display = "" + + document.querySelectorAll("#sampler_name option.k_diffusion-only").forEach((option) => { + option.disabled = true + }) + document.querySelector("#clip_skip_config").classList.remove("displayNone") } console.log("get config status response", config) @@ -558,6 +571,16 @@ async function getSystemInfo() { if (allDeviceIds.length === 0) { useCPUField.checked = true useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory + + getParameterSettingsEntry("use_cpu").addEventListener("click", function() { + alert( + "Sorry, we could not find a compatible graphics card! Easy Diffusion supports graphics cards with a minimum of 2 GB of VRAM. " + + "Only NVIDIA cards are supported on Windows. NVIDIA and AMD cards are supported on Linux.

    " + + "If you have a compatible graphics card, please try updating to the latest drivers.

    " + + "Only the CPU can be used for generating images, without a compatible graphics card.", + "No compatible graphics card found!" + ) + }) } autoPickGPUsField.checked = devices["config"] === "auto" @@ -576,7 +599,7 @@ async function getSystemInfo() { $("#use_gpus").val(activeDeviceIds) } - setDeviceInfo(devices) + document.dispatchEvent(new CustomEvent("system_info_update", { detail: devices })) setHostInfo(res["hosts"]) let force = false if (res["enforce_output_dir"] !== undefined) { @@ -647,3 +670,5 @@ saveSettingsBtn.addEventListener("click", function() { saveSettingsBtn.classList.add("active") Promise.all([savePromise, asyncDelay(300)]).then(() => saveSettingsBtn.classList.remove("active")) }) + +document.addEventListener("system_info_update", (e) => setDeviceInfo(e.detail)) diff --git a/ui/media/js/utils.js b/ui/media/js/utils.js index d1578d8e..6ddb0ae6 100644 --- a/ui/media/js/utils.js +++ b/ui/media/js/utils.js @@ -843,57 +843,83 @@ function createTab(request) { /* TOAST NOTIFICATIONS */ function showToast(message, duration = 5000, error = false) { - const toast = document.createElement("div"); - toast.classList.add("toast-notification"); + const toast = document.createElement("div") + toast.classList.add("toast-notification") if (error === true) { - toast.classList.add("toast-notification-error"); + toast.classList.add("toast-notification-error") } - toast.innerHTML = message; - document.body.appendChild(toast); + toast.innerHTML = message + document.body.appendChild(toast) // Set the position of the toast on the screen - const toastCount = document.querySelectorAll(".toast-notification").length; - const toastHeight = toast.offsetHeight; + const toastCount = document.querySelectorAll(".toast-notification").length + const toastHeight = toast.offsetHeight const previousToastsHeight = Array.from(document.querySelectorAll(".toast-notification")) .slice(0, -1) // exclude current toast - .reduce((totalHeight, toast) => totalHeight + toast.offsetHeight + 10, 0); // add 10 pixels for spacing - toast.style.bottom = `${10 + previousToastsHeight}px`; - toast.style.right = "10px"; + .reduce((totalHeight, toast) => totalHeight + toast.offsetHeight + 10, 0) // add 10 pixels for spacing + toast.style.bottom = `${10 + previousToastsHeight}px` + toast.style.right = "10px" // Delay the removal of the toast until animation has completed const removeToast = () => { - toast.classList.add("hide"); + toast.classList.add("hide") const removeTimeoutId = setTimeout(() => { - toast.remove(); + toast.remove() // Adjust the position of remaining toasts - const remainingToasts = document.querySelectorAll(".toast-notification"); - const removedToastBottom = toast.getBoundingClientRect().bottom; - + const remainingToasts = document.querySelectorAll(".toast-notification") + const removedToastBottom = toast.getBoundingClientRect().bottom + remainingToasts.forEach((toast) => { if (toast.getBoundingClientRect().bottom < removedToastBottom) { - toast.classList.add("slide-down"); + toast.classList.add("slide-down") } - }); - + }) + // Wait for the slide-down animation to complete setTimeout(() => { // Remove the slide-down class after the animation has completed - const slidingToasts = document.querySelectorAll(".slide-down"); + const slidingToasts = document.querySelectorAll(".slide-down") slidingToasts.forEach((toast) => { - toast.classList.remove("slide-down"); - }); - + toast.classList.remove("slide-down") + }) + // Adjust the position of remaining toasts again, in case there are multiple toasts being 
removed at once - const remainingToastsDown = document.querySelectorAll(".toast-notification"); - let heightSoFar = 0; + const remainingToastsDown = document.querySelectorAll(".toast-notification") + let heightSoFar = 0 remainingToastsDown.forEach((toast) => { - toast.style.bottom = `${10 + heightSoFar}px`; - heightSoFar += toast.offsetHeight + 10; // add 10 pixels for spacing - }); - }, 0); // The duration of the slide-down animation (in milliseconds) - }, 500); - }; + toast.style.bottom = `${10 + heightSoFar}px` + heightSoFar += toast.offsetHeight + 10 // add 10 pixels for spacing + }) + }, 0) // The duration of the slide-down animation (in milliseconds) + }, 500) + } // Remove the toast after specified duration - setTimeout(removeToast, duration); + setTimeout(removeToast, duration) +} + +function alert(msg, title) { + title = title || "" + $.alert({ + theme: "modern", + title: title, + useBootstrap: false, + animateFromElement: false, + content: msg, + }) +} + +function confirm(msg, title, fn) { + title = title || "" + $.confirm({ + theme: "modern", + title: title, + useBootstrap: false, + animateFromElement: false, + content: msg, + buttons: { + yes: fn, + cancel: () => {}, + }, + }) } diff --git a/ui/plugins/ui/Autoscroll.plugin.js b/ui/plugins/ui/Autoscroll.plugin.js index 26969365..336e8b50 100644 --- a/ui/plugins/ui/Autoscroll.plugin.js +++ b/ui/plugins/ui/Autoscroll.plugin.js @@ -23,7 +23,7 @@ img.addEventListener( "load", function() { - img.closest(".imageTaskContainer").scrollIntoView() + img?.closest(".imageTaskContainer")?.scrollIntoView() }, { once: true } ) diff --git a/ui/plugins/ui/merge.plugin.js b/ui/plugins/ui/merge.plugin.js index 5ce97b2d..d3ddedbf 100644 --- a/ui/plugins/ui/merge.plugin.js +++ b/ui/plugins/ui/merge.plugin.js @@ -403,16 +403,19 @@ // Batch main loop for (let i = 0; i < iterations; i++) { let alpha = (start + i * step) / 100 - switch (document.querySelector("#merge-interpolation").value) { - case "SmoothStep": - alpha = smoothstep(alpha) - break - case "SmootherStep": - alpha = smootherstep(alpha) - break - case "SmoothestStep": - alpha = smootheststep(alpha) - break + + if (isTabActive(tabSettingsBatch)) { + switch (document.querySelector("#merge-interpolation").value) { + case "SmoothStep": + alpha = smoothstep(alpha) + break + case "SmootherStep": + alpha = smootherstep(alpha) + break + case "SmoothestStep": + alpha = smootheststep(alpha) + break + } } addLogMessage(`merging batch job ${i + 1}/${iterations}, alpha = ${alpha.toFixed(5)}...`) @@ -420,7 +423,8 @@ request["out_path"] += "-" + alpha.toFixed(5) + "." + document.querySelector("#merge-format").value addLogMessage(`  filename: ${request["out_path"]}`) - request["ratio"] = alpha + // sdkit documentation: "ratio - the ratio of the second model. 1 means only the second model will be used." + request["ratio"] = 1 - alpha let res = await fetch("/model/merge", { method: "POST", headers: { "Content-Type": "application/json" },
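Editor's note on the final hunk: the quoted sdkit documentation defines `ratio` as the second model's share, so after this fix the batch loop's `alpha` denotes the first model's share. A minimal sketch of the arithmetic, with illustrative values:

```js
// Illustrative only (not part of the patch): sdkit's `ratio` is the share of
// the second model (ratio = 1 means only the second model is used).
const alpha = 0.25 // batch-loop interpolation value
const ratio = 1 - alpha // 0.75 → the merge uses 25% of model A and 75% of model B
```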