diff --git a/CHANGES.md b/CHANGES.md
index 9fe2cff0..2e45c279 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -4,6 +4,7 @@
 ### Major Changes
 - **Nearly twice as fast** - significantly faster speed of image generation. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast
 - **Mac M1/M2 support** - Experimental support for Mac M1/M2. Thanks @michaelgallacher, @JeLuf and vishae.
+- **AMD support for Linux** - Experimental support for AMD GPUs on Linux. Thanks @DianaNites and @JeLuf.
 - **Full support for Stable Diffusion 2.1 (including CPU)** - supports loading v1.4 or v2.0 or v2.1 models seamlessly. No need to enable "Test SD2", and no need to add `sd2_` to your SD 2.0 model file names. Works on CPU as well.
 - **Memory optimized Stable Diffusion 2.1** - you can now use Stable Diffusion 2.1 models, with the same low VRAM optimizations that we've always had for SD 1.4. Please note, the SD 2.0 and 2.1 models require more GPU and System RAM, as compared to the SD 1.4 and 1.5 models.
 - **11 new samplers!** - explore the new samplers, some of which can generate great images in less than 10 inference steps! We've added the Karras and UniPC samplers. Thanks @Schorny for the UniPC samplers.
@@ -21,6 +22,16 @@
 Our focus continues to remain on an easy installation experience, and an easy user-interface. While still remaining pretty powerful, in terms of features and speed.
 
 ### Detailed changelog
+* 2.5.39 - 25 May 2023 - (beta-only) Seamless Tiling - make seamlessly tiled images, e.g. rock and grass textures. Thanks @JeLuf.
+* 2.5.38 - 24 May 2023 - Better reporting of errors, and show an explanation if the user cannot disable the "Use CPU" setting.
+* 2.5.38 - 23 May 2023 - Add Latent Upscaler as another option for upscaling images. Thanks @JeLuf for the implementation of the Latent Upscaler model.
+* 2.5.37 - 19 May 2023 - (beta-only) Two more samplers: DDPM and DEIS. Also disables the samplers that aren't working yet in the Diffusers version. Thanks @ogmaresca.
+* 2.5.37 - 19 May 2023 - (beta-only) Support CLIP-Skip. You can set this option under the models dropdown. Thanks @JeLuf.
+* 2.5.37 - 19 May 2023 - (beta-only) More VRAM optimizations for all modes in diffusers. The VRAM usage for diffusers in "low" and "balanced" should now be equal to or less than the non-diffusers version. Performs softmax in half precision, like sdkit does.
+* 2.5.36 - 16 May 2023 - (beta-only) More VRAM optimizations for "balanced" VRAM usage mode.
+* 2.5.36 - 11 May 2023 - (beta-only) More VRAM optimizations for "low" VRAM usage mode.
+* 2.5.36 - 10 May 2023 - (beta-only) Bug fix for "meta" error when using a LoRA in "low" VRAM usage mode.
+* 2.5.35 - 8 May 2023 - Allow dragging a zoomed-in image (after opening an image with the "expand" button). Thanks @ogmaresca.
 * 2.5.35 - 3 May 2023 - (beta-only) First round of VRAM Optimizations for the "Test Diffusers" version. This change significantly reduces the amount of VRAM used by the diffusers version during image generation. The VRAM usage is still not equal to the "non-diffusers" version, but more optimizations are coming soon.
 * 2.5.34 - 22 Apr 2023 - Don't start the browser in an incognito new profile (on Windows). Thanks @JeLuf.
 * 2.5.33 - 21 Apr 2023 - Install PyTorch 2.0 on new installations (on Windows and Linux).
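For context on the Seamless Tiling entry (2.5.39): the usual PyTorch implementation of seamless tiling switches every `Conv2d` to circular padding, so the generated image wraps at its edges. The sketch below illustrates that general technique only; it is an assumption about the approach, not sdkit's actual code, and `make_seamless` is a hypothetical helper name.

```python
# Illustration of the common circular-padding trick behind seamless tiling.
# A sketch of the general technique, not sdkit's implementation;
# make_seamless is a hypothetical helper.
import torch


def make_seamless(model: torch.nn.Module) -> torch.nn.Module:
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"  # wrap around instead of zero-padding
    return model


# Any Conv2d-based network (e.g. a UNet) now produces edge-wrapping output.
net = make_seamless(torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1)))
print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 8, 64, 64])
```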
diff --git a/PRIVACY.md b/PRIVACY.md
new file mode 100644
index 00000000..6c997997
--- /dev/null
+++ b/PRIVACY.md
@@ -0,0 +1,9 @@
+// Placeholder until a more formal and legal-sounding privacy policy document is written, but the information below is true.
+
+This is a summary of whether Easy Diffusion uses your data or tracks you:
+* The short answer is - Easy Diffusion does *not* use your data, and does *not* track you.
+* Easy Diffusion does not send your prompts or usage or analytics to anyone. There is no tracking. We don't even know how many people use Easy Diffusion, let alone their prompts.
+* Easy Diffusion fetches updates to the code whenever it starts up. It does this by contacting GitHub directly, via SSL (secure connection). Only your computer, GitHub, and [this repository](https://github.com/cmdr2/stable-diffusion-ui) are involved; no third party is involved. Some countries intercept SSL connections; that's not something we can do much about. GitHub does *not* share statistics (even with me) about how many people fetched code updates.
+* Easy Diffusion fetches the models from huggingface.co and github.com, if they don't exist on your PC. For example, if the safety checker (NSFW) model doesn't exist, it'll try to download it.
+* Easy Diffusion fetches code packages from pypi.org, which is the standard hosting service for all Python projects. That's where packages installed via `pip install` are stored.
+* Occasionally, antivirus software is known to *incorrectly* flag and delete some model files, which will result in Easy Diffusion re-downloading `pytorch_model.bin`. This *incorrect deletion* affects other Stable Diffusion UIs as well, like Invoke AI - https://itch.io/post/7509488
diff --git a/README.md b/README.md
index 3cb0bf8e..6a629e57 100644
--- a/README.md
+++ b/README.md
@@ -17,9 +17,11 @@ Click the download button for your operating system:
 
 **Hardware requirements:**
-- **Windows:** NVIDIA graphics card, or run on your CPU
-- **Linux:** NVIDIA or AMD graphics card, or run on your CPU
-- **Mac:** M1 or M2, or run on your CPU
+- **Windows:** NVIDIA graphics card (minimum 2 GB of VRAM), or run on your CPU.
+- **Linux:** NVIDIA or AMD graphics card (minimum 2 GB of VRAM), or run on your CPU.
+- **Mac:** M1 or M2, or run on your CPU.
+- Minimum 8 GB of system RAM.
+- At least 25 GB of space on the hard disk.
 
 The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance.
@@ -58,7 +60,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
 ### Image generation
 - **Supports**: "*Text to Image*" and "*Image to Image*".
-- **19 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`.
+- **21 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `ddpm`, `deis`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`.
 - **In-Painting**: Specify areas of your image to paint into.
 - **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program.
 - **Face Correction (GFPGAN)**
@@ -84,7 +86,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
 ### Performance and security
 - **Fast**: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB.
-- **Low Memory Usage**: Create 512x512 images with less than 3 GB of GPU RAM, and 768x768 images with less than 4 GB of GPU RAM!
+- **Low Memory Usage**: Create 512x512 images with less than 2 GB of GPU RAM, and 768x768 images with less than 3 GB of GPU RAM!
 - **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
 - **Multi-GPU support**: Automatically spreads your tasks across multiple GPUs (if available), for faster performance!
 - **Auto scan for malicious models**: Uses picklescan to prevent malicious models.
@@ -113,14 +115,6 @@ Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.
 
 ![Screenshot of task queue](https://user-images.githubusercontent.com/844287/217043984-0b35f73b-1318-47cb-9eed-a2a91b430490.png)
 
-
-# System Requirements
-1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
-2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
-3. Minimum 8 GB of RAM and 25GB of disk space.
-
-You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.
-
 ---
 
 # How to use?
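For context on the sampler list above: each name maps to the `sampler_name` field of `GenerateImageRequest` (see `ui/easydiffusion/types.py` later in this diff). Below is a hedged sketch of submitting a job to a locally running instance; the `/render` path and the payload shape are assumptions inferred from the request model, while the default port 9000 does come from `app.py` in this diff.

```python
# Hedged sketch: submit a render job to a locally running Easy Diffusion.
# The /render endpoint and payload keys are assumptions based on
# GenerateImageRequest; the default port 9000 is taken from app.py below.
import json
from urllib.request import Request, urlopen

payload = {
    "prompt": "a stone wall texture, photorealistic",
    "width": 512,
    "height": 512,
    "sampler_name": "dpmpp_2m",  # any of the 21 samplers listed above
}
request = Request(
    "http://localhost:9000/render",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(request) as response:
    print(response.status, response.read()[:200])
```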
diff --git a/scripts/check_modules.py b/scripts/check_modules.py
index 031f7d66..3686ca00 100644
--- a/scripts/check_modules.py
+++ b/scripts/check_modules.py
@@ -18,7 +18,7 @@ os_name = platform.system()
 modules_to_check = {
     "torch": ("1.11.0", "1.13.1", "2.0.0"),
     "torchvision": ("0.12.0", "0.14.1", "0.15.1"),
-    "sdkit": "1.0.87",
+    "sdkit": "1.0.98",
     "stable-diffusion-sdkit": "2.1.4",
     "rich": "12.6.0",
     "uvicorn": "0.19.0",
@@ -130,10 +130,13 @@ def include_cuda_versions(module_versions: tuple) -> tuple:
 
 def is_amd_on_linux():
     if os_name == "Linux":
-        with open("/proc/bus/pci/devices", "r") as f:
-            device_info = f.read()
-            if "amdgpu" in device_info and "nvidia" not in device_info:
-                return True
+        try:
+            with open("/proc/bus/pci/devices", "r") as f:
+                device_info = f.read()
+                if "amdgpu" in device_info and "nvidia" not in device_info:
+                    return True
+        except:
+            return False
 
     return False
diff --git a/scripts/get_config.py b/scripts/get_config.py
index 02523364..9cdfb2fe 100644
--- a/scripts/get_config.py
+++ b/scripts/get_config.py
@@ -1,5 +1,6 @@
 import os
 import argparse
+import sys
 
 # The config file is in the same directory as this script
 config_directory = os.path.dirname(__file__)
@@ -21,16 +22,16 @@ if os.path.isfile(config_yaml):
         try:
             config = yaml.safe_load(configfile)
         except Exception as e:
-            print(e)
-            exit()
+            print(e, file=sys.stderr)
+            config = {}
 elif os.path.isfile(config_json):
     import json
     with open(config_json, 'r') as configfile:
         try:
             config = json.load(configfile)
         except Exception as e:
-            print(e)
-            exit()
+            print(e, file=sys.stderr)
+            config = {}
 else:
     config = {}
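The `get_config.py` change above stops a malformed config file from aborting startup: the parse error is reported on stderr and the launcher proceeds with an empty config. A self-contained sketch of the same pattern, with `config.yaml` as a stand-in path:

```python
# Standalone sketch of the fallback introduced above: report parse errors
# on stderr and continue with an empty config instead of exiting.
import sys

import yaml  # pyyaml


def load_config(path: str = "config.yaml") -> dict:
    try:
        with open(path, "r") as f:
            return yaml.safe_load(f) or {}  # an empty file parses to None
    except FileNotFoundError:
        return {}  # no config file is a normal first-run state
    except Exception as e:  # malformed YAML, permission errors, etc.
        print(e, file=sys.stderr)
        return {}


print(load_config())
```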
diff --git a/ui/easydiffusion/app.py b/ui/easydiffusion/app.py
index b6318f01..3064e151 100644
--- a/ui/easydiffusion/app.py
+++ b/ui/easydiffusion/app.py
@@ -10,6 +10,8 @@ import warnings
 from easydiffusion import task_manager
 from easydiffusion.utils import log
 from rich.logging import RichHandler
+from rich.console import Console
+from rich.panel import Panel
 from sdkit.utils import log as sdkit_log  # hack, so we can overwrite the log config
 
 # Remove all handlers associated with the root logger object.
@@ -213,11 +215,19 @@ def open_browser():
     ui = config.get("ui", {})
     net = config.get("net", {})
     port = net.get("listen_port", 9000)
+
     if ui.get("open_browser_on_start", True):
         import webbrowser
 
         webbrowser.open(f"http://localhost:{port}")
 
+    Console().print(Panel(
+        "\n" +
+        "[white]Easy Diffusion is ready to serve requests.\n\n" +
+        "A new browser tab should have been opened by now.\n" +
+        f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
+        title="Easy Diffusion is ready", style="bold yellow on blue"))
+
 
 def get_image_modifiers():
     modifiers_json_path = os.path.join(SD_UI_DIR, "modifiers.json")
diff --git a/ui/easydiffusion/device_manager.py b/ui/easydiffusion/device_manager.py
index 59c07ea3..dc705927 100644
--- a/ui/easydiffusion/device_manager.py
+++ b/ui/easydiffusion/device_manager.py
@@ -98,8 +98,8 @@ def auto_pick_devices(currently_active_devices):
             continue
 
         mem_free, mem_total = torch.cuda.mem_get_info(device)
-        mem_free /= float(10 ** 9)
-        mem_total /= float(10 ** 9)
+        mem_free /= float(10**9)
+        mem_total /= float(10**9)
         device_name = torch.cuda.get_device_name(device)
         log.debug(
             f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
@@ -165,6 +165,7 @@ def needs_to_force_full_precision(context):
         and (
             " 1660" in device_name
             or " 1650" in device_name
+            or " 1630" in device_name
             or " t400" in device_name
             or " t550" in device_name
             or " t600" in device_name
@@ -181,7 +182,7 @@ def get_max_vram_usage_level(device):
     else:
         return "high"
 
-    mem_total /= float(10 ** 9)
+    mem_total /= float(10**9)
     if mem_total < 4.5:
         return "low"
     elif mem_total < 6.5:
@@ -223,10 +224,10 @@ def is_device_compatible(device):
     # Memory check
     try:
         _, mem_total = torch.cuda.mem_get_info(device)
-        mem_total /= float(10 ** 9)
-        if mem_total < 3.0:
+        mem_total /= float(10**9)
+        if mem_total < 1.9:
             if is_device_compatible.history.get(device) == None:
-                log.warn(f"GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion")
+                log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion")
             is_device_compatible.history[device] = 1
             return False
     except RuntimeError as e:
diff --git a/ui/easydiffusion/model_manager.py b/ui/easydiffusion/model_manager.py
index a0b2489a..2a8b57fd 100644
--- a/ui/easydiffusion/model_manager.py
+++ b/ui/easydiffusion/model_manager.py
@@ -53,15 +53,21 @@ def load_default_models(context: Context):
                 scan_model=context.model_paths[model_type] != None
                 and not context.model_paths[model_type].endswith(".safetensors"),
             )
+            if model_type in context.model_load_errors:
+                del context.model_load_errors[model_type]
         except Exception as e:
             log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
             log.exception(e)
             del context.model_paths[model_type]
+
+            context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks
 
 
 def unload_all(context: Context):
     for model_type in KNOWN_MODEL_TYPES:
         unload_model(context, model_type)
+        if model_type in context.model_load_errors:
+            del context.model_load_errors[model_type]
 
 
 def resolve_model_to_use(model_name: str = None, model_type: str = None):
@@ -107,19 +113,17 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None):
 
 
 def reload_models_if_necessary(context: Context, task_data: TaskData):
-    if hasattr(task_data, 'use_face_correction') and task_data.use_face_correction:
-        face_correction_model = "codeformer" if "codeformer" in task_data.use_face_correction.lower() else "gfpgan"
-        face_correction_value = task_data.use_face_correction
-    else:
-        face_correction_model = "gfpgan"
-        face_correction_value = None
+    face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else ""
+    upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else ""
 
     model_paths_in_req = {
         "stable-diffusion": task_data.use_stable_diffusion_model,
         "vae": task_data.use_vae_model,
         "hypernetwork": task_data.use_hypernetwork_model,
-        face_correction_model: face_correction_value,
-        "realesrgan": task_data.use_upscale,
+        "codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None,
+        "gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None,
+        "realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None,
+        "latent_upscaler": True if "latent_upscaler" in upscale_lower else None,
         "nsfw_checker": True if task_data.block_nsfw else None,
         "lora": task_data.use_lora_model,
     }
@@ -129,14 +133,21 @@ def reload_models_if_necessary(context: Context, task_data: TaskData):
         if context.model_paths.get(model_type) != path
     }
 
-    if set_vram_optimizations(context):  # reload SD
+    if set_vram_optimizations(context) or set_clip_skip(context, task_data):  # reload SD
         models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"]
 
     for model_type, model_path_in_req in models_to_reload.items():
         context.model_paths[model_type] = model_path_in_req
 
         action_fn = unload_model if context.model_paths[model_type] is None else load_model
-        action_fn(context, model_type, scan_model=False)  # we've scanned them already
+        try:
+            action_fn(context, model_type, scan_model=False)  # we've scanned them already
+            if model_type in context.model_load_errors:
+                del context.model_load_errors[model_type]
+        except Exception as e:
+            log.exception(e)
+            if action_fn == load_model:
+                context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks
 
 
 def resolve_model_paths(task_data: TaskData):
@@ -149,10 +160,18 @@ def resolve_model_paths(task_data: TaskData):
     if task_data.use_face_correction:
         task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, "gfpgan")
-    if task_data.use_upscale:
+    if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
         task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan")
 
 
+def fail_if_models_did_not_load(context: Context):
+    for model_type in KNOWN_MODEL_TYPES:
+        if model_type in context.model_load_errors:
+            e = context.model_load_errors[model_type]
+            # concat 'e', don't use in format string (injection attack)
+            raise Exception(f"Could not load the {model_type} model! Reason: " + e)
+
+
 def set_vram_optimizations(context: Context):
     config = app.getConfig()
     vram_usage_level = config.get("vram_usage_level", "balanced")
@@ -164,6 +183,16 @@ def set_vram_optimizations(context: Context):
     return False
 
 
+def set_clip_skip(context: Context, task_data: TaskData):
+    clip_skip = task_data.clip_skip
+
+    if clip_skip != context.clip_skip:
+        context.clip_skip = clip_skip
+        return True
+
+    return False
+
+
 def make_model_folders():
     for model_type in KNOWN_MODEL_TYPES:
         model_dir_path = os.path.join(app.MODELS_DIR, model_type)
diff --git a/ui/easydiffusion/renderer.py b/ui/easydiffusion/renderer.py
index e1176c8b..1ebd05ec 100644
--- a/ui/easydiffusion/renderer.py
+++ b/ui/easydiffusion/renderer.py
@@ -33,6 +33,7 @@ def init(device):
     context.stop_processing = False
     context.temp_images = {}
     context.partial_x_samples = None
+    context.model_load_errors = {}
 
     from easydiffusion import app
 
@@ -72,7 +73,7 @@ def make_images(
 
 
 def print_task_info(req: GenerateImageRequest, task_data: TaskData):
-    req_str = pprint.pformat(get_printable_request(req)).replace("[", "\[")
+    req_str = pprint.pformat(get_printable_request(req, task_data)).replace("[", "\[")
     task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
     log.info(f"request: {req_str}")
     log.info(f"task data: {task_str}")
@@ -95,7 +96,7 @@ def make_images_internal(
         task_data.stream_image_progress_interval,
     )
     gc(context)
-    filtered_images = filter_images(task_data, images, user_stopped)
+    filtered_images = filter_images(req, task_data, images, user_stopped)
 
     if task_data.save_to_disk_path is not None:
         save_images_to_disk(images, filtered_images, req, task_data)
@@ -151,24 +152,38 @@ def generate_images_internal(
     return images, user_stopped
 
 
-def filter_images(task_data: TaskData, images: list, user_stopped):
+def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped):
     if user_stopped:
         return images
 
     filters_to_apply = []
+    filter_params = {}
     if task_data.block_nsfw:
         filters_to_apply.append("nsfw_checker")
     if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower():
         filters_to_apply.append("codeformer")
     elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
         filters_to_apply.append("gfpgan")
-    if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
-        filters_to_apply.append("realesrgan")
+    if task_data.use_upscale:
+        if "realesrgan" in task_data.use_upscale.lower():
+            filters_to_apply.append("realesrgan")
+        elif task_data.use_upscale == "latent_upscaler":
+            filters_to_apply.append("latent_upscaler")
+
+            filter_params["latent_upscaler_options"] = {
+                "prompt": req.prompt,
+                "negative_prompt": req.negative_prompt,
+                "seed": req.seed,
+                "num_inference_steps": task_data.latent_upscaler_steps,
+                "guidance_scale": 0,
+            }
+
+        filter_params["scale"] = task_data.upscale_amount
 
     if len(filters_to_apply) == 0:
         return images
 
-    return apply_filters(context, filters_to_apply, images, scale=task_data.upscale_amount)
+    return apply_filters(context, filters_to_apply, images, **filter_params)
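The `model_load_errors` edits above all follow one pattern: a failed load is recorded as a string (storing the Exception object would keep its traceback alive and can leak memory), the entry is cleared on a successful load or unload, and the stored reason is raised when a render is attempted. A condensed, self-contained sketch with simplified names:

```python
# Condensed sketch of the bookkeeping added to model_manager.py and
# renderer.py; names are simplified, this is not the module's exact code.
model_load_errors = {}


def try_load(model_type, loader):
    try:
        loader()
        model_load_errors.pop(model_type, None)  # a success clears old errors
    except Exception as e:
        model_load_errors[model_type] = str(e)  # store the string, not the Exception


def fail_if_models_did_not_load(known_model_types):
    for model_type in known_model_types:
        if model_type in model_load_errors:
            e = model_load_errors[model_type]
            # concatenate `e` instead of interpolating it into the f-string,
            # mirroring the injection-safety note in the original code
            raise Exception(f"Could not load the {model_type} model! Reason: " + e)


def broken_loader():
    raise RuntimeError("missing checkpoint file")


try_load("stable-diffusion", broken_loader)
fail_if_models_did_not_load(["stable-diffusion"])  # raises with the stored reason
```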
diff --git a/ui/easydiffusion/task_manager.py b/ui/easydiffusion/task_manager.py
index c11acbec..a91cd9c6 100644
--- a/ui/easydiffusion/task_manager.py
+++ b/ui/easydiffusion/task_manager.py
@@ -336,6 +336,7 @@ def thread_render(device):
             current_state = ServerStates.LoadingModel
             model_manager.resolve_model_paths(task.task_data)
             model_manager.reload_models_if_necessary(renderer.context, task.task_data)
+            model_manager.fail_if_models_did_not_load(renderer.context)
 
             current_state = ServerStates.Rendering
             task.response = renderer.make_images(
diff --git a/ui/easydiffusion/types.py b/ui/easydiffusion/types.py
index 7462355f..e4426714 100644
--- a/ui/easydiffusion/types.py
+++ b/ui/easydiffusion/types.py
@@ -23,6 +23,7 @@ class GenerateImageRequest(BaseModel):
     sampler_name: str = None  # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
     hypernetwork_strength: float = 0
     lora_alpha: float = 0
+    tiling: str = "none"  # "none", "x", "y", "xy"
 
 
 class TaskData(BaseModel):
@@ -32,8 +33,9 @@ class TaskData(BaseModel):
     vram_usage_level: str = "balanced"  # or "low" or "medium"
 
     use_face_correction: str = None  # or "GFPGANv1.3"
-    use_upscale: str = None  # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
+    use_upscale: str = None  # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" or "latent_upscaler"
     upscale_amount: int = 4  # or 2
+    latent_upscaler_steps: int = 10
     use_stable_diffusion_model: str = "sd-v1-4"
     # use_stable_diffusion_config: str = "v1-inference"
     use_vae_model: str = None
@@ -48,6 +50,7 @@ class TaskData(BaseModel):
     metadata_output_format: str = "txt"  # or "json"
     stream_image_progress: bool = False
     stream_image_progress_interval: int = 5
+    clip_skip: bool = False
 
 
 class MergeRequest(BaseModel):
diff --git a/ui/easydiffusion/utils/save_utils.py b/ui/easydiffusion/utils/save_utils.py
index a7043f27..ff2906a6 100644
--- a/ui/easydiffusion/utils/save_utils.py
+++ b/ui/easydiffusion/utils/save_utils.py
@@ -15,23 +15,26 @@ img_number_regex = re.compile("([0-9]{5,})")
 
 # keep in sync with `ui/media/js/dnd.js`
 TASK_TEXT_MAPPING = {
     "prompt": "Prompt",
+    "negative_prompt": "Negative Prompt",
+    "seed": "Seed",
+    "use_stable_diffusion_model": "Stable Diffusion model",
+    "clip_skip": "Clip Skip",
+    "use_vae_model": "VAE model",
+    "sampler_name": "Sampler",
     "width": "Width",
     "height": "Height",
-    "seed": "Seed",
     "num_inference_steps": "Steps",
     "guidance_scale": "Guidance Scale",
     "prompt_strength": "Prompt Strength",
+    "use_lora_model": "LoRA model",
+    "lora_alpha": "LoRA Strength",
+    "use_hypernetwork_model": "Hypernetwork model",
+    "hypernetwork_strength": "Hypernetwork Strength",
+    "tiling": "Seamless Tiling",
     "use_face_correction": "Use Face Correction",
     "use_upscale": "Use Upscaling",
     "upscale_amount": "Upscale By",
-    "sampler_name": "Sampler",
-    "negative_prompt": "Negative Prompt",
-    "use_stable_diffusion_model": "Stable Diffusion model",
-    "use_vae_model": "VAE model",
-    "use_hypernetwork_model": "Hypernetwork model",
-    "hypernetwork_strength": "Hypernetwork Strength",
-    "use_lora_model": "LoRA model",
-    "lora_alpha": "LoRA Strength",
+    "latent_upscaler_steps": "Latent Upscaler Steps",
 }
 
 time_placeholders = {
@@ -168,41 +171,23 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData):
             output_quality=task_data.output_quality,
             output_lossless=task_data.output_lossless,
         )
-        if task_data.metadata_output_format.lower() in ["json", "txt", "embed"]:
-            save_dicts(
-                metadata_entries,
-                save_dir_path,
-                file_name=make_filter_filename,
-                output_format=task_data.metadata_output_format,
-                file_format=task_data.output_format,
-            )
+        if task_data.metadata_output_format:
+            for metadata_output_format in task_data.metadata_output_format.split(","):
+                if metadata_output_format.lower() in ["json", "txt", "embed"]:
+                    save_dicts(
+                        metadata_entries,
+                        save_dir_path,
+                        file_name=make_filter_filename,
+                        output_format=metadata_output_format,
+                        file_format=task_data.output_format,
+                    )
 
 
 def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
-    metadata = get_printable_request(req)
-    metadata.update(
-        {
-            "use_stable_diffusion_model": task_data.use_stable_diffusion_model,
-            "use_vae_model": task_data.use_vae_model,
-            "use_hypernetwork_model": task_data.use_hypernetwork_model,
-            "use_lora_model": task_data.use_lora_model,
-            "use_face_correction": task_data.use_face_correction,
-            "use_upscale": task_data.use_upscale,
-        }
-    )
-    if metadata["use_upscale"] is not None:
-        metadata["upscale_amount"] = task_data.upscale_amount
-    if task_data.use_hypernetwork_model is None:
-        del metadata["hypernetwork_strength"]
-    if task_data.use_lora_model is None:
-        if "lora_alpha" in metadata:
-            del metadata["lora_alpha"]
-    app_config = app.getConfig()
-    if not app_config.get("test_diffusers", False) and "use_lora_model" in metadata:
-        del metadata["use_lora_model"]
+    metadata = get_printable_request(req, task_data)
 
     # if text, format it in the text format expected by the UI
-    is_txt_format = task_data.metadata_output_format.lower() == "txt"
+    is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",")
     if is_txt_format:
         metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING}
 
@@ -213,12 +198,35 @@ def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
     return entries
 
 
-def get_printable_request(req: GenerateImageRequest):
-    metadata = req.dict()
-    del metadata["init_image"]
-    del metadata["init_image_mask"]
-    if req.init_image is None:
+def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
+    req_metadata = req.dict()
+    task_data_metadata = task_data.dict()
+
+    # Save the metadata in the order defined in TASK_TEXT_MAPPING
+    metadata = {}
+    for key in TASK_TEXT_MAPPING.keys():
+        if key in req_metadata:
+            metadata[key] = req_metadata[key]
+        elif key in task_data_metadata:
+            metadata[key] = task_data_metadata[key]
+
+    # Clean up the metadata
+    if req.init_image is None and "prompt_strength" in metadata:
         del metadata["prompt_strength"]
+    if task_data.use_upscale is None and "upscale_amount" in metadata:
+        del metadata["upscale_amount"]
+    if task_data.use_hypernetwork_model is None and "hypernetwork_strength" in metadata:
+        del metadata["hypernetwork_strength"]
+    if task_data.use_lora_model is None and "lora_alpha" in metadata:
+        del metadata["lora_alpha"]
+    if task_data.use_upscale != "latent_upscaler" and "latent_upscaler_steps" in metadata:
+        del metadata["latent_upscaler_steps"]
+
+    app_config = app.getConfig()
+    if not app_config.get("test_diffusers", False):
+        for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata):
+            del metadata[key]
+
     return metadata
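The `save_images_to_disk` change above turns `metadata_output_format` into a comma-separated list, so a single render can emit, for example, both a `txt` sidecar and embedded metadata. A small illustrative sketch of the parsing rule:

```python
# Illustrative parsing of the comma-separated metadata_output_format value;
# only "json", "txt" and "embed" trigger a save, matching the code above.
def formats_to_save(metadata_output_format):
    if not metadata_output_format:
        return []
    formats = metadata_output_format.split(",")
    return [fmt.lower() for fmt in formats if fmt.lower() in ["json", "txt", "embed"]]


assert formats_to_save("txt,embed") == ["txt", "embed"]
assert formats_to_save(None) == []
assert formats_to_save("png") == []
```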
diff --git a/ui/index.html b/ui/index.html
index be522689..21ec2550 100644
--- a/ui/index.html
+++ b/ui/index.html
@@ -30,7 +30,7 @@
[Remainder of the diff truncated in this extract: the body of this ui/index.html hunk, and the subsequent ui/media/js/main.js hunks, are not recoverable. The surviving main.js fragments show error-reporting changes in logError() and onTaskCompleted() (around @@ -879,15 +881,15 @@), including the messages "Error: ${this.exception}" and "EventInfo: ${JSON.stringify(event, undefined, 4)}", and the hint to reduce an initial image to ${MAX_INIT_IMAGE_DIMENSION}x${MAX_INIT_IMAGE_DIMENSION} or smaller.]
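Background on the new `clip_skip` flag in `TaskData`: CLIP-Skip takes the prompt embedding from an earlier hidden layer of the CLIP text encoder instead of the final one, which some community models were trained against. A rough sketch using the Hugging Face transformers API follows; this is illustration only, and sdkit's actual wiring (including its handling of the final layer norm) may differ.

```python
# Rough sketch of what CLIP-Skip means, using the transformers API.
# Illustration only; the final layer norm is omitted for brevity.
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "openai/clip-vit-large-patch14"  # the SD 1.x text encoder
tokenizer = CLIPTokenizer.from_pretrained(model_id)
text_encoder = CLIPTextModel.from_pretrained(model_id)

tokens = tokenizer("a photo of a castle", return_tensors="pt")
outputs = text_encoder(**tokens, output_hidden_states=True)

clip_skip = True
# hidden_states[-1] is the final layer; "skipping" conventionally means
# taking the penultimate layer (hidden_states[-2]) instead.
embedding = outputs.hidden_states[-2 if clip_skip else -1]
print(embedding.shape)  # (1, sequence_length, 768)
```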