diff --git a/CHANGES.md b/CHANGES.md index 66fae99b..141c0773 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -8,13 +8,13 @@ - **Full support for Stable Diffusion 2.1 (including CPU)** - supports loading v1.4 or v2.0 or v2.1 models seamlessly. No need to enable "Test SD2", and no need to add `sd2_` to your SD 2.0 model file names. Works on CPU as well. - **Memory optimized Stable Diffusion 2.1** - you can now use Stable Diffusion 2.1 models, with the same low VRAM optimizations that we've always had for SD 1.4. Please note, the SD 2.0 and 2.1 models require more GPU and System RAM, as compared to the SD 1.4 and 1.5 models. - **11 new samplers!** - explore the new samplers, some of which can generate great images in less than 10 inference steps! We've added the Karras and UniPC samplers. Thanks @Schorny for the UniPC samplers. -- **Model Merging** - You can now merge two models (`.ckpt` or `.safetensors`) and output `.ckpt` or `.safetensors` models, optionally in `fp16` precision. Details: https://github.com/cmdr2/stable-diffusion-ui/wiki/Model-Merging . Thanks @JeLuf. +- **Model Merging** - You can now merge two models (`.ckpt` or `.safetensors`) and output `.ckpt` or `.safetensors` models, optionally in `fp16` precision. Details: https://github.com/easydiffusion/easydiffusion/wiki/Model-Merging . Thanks @JeLuf. - **Fast loading/unloading of VAEs** - No longer needs to reload the entire Stable Diffusion model, each time you change the VAE - **Database of known models** - automatically picks the right configuration for known models. E.g. we automatically detect and apply "v" parameterization (required for some SD 2.0 models), and "fp32" attention precision (required for some SD 2.1 models). - **Color correction for img2img** - an option to preserve the color profile (histogram) of the initial image. This is especially useful if you're getting red-tinted images after inpainting/masking. 
- **Three GPU Memory Usage Settings** - `High` (fastest, maximum VRAM usage), `Balanced` (default - almost as fast, significantly lower VRAM usage), `Low` (slowest, very low VRAM usage). The `Low` setting is applied automatically for GPUs with less than 4 GB of VRAM. - **Find models in sub-folders** - This allows you to organize your models into sub-folders inside `models/stable-diffusion`, instead of keeping them all in a single folder. Thanks @patriceac and @ogmaresca. -- **Custom Modifier Categories** - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). Details: https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Modifiers . Thanks @ogmaresca. +- **Custom Modifier Categories** - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). Details: https://github.com/easydiffusion/easydiffusion/wiki/Custom-Modifiers . Thanks @ogmaresca. - **Embed metadata, or save as TXT/JSON** - You can now embed the metadata directly into the images, or save them as text or json files (choose in the Settings tab). Thanks @patriceac. - **Major rewrite of the code** - Most of the codebase has been reorganized and rewritten, to make it more manageable and easier for new developers to contribute features. We've separated our core engine into a new project called `sdkit`, which allows anyone to easily integrate Stable Diffusion (and related modules like GFPGAN etc) into their programming projects (via a simple `pip install sdkit`): https://github.com/easydiffusion/sdkit/ - **Name change** - Last, and probably the least, the UI is now called "Easy Diffusion". It indicates the focus of this project - an easy way for people to play with Stable Diffusion. @@ -22,6 +22,24 @@ Our focus continues to remain on an easy installation experience, and an easy user-interface. While still remaining pretty powerful, in terms of features and speed. 
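The model-merging feature above is, at its core, a per-tensor weighted average of two checkpoints. A minimal sketch of that idea, with plain Python lists standing in for the tensors of a real `.ckpt`/`.safetensors` state dict (the function name is illustrative, not Easy Diffusion's API):

```python
# Hypothetical sketch of weighted checkpoint merging: each key maps to a
# "tensor" (here: a list of floats), and merged = alpha*A + (1-alpha)*B.
def merge_state_dicts(a, b, alpha=0.5):
    if set(a) != set(b):
        raise ValueError("checkpoints have different keys; cannot merge")
    return {
        key: [alpha * x + (1 - alpha) * y for x, y in zip(a[key], b[key])]
        for key in a
    }

model_a = {"unet.weight": [1.0, 2.0], "vae.bias": [0.0, 4.0]}
model_b = {"unet.weight": [3.0, 0.0], "vae.bias": [2.0, 0.0]}

merged = merge_state_dicts(model_a, model_b, alpha=0.25)
print(merged["unet.weight"])  # a 25%/75% blend of model A and model B
```

Real merges iterate over the checkpoint's state dict with torch or safetensors tensors instead of lists, and optionally cast the result to `fp16` before saving.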
### Detailed changelog +* 2.5.48 - 1 Aug 2023 - (beta-only) Full support for ControlNets. You can select a control image to guide the AI. You can pick a filter to pre-process the image, and one of the known (or custom) controlnet models. Supports `OpenPose`, `Canny`, `Straight Lines`, `Depth`, `Line Art`, `Scribble`, `Soft Edge`, `Shuffle` and `Segment`. +* 2.5.47 - 30 Jul 2023 - An option to use `Strict Mask Border` while inpainting, to avoid touching areas outside the mask. But this might show a slight outline of the mask, which you will have to touch up separately. +* 2.5.47 - 29 Jul 2023 - (beta-only) Fix long prompts with SDXL. +* 2.5.47 - 29 Jul 2023 - (beta-only) Fix red dots in some SDXL images. +* 2.5.47 - 29 Jul 2023 - Significantly faster `Fix Faces` and `Upscale` buttons (on the image). They no longer need to generate the image from scratch, instead they just upscale/fix the generated image in-place. +* 2.5.47 - 28 Jul 2023 - Lots of internal code reorganization, in preparation for supporting Controlnets. No user-facing changes. +* 2.5.46 - 27 Jul 2023 - (beta-only) Full support for SD-XL models (base and refiner)! +* 2.5.45 - 24 Jul 2023 - (beta-only) Hide the samplers that won't be supported in the new diffusers version. +* 2.5.45 - 22 Jul 2023 - (beta-only) Fix the recently-broken inpainting models. +* 2.5.45 - 16 Jul 2023 - (beta-only) Fix the image quality of LoRAs, which had degraded in v2.5.44. +* 2.5.44 - 15 Jul 2023 - (beta-only) Support for multiple LoRA files. +* 2.5.43 - 9 Jul 2023 - (beta-only) Support for loading Textual Inversion embeddings. You can find the option in the Image Settings panel. Thanks @JeLuf. +* 2.5.43 - 9 Jul 2023 - Improve the startup time of the UI. +* 2.5.42 - 4 Jul 2023 - Keyboard shortcuts for the Image Editor. Thanks @JeLuf. +* 2.5.42 - 28 Jun 2023 - Allow dropping images from folders to use as an Initial Image. 
+* 2.5.42 - 26 Jun 2023 - Show a popup for Image Modifiers, allowing a larger screen space, better UX on mobile screens, and more room for us to develop and improve the Image Modifiers panel. Thanks @Hakorr. +* 2.5.42 - 26 Jun 2023 - (beta-only) Show a welcome screen for users of the diffusers beta, with instructions on how to use the new prompt syntax, and known bugs. Thanks @JeLuf. +* 2.5.42 - 26 Jun 2023 - Use YAML files for config. You can now edit the `config.yaml` file (using a text editor, like Notepad). This file is present inside the Easy Diffusion folder, and is easier to read and edit (for humans) than JSON. Thanks @JeLuf. * 2.5.41 - 24 Jun 2023 - (beta-only) Fix broken inpainting in low VRAM usage mode. * 2.5.41 - 24 Jun 2023 - (beta-only) Fix a recent regression where the LoRA would not get applied when changing SD models. * 2.5.41 - 23 Jun 2023 - Fix a regression where latent upscaler stopped working on PCs without a graphics card. @@ -72,7 +90,7 @@ Our focus continues to remain on an easy installation experience, and an easy us * 2.5.24 - 11 Mar 2023 - Button to load an image mask from a file. * 2.5.24 - 10 Mar 2023 - Logo change. Image credit: @lazlo_vii. * 2.5.23 - 8 Mar 2023 - Experimental support for Mac M1/M2. Thanks @michaelgallacher, @JeLuf and vishae! -* 2.5.23 - 8 Mar 2023 - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). More details - https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Modifiers . Thanks @ogmaresca. +* 2.5.23 - 8 Mar 2023 - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). More details - https://github.com/easydiffusion/easydiffusion/wiki/Custom-Modifiers . Thanks @ogmaresca. * 2.5.22 - 28 Feb 2023 - Minor styling changes to UI buttons, and the models dropdown. * 2.5.22 - 28 Feb 2023 - Lots of UI-related bug fixes. Thanks @patriceac. 
* 2.5.21 - 22 Feb 2023 - An option to control the size of the image thumbnails. You can use the `Display options` in the top-right corner to change this. Thanks @JeLuf. @@ -97,7 +115,7 @@ Our focus continues to remain on an easy installation experience, and an easy us * 2.5.14 - 3 Feb 2023 - Fix the 'Make Similar Images' button, which was producing incorrect images (weren't very similar). * 2.5.13 - 1 Feb 2023 - Fix the remaining GPU memory leaks, including a better fix (more comprehensive) for the change in 2.5.12 (27 Jan). * 2.5.12 - 27 Jan 2023 - Fix a memory leak, which made the UI unresponsive after an out-of-memory error. The allocated memory is now freed-up after an error. -* 2.5.11 - 25 Jan 2023 - UI for Merging Models. Thanks @JeLuf. More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Model-Merging +* 2.5.11 - 25 Jan 2023 - UI for Merging Models. Thanks @JeLuf. More info: https://github.com/easydiffusion/easydiffusion/wiki/Model-Merging * 2.5.10 - 24 Jan 2023 - Reduce the VRAM usage for img2img in 'balanced' mode (without reducing the rendering speed), to make it similar to v2.4 of this UI. * 2.5.9 - 23 Jan 2023 - Fix a bug where img2img would produce poorer-quality images for the same settings, as compared to version 2.4 of this UI. * 2.5.9 - 23 Jan 2023 - Reduce the VRAM usage for 'balanced' mode (without reducing the rendering speed), to make it similar to v2.4 of the UI. @@ -126,8 +144,8 @@ Our focus continues to remain on an easy installation experience, and an easy us - **Automatic scanning for malicious model files** - using `picklescan`, and support for `safetensor` model format. Thanks @JeLuf - **Image Editor** - for drawing simple images for guiding the AI. Thanks @mdiller - **Use pre-trained hypernetworks** - for improving the quality of images. Thanks @C0bra5 -- **Support for custom VAE models**. You can place your VAE files in the `models/vae` folder, and refresh the browser page to use them. 
More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder -- **Experimental support for multiple GPUs!** It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. For e.g. open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, otherwise let it automatically pick the best GPUs. Thanks @madrang . More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Run-on-Multiple-GPUs +- **Support for custom VAE models**. You can place your VAE files in the `models/vae` folder, and refresh the browser page to use them. More info: https://github.com/easydiffusion/easydiffusion/wiki/VAE-Variational-Auto-Encoder +- **Experimental support for multiple GPUs!** It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. E.g. open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, otherwise let it automatically pick the best GPUs. Thanks @madrang. More info: https://github.com/easydiffusion/easydiffusion/wiki/Run-on-Multiple-GPUs - **Cleaner UI design** - Show settings and help in new tabs, instead of dropdown popups (which were buggy). Thanks @mdiller - **Progress bar.** Thanks @mdiller - **Custom Image Modifiers** - You can now save your custom image modifiers! Your saved modifiers can include special characters like `{}, (), [], |` diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c01d489a..bb6408c8 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,6 +1,6 @@ Hi there, these instructions are meant for the developers of this project. -If you only want to use the Stable Diffusion UI, you've downloaded the wrong file. In that case, please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation +If you only want to use the Stable Diffusion UI, you've downloaded the wrong file.
In that case, please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation Thanks @@ -13,7 +13,7 @@ If you would like to contribute to this project, there is a discord for discussi This is in-flux, but one way to get a development environment running for editing the UI of this project is: (swap `.sh` or `.bat` in instructions depending on your environment, and be sure to adjust any paths to match where you're working) -1) Install the project to a new location using the [usual installation process](https://github.com/cmdr2/stable-diffusion-ui#installation), e.g. to `/projects/stable-diffusion-ui-archive` +1) Install the project to a new location using the [usual installation process](https://github.com/easydiffusion/easydiffusion#installation), e.g. to `/projects/stable-diffusion-ui-archive` 2) Start the newly installed project, and check that you can view and generate images on `localhost:9000` 3) Next, please clone the project repository using `git clone` (e.g. to `/projects/stable-diffusion-ui-repo`) 4) Close the server (started in step 2), and edit `/projects/stable-diffusion-ui-archive/scripts/on_env_start.sh` (or `on_env_start.bat`) diff --git a/How to install and run.txt b/How to install and run.txt index 705c34f4..8e83ab7c 100644 --- a/How to install and run.txt +++ b/How to install and run.txt @@ -1,6 +1,6 @@ Congrats on downloading Stable Diffusion UI, version 2! -If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/cmdr2/stable-diffusion-ui#installation +If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/easydiffusion/easydiffusion#installation After downloading, to install please follow these instructions: @@ -16,9 +16,9 @@ To start the UI in the future, please run the same command mentioned above. If you have any problems, please: -1. 
Try the troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting +1. Try the troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting 2. Or, seek help from the community at https://discord.com/invite/u9yhsFmEkB -3. Or, file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues +3. Or, file an issue at https://github.com/easydiffusion/easydiffusion/issues Thanks cmdr2 (and contributors to the project) diff --git a/PRIVACY.md b/PRIVACY.md index 6c997997..543a167d 100644 --- a/PRIVACY.md +++ b/PRIVACY.md @@ -3,7 +3,7 @@ This is a summary of whether Easy Diffusion uses your data or tracks you: * The short answer is - Easy Diffusion does *not* use your data, and does *not* track you. * Easy Diffusion does not send your prompts or usage or analytics to anyone. There is no tracking. We don't even know how many people use Easy Diffusion, let alone their prompts. -* Easy Diffusion fetches updates to the code whenever it starts up. It does this by contacting GitHub directly, via SSL (secure connection). Only your computer and GitHub and [this repository](https://github.com/cmdr2/stable-diffusion-ui) are involved, and no third party is involved. Some countries intercepts SSL connections, that's not something we can do much about. GitHub does *not* share statistics (even with me) about how many people fetched code updates. +* Easy Diffusion fetches updates to the code whenever it starts up. It does this by contacting GitHub directly, via SSL (secure connection). Only your computer and GitHub and [this repository](https://github.com/easydiffusion/easydiffusion) are involved, and no third party is involved. Some countries intercept SSL connections; that's not something we can do much about. GitHub does *not* share statistics (even with me) about how many people fetched code updates. * Easy Diffusion fetches the models from huggingface.co and github.com, if they don't exist on your PC. For e.g.
if the safety checker (NSFW) model doesn't exist, it'll try to download it. * Easy Diffusion fetches code packages from pypi.org, which is the standard hosting service for all Python projects. That's where packages installed via `pip install` are stored. * Occasionally, antivirus software are known to *incorrectly* flag and delete some model files, which will result in Easy Diffusion re-downloading `pytorch_model.bin`. This *incorrect deletion* affects other Stable Diffusion UIs as well, like Invoke AI - https://itch.io/post/7509488 diff --git a/README BEFORE YOU RUN THIS.txt b/README BEFORE YOU RUN THIS.txt index e9f81544..a989b835 100644 --- a/README BEFORE YOU RUN THIS.txt +++ b/README BEFORE YOU RUN THIS.txt @@ -3,6 +3,6 @@ Hi there, What you have downloaded is meant for the developers of this project, not for users. If you only want to use the Stable Diffusion UI, you've downloaded the wrong file. -Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation +Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation Thanks \ No newline at end of file diff --git a/README.md b/README.md index 3a38f15d..8acafd76 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community. 
-[Installation guide](#installation) | [Troubleshooting guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) | [![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support queries, and development discussions) +[Installation guide](#installation) | [Troubleshooting guide](https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting) | [![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support queries, and development discussions) ![t2i](https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/assets/stable-samples/txt2img/768/merged-0006.png) @@ -71,6 +71,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages. - **Attention/Emphasis**: () in the prompt increases the model's attention to enclosed words, and [] decreases it. - **Weighted Prompts**: Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`. - **Prompt Matrix**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`. +- **Prompt Set**: Create variations of your prompt from a set of alternatives in braces, e.g. `a photograph of an astronaut on the {moon,earth}`. - **1-click Upscale/Face Correction**: Upscale or correct an image after it has been generated. - **Make Similar Images**: Click to generate multiple variations of a generated image. - **NSFW Setting**: A setting in the UI to control *NSFW content*. @@ -83,7 +84,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
- **Use custom VAE models** - **Use pre-trained Hypernetworks** - **Use custom GFPGAN models** -- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Plugins), or write your own plugin to add features to the project! +- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/easydiffusion/easydiffusion/wiki/UI-Plugins), or write your own plugin to add features to the project! ### Performance and security - **Fast**: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB. @@ -119,10 +120,10 @@ Useful for judging (and stopping) an image quickly, without waiting for it to fi ---- # How to use? -Please refer to our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI. +Please refer to our [guide](https://github.com/easydiffusion/easydiffusion/wiki/How-to-Use) to understand how to use the features in this UI. # Bugs reports and code contributions welcome -If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues). +If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/easydiffusion/easydiffusion/issues). We could really use help on these aspects (click to view tasks that need your help): * [User Interface](https://github.com/users/cmdr2/projects/1/views/1) diff --git a/build.bat b/build.bat index 6e3f3f81..2c7890ee 100644 --- a/build.bat +++ b/build.bat @@ -2,7 +2,7 @@ @echo "Hi there, what you are running is meant for the developers of this project, not for users." & echo. @echo "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file." 
-@echo "Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation" & echo. +@echo "Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation" & echo. @echo "If you are actually a developer of this project, please type Y and press enter" & echo. set /p answer=Are you a developer of this project (Y/N)? @@ -15,6 +15,7 @@ mkdir dist\win\stable-diffusion-ui\scripts copy scripts\on_env_start.bat dist\win\stable-diffusion-ui\scripts\ copy scripts\bootstrap.bat dist\win\stable-diffusion-ui\scripts\ +copy scripts\config.yaml.sample dist\win\stable-diffusion-ui\scripts\config.yaml copy "scripts\Start Stable Diffusion UI.cmd" dist\win\stable-diffusion-ui\ copy LICENSE dist\win\stable-diffusion-ui\ copy "CreativeML Open RAIL-M License" dist\win\stable-diffusion-ui\ diff --git a/build.sh b/build.sh index f4538e5c..dfb8f420 100755 --- a/build.sh +++ b/build.sh @@ -2,7 +2,7 @@ printf "Hi there, what you are running is meant for the developers of this project, not for users.\n\n" printf "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file.\n" -printf "Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation\n\n" +printf "Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation \n\n" printf "If you are actually a developer of this project, please type Y and press enter\n\n" read -p "Are you a developer of this project (Y/N) " yn @@ -29,6 +29,7 @@ mkdir -p dist/linux-mac/stable-diffusion-ui/scripts cp scripts/on_env_start.sh dist/linux-mac/stable-diffusion-ui/scripts/ cp scripts/bootstrap.sh dist/linux-mac/stable-diffusion-ui/scripts/ cp scripts/functions.sh dist/linux-mac/stable-diffusion-ui/scripts/ +cp scripts/config.yaml.sample dist/linux-mac/stable-diffusion-ui/scripts/config.yaml cp scripts/start.sh dist/linux-mac/stable-diffusion-ui/ cp LICENSE 
dist/linux-mac/stable-diffusion-ui/ cp "CreativeML Open RAIL-M License" dist/linux-mac/stable-diffusion-ui/ diff --git a/scripts/Developer Console.cmd b/scripts/Developer Console.cmd index 921a9dca..e60cf05b 100644 --- a/scripts/Developer Console.cmd +++ b/scripts/Developer Console.cmd @@ -2,6 +2,8 @@ echo "Opening Stable Diffusion UI - Developer Console.." & echo. +cd /d %~dp0 + set PATH=C:\Windows\System32;%PATH% @rem set legacy and new installer's PATH, if they exist @@ -21,6 +23,8 @@ call git --version call where conda call conda --version +echo. +echo COMSPEC=%COMSPEC% echo. @rem activate the legacy environment (if present) and set PYTHONPATH @@ -37,6 +41,10 @@ call python --version echo PYTHONPATH=%PYTHONPATH% +if exist "%cd%\profile" ( + set HF_HOME=%cd%\profile\.cache\huggingface +) + @rem done echo. diff --git a/scripts/Start Stable Diffusion UI.cmd b/scripts/Start Stable Diffusion UI.cmd index 4f8555ea..9a4a6303 100644 --- a/scripts/Start Stable Diffusion UI.cmd +++ b/scripts/Start Stable Diffusion UI.cmd @@ -36,8 +36,9 @@ call git --version call where conda call conda --version +echo.
+echo COMSPEC=%COMSPEC% @rem Download the rest of the installer and UI call scripts\on_env_start.bat - @pause diff --git a/scripts/bootstrap.bat b/scripts/bootstrap.bat index d3cdd19f..8c1069c8 100644 --- a/scripts/bootstrap.bat +++ b/scripts/bootstrap.bat @@ -11,7 +11,7 @@ setlocal enabledelayedexpansion set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba set INSTALL_ENV_DIR=%cd%\installer_files\env set LEGACY_INSTALL_ENV_DIR=%cd%\installer -set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe +set MICROMAMBA_DOWNLOAD_URL=https://github.com/easydiffusion/easydiffusion/releases/download/v1.1/micromamba.exe set umamba_exists=F set OLD_APPDATA=%APPDATA% diff --git a/scripts/check_modules.py b/scripts/check_modules.py index 4cbf261f..04d1309b 100644 --- a/scripts/check_modules.py +++ b/scripts/check_modules.py @@ -18,12 +18,13 @@ os_name = platform.system() modules_to_check = { "torch": ("1.11.0", "1.13.1", "2.0.0"), "torchvision": ("0.12.0", "0.14.1", "0.15.1"), - "sdkit": "1.0.112", + "sdkit": "1.0.165", "stable-diffusion-sdkit": "2.1.4", "rich": "12.6.0", "uvicorn": "0.19.0", "fastapi": "0.85.1", "pycloudflared": "0.2.0", + "ruamel.yaml": "0.17.21", # "xformers": "0.0.16", } modules_to_log = ["torch", "torchvision", "sdkit", "stable-diffusion-sdkit"] @@ -148,9 +149,9 @@ def fail(module_name): print( f"""Error installing {module_name}. Sorry about that, please try to: 1. Run this installer again. -2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting +2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB -4. 
If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues +4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues Thanks!""" ) exit(1) diff --git a/scripts/config.yaml.sample b/scripts/config.yaml.sample new file mode 100644 index 00000000..9c2cc4a6 --- /dev/null +++ b/scripts/config.yaml.sample @@ -0,0 +1,24 @@ +# Change listen_port if port 9000 is already in use on your system +# Set listen_to_network to true to make Easy Diffusion accessible on your local network +net: + listen_port: 9000 + listen_to_network: false + +# Multi GPU setup +render_devices: auto + +# Set open_browser_on_start to false to disable opening a new browser tab on each restart +ui: + open_browser_on_start: true + +# Set update_branch to main to use the stable version, or to beta to use the experimental +# beta version. +update_branch: main + +# Set force_save_path to enforce an auto save path. Clients will not be able to change or +# disable auto save when this option is set. Please adapt the path in the examples to your +# needs. +# Windows: +# force_save_path: C:\\Easy Diffusion Images\\ +# Linux: +# force_save_path: /data/easy-diffusion-images/ diff --git a/scripts/functions.sh b/scripts/functions.sh index 495e9950..477b7743 100644 --- a/scripts/functions.sh +++ b/scripts/functions.sh @@ -15,9 +15,9 @@ fail() { Error downloading Stable Diffusion UI. Sorry about that, please try to: 1. Run this installer again. - 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting + 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB - 4.
If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues + 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues Thanks! diff --git a/scripts/get_config.py b/scripts/get_config.py index a468d906..0bcc90a1 100644 --- a/scripts/get_config.py +++ b/scripts/get_config.py @@ -1,10 +1,11 @@ import os import argparse import sys +import shutil # The config file is in the same directory as this script config_directory = os.path.dirname(__file__) -# config_yaml = os.path.join(config_directory, "config.yaml") +config_yaml = os.path.join(config_directory, "..", "config.yaml") config_json = os.path.join(config_directory, "config.json") parser = argparse.ArgumentParser(description='Get values from config file') @@ -15,25 +16,30 @@ parser.add_argument('key', metavar='key', nargs='+', args = parser.parse_args() +config = None -# if os.path.isfile(config_yaml): -# import yaml -# with open(config_yaml, 'r') as configfile: -# try: -# config = yaml.safe_load(configfile) -# except Exception as e: -# print(e, file=sys.stderr) -# config = {} -# el -if os.path.isfile(config_json): +# migrate the old config yaml location +config_legacy_yaml = os.path.join(config_directory, "config.yaml") +if os.path.isfile(config_legacy_yaml): + shutil.move(config_legacy_yaml, config_yaml) + +if os.path.isfile(config_yaml): + from ruamel.yaml import YAML + yaml = YAML(typ='safe') + with open(config_yaml, 'r') as configfile: + try: + config = yaml.load(configfile) + except Exception as e: + print(e, file=sys.stderr) +elif os.path.isfile(config_json): import json with open(config_json, 'r') as configfile: try: config = json.load(configfile) except Exception as e: print(e, file=sys.stderr) - config = {} -else: + +if config is None: config = {} for k in args.key: diff --git a/scripts/on_env_start.bat b/scripts/on_env_start.bat index bc92d0e9..43f7e2b7 100644 --- a/scripts/on_env_start.bat +++ 
b/scripts/on_env_start.bat @@ -55,10 +55,10 @@ if "%update_branch%"=="" ( @echo. & echo "Downloading Easy Diffusion..." & echo. @echo "Using the %update_branch% channel" & echo. - @call git clone -b "%update_branch%" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files && ( + @call git clone -b "%update_branch%" https://github.com/easydiffusion/easydiffusion.git sd-ui-files && ( @echo sd_ui_git_cloned >> scripts\install_status.txt ) || ( - @echo "Error downloading Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" + @echo "Error downloading Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" pause @exit /b ) @@ -68,6 +68,7 @@ if "%update_branch%"=="" ( @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y @copy sd-ui-files\scripts\check_modules.py scripts\ /Y @copy sd-ui-files\scripts\get_config.py scripts\ /Y +@copy sd-ui-files\scripts\config.yaml.sample scripts\ /Y @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y @copy "sd-ui-files\scripts\Developer Console.cmd" . 
/Y diff --git a/scripts/on_env_start.sh b/scripts/on_env_start.sh index 366b5dd1..02428ce5 100755 --- a/scripts/on_env_start.sh +++ b/scripts/on_env_start.sh @@ -38,7 +38,7 @@ else printf "\n\nDownloading Easy Diffusion..\n\n" printf "Using the $update_branch channel\n\n" - if git clone -b "$update_branch" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then + if git clone -b "$update_branch" https://github.com/easydiffusion/easydiffusion.git sd-ui-files ; then echo sd_ui_git_cloned >> scripts/install_status.txt else fail "git clone failed" @@ -51,6 +51,7 @@ cp sd-ui-files/scripts/on_sd_start.sh scripts/ cp sd-ui-files/scripts/bootstrap.sh scripts/ cp sd-ui-files/scripts/check_modules.py scripts/ cp sd-ui-files/scripts/get_config.py scripts/ +cp sd-ui-files/scripts/config.yaml.sample scripts/ cp sd-ui-files/scripts/start.sh . cp sd-ui-files/scripts/developer_console.sh . cp sd-ui-files/scripts/functions.sh scripts/ diff --git a/scripts/on_sd_start.bat b/scripts/on_sd_start.bat index f92b9f6f..eddae6b8 100644 --- a/scripts/on_sd_start.bat +++ b/scripts/on_sd_start.bat @@ -6,6 +6,7 @@ @copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y @copy sd-ui-files\scripts\check_modules.py scripts\ /Y @copy sd-ui-files\scripts\get_config.py scripts\ /Y +@copy sd-ui-files\scripts\config.yaml.sample scripts\ /Y if exist "%cd%\profile" ( set HF_HOME=%cd%\profile\.cache\huggingface @@ -26,7 +27,7 @@ if exist "%cd%\stable-diffusion\env" ( @rem activate the installer env call conda activate @if "%ERRORLEVEL%" NEQ "0" ( - @echo. & echo "Error activating conda for Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. 
If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. + @echo. & echo "Error activating conda for Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" & echo. pause exit /b ) @@ -68,7 +69,7 @@ if "%ERRORLEVEL%" NEQ "0" ( call WHERE uvicorn > .tmp @>nul findstr /m "uvicorn" .tmp @if "%ERRORLEVEL%" NEQ "0" ( - @echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. + @echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. 
If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" & echo. pause exit /b ) @@ -103,18 +104,21 @@ call python --version @FOR /F "tokens=* USEBACKQ" %%F IN (`python scripts\get_config.py --default=False net listen_to_network`) DO ( if "%%F" EQU "True" ( - @SET ED_BIND_IP=0.0.0.0 + @FOR /F "tokens=* USEBACKQ" %%G IN (`python scripts\get_config.py --default=0.0.0.0 net bind_ip`) DO ( + @SET ED_BIND_IP=%%G + ) ) else ( @SET ED_BIND_IP=127.0.0.1 ) ) + @cd stable-diffusion @rem set any overrides set HF_HUB_DISABLE_SYMLINKS_WARNING=true -@uvicorn main:server_api --app-dir "%SD_UI_PATH%" --port %ED_BIND_PORT% --host %ED_BIND_IP% --log-level error +@python -m uvicorn main:server_api --app-dir "%SD_UI_PATH%" --port %ED_BIND_PORT% --host %ED_BIND_IP% --log-level error @pause diff --git a/scripts/on_sd_start.sh b/scripts/on_sd_start.sh index be5161d4..e366bd2a 100755 --- a/scripts/on_sd_start.sh +++ b/scripts/on_sd_start.sh @@ -5,6 +5,7 @@ cp sd-ui-files/scripts/on_env_start.sh scripts/ cp sd-ui-files/scripts/bootstrap.sh scripts/ cp sd-ui-files/scripts/check_modules.py scripts/ cp sd-ui-files/scripts/get_config.py scripts/ +cp sd-ui-files/scripts/config.yaml.sample scripts/ source ./scripts/functions.sh @@ -71,7 +72,7 @@ export SD_UI_PATH=`pwd`/ui export ED_BIND_PORT="$( python scripts/get_config.py --default=9000 net listen_port )" case "$( python scripts/get_config.py --default=False net listen_to_network )" in "True") - export ED_BIND_IP=0.0.0.0 + export ED_BIND_IP=$( python scripts/get_config.py --default=0.0.0.0 net bind_ip) ;; "False") export ED_BIND_IP=127.0.0.1 diff --git a/ui/easydiffusion/app.py b/ui/easydiffusion/app.py index 3beabbb7..e181f9b8 100644 --- a/ui/easydiffusion/app.py +++ b/ui/easydiffusion/app.py @@ -1,9 +1,13 @@ import json import logging import os +import shutil import socket import sys import traceback +import copy +from ruamel.yaml import YAML + import urllib import 
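The startup scripts above stop hard-coding `0.0.0.0` and instead ask `scripts/get_config.py` for `net bind_ip` with a `--default` fallback. As a rough sketch of what such a nested-key lookup with a default looks like (the real `get_config.py` is not shown in this patch, so the helper name and shape here are assumptions):

```python
def get_config_value(config: dict, keys: list, default=None):
    """Walk nested dict keys; return `default` if any level is missing."""
    node = config
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

config = {"net": {"listen_to_network": True, "bind_ip": "192.168.1.5"}}
bind_ip = get_config_value(config, ["net", "bind_ip"], "0.0.0.0")      # "192.168.1.5"
port = get_config_value(config, ["net", "listen_port"], 9000)          # 9000 (fallback)
```

This mirrors the shell invocation `python scripts/get_config.py --default=0.0.0.0 net bind_ip`: positional key path, default returned when the config file lacks the key.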
warnings @@ -28,6 +32,8 @@ logging.basicConfig( SD_DIR = os.getcwd() +ROOT_DIR = os.path.abspath(os.path.join(SD_DIR, "..")) + SD_UI_DIR = os.getenv("SD_UI_PATH", None) CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, "..", "scripts")) @@ -92,57 +98,108 @@ def init(): # https://pytorch.org/docs/stable/storage.html warnings.filterwarnings("ignore", category=UserWarning, message="TypedStorage is deprecated") + +def init_render_threads(): load_server_plugins() update_render_threads() def getConfig(default_val=APP_CONFIG_DEFAULTS): - try: - config_json_path = os.path.join(CONFIG_DIR, "config.json") + config_yaml_path = os.path.join(CONFIG_DIR, "..", "config.yaml") - # compatibility with upcoming yaml changes, switching from beta to main - config_yaml_path = os.path.join(CONFIG_DIR, "..", "config.yaml") + # migrate the old config yaml location + config_legacy_yaml = os.path.join(CONFIG_DIR, "config.yaml") + if os.path.isfile(config_legacy_yaml): + shutil.move(config_legacy_yaml, config_yaml_path) - # migrate the old config yaml location - config_legacy_yaml = os.path.join(CONFIG_DIR, "config.yaml") - if os.path.isfile(config_legacy_yaml): - shutil.move(config_legacy_yaml, config_yaml_path) + def set_config_on_startup(config: dict): + if getConfig.__test_diffusers_on_startup is None: + getConfig.__test_diffusers_on_startup = config.get("test_diffusers", False) + config["config_on_startup"] = {"test_diffusers": getConfig.__test_diffusers_on_startup} - if os.path.exists(config_yaml_path): - try: - import yaml + if os.path.isfile(config_yaml_path): + try: + yaml = YAML() + with open(config_yaml_path, "r", encoding="utf-8") as f: + config = yaml.load(f) + if "net" not in config: + config["net"] = {} + if os.getenv("SD_UI_BIND_PORT") is not None: + config["net"]["listen_port"] = int(os.getenv("SD_UI_BIND_PORT")) + else: + config["net"]["listen_port"] = 9000 + if os.getenv("SD_UI_BIND_IP") is not None: + config["net"]["listen_to_network"] = os.getenv("SD_UI_BIND_IP") == 
"0.0.0.0" + else: + config["net"]["listen_to_network"] = True - with open(config_yaml_path, "r", encoding="utf-8") as f: - config = yaml.safe_load(f) + set_config_on_startup(config) - setConfig(config) # save to config.json - os.remove(config_yaml_path) # delete the yaml file - except: - log.warn(traceback.format_exc()) - config = default_val - elif not os.path.exists(config_json_path): - config = default_val - else: + return config + except Exception as e: + log.warn(traceback.format_exc()) + set_config_on_startup(default_val) + return default_val + else: + try: + config_json_path = os.path.join(CONFIG_DIR, "config.json") + if not os.path.exists(config_json_path): + return default_val + + log.info("Converting old json config file to yaml") with open(config_json_path, "r", encoding="utf-8") as f: config = json.load(f) - if "net" not in config: - config["net"] = {} - if os.getenv("SD_UI_BIND_PORT") is not None: - config["net"]["listen_port"] = int(os.getenv("SD_UI_BIND_PORT")) - if os.getenv("SD_UI_BIND_IP") is not None: - config["net"]["listen_to_network"] = os.getenv("SD_UI_BIND_IP") == "0.0.0.0" - return config - except Exception: - log.warn(traceback.format_exc()) - return default_val + # Save config in new format + setConfig(config) + + with open(config_json_path + ".txt", "w") as f: + f.write("Moved to config.yaml inside the Easy Diffusion folder. 
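The new `getConfig` snapshots `test_diffusers` on the first call (via the `getConfig.__test_diffusers_on_startup` function attribute) so that `config_on_startup` keeps reporting the value the app actually booted with, even if the config file changes later. A minimal sketch of that pattern, with simplified names:

```python
def get_config(current: dict) -> dict:
    # snapshot the value seen on the first call; later config edits
    # must not change what is reported as the startup state
    if get_config._startup_test_diffusers is None:
        get_config._startup_test_diffusers = current.get("test_diffusers", False)
    current = dict(current)
    current["config_on_startup"] = {"test_diffusers": get_config._startup_test_diffusers}
    return current

get_config._startup_test_diffusers = None  # one-time cache, lives on the function

first = get_config({"test_diffusers": True})
second = get_config({"test_diffusers": False})  # config changed on disk after startup
```

Both calls report `config_on_startup["test_diffusers"] == True`, because only the first call populates the snapshot.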
You can open it in any text editor.") + os.remove(config_json_path) + + return getConfig(default_val) + except Exception as e: + log.warn(traceback.format_exc()) + set_config_on_startup(default_val) + return default_val + + +getConfig.__test_diffusers_on_startup = None def setConfig(config): - try: # config.json - config_json_path = os.path.join(CONFIG_DIR, "config.json") - with open(config_json_path, "w", encoding="utf-8") as f: - json.dump(config, f) + try: # config.yaml + config_yaml_path = os.path.join(CONFIG_DIR, "..", "config.yaml") + yaml = YAML() + + if not hasattr(config, "_yaml_comment"): + config_yaml_sample_path = os.path.join(CONFIG_DIR, "config.yaml.sample") + + if os.path.exists(config_yaml_sample_path): + with open(config_yaml_sample_path, "r", encoding="utf-8") as f: + commented_config = yaml.load(f) + + for k in config: + commented_config[k] = config[k] + + config = commented_config + yaml.indent(mapping=2, sequence=4, offset=2) + + if "config_on_startup" in config: + del config["config_on_startup"] + + try: + f = open(config_yaml_path + ".tmp", "w", encoding="utf-8") + yaml.dump(config, f) + finally: + f.close() # do this explicitly to avoid NUL bytes (possible rare bug when using 'with') + + # verify that the new file is valid, and only then overwrite the old config file + # helps prevent the rare NUL bytes error from corrupting the config file + yaml = YAML() + with open(config_yaml_path + ".tmp", "r", encoding="utf-8") as f: + yaml.load(f) + shutil.move(config_yaml_path + ".tmp", config_yaml_path) except: log.error(traceback.format_exc()) @@ -178,10 +235,12 @@ def update_render_threads(): def getUIPlugins(): plugins = [] + file_names = set() for plugins_dir, dir_prefix in UI_PLUGINS_SOURCES: for file in os.listdir(plugins_dir): - if file.endswith(".plugin.js"): + if file.endswith(".plugin.js") and file not in file_names: plugins.append(f"/plugins/{dir_prefix}/{file}") + file_names.add(file) return plugins @@ -240,6 +299,8 @@ def 
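The rewritten `setConfig` guards against the rare NUL-bytes corruption by writing to `config.yaml.tmp`, re-parsing the temp file, and only then moving it over the real config. The same validate-then-replace pattern, sketched here with stdlib `json` instead of `ruamel.yaml` to keep it self-contained:

```python
import json
import os
import shutil
import tempfile

def save_config_atomically(config: dict, path: str) -> None:
    tmp_path = path + ".tmp"
    # 1. write to a temp file first
    with open(tmp_path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
    # 2. verify the temp file parses; a truncated or corrupt write
    #    raises here and never clobbers the existing good file
    with open(tmp_path, "r", encoding="utf-8") as f:
        json.load(f)
    # 3. only then replace the real config
    shutil.move(tmp_path, path)

# round-trip check in a scratch directory
with tempfile.TemporaryDirectory() as d:
    cfg_path = os.path.join(d, "config.json")
    save_config_atomically({"net": {"listen_port": 9000}}, cfg_path)
    with open(cfg_path, "r", encoding="utf-8") as f:
        loaded = json.load(f)
```

The patch additionally closes the temp file explicitly (rather than relying on `with`) and re-creates the YAML parser before validation; the core safety property is the same: the live config file is only ever replaced by a file that has already been parsed successfully.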
open_browser(): if ui.get("open_browser_on_start", True): import webbrowser + log.info("Opening browser..") + webbrowser.open(f"http://localhost:{port}") Console().print( @@ -258,7 +319,7 @@ def fail_and_die(fail_type: str, data: str): suggestions = [ "Run this installer again.", "If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB", - "If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues", + "If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues", ] if fail_type == "model_download": diff --git a/ui/easydiffusion/model_manager.py b/ui/easydiffusion/model_manager.py index de2c10ac..845e9126 100644 --- a/ui/easydiffusion/model_manager.py +++ b/ui/easydiffusion/model_manager.py @@ -2,12 +2,14 @@ import os import shutil from glob import glob import traceback +from typing import Union from easydiffusion import app -from easydiffusion.types import TaskData +from easydiffusion.types import ModelsData from easydiffusion.utils import log from sdkit import Context from sdkit.models import load_model, scan_model, unload_model, download_model, get_model_info_from_db +from sdkit.models.model_loader.controlnet_filters import filters as cn_filters from sdkit.utils import hash_file_quick KNOWN_MODEL_TYPES = [ @@ -18,6 +20,8 @@ KNOWN_MODEL_TYPES = [ "realesrgan", "lora", "codeformer", + "embeddings", + "controlnet", ] MODEL_EXTENSIONS = { "stable-diffusion": [".ckpt", ".safetensors"], @@ -27,6 +31,8 @@ MODEL_EXTENSIONS = { "realesrgan": [".pth"], "lora": [".ckpt", ".safetensors"], "codeformer": [".pth"], + "embeddings": [".pt", ".bin", ".safetensors"], + "controlnet": [".pth", ".safetensors"], } DEFAULT_MODELS = { "stable-diffusion": [ @@ -52,11 +58,15 @@ def init(): make_model_folders() migrate_legacy_model_location() # if necessary download_default_models_if_necessary() - 
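The `getUIPlugins` change above adds a `file_names` set so that when the same `*.plugin.js` file exists in more than one plugin source directory, only the first occurrence is served. A standalone sketch of that first-wins dedup (the tuple shape of the sources is simplified here):

```python
def collect_plugins(sources: list) -> list:
    """sources: list of (dir_prefix, file_names) pairs; the first
    occurrence of a plugin file name wins, later duplicates are skipped."""
    plugins, seen = [], set()
    for dir_prefix, file_names in sources:
        for name in file_names:
            if name.endswith(".plugin.js") and name not in seen:
                plugins.append(f"/plugins/{dir_prefix}/{name}")
                seen.add(name)
    return plugins

result = collect_plugins([
    ("user", ["a.plugin.js", "b.plugin.js"]),
    ("core", ["b.plugin.js", "c.plugin.js", "readme.txt"]),
])
# "b.plugin.js" is served from "user" only; "readme.txt" is ignored
```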
getModels() # run this once, to cache the picklescan results def load_default_models(context: Context): - set_vram_optimizations(context) + from easydiffusion import runtime + + runtime.set_vram_optimizations(context) + + config = app.getConfig() + context.embeddings_path = os.path.join(app.MODELS_DIR, "embeddings") # init default model paths for model_type in MODELS_TO_LOAD_ON_START: @@ -90,7 +100,14 @@ def unload_all(context: Context): del context.model_load_errors[model_type] -def resolve_model_to_use(model_name: str = None, model_type: str = None, fail_if_not_found: bool = True): +def resolve_model_to_use(model_name: Union[str, list] = None, model_type: str = None, fail_if_not_found: bool = True): + model_names = model_name if isinstance(model_name, list) else [model_name] + model_paths = [resolve_model_to_use_single(m, model_type, fail_if_not_found) for m in model_names] + + return model_paths[0] if len(model_paths) == 1 else model_paths + + +def resolve_model_to_use_single(model_name: str = None, model_type: str = None, fail_if_not_found: bool = True): model_extensions = MODEL_EXTENSIONS.get(model_type, []) default_models = DEFAULT_MODELS.get(model_type, []) config = app.getConfig() @@ -127,43 +144,32 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None, fail_if raise Exception(f"Could not find the desired model {model_name}! 
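`resolve_model_to_use` now accepts either a single model name or a list (needed for multi-LoRA and embeddings), normalizes to a list, resolves each entry, and unwraps a single result back to a scalar. The wrapper pattern in isolation, with a stand-in resolver:

```python
from typing import Union

def resolve_many(model_name: Union[str, list, None], resolver):
    """Accept one name or a list of names; return one path or a list of
    paths, matching the input shape."""
    names = model_name if isinstance(model_name, list) else [model_name]
    paths = [resolver(n) for n in names]
    return paths[0] if len(paths) == 1 else paths

# stand-in for resolve_model_to_use_single, just for illustration
fake_resolver = lambda n: f"/models/{n}.safetensors" if n else None

single = resolve_many("sd-v1-4", fake_resolver)
many = resolve_many(["loraA", "loraB"], fake_resolver)
```

Callers that pass a string keep getting a string back, so existing single-model call sites don't need to change.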
Is it present in the {model_dir} folder?") -def reload_models_if_necessary(context: Context, task_data: TaskData): - face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else "" - upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else "" - - model_paths_in_req = { - "stable-diffusion": task_data.use_stable_diffusion_model, - "vae": task_data.use_vae_model, - "hypernetwork": task_data.use_hypernetwork_model, - "codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None, - "gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None, - "realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None, - "latent_upscaler": True if "latent_upscaler" in upscale_lower else None, - "nsfw_checker": True if task_data.block_nsfw else None, - "lora": task_data.use_lora_model, - } +def reload_models_if_necessary(context: Context, models_data: ModelsData, models_to_force_reload: list = []): models_to_reload = { model_type: path - for model_type, path in model_paths_in_req.items() - if context.model_paths.get(model_type) != path + for model_type, path in models_data.model_paths.items() + if context.model_paths.get(model_type) != path or (path is not None and context.models.get(model_type) is None) } - if task_data.codeformer_upscale_faces: + if models_data.model_paths.get("codeformer"): if "realesrgan" not in models_to_reload and "realesrgan" not in context.models: default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"] models_to_reload["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan") elif "realesrgan" in models_to_reload and models_to_reload["realesrgan"] is None: del models_to_reload["realesrgan"] # don't unload realesrgan - if set_vram_optimizations(context) or set_clip_skip(context, task_data): # reload SD - models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"] + for model_type in models_to_force_reload: + 
if model_type not in models_data.model_paths: + continue + models_to_reload[model_type] = models_data.model_paths[model_type] for model_type, model_path_in_req in models_to_reload.items(): context.model_paths[model_type] = model_path_in_req action_fn = unload_model if context.model_paths[model_type] is None else load_model + extra_params = models_data.model_params.get(model_type, {}) try: - action_fn(context, model_type, scan_model=False) # we've scanned them already + action_fn(context, model_type, scan_model=False, **extra_params) # we've scanned them already if model_type in context.model_load_errors: del context.model_load_errors[model_type] except Exception as e: @@ -172,24 +178,22 @@ def reload_models_if_necessary(context: Context, task_data: TaskData): context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks -def resolve_model_paths(task_data: TaskData): - task_data.use_stable_diffusion_model = resolve_model_to_use( - task_data.use_stable_diffusion_model, model_type="stable-diffusion" - ) - task_data.use_vae_model = resolve_model_to_use(task_data.use_vae_model, model_type="vae") - task_data.use_hypernetwork_model = resolve_model_to_use(task_data.use_hypernetwork_model, model_type="hypernetwork") - task_data.use_lora_model = resolve_model_to_use(task_data.use_lora_model, model_type="lora") - - if task_data.use_face_correction: - if "gfpgan" in task_data.use_face_correction.lower(): - model_type = "gfpgan" - elif "codeformer" in task_data.use_face_correction.lower(): - model_type = "codeformer" +def resolve_model_paths(models_data: ModelsData): + model_paths = models_data.model_paths + for model_type in model_paths: + skip_models = cn_filters + ["latent_upscaler", "nsfw_checker"] + if model_type in skip_models: # doesn't use model paths + continue + if model_type == "codeformer": download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0") + elif model_type == "controlnet": + model_id = 
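The new `reload_models_if_necessary` builds its reload set with a dict comprehension: a model type is reloaded when its requested path differs from the loaded path, or when a path is set but the model object itself is missing, plus anything in `models_to_force_reload`. The selection logic on its own:

```python
def models_needing_reload(requested: dict, loaded_paths: dict,
                          loaded_models: dict, force: list = ()) -> dict:
    """Return {model_type: path} for every model that must be (re)loaded."""
    to_reload = {
        mtype: path
        for mtype, path in requested.items()
        if loaded_paths.get(mtype) != path
        or (path is not None and loaded_models.get(mtype) is None)
    }
    for mtype in force:  # forced reloads, e.g. after a VRAM setting change
        if mtype in requested:
            to_reload[mtype] = requested[mtype]
    return to_reload

requested = {"stable-diffusion": "/m/sd21.ckpt", "vae": "/m/vae.pt", "lora": None}
loaded_paths = {"stable-diffusion": "/m/sd14.ckpt", "vae": "/m/vae.pt", "lora": None}
loaded_models = {"stable-diffusion": object(), "vae": object()}

result = models_needing_reload(requested, loaded_paths, loaded_models)
# only "stable-diffusion" changed; "vae" is unchanged, "lora" stays unloaded
```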
model_paths[model_type] + model_info = get_model_info_from_db(model_type=model_type, model_id=model_id) + if model_info: + filename = model_info.get("url", "").split("/")[-1] + download_if_necessary("controlnet", filename, model_id, skip_if_others_exist=False) - task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, model_type) - if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower(): - task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan") + model_paths[model_type] = resolve_model_to_use(model_paths[model_type], model_type=model_type) def fail_if_models_did_not_load(context: Context): @@ -211,28 +215,17 @@ def download_default_models_if_necessary(): print(model_type, "model(s) found.") -def download_if_necessary(model_type: str, file_name: str, model_id: str): +def download_if_necessary(model_type: str, file_name: str, model_id: str, skip_if_others_exist=True): model_path = os.path.join(app.MODELS_DIR, model_type, file_name) expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"] - other_models_exist = any_model_exists(model_type) + other_models_exist = any_model_exists(model_type) and skip_if_others_exist known_model_exists = os.path.exists(model_path) known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash if known_model_is_corrupt or (not other_models_exist and not known_model_exists): print("> download", model_type, model_id) - download_model(model_type, model_id, download_base_dir=app.MODELS_DIR) - - -def set_vram_optimizations(context: Context): - config = app.getConfig() - vram_usage_level = config.get("vram_usage_level", "balanced") - - if vram_usage_level != context.vram_usage_level: - context.vram_usage_level = vram_usage_level - return True - - return False + download_model(model_type, model_id, download_base_dir=app.MODELS_DIR, download_config_if_available=False) def migrate_legacy_model_location(): @@ 
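`download_if_necessary` gains a `skip_if_others_exist` flag: known ControlNet models must be fetched even when other models of that type are already present, whereas the default behavior skips the download if any model exists. The decision, reduced to a pure function (the hash comparison against the model database's `quick_hash` is abstracted into a boolean here):

```python
def should_download(known_exists: bool, known_hash_ok: bool,
                    other_models_exist: bool,
                    skip_if_others_exist: bool = True) -> bool:
    """Re-download a corrupt known model, or fetch it when nothing usable
    is present (mirrors the download_if_necessary condition)."""
    others = other_models_exist and skip_if_others_exist
    corrupt = known_exists and not known_hash_ok
    return corrupt or (not others and not known_exists)

# a known ControlNet model is downloaded even though other models exist:
fetch_controlnet = should_download(False, False, True, skip_if_others_exist=False)  # True
```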
-255,16 +248,6 @@ def any_model_exists(model_type: str) -> bool: return False -def set_clip_skip(context: Context, task_data: TaskData): - clip_skip = task_data.clip_skip - - if clip_skip != context.clip_skip: - context.clip_skip = clip_skip - return True - - return False - - def make_model_folders(): for model_type in KNOWN_MODEL_TYPES: model_dir_path = os.path.join(app.MODELS_DIR, model_type) @@ -310,14 +293,29 @@ def is_malicious_model(file_path): return False -def getModels(): +def getModels(scan_for_malicious: bool = True): models = { "options": { - "stable-diffusion": ["sd-v1-4"], + "stable-diffusion": [{"sd-v1-4": "SD 1.4"}], "vae": [], "hypernetwork": [], "lora": [], - "codeformer": ["codeformer"], + "codeformer": [{"codeformer": "CodeFormer"}], + "embeddings": [], + "controlnet": [ + {"control_v11p_sd15_canny": "Canny (*)"}, + {"control_v11p_sd15_openpose": "OpenPose (*)"}, + {"control_v11p_sd15_normalbae": "Normal BAE (*)"}, + {"control_v11f1p_sd15_depth": "Depth (*)"}, + {"control_v11p_sd15_scribble": "Scribble"}, + {"control_v11p_sd15_softedge": "Soft Edge"}, + {"control_v11p_sd15_inpaint": "Inpaint"}, + {"control_v11p_sd15_lineart": "Line Art"}, + {"control_v11p_sd15s2_lineart_anime": "Line Art Anime"}, + {"control_v11p_sd15_mlsd": "Straight Lines"}, + {"control_v11p_sd15_seg": "Segment"}, + {"control_v11e_sd15_shuffle": "Shuffle"}, + ], }, } @@ -326,9 +324,9 @@ def getModels(): class MaliciousModelException(Exception): "Raised when picklescan reports a problem with a model" - def scan_directory(directory, suffixes, directoriesFirst: bool = True): + def scan_directory(directory, suffixes, directoriesFirst: bool = True, default_entries=[]): + tree = list(default_entries) nonlocal models_scanned - tree = [] for entry in sorted( os.scandir(directory), key=lambda entry: (entry.is_file() == directoriesFirst, entry.name.lower()), @@ -343,10 +341,18 @@ def getModels(): mod_time = known_models[entry.path] if entry.path in known_models else -1 if mod_time != 
mtime: models_scanned += 1 - if is_malicious_model(entry.path): + if scan_for_malicious and is_malicious_model(entry.path): raise MaliciousModelException(entry.path) - known_models[entry.path] = mtime - tree.append(entry.name[: -len(matching_suffix)]) + if scan_for_malicious: + known_models[entry.path] = mtime + model_id = entry.name[: -len(matching_suffix)] + model_exists = False + for m in tree: # allows default "named" models, like CodeFormer and known ControlNet models + if (isinstance(m, str) and model_id == m) or (isinstance(m, dict) and model_id in m): + model_exists = True + break + if not model_exists: + tree.append(model_id) elif entry.is_dir(): scan = scan_directory(entry.path, suffixes, directoriesFirst=False) @@ -363,19 +369,23 @@ def getModels(): os.makedirs(models_dir) try: - models["options"][model_type] = scan_directory(models_dir, model_extensions) + default_tree = models["options"].get(model_type, []) + models["options"][model_type] = scan_directory(models_dir, model_extensions, default_entries=default_tree) except MaliciousModelException as e: - models["scan-error"] = e + models["scan-error"] = str(e) - log.info(f"[green]Scanning all model folders for models...[/]") + if scan_for_malicious: + log.info(f"[green]Scanning all model folders for models...[/]") # custom models listModels(model_type="stable-diffusion") listModels(model_type="vae") listModels(model_type="hypernetwork") listModels(model_type="gfpgan") listModels(model_type="lora") + listModels(model_type="embeddings") + listModels(model_type="controlnet") - if models_scanned > 0: + if scan_for_malicious and models_scanned > 0: log.info(f"[green]Scanned {models_scanned} models. 
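`scan_directory` now seeds its tree with `default_entries` (named models like `{"codeformer": "CodeFormer"}`), so entries can be either bare strings or single-key dicts, and the duplicate check must handle both shapes before appending a file found on disk. That mixed-shape membership test in isolation:

```python
def add_model_if_new(tree: list, model_id: str) -> None:
    """Entries may be bare strings or {id: display_name} dicts; append the
    id only if neither shape already contains it."""
    for m in tree:
        if (isinstance(m, str) and model_id == m) or (isinstance(m, dict) and model_id in m):
            return  # already present, e.g. a default "named" model
    tree.append(model_id)

tree = [{"codeformer": "CodeFormer"}, "my-custom-model"]
add_model_if_new(tree, "codeformer")     # already present as a dict entry
add_model_if_new(tree, "another-model")  # new, gets appended as a string
```

This is why a user who downloads `codeformer.pth` into the models folder doesn't see "CodeFormer" listed twice.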
Nothing infected[/]") return models diff --git a/ui/easydiffusion/package_manager.py b/ui/easydiffusion/package_manager.py new file mode 100644 index 00000000..c246c54d --- /dev/null +++ b/ui/easydiffusion/package_manager.py @@ -0,0 +1,98 @@ +import sys +import os +import platform +from importlib.metadata import version as pkg_version + +from sdkit.utils import log + +from easydiffusion import app + +# future home of scripts/check_modules.py + +manifest = { + "tensorrt": { + "install": [ + "nvidia-cudnn --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com", + "tensorrt-libs --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com", + "tensorrt --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com", + ], + "uninstall": ["tensorrt"], + # TODO also uninstall tensorrt-libs and nvidia-cudnn, but do it upon restarting (avoid 'file in use' error) + } +} +installing = [] + +# remove this once TRT releases on pypi +if platform.system() == "Windows": + trt_dir = os.path.join(app.ROOT_DIR, "tensorrt") + if os.path.exists(trt_dir): + files = os.listdir(trt_dir) + + packages = manifest["tensorrt"]["install"] + packages = tuple(p.replace("-", "_") for p in packages) + + wheels = [] + for p in packages: + p = p.split(" ")[0] + f = next((f for f in files if f.startswith(p) and f.endswith((".whl", ".tar.gz"))), None) + if f: + wheels.append(os.path.join(trt_dir, f)) + + manifest["tensorrt"]["install"] = wheels + + +def get_installed_packages() -> list: + return {module_name: version(module_name) for module_name in manifest if is_installed(module_name)} + + +def is_installed(module_name) -> bool: + return version(module_name) is not None + + +def install(module_name): + if is_installed(module_name): + log.info(f"{module_name} has already been installed!") + return + if module_name in installing: + log.info(f"{module_name} is already installing!") + return + + if module_name not in manifest: + raise 
RuntimeError(f"Can't install unknown package: {module_name}!") + + commands = manifest[module_name]["install"] + commands = [f"python -m pip install --upgrade {cmd}" for cmd in commands] + + installing.append(module_name) + + try: + for cmd in commands: + print(">", cmd) + if os.system(cmd) != 0: + raise RuntimeError(f"Error while running {cmd}. Please check the logs in the command-line.") + finally: + installing.remove(module_name) + + +def uninstall(module_name): + if not is_installed(module_name): + log.info(f"{module_name} hasn't been installed!") + return + + if module_name not in manifest: + raise RuntimeError(f"Can't uninstall unknown package: {module_name}!") + + commands = manifest[module_name]["uninstall"] + commands = [f"python -m pip uninstall -y {cmd}" for cmd in commands] + + for cmd in commands: + print(">", cmd) + if os.system(cmd) != 0: + raise RuntimeError(f"Error while running {cmd}. Please check the logs in the command-line.") + + +def version(module_name: str) -> str: + try: + return pkg_version(module_name) + except: + return None diff --git a/ui/easydiffusion/renderer.py b/ui/easydiffusion/renderer.py deleted file mode 100644 index a57dfc6c..00000000 --- a/ui/easydiffusion/renderer.py +++ /dev/null @@ -1,279 +0,0 @@ -import json -import pprint -import queue -import time - -from easydiffusion import device_manager -from easydiffusion.types import GenerateImageRequest -from easydiffusion.types import Image as ResponseImage -from easydiffusion.types import Response, TaskData, UserInitiatedStop -from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use -from easydiffusion.utils import get_printable_request, log, save_images_to_disk -from sdkit import Context -from sdkit.filter import apply_filters -from sdkit.generate import generate_images -from sdkit.models import load_model -from sdkit.utils import ( - diffusers_latent_samples_to_images, - gc, - img_to_base64_str, - img_to_buffer, - latent_samples_to_images, - 
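The new `package_manager.py` detects whether an optional package (e.g. `tensorrt`) is installed by asking `importlib.metadata` for its version and treating any lookup failure as "not installed". The probing pattern on its own:

```python
from importlib.metadata import version as pkg_version

def installed_version(dist_name: str):
    """Return the installed distribution's version string, or None if the
    package isn't installed (importlib.metadata raises PackageNotFoundError)."""
    try:
        return pkg_version(dist_name)
    except Exception:
        return None

def is_installed(dist_name: str) -> bool:
    return installed_version(dist_name) is not None
```

Note this queries the installed *distribution* name (what `pip install` uses), which doesn't always match the importable module name; for the packages in the manifest the two happen to coincide.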
get_device_usage, -) - -context = Context() # thread-local -""" -runtime data (bound locally to this thread), for e.g. device, references to loaded models, optimization flags etc -""" - - -def init(device): - """ - Initializes the fields that will be bound to this runtime's context, and sets the current torch device - """ - context.stop_processing = False - context.temp_images = {} - context.partial_x_samples = None - context.model_load_errors = {} - context.enable_codeformer = True - - from easydiffusion import app - - app_config = app.getConfig() - context.test_diffusers = ( - app_config.get("test_diffusers", False) and app_config.get("update_branch", "main") != "main" - ) - - log.info("Device usage during initialization:") - get_device_usage(device, log_info=True, process_usage_only=False) - - device_manager.device_init(context, device) - - -def make_images( - req: GenerateImageRequest, - task_data: TaskData, - data_queue: queue.Queue, - task_temp_images: list, - step_callback, -): - context.stop_processing = False - print_task_info(req, task_data) - - images, seeds = make_images_internal(req, task_data, data_queue, task_temp_images, step_callback) - - res = Response( - req, - task_data, - images=construct_response(images, seeds, task_data, base_seed=req.seed), - ) - res = res.json() - data_queue.put(json.dumps(res)) - log.info("Task completed") - - return res - - -def print_task_info(req: GenerateImageRequest, task_data: TaskData): - req_str = pprint.pformat(get_printable_request(req, task_data)).replace("[", "\[") - task_str = pprint.pformat(task_data.dict()).replace("[", "\[") - log.info(f"request: {req_str}") - log.info(f"task data: {task_str}") - - -def make_images_internal( - req: GenerateImageRequest, - task_data: TaskData, - data_queue: queue.Queue, - task_temp_images: list, - step_callback, -): - images, user_stopped = generate_images_internal( - req, - task_data, - data_queue, - task_temp_images, - step_callback, - task_data.stream_image_progress, - 
task_data.stream_image_progress_interval, - ) - gc(context) - filtered_images = filter_images(req, task_data, images, user_stopped) - - if task_data.save_to_disk_path is not None: - save_images_to_disk(images, filtered_images, req, task_data) - - seeds = [*range(req.seed, req.seed + len(images))] - if task_data.show_only_filtered_image or filtered_images is images: - return filtered_images, seeds - else: - return images + filtered_images, seeds + seeds - - -def generate_images_internal( - req: GenerateImageRequest, - task_data: TaskData, - data_queue: queue.Queue, - task_temp_images: list, - step_callback, - stream_image_progress: bool, - stream_image_progress_interval: int, -): - context.temp_images.clear() - - callback = make_step_callback( - req, - task_data, - data_queue, - task_temp_images, - step_callback, - stream_image_progress, - stream_image_progress_interval, - ) - - try: - if req.init_image is not None and not context.test_diffusers: - req.sampler_name = "ddim" - - images = generate_images(context, callback=callback, **req.dict()) - user_stopped = False - except UserInitiatedStop: - images = [] - user_stopped = True - if context.partial_x_samples is not None: - if context.test_diffusers: - images = diffusers_latent_samples_to_images(context, context.partial_x_samples) - else: - images = latent_samples_to_images(context, context.partial_x_samples) - finally: - if hasattr(context, "partial_x_samples") and context.partial_x_samples is not None: - if not context.test_diffusers: - del context.partial_x_samples - context.partial_x_samples = None - - return images, user_stopped - - -def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped): - if user_stopped: - return images - - if task_data.block_nsfw: - images = apply_filters(context, "nsfw_checker", images) - - if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower(): - default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"] - 
prev_realesrgan_path = None - if task_data.codeformer_upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]: - prev_realesrgan_path = context.model_paths["realesrgan"] - context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan") - load_model(context, "realesrgan") - - try: - images = apply_filters( - context, - "codeformer", - images, - upscale_faces=task_data.codeformer_upscale_faces, - codeformer_fidelity=task_data.codeformer_fidelity, - ) - finally: - if prev_realesrgan_path: - context.model_paths["realesrgan"] = prev_realesrgan_path - load_model(context, "realesrgan") - elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower(): - images = apply_filters(context, "gfpgan", images) - - if task_data.use_upscale: - if "realesrgan" in task_data.use_upscale.lower(): - images = apply_filters(context, "realesrgan", images, scale=task_data.upscale_amount) - elif task_data.use_upscale == "latent_upscaler": - images = apply_filters( - context, - "latent_upscaler", - images, - scale=task_data.upscale_amount, - latent_upscaler_options={ - "prompt": req.prompt, - "negative_prompt": req.negative_prompt, - "seed": req.seed, - "num_inference_steps": task_data.latent_upscaler_steps, - "guidance_scale": 0, - }, - ) - - return images - - -def construct_response(images: list, seeds: list, task_data: TaskData, base_seed: int): - return [ - ResponseImage( - data=img_to_base64_str( - img, - task_data.output_format, - task_data.output_quality, - task_data.output_lossless, - ), - seed=seed, - ) - for img, seed in zip(images, seeds) - ] - - -def make_step_callback( - req: GenerateImageRequest, - task_data: TaskData, - data_queue: queue.Queue, - task_temp_images: list, - step_callback, - stream_image_progress: bool, - stream_image_progress_interval: int, -): - n_steps = req.num_inference_steps if req.init_image is None else int(req.num_inference_steps * req.prompt_strength) - last_callback_time = -1 - - 
-    def update_temp_img(x_samples, task_temp_images: list):
-        partial_images = []
-
-        if context.test_diffusers:
-            images = diffusers_latent_samples_to_images(context, x_samples)
-        else:
-            images = latent_samples_to_images(context, x_samples)
-
-        if task_data.block_nsfw:
-            images = apply_filters(context, "nsfw_checker", images)
-
-        for i, img in enumerate(images):
-            buf = img_to_buffer(img, output_format="JPEG")
-
-            context.temp_images[f"{task_data.request_id}/{i}"] = buf
-            task_temp_images[i] = buf
-            partial_images.append({"path": f"/image/tmp/{task_data.request_id}/{i}"})
-        del images
-        return partial_images
-
-    def on_image_step(x_samples, i, *args):
-        nonlocal last_callback_time
-
-        if context.test_diffusers:
-            context.partial_x_samples = (x_samples, args[0])
-        else:
-            context.partial_x_samples = x_samples
-
-        step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
-        last_callback_time = time.time()
-
-        progress = {"step": i, "step_time": step_time, "total_steps": n_steps}
-
-        if stream_image_progress and stream_image_progress_interval > 0 and i % stream_image_progress_interval == 0:
-            progress["output"] = update_temp_img(context.partial_x_samples, task_temp_images)
-
-        data_queue.put(json.dumps(progress))
-
-        step_callback()
-
-        if context.stop_processing:
-            raise UserInitiatedStop("User requested that we stop processing")
-
-    return on_image_step
diff --git a/ui/easydiffusion/runtime.py b/ui/easydiffusion/runtime.py
new file mode 100644
index 00000000..4098ee8e
--- /dev/null
+++ b/ui/easydiffusion/runtime.py
@@ -0,0 +1,53 @@
+"""
+A runtime that runs on a specific device (in a thread).
+
+It can run various tasks like image generation, image filtering, model merge etc by using that thread-local context.
+
+This creates an `sdkit.Context` that's bound to the device specified while calling the `init()` function.
+"""
+
+from easydiffusion import device_manager
+from easydiffusion.utils import log
+from sdkit import Context
+from sdkit.utils import get_device_usage
+
+context = Context()  # thread-local
+"""
+runtime data (bound locally to this thread), for e.g. device, references to loaded models, optimization flags etc
+"""
+
+
+def init(device):
+    """
+    Initializes the fields that will be bound to this runtime's context, and sets the current torch device
+    """
+    context.stop_processing = False
+    context.temp_images = {}
+    context.partial_x_samples = None
+    context.model_load_errors = {}
+    context.enable_codeformer = True
+
+    from easydiffusion import app
+
+    app_config = app.getConfig()
+    context.test_diffusers = (
+        app_config.get("test_diffusers", False) and app_config.get("update_branch", "main") != "main"
+    )
+
+    log.info("Device usage during initialization:")
+    get_device_usage(device, log_info=True, process_usage_only=False)
+
+    device_manager.device_init(context, device)
+
+
+def set_vram_optimizations(context: Context):
+    from easydiffusion import app
+
+    config = app.getConfig()
+    vram_usage_level = config.get("vram_usage_level", "balanced")
+
+    if vram_usage_level != context.vram_usage_level:
+        context.vram_usage_level = vram_usage_level
+        return True
+
+    return False
diff --git a/ui/easydiffusion/server.py b/ui/easydiffusion/server.py
index d8940bb5..a8f848fd 100644
--- a/ui/easydiffusion/server.py
+++ b/ui/easydiffusion/server.py
@@ -8,8 +8,17 @@ import os
 import traceback
 from typing import List, Union

-from easydiffusion import app, model_manager, task_manager
-from easydiffusion.types import GenerateImageRequest, MergeRequest, TaskData
+from easydiffusion import app, model_manager, task_manager, package_manager
+from easydiffusion.tasks import RenderTask, FilterTask
+from easydiffusion.types import (
+    GenerateImageRequest,
+    FilterImageRequest,
+    MergeRequest,
+    TaskData,
+    ModelsData,
+    OutputFormatData,
+    convert_legacy_render_req_to_new,
+)
 from easydiffusion.utils import log
 from fastapi import FastAPI, HTTPException
 from fastapi.staticfiles import StaticFiles
@@ -86,8 +95,8 @@ def init():
         return set_app_config_internal(req)

     @server_api.get("/get/{key:path}")
-    def read_web_data(key: str = None):
-        return read_web_data_internal(key)
+    def read_web_data(key: str = None, scan_for_malicious: bool = True):
+        return read_web_data_internal(key, scan_for_malicious=scan_for_malicious)

     @server_api.get("/ping")  # Get server and optionally session status.
     def ping(session_id: str = None):
@@ -97,6 +106,10 @@ def init():
     def render(req: dict):
         return render_internal(req)

+    @server_api.post("/filter")
+    def filter(req: dict):
+        return filter_internal(req)
+
     @server_api.post("/model/merge")
     def model_merge(req: dict):
         print(req)
@@ -122,6 +135,10 @@ def init():
     def stop_cloudflare_tunnel(req: dict):
         return stop_cloudflare_tunnel_internal(req)

+    @server_api.post("/package/{package_name:str}")
+    def modify_package(package_name: str, req: dict):
+        return modify_package_internal(package_name, req)
+
     @server_api.get("/")
     def read_root():
         return FileResponse(os.path.join(app.SD_UI_DIR, "index.html"), headers=NOCACHE_HEADERS)
@@ -179,7 +196,7 @@ def update_render_devices_in_config(config, render_devices):
     config["render_devices"] = render_devices


-def read_web_data_internal(key: str = None):
+def read_web_data_internal(key: str = None, **kwargs):
     if not key:  # /get without parameters, stable-diffusion easter egg.
         raise HTTPException(status_code=418, detail="StableDiffusion is drawing a teapot!")  # HTTP418 I'm a teapot
     elif key == "app_config":
@@ -198,7 +215,8 @@ def read_web_data_internal(key: str = None):
         system_info["devices"]["config"] = config.get("render_devices", "auto")
         return JSONResponse(system_info, headers=NOCACHE_HEADERS)
     elif key == "models":
-        return JSONResponse(model_manager.getModels(), headers=NOCACHE_HEADERS)
+        scan_for_malicious = kwargs.get("scan_for_malicious", True)
+        return JSONResponse(model_manager.getModels(scan_for_malicious), headers=NOCACHE_HEADERS)
     elif key == "modifiers":
         return JSONResponse(app.get_image_modifiers(), headers=NOCACHE_HEADERS)
     elif key == "ui_plugins":
@@ -212,24 +230,36 @@ def ping_internal(session_id: str = None):
         if task_manager.current_state_error:
             raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
         raise HTTPException(status_code=500, detail="Render thread is dead.")
+
     if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration):
         raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
+
     # Alive
     response = {"status": str(task_manager.current_state)}
+
     if session_id:
         session = task_manager.get_cached_session(session_id, update_ttl=True)
         response["tasks"] = {id(t): t.status for t in session.tasks}
+
     response["devices"] = task_manager.get_devices()
+    response["packages_installed"] = package_manager.get_installed_packages()
+    response["packages_installing"] = package_manager.installing
+
     if cloudflare.address != None:
         response["cloudflare"] = cloudflare.address
+
     return JSONResponse(response, headers=NOCACHE_HEADERS)


 def render_internal(req: dict):
     try:
+        req = convert_legacy_render_req_to_new(req)
+
         # separate out the request data into rendering and task-specific data
         render_req: GenerateImageRequest = GenerateImageRequest.parse_obj(req)
         task_data: TaskData = TaskData.parse_obj(req)
+        models_data: ModelsData = ModelsData.parse_obj(req)
+        output_format: OutputFormatData = OutputFormatData.parse_obj(req)

         # Overwrite user specified save path
         config = app.getConfig()
@@ -239,28 +269,53 @@ def render_internal(req: dict):
         render_req.init_image_mask = req.get("mask")  # hack: will rename this in the HTTP API in a future revision

         app.save_to_config(
-            task_data.use_stable_diffusion_model,
-            task_data.use_vae_model,
-            task_data.use_hypernetwork_model,
+            models_data.model_paths.get("stable-diffusion"),
+            models_data.model_paths.get("vae"),
+            models_data.model_paths.get("hypernetwork"),
             task_data.vram_usage_level,
         )

         # enqueue the task
-        new_task = task_manager.render(render_req, task_data)
+        task = RenderTask(render_req, task_data, models_data, output_format)
+        return enqueue_task(task)
+    except HTTPException as e:
+        raise e
+    except Exception as e:
+        log.error(traceback.format_exc())
+        raise HTTPException(status_code=500, detail=str(e))
+
+
+def filter_internal(req: dict):
+    try:
+        session_id = req.get("session_id", "session")
+        filter_req: FilterImageRequest = FilterImageRequest.parse_obj(req)
+        models_data: ModelsData = ModelsData.parse_obj(req)
+        output_format: OutputFormatData = OutputFormatData.parse_obj(req)
+
+        # enqueue the task
+        task = FilterTask(filter_req, session_id, models_data, output_format)
+        return enqueue_task(task)
+    except HTTPException as e:
+        raise e
+    except Exception as e:
+        log.error(traceback.format_exc())
+        raise HTTPException(status_code=500, detail=str(e))
+
+
+def enqueue_task(task):
+    try:
+        task_manager.enqueue_task(task)
         response = {
             "status": str(task_manager.current_state),
             "queue": len(task_manager.tasks_queue),
-            "stream": f"/image/stream/{id(new_task)}",
-            "task": id(new_task),
+            "stream": f"/image/stream/{task.id}",
+            "task": task.id,
         }
         return JSONResponse(response, headers=NOCACHE_HEADERS)
     except ChildProcessError as e:  # Render thread is dead
         raise HTTPException(status_code=500, detail=f"Rendering thread has died.")  # HTTP500 Internal Server Error
     except ConnectionRefusedError as e:  # Unstarted task pending limit reached, deny queueing too many.
         raise HTTPException(status_code=503, detail=str(e))  # HTTP503 Service Unavailable
-    except Exception as e:
-        log.error(traceback.format_exc())
-        raise HTTPException(status_code=500, detail=str(e))


 def model_merge_internal(req: dict):
@@ -334,7 +389,8 @@ def get_image_internal(task_id: int, img_id: int):
     except KeyError as e:
         raise HTTPException(status_code=500, detail=str(e))

-#---- Cloudflare Tunnel ----
+
+# ---- Cloudflare Tunnel ----
 class CloudflareTunnel:
     def __init__(self):
         config = app.getConfig()
@@ -357,23 +413,41 @@ class CloudflareTunnel:
         else:
             return None

+
 cloudflare = CloudflareTunnel()

+
 def start_cloudflare_tunnel_internal(req: dict):
-    try:
-        cloudflare.start()
-        log.info(f"- Started cloudflare tunnel. Using address: {cloudflare.address}")
-        return JSONResponse({"address":cloudflare.address})
-    except Exception as e:
-        log.error(str(e))
-        log.error(traceback.format_exc())
-        return HTTPException(status_code=500, detail=str(e))
+    try:
+        cloudflare.start()
+        log.info(f"- Started cloudflare tunnel. Using address: {cloudflare.address}")
+        return JSONResponse({"address": cloudflare.address})
+    except Exception as e:
+        log.error(str(e))
+        log.error(traceback.format_exc())
+        return HTTPException(status_code=500, detail=str(e))
+

 def stop_cloudflare_tunnel_internal(req: dict):
-    try:
-        cloudflare.stop()
-    except Exception as e:
-        log.error(str(e))
-        log.error(traceback.format_exc())
-        return HTTPException(status_code=500, detail=str(e))
+    try:
+        cloudflare.stop()
+    except Exception as e:
+        log.error(str(e))
+        log.error(traceback.format_exc())
+        return HTTPException(status_code=500, detail=str(e))
+
+
+def modify_package_internal(package_name: str, req: dict):
+    try:
+        cmd = req["command"]
+        if cmd not in ("install", "uninstall"):
+            raise RuntimeError(f"Unknown command: {cmd}")
+
+        cmd = getattr(package_manager, cmd)
+        cmd(package_name)
+
+        return JSONResponse({"status": "OK"}, headers=NOCACHE_HEADERS)
+    except Exception as e:
+        log.error(str(e))
+        log.error(traceback.format_exc())
+        return HTTPException(status_code=500, detail=str(e))
diff --git a/ui/easydiffusion/task_manager.py b/ui/easydiffusion/task_manager.py
index a91cd9c6..699b4494 100644
--- a/ui/easydiffusion/task_manager.py
+++ b/ui/easydiffusion/task_manager.py
@@ -17,7 +17,7 @@ from typing import Any, Hashable
 import torch

 from easydiffusion import device_manager
-from easydiffusion.types import GenerateImageRequest, TaskData
+from easydiffusion.tasks import Task
 from easydiffusion.utils import log
 from sdkit.utils import gc

@@ -27,6 +27,7 @@ LOCK_TIMEOUT = 15  # Maximum locking time in seconds before failing a task.
 # It's better to get an exception than a deadlock... ALWAYS use timeout in critical paths.

 DEVICE_START_TIMEOUT = 60  # seconds - Maximum time to wait for a render device to init.
+MAX_OVERLOAD_ALLOWED_RATIO = 2  # i.e. 2x pending tasks compared to the number of render threads


 class SymbolClass(type):  # Print nicely formatted Symbol names.
@@ -58,46 +59,6 @@ class ServerStates:
         pass


-class RenderTask:  # Task with output queue and completion lock.
-    def __init__(self, req: GenerateImageRequest, task_data: TaskData):
-        task_data.request_id = id(self)
-        self.render_request: GenerateImageRequest = req  # Initial Request
-        self.task_data: TaskData = task_data
-        self.response: Any = None  # Copy of the last response
-        self.render_device = None  # Select the task affinity. (Not used to change active devices).
-        self.temp_images: list = [None] * req.num_outputs * (1 if task_data.show_only_filtered_image else 2)
-        self.error: Exception = None
-        self.lock: threading.Lock = threading.Lock()  # Locks at task start and unlocks when task is completed
-        self.buffer_queue: queue.Queue = queue.Queue()  # Queue of JSON string segments
-
-    async def read_buffer_generator(self):
-        try:
-            while not self.buffer_queue.empty():
-                res = self.buffer_queue.get(block=False)
-                self.buffer_queue.task_done()
-                yield res
-        except queue.Empty as e:
-            yield
-
-    @property
-    def status(self):
-        if self.lock.locked():
-            return "running"
-        if isinstance(self.error, StopAsyncIteration):
-            return "stopped"
-        if self.error:
-            return "error"
-        if not self.buffer_queue.empty():
-            return "buffer"
-        if self.response:
-            return "completed"
-        return "pending"
-
-    @property
-    def is_pending(self):
-        return bool(not self.response and not self.error)
-
-
 # Temporary cache to allow to query tasks results for a short time after they are completed.
 class DataCache:
     def __init__(self):
@@ -123,8 +84,8 @@ class DataCache:
         # Remove Items
         for key in to_delete:
             (_, val) = self._base[key]
-            if isinstance(val, RenderTask):
-                log.debug(f"RenderTask {key} expired. Data removed.")
+            if isinstance(val, Task):
+                log.debug(f"Task {key} expired. Data removed.")
             elif isinstance(val, SessionState):
                 log.debug(f"Session {key} expired. Data removed.")
             else:
@@ -220,8 +181,8 @@ class SessionState:
             tasks.append(task)
         return tasks

-    def put(self, task, ttl=TASK_TTL):
-        task_id = id(task)
+    def put(self, task: Task, ttl=TASK_TTL):
+        task_id = task.id
         self._tasks_ids.append(task_id)
         if not task_cache.put(task_id, task, ttl):
             return False
@@ -230,11 +191,16 @@ class SessionState:
         return True


+def keep_task_alive(task: Task):
+    task_cache.keep(task.id, TASK_TTL)
+    session_cache.keep(task.session_id, TASK_TTL)
+
+
 def thread_get_next_task():
-    from easydiffusion import renderer
+    from easydiffusion import runtime

     if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
-        log.warn(f"Render thread on device: {renderer.context.device} failed to acquire manager lock.")
+        log.warn(f"Render thread on device: {runtime.context.device} failed to acquire manager lock.")
         return None
     if len(tasks_queue) <= 0:
         manager_lock.release()
@@ -242,7 +208,7 @@ def thread_get_next_task():
     task = None
     try:  # Select a render task.
         for queued_task in tasks_queue:
-            if queued_task.render_device and renderer.context.device != queued_task.render_device:
+            if queued_task.render_device and runtime.context.device != queued_task.render_device:
                 # Is asking for a specific render device.
                 if is_alive(queued_task.render_device) > 0:
                     continue  # requested device alive, skip current one.
                 else:
                     # Requested device is not active, return error to UI.
                     queued_task.error = Exception(queued_task.render_device + " is not currently active.")
                     task = queued_task
                     break
-            if not queued_task.render_device and renderer.context.device == "cpu" and is_alive() > 1:
+            if not queued_task.render_device and runtime.context.device == "cpu" and is_alive() > 1:
                 # not asking for any specific devices, cpu want to grab task but other render devices are alive.
                 continue  # Skip Tasks, don't run on CPU unless there is nothing else or user asked for it.
             task = queued_task
@@ -266,19 +232,19 @@ def thread_get_next_task():

 def thread_render(device):
     global current_state, current_state_error

-    from easydiffusion import model_manager, renderer
+    from easydiffusion import model_manager, runtime

     try:
-        renderer.init(device)
+        runtime.init(device)
         weak_thread_data[threading.current_thread()] = {
-            "device": renderer.context.device,
-            "device_name": renderer.context.device_name,
+            "device": runtime.context.device,
+            "device_name": runtime.context.device_name,
             "alive": True,
         }

         current_state = ServerStates.LoadingModel
-        model_manager.load_default_models(renderer.context)
+        model_manager.load_default_models(runtime.context)

         current_state = ServerStates.Online
     except Exception as e:
@@ -290,8 +256,8 @@ def thread_render(device):
         session_cache.clean()
         task_cache.clean()
         if not weak_thread_data[threading.current_thread()]["alive"]:
-            log.info(f"Shutting down thread for device {renderer.context.device}")
-            model_manager.unload_all(renderer.context)
+            log.info(f"Shutting down thread for device {runtime.context.device}")
+            model_manager.unload_all(runtime.context)
             return
         if isinstance(current_state_error, SystemExit):
             current_state = ServerStates.Unavailable
@@ -311,62 +277,31 @@ def thread_render(device):
             task.response = {"status": "failed", "detail": str(task.error)}
             task.buffer_queue.put(json.dumps(task.response))
             continue
-        log.info(f"Session {task.task_data.session_id} starting task {id(task)} on {renderer.context.device_name}")
+        log.info(f"Session {task.session_id} starting task {task.id} on {runtime.context.device_name}")
         if not task.lock.acquire(blocking=False):
             raise Exception("Got locked task from queue.")
         try:
+            task.run()

-            def step_callback():
-                global current_state_error
-
-                task_cache.keep(id(task), TASK_TTL)
-                session_cache.keep(task.task_data.session_id, TASK_TTL)
-
-                if (
-                    isinstance(current_state_error, SystemExit)
-                    or isinstance(current_state_error, StopAsyncIteration)
-                    or isinstance(task.error, StopAsyncIteration)
-                ):
-                    renderer.context.stop_processing = True
-                    if isinstance(current_state_error, StopAsyncIteration):
-                        task.error = current_state_error
-                        current_state_error = None
-                        log.info(f"Session {task.task_data.session_id} sent cancel signal for task {id(task)}")
-
-            current_state = ServerStates.LoadingModel
-            model_manager.resolve_model_paths(task.task_data)
-            model_manager.reload_models_if_necessary(renderer.context, task.task_data)
-            model_manager.fail_if_models_did_not_load(renderer.context)
-
-            current_state = ServerStates.Rendering
-            task.response = renderer.make_images(
-                task.render_request,
-                task.task_data,
-                task.buffer_queue,
-                task.temp_images,
-                step_callback,
-            )
             # Before looping back to the generator, mark cache as still alive.
-            task_cache.keep(id(task), TASK_TTL)
-            session_cache.keep(task.task_data.session_id, TASK_TTL)
+            keep_task_alive(task)
         except Exception as e:
             task.error = str(e)
             task.response = {"status": "failed", "detail": str(task.error)}
             task.buffer_queue.put(json.dumps(task.response))
             log.error(traceback.format_exc())
         finally:
-            gc(renderer.context)
+            gc(runtime.context)
             task.lock.release()
-        task_cache.keep(id(task), TASK_TTL)
-        session_cache.keep(task.task_data.session_id, TASK_TTL)
+
+        keep_task_alive(task)
+
         if isinstance(task.error, StopAsyncIteration):
-            log.info(f"Session {task.task_data.session_id} task {id(task)} cancelled!")
+            log.info(f"Session {task.session_id} task {task.id} cancelled!")
         elif task.error is not None:
-            log.info(f"Session {task.task_data.session_id} task {id(task)} failed!")
+            log.info(f"Session {task.session_id} task {task.id} failed!")
         else:
-            log.info(
-                f"Session {task.task_data.session_id} task {id(task)} completed by {renderer.context.device_name}."
-            )
+            log.info(f"Session {task.session_id} task {task.id} completed by {runtime.context.device_name}.")
         current_state = ServerStates.Online
@@ -438,6 +373,12 @@ def get_devices():
     finally:
         manager_lock.release()

+    # temp until TRT releases
+    import os
+    from easydiffusion import app
+
+    devices["enable_trt"] = os.path.exists(os.path.join(app.ROOT_DIR, "tensorrt"))
+
     return devices


@@ -548,28 +489,27 @@ def shutdown_event():  # Signal render thread to close on shutdown
     current_state_error = SystemExit("Application shutting down.")


-def render(render_req: GenerateImageRequest, task_data: TaskData):
+def enqueue_task(task: Task):
     current_thread_count = is_alive()
     if current_thread_count <= 0:  # Render thread is dead
         raise ChildProcessError("Rendering thread has died.")

     # Alive, check if task in cache
-    session = get_cached_session(task_data.session_id, update_ttl=True)
+    session = get_cached_session(task.session_id, update_ttl=True)
     pending_tasks = list(filter(lambda t: t.is_pending, session.tasks))
-    if current_thread_count < len(pending_tasks):
+    if len(pending_tasks) > current_thread_count * MAX_OVERLOAD_ALLOWED_RATIO:
         raise ConnectionRefusedError(
-            f"Session {task_data.session_id} already has {len(pending_tasks)} pending tasks out of {current_thread_count}."
+            f"Session {task.session_id} already has {len(pending_tasks)} pending tasks, with {current_thread_count} workers."
         )

-    new_task = RenderTask(render_req, task_data)
-    if session.put(new_task, TASK_TTL):
+    if session.put(task, TASK_TTL):
         # Use twice the normal timeout for adding user requests.
         # Tries to force session.put to fail before tasks_queue.put would.
         if manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT * 2):
             try:
-                tasks_queue.append(new_task)
+                tasks_queue.append(task)
                 idle_event.set()
-                return new_task
+                return task
             finally:
                 manager_lock.release()
     raise RuntimeError("Failed to add task to cache.")
diff --git a/ui/easydiffusion/tasks/__init__.py b/ui/easydiffusion/tasks/__init__.py
new file mode 100644
index 00000000..1d295da8
--- /dev/null
+++ b/ui/easydiffusion/tasks/__init__.py
@@ -0,0 +1,3 @@
+from .task import Task
+from .render_images import RenderTask
+from .filter_images import FilterTask
diff --git a/ui/easydiffusion/tasks/filter_images.py b/ui/easydiffusion/tasks/filter_images.py
new file mode 100644
index 00000000..c4e674d7
--- /dev/null
+++ b/ui/easydiffusion/tasks/filter_images.py
@@ -0,0 +1,110 @@
+import json
+import pprint
+
+from sdkit.filter import apply_filters
+from sdkit.models import load_model
+from sdkit.utils import img_to_base64_str, log
+
+from easydiffusion import model_manager, runtime
+from easydiffusion.types import FilterImageRequest, FilterImageResponse, ModelsData, OutputFormatData
+
+from .task import Task
+
+
+class FilterTask(Task):
+    "For applying filters to input images"
+
+    def __init__(
+        self, req: FilterImageRequest, session_id: str, models_data: ModelsData, output_format: OutputFormatData
+    ):
+        super().__init__(session_id)
+
+        self.request = req
+        self.models_data = models_data
+        self.output_format = output_format
+
+        # convert to multi-filter format, if necessary
+        if isinstance(req.filter, str):
+            req.filter_params = {req.filter: req.filter_params}
+            req.filter = [req.filter]
+
+        if not isinstance(req.image, list):
+            req.image = [req.image]
+
+    def run(self):
+        "Runs the image filtering task on the assigned thread"
+
+        context = runtime.context
+
+        model_manager.resolve_model_paths(self.models_data)
+        model_manager.reload_models_if_necessary(context, self.models_data)
+        model_manager.fail_if_models_did_not_load(context)
+
+        print_task_info(self.request, self.models_data, self.output_format)
+
+        images = filter_images(context, self.request.image, self.request.filter, self.request.filter_params)
+
+        output_format = self.output_format
+        images = [
+            img_to_base64_str(
+                img, output_format.output_format, output_format.output_quality, output_format.output_lossless
+            )
+            for img in images
+        ]
+
+        res = FilterImageResponse(self.request, self.models_data, images=images)
+        res = res.json()
+        self.buffer_queue.put(json.dumps(res))
+
+        log.info("Filter task completed")
+
+        self.response = res
+
+
+def filter_images(context, images, filters, filter_params={}):
+    filters = filters if isinstance(filters, list) else [filters]
+
+    for filter_name in filters:
+        params = filter_params.get(filter_name, {})
+
+        previous_state = before_filter(context, filter_name, params)
+
+        try:
+            images = apply_filters(context, filter_name, images, **params)
+        finally:
+            after_filter(context, filter_name, params, previous_state)
+
+    return images
+
+
+def before_filter(context, filter_name, filter_params):
+    if filter_name == "codeformer":
+        from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use
+
+        default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
+        prev_realesrgan_path = None
+
+        upscale_faces = filter_params.get("upscale_faces", False)
+        if upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]:
+            prev_realesrgan_path = context.model_paths.get("realesrgan")
+            context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
+            load_model(context, "realesrgan")
+
+        return prev_realesrgan_path
+
+
+def after_filter(context, filter_name, filter_params, previous_state):
+    if filter_name == "codeformer":
+        prev_realesrgan_path = previous_state
+        if prev_realesrgan_path:
+            context.model_paths["realesrgan"] = prev_realesrgan_path
+            load_model(context, "realesrgan")
+
+
+def print_task_info(req: FilterImageRequest, models_data: ModelsData, output_format: OutputFormatData):
+    req_str = pprint.pformat({"filter": req.filter, "filter_params": req.filter_params}).replace("[", "\[")
+    models_data = pprint.pformat(models_data.dict()).replace("[", "\[")
+    output_format = pprint.pformat(output_format.dict()).replace("[", "\[")
+
+    log.info(f"request: {req_str}")
+    log.info(f"models data: {models_data}")
+    log.info(f"output format: {output_format}")
diff --git a/ui/easydiffusion/tasks/render_images.py b/ui/easydiffusion/tasks/render_images.py
new file mode 100644
index 00000000..bdf6e3ac
--- /dev/null
+++ b/ui/easydiffusion/tasks/render_images.py
@@ -0,0 +1,340 @@
+import json
+import pprint
+import queue
+import time
+
+from easydiffusion import model_manager, runtime
+from easydiffusion.types import GenerateImageRequest, ModelsData, OutputFormatData
+from easydiffusion.types import Image as ResponseImage
+from easydiffusion.types import GenerateImageResponse, TaskData, UserInitiatedStop
+from easydiffusion.utils import get_printable_request, log, save_images_to_disk
+from sdkit.generate import generate_images
+from sdkit.utils import (
+    diffusers_latent_samples_to_images,
+    gc,
+    img_to_base64_str,
+    img_to_buffer,
+    latent_samples_to_images,
+    log,
+)
+
+from .task import Task
+from .filter_images import filter_images
+
+
+class RenderTask(Task):
+    "For image generation"
+
+    def __init__(
+        self, req: GenerateImageRequest, task_data: TaskData, models_data: ModelsData, output_format: OutputFormatData
+    ):
+        super().__init__(task_data.session_id)
+
+        task_data.request_id = self.id
+        self.render_request: GenerateImageRequest = req  # Initial Request
+        self.task_data: TaskData = task_data
+        self.models_data = models_data
+        self.output_format = output_format
+        self.temp_images: list = [None] * req.num_outputs * (1 if task_data.show_only_filtered_image else 2)
+
+    def run(self):
+        "Runs the image generation task on the assigned thread"
+
+        from easydiffusion import task_manager
+
+        context = runtime.context
+
+        def step_callback():
+            task_manager.keep_task_alive(self)
+            task_manager.current_state = task_manager.ServerStates.Rendering
+
+            if isinstance(task_manager.current_state_error, (SystemExit, StopAsyncIteration)) or isinstance(
+                self.error, StopAsyncIteration
+            ):
+                context.stop_processing = True
+                if isinstance(task_manager.current_state_error, StopAsyncIteration):
+                    self.error = task_manager.current_state_error
+                    task_manager.current_state_error = None
+                    log.info(f"Session {self.session_id} sent cancel signal for task {self.id}")
+
+        task_manager.current_state = task_manager.ServerStates.LoadingModel
+        model_manager.resolve_model_paths(self.models_data)
+
+        models_to_force_reload = []
+        if (
+            runtime.set_vram_optimizations(context)
+            or self.has_param_changed(context, "clip_skip")
+            or self.trt_needs_reload(context)
+        ):
+            models_to_force_reload.append("stable-diffusion")
+
+        model_manager.reload_models_if_necessary(context, self.models_data, models_to_force_reload)
+        model_manager.fail_if_models_did_not_load(context)
+
+        task_manager.current_state = task_manager.ServerStates.Rendering
+        self.response = make_images(
+            context,
+            self.render_request,
+            self.task_data,
+            self.models_data,
+            self.output_format,
+            self.buffer_queue,
+            self.temp_images,
+            step_callback,
+        )
+
+    def has_param_changed(self, context, param_name):
+        if not context.test_diffusers:
+            return False
+        if "stable-diffusion" not in context.models or "params" not in context.models["stable-diffusion"]:
+            return True
+
+        model = context.models["stable-diffusion"]
+        new_val = self.models_data.model_params.get("stable-diffusion", {}).get(param_name, False)
+        return model["params"].get(param_name) != new_val
+
+    def trt_needs_reload(self, context):
+        if not context.test_diffusers:
+            return False
+        if "stable-diffusion" not in context.models or "params" not in context.models["stable-diffusion"]:
+            return True
+
+        model = context.models["stable-diffusion"]
+
+        # curr_convert_to_trt = model["params"].get("convert_to_tensorrt")
+        new_convert_to_trt = self.models_data.model_params.get("stable-diffusion", {}).get("convert_to_tensorrt", False)
+
+        pipe = model["default"]
+        is_trt_loaded = hasattr(pipe.unet, "_allocate_trt_buffers") or hasattr(
+            pipe.unet, "_allocate_trt_buffers_backup"
+        )
+        if new_convert_to_trt and not is_trt_loaded:
+            return True
+
+        curr_build_config = model["params"].get("trt_build_config")
+        new_build_config = self.models_data.model_params.get("stable-diffusion", {}).get("trt_build_config", {})
+
+        return new_convert_to_trt and curr_build_config != new_build_config
+
+
+def make_images(
+    context,
+    req: GenerateImageRequest,
+    task_data: TaskData,
+    models_data: ModelsData,
+    output_format: OutputFormatData,
+    data_queue: queue.Queue,
+    task_temp_images: list,
+    step_callback,
+):
+    context.stop_processing = False
+    print_task_info(req, task_data, models_data, output_format)
+
+    images, seeds = make_images_internal(
+        context, req, task_data, models_data, output_format, data_queue, task_temp_images, step_callback
+    )
+
+    res = GenerateImageResponse(
+        req, task_data, models_data, output_format, images=construct_response(images, seeds, output_format)
+    )
+    res = res.json()
+    data_queue.put(json.dumps(res))
+    log.info("Task completed")
+
+    return res
+
+
+def print_task_info(
+    req: GenerateImageRequest, task_data: TaskData, models_data: ModelsData, output_format: OutputFormatData
+):
+    req_str = pprint.pformat(get_printable_request(req, task_data, output_format)).replace("[", "\[")
+    task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
+    models_data = pprint.pformat(models_data.dict()).replace("[", "\[")
+    output_format = pprint.pformat(output_format.dict()).replace("[", "\[")
+
+    log.info(f"request: {req_str}")
+    log.info(f"task data: {task_str}")
+    # log.info(f"models data: {models_data}")
+    log.info(f"output format: {output_format}")
+
+
+def make_images_internal(
+    context,
+    req: GenerateImageRequest,
+    task_data: TaskData,
+    models_data: ModelsData,
+    output_format: OutputFormatData,
+    data_queue: queue.Queue,
+    task_temp_images: list,
+    step_callback,
+):
+    images, user_stopped = generate_images_internal(
+        context,
+        req,
+        task_data,
+        models_data,
+        data_queue,
+        task_temp_images,
+        step_callback,
+        task_data.stream_image_progress,
+        task_data.stream_image_progress_interval,
+    )
+
+    gc(context)
+
+    filters, filter_params = task_data.filters, task_data.filter_params
+    filtered_images = filter_images(context, images, filters, filter_params) if not user_stopped else images
+
+    if task_data.save_to_disk_path is not None:
+        save_images_to_disk(images, filtered_images, req, task_data, output_format)
+
+    seeds = [*range(req.seed, req.seed + len(images))]
+    if task_data.show_only_filtered_image or filtered_images is images:
+        return filtered_images, seeds
+    else:
+        return images + filtered_images, seeds + seeds
+
+
+def generate_images_internal(
+    context,
+    req: GenerateImageRequest,
+    task_data: TaskData,
+    models_data: ModelsData,
+    data_queue: queue.Queue,
+    task_temp_images: list,
+    step_callback,
+    stream_image_progress: bool,
+    stream_image_progress_interval: int,
+):
+    context.temp_images.clear()
+
+    callback = make_step_callback(
+        context,
+        req,
+        task_data,
+        data_queue,
+        task_temp_images,
+        step_callback,
+        stream_image_progress,
+        stream_image_progress_interval,
+    )
+
+    try:
+        if req.init_image is not None and not context.test_diffusers:
+            req.sampler_name = "ddim"
+
+        req.width, req.height = map(lambda x: x - x % 8, (req.width, req.height))  # clamp to 8
+
+        if req.control_image and task_data.control_filter_to_apply:
+            req.control_image = filter_images(context, req.control_image, task_data.control_filter_to_apply)[0]
+
+        if context.test_diffusers:
+            pipe = context.models["stable-diffusion"]["default"]
+            if hasattr(pipe.unet, "_allocate_trt_buffers_backup"):
+                setattr(pipe.unet, "_allocate_trt_buffers", pipe.unet._allocate_trt_buffers_backup)
+                delattr(pipe.unet, "_allocate_trt_buffers_backup")
+
+            if hasattr(pipe.unet, "_allocate_trt_buffers"):
+                convert_to_trt = models_data.model_params["stable-diffusion"].get("convert_to_tensorrt", False)
+                if convert_to_trt:
+                    pipe.unet.forward = pipe.unet._trt_forward
+                    # pipe.vae.decoder.forward = pipe.vae.decoder._trt_forward
+                    log.info(f"Setting unet.forward to TensorRT")
+                else:
+                    log.info(f"Not using TensorRT for unet.forward")
+                    pipe.unet.forward = pipe.unet._non_trt_forward
+                    # pipe.vae.decoder.forward = pipe.vae.decoder._non_trt_forward
+                    setattr(pipe.unet, "_allocate_trt_buffers_backup", pipe.unet._allocate_trt_buffers)
+                    delattr(pipe.unet, "_allocate_trt_buffers")
+
+        images = generate_images(context, callback=callback, **req.dict())
+        user_stopped = False
+    except UserInitiatedStop:
+        images = []
+        user_stopped = True
+        if context.partial_x_samples is not None:
+            if context.test_diffusers:
+                images = diffusers_latent_samples_to_images(context, context.partial_x_samples)
+            else:
+                images = latent_samples_to_images(context, context.partial_x_samples)
+    finally:
+        if hasattr(context, "partial_x_samples") and context.partial_x_samples is not None:
+            if not context.test_diffusers:
+                del context.partial_x_samples
+            context.partial_x_samples = None
+
+    return images, user_stopped
+
+
+def construct_response(images: list, seeds: list, output_format: OutputFormatData):
+    return [
+        ResponseImage(
+            data=img_to_base64_str(
+                img,
+                output_format.output_format,
+                output_format.output_quality,
+                output_format.output_lossless,
+            ),
+            seed=seed,
+        )
+        for img, seed in zip(images, seeds)
+    ]
+
+
+def make_step_callback(
+    context,
+    req: GenerateImageRequest,
+    task_data: TaskData,
+    data_queue: queue.Queue,
+    task_temp_images: list,
+    step_callback,
+    stream_image_progress: bool,
+    stream_image_progress_interval: int,
+):
+    n_steps = req.num_inference_steps if req.init_image is None else int(req.num_inference_steps * req.prompt_strength)
+    last_callback_time = -1
+
+    def update_temp_img(x_samples, task_temp_images: list):
+        partial_images = []
+
+        if context.test_diffusers:
+            images = diffusers_latent_samples_to_images(context, x_samples)
+        else:
+            images = latent_samples_to_images(context, x_samples)
+
+        if task_data.block_nsfw:
+            images = filter_images(context, images, "nsfw_checker")
+
+        for i, img in enumerate(images):
+            buf = img_to_buffer(img, output_format="JPEG")
+
+            context.temp_images[f"{task_data.request_id}/{i}"] = buf
+            task_temp_images[i] = buf
+            partial_images.append({"path": f"/image/tmp/{task_data.request_id}/{i}"})
+        del images
+        return partial_images
+
+    def on_image_step(x_samples, i, *args):
+        nonlocal last_callback_time
+
+        if context.test_diffusers:
+            context.partial_x_samples = (x_samples, args[0])
+        else:
+            context.partial_x_samples = x_samples
+
+        step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
+        last_callback_time = time.time()
+
+        progress = {"step": i, "step_time": step_time, "total_steps": n_steps}
+
+        if stream_image_progress and stream_image_progress_interval > 0 and i % stream_image_progress_interval == 0:
+            progress["output"] = update_temp_img(context.partial_x_samples, task_temp_images)
+
+        data_queue.put(json.dumps(progress))
+
+        step_callback()
+
+        if context.stop_processing:
+            raise UserInitiatedStop("User requested that we stop processing")
+
+    return on_image_step
diff --git a/ui/easydiffusion/tasks/task.py b/ui/easydiffusion/tasks/task.py
new file mode 100644
index 00000000..4454efe6
--- /dev/null
+++ b/ui/easydiffusion/tasks/task.py
@@ -0,0 +1,47 @@
+from threading import Lock
+from queue import Queue, Empty as EmptyQueueException
+from typing import Any
+
+
+class Task:
+    "Task with output queue and completion lock"
+
+    def __init__(self, session_id):
+        self.id = id(self)
+        self.session_id = session_id
+        self.render_device = None  # Select the task affinity. (Not used to change active devices).
+ self.error: Exception = None + self.lock: Lock = Lock() # Locks at task start and unlocks when task is completed + self.buffer_queue: Queue = Queue() # Queue of JSON string segments + self.response: Any = None # Copy of the last response + + async def read_buffer_generator(self): + try: + while not self.buffer_queue.empty(): + res = self.buffer_queue.get(block=False) + self.buffer_queue.task_done() + yield res + except EmptyQueueException as e: + yield + + @property + def status(self): + if self.lock.locked(): + return "running" + if isinstance(self.error, StopAsyncIteration): + return "stopped" + if self.error: + return "error" + if not self.buffer_queue.empty(): + return "buffer" + if self.response: + return "completed" + return "pending" + + @property + def is_pending(self): + return bool(not self.response and not self.error) + + def run(self): + "Override this to implement the task's behavior" + pass diff --git a/ui/easydiffusion/types.py b/ui/easydiffusion/types.py index abf8db29..fe936ca2 100644 --- a/ui/easydiffusion/types.py +++ b/ui/easydiffusion/types.py @@ -1,4 +1,4 @@ -from typing import Any +from typing import Any, List, Dict, Union from pydantic import BaseModel @@ -17,36 +17,68 @@ class GenerateImageRequest(BaseModel): init_image: Any = None init_image_mask: Any = None + control_image: Any = None + control_alpha: Union[float, List[float]] = None prompt_strength: float = 0.8 preserve_init_image_color_profile = False + strict_mask_border = False sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms" hypernetwork_strength: float = 0 - lora_alpha: float = 0 + lora_alpha: Union[float, List[float]] = 0 tiling: str = "none" # "none", "x", "y", "xy" +class FilterImageRequest(BaseModel): + image: Any = None + filter: Union[str, List[str]] = None + filter_params: dict = {} + + +class ModelsData(BaseModel): + """ + Contains the information related to the models involved in a request. 
+ + - To load a model: set the relative path(s) to the model in `model_paths`. No effect if already loaded. + - To unload a model: set the model to `None` in `model_paths`. No effect if already unloaded. + + Models that aren't present in `model_paths` will not be changed. + """ + + model_paths: Dict[str, Union[str, None, List[str]]] = None + "model_type to string path, or list of string paths" + + model_params: Dict[str, Dict[str, Any]] = {} + "model_type to dict of parameters" + + +class OutputFormatData(BaseModel): + output_format: str = "jpeg" # or "png" or "webp" + output_quality: int = 75 + output_lossless: bool = False + + class TaskData(BaseModel): request_id: str = None session_id: str = "session" save_to_disk_path: str = None vram_usage_level: str = "balanced" # or "low" or "medium" - use_face_correction: str = None # or "GFPGANv1.3" - use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" or "latent_upscaler" + use_face_correction: Union[str, List[str]] = None # or "GFPGANv1.3" + use_upscale: Union[str, List[str]] = None upscale_amount: int = 4 # or 2 latent_upscaler_steps: int = 10 - use_stable_diffusion_model: str = "sd-v1-4" - # use_stable_diffusion_config: str = "v1-inference" - use_vae_model: str = None - use_hypernetwork_model: str = None - use_lora_model: str = None + use_stable_diffusion_model: Union[str, List[str]] = "sd-v1-4" + use_vae_model: Union[str, List[str]] = None + use_hypernetwork_model: Union[str, List[str]] = None + use_lora_model: Union[str, List[str]] = None + use_controlnet_model: Union[str, List[str]] = None + filters: List[str] = [] + filter_params: Dict[str, Dict[str, Any]] = {} + control_filter_to_apply: Union[str, List[str]] = None show_only_filtered_image: bool = False block_nsfw: bool = False - output_format: str = "jpeg" # or "png" or "webp" - output_quality: int = 75 - output_lossless: bool = False metadata_output_format: str = "txt" # or "json" stream_image_progress: bool = False 
stream_image_progress_interval: int = 5 @@ -81,24 +113,39 @@ class Image: } -class Response: +class GenerateImageResponse: render_request: GenerateImageRequest task_data: TaskData + models_data: ModelsData images: list - def __init__(self, render_request: GenerateImageRequest, task_data: TaskData, images: list): + def __init__( + self, + render_request: GenerateImageRequest, + task_data: TaskData, + models_data: ModelsData, + output_format: OutputFormatData, + images: list, + ): self.render_request = render_request self.task_data = task_data + self.models_data = models_data + self.output_format = output_format self.images = images def json(self): del self.render_request.init_image del self.render_request.init_image_mask + del self.render_request.control_image + + task_data = self.task_data.dict() + task_data.update(self.output_format.dict()) res = { "status": "succeeded", "render_request": self.render_request.dict(), - "task_data": self.task_data.dict(), + "task_data": task_data, + # "models_data": self.models_data.dict(), # haven't migrated the UI to the new format (yet) "output": [], } @@ -108,5 +155,111 @@ class Response: return res +class FilterImageResponse: + request: FilterImageRequest + models_data: ModelsData + images: list + + def __init__(self, request: FilterImageRequest, models_data: ModelsData, images: list): + self.request = request + self.models_data = models_data + self.images = images + + def json(self): + del self.request.image + + res = { + "status": "succeeded", + "request": self.request.dict(), + "models_data": self.models_data.dict(), + "output": [], + } + + for image in self.images: + res["output"].append(image) + + return res + + class UserInitiatedStop(Exception): pass + + +def convert_legacy_render_req_to_new(old_req: dict): + new_req = dict(old_req) + + # new keys + model_paths = new_req["model_paths"] = {} + model_params = new_req["model_params"] = {} + filters = new_req["filters"] = [] + filter_params = new_req["filter_params"] = {} + 
+ # move the model info + model_paths["stable-diffusion"] = old_req.get("use_stable_diffusion_model") + model_paths["vae"] = old_req.get("use_vae_model") + model_paths["hypernetwork"] = old_req.get("use_hypernetwork_model") + model_paths["lora"] = old_req.get("use_lora_model") + model_paths["controlnet"] = old_req.get("use_controlnet_model") + + model_paths["gfpgan"] = old_req.get("use_face_correction", "") + model_paths["gfpgan"] = model_paths["gfpgan"] if "gfpgan" in model_paths["gfpgan"].lower() else None + + model_paths["codeformer"] = old_req.get("use_face_correction", "") + model_paths["codeformer"] = model_paths["codeformer"] if "codeformer" in model_paths["codeformer"].lower() else None + + model_paths["realesrgan"] = old_req.get("use_upscale", "") + model_paths["realesrgan"] = model_paths["realesrgan"] if "realesrgan" in model_paths["realesrgan"].lower() else None + + model_paths["latent_upscaler"] = old_req.get("use_upscale", "") + model_paths["latent_upscaler"] = ( + model_paths["latent_upscaler"] if "latent_upscaler" in model_paths["latent_upscaler"].lower() else None + ) + if "control_filter_to_apply" in old_req: + filter_model = old_req["control_filter_to_apply"] + model_paths[filter_model] = filter_model + + if old_req.get("block_nsfw"): + model_paths["nsfw_checker"] = "nsfw_checker" + + # move the model params + if model_paths["stable-diffusion"]: + model_params["stable-diffusion"] = { + "clip_skip": bool(old_req.get("clip_skip", False)), + "convert_to_tensorrt": bool(old_req.get("convert_to_tensorrt", False)), + "trt_build_config": old_req.get( + "trt_build_config", {"batch_size_range": (1, 1), "dimensions_range": [(768, 1024)]} + ), + } + + # move the filter params + if model_paths["realesrgan"]: + filter_params["realesrgan"] = {"scale": int(old_req.get("upscale_amount", 4))} + if model_paths["latent_upscaler"]: + filter_params["latent_upscaler"] = { + "prompt": old_req["prompt"], + "negative_prompt": old_req.get("negative_prompt"), + "seed": 
int(old_req.get("seed", 42)), + "num_inference_steps": int(old_req.get("latent_upscaler_steps", 10)), + "guidance_scale": 0, + } + if model_paths["codeformer"]: + filter_params["codeformer"] = { + "upscale_faces": bool(old_req.get("codeformer_upscale_faces", True)), + "codeformer_fidelity": float(old_req.get("codeformer_fidelity", 0.5)), + } + + # set the filters + if old_req.get("block_nsfw"): + filters.append("nsfw_checker") + + if model_paths["codeformer"]: + filters.append("codeformer") + elif model_paths["gfpgan"]: + filters.append("gfpgan") + + if model_paths["realesrgan"]: + filters.append("realesrgan") + elif model_paths["latent_upscaler"]: + filters.append("latent_upscaler") + + return new_req diff --git a/ui/easydiffusion/utils/save_utils.py b/ui/easydiffusion/utils/save_utils.py index ff2906a6..89dae991 100644 --- a/ui/easydiffusion/utils/save_utils.py +++ b/ui/easydiffusion/utils/save_utils.py @@ -1,11 +1,13 @@ import os import re import time +import regex + from datetime import datetime from functools import reduce from easydiffusion import app -from easydiffusion.types import GenerateImageRequest, TaskData +from easydiffusion.types import GenerateImageRequest, TaskData, OutputFormatData from numpy import base_repr from sdkit.utils import save_dicts, save_images @@ -19,6 +21,8 @@ TASK_TEXT_MAPPING = { "seed": "Seed", "use_stable_diffusion_model": "Stable Diffusion model", "clip_skip": "Clip Skip", + "use_controlnet_model": "ControlNet model", + "control_filter_to_apply": "ControlNet Filter", "use_vae_model": "VAE model", "sampler_name": "Sampler", "width": "Width", @@ -30,11 +34,12 @@ TASK_TEXT_MAPPING = { "lora_alpha": "LoRA Strength", "use_hypernetwork_model": "Hypernetwork model", "hypernetwork_strength": "Hypernetwork Strength", + "use_embedding_models": "Embedding models", "tiling": "Seamless Tiling", "use_face_correction": "Use Face Correction", "use_upscale": "Use Upscaling", "upscale_amount": "Upscale By", - "latent_upscaler_steps": "Latent 
Upscaler Steps" + "latent_upscaler_steps": "Latent Upscaler Steps", } time_placeholders = { @@ -111,12 +116,14 @@ def format_file_name( return filename_regex.sub("_", format) -def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData): +def save_images_to_disk( + images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData, output_format: OutputFormatData +): now = time.time() app_config = app.getConfig() folder_format = app_config.get("folder_format", "$id") save_dir_path = os.path.join(task_data.save_to_disk_path, format_folder_name(folder_format, req, task_data)) - metadata_entries = get_metadata_entries_for_request(req, task_data) + metadata_entries = get_metadata_entries_for_request(req, task_data, output_format) file_number = calculate_img_number(save_dir_path, task_data) make_filename = make_filename_callback( app_config.get("filename_format", "$p_$tsb64"), @@ -131,9 +138,9 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR filtered_images, save_dir_path, file_name=make_filename, - output_format=task_data.output_format, - output_quality=task_data.output_quality, - output_lossless=task_data.output_lossless, + output_format=output_format.output_format, + output_quality=output_format.output_quality, + output_lossless=output_format.output_lossless, ) if task_data.metadata_output_format: for metadata_output_format in task_data.metadata_output_format.split(","): @@ -143,7 +150,7 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR save_dir_path, file_name=make_filename, output_format=metadata_output_format, - file_format=task_data.output_format, + file_format=output_format.output_format, ) else: make_filter_filename = make_filename_callback( @@ -159,17 +166,17 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR images, save_dir_path, file_name=make_filename, - output_format=task_data.output_format, 
- output_quality=task_data.output_quality, - output_lossless=task_data.output_lossless, + output_format=output_format.output_format, + output_quality=output_format.output_quality, + output_lossless=output_format.output_lossless, ) save_images( filtered_images, save_dir_path, file_name=make_filter_filename, - output_format=task_data.output_format, - output_quality=task_data.output_quality, - output_lossless=task_data.output_lossless, + output_format=output_format.output_format, + output_quality=output_format.output_quality, + output_lossless=output_format.output_lossless, ) if task_data.metadata_output_format: for metadata_output_format in task_data.metadata_output_format.split(","): @@ -178,18 +185,26 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR metadata_entries, save_dir_path, file_name=make_filter_filename, - output_format=task_data.metadata_output_format, - file_format=task_data.output_format, + output_format=metadata_output_format, + file_format=output_format.output_format, ) -def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData): - metadata = get_printable_request(req, task_data) +def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData, output_format: OutputFormatData): + metadata = get_printable_request(req, task_data, output_format) # if text, format it in the text format expected by the UI is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",") if is_txt_format: - metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING} + + def format_value(value): + if isinstance(value, list): + return ", ".join([str(it) for it in value]) + return value + + metadata = { + TASK_TEXT_MAPPING[key]: format_value(val) for key, val in metadata.items() if key in TASK_TEXT_MAPPING + } entries = [metadata.copy() for _ in range(req.num_outputs)] for i, entry in enumerate(entries): @@ 
-198,9 +213,13 @@ def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskD return entries -def get_printable_request(req: GenerateImageRequest, task_data: TaskData): +def get_printable_request(req: GenerateImageRequest, task_data: TaskData, output_format: OutputFormatData): req_metadata = req.dict() task_data_metadata = task_data.dict() + task_data_metadata.update(output_format.dict()) + + app_config = app.getConfig() + using_diffusers = app_config.get("test_diffusers", False) # Save the metadata in the order defined in TASK_TEXT_MAPPING metadata = {} @@ -209,7 +228,29 @@ def get_printable_request(req: GenerateImageRequest, task_data: TaskData): metadata[key] = req_metadata[key] elif key in task_data_metadata: metadata[key] = task_data_metadata[key] - + elif key == "use_embedding_models" and using_diffusers: + embeddings_extensions = {".pt", ".bin", ".safetensors"} + + def scan_directory(directory_path: str): + used_embeddings = [] + for entry in os.scandir(directory_path): + if entry.is_file(): + entry_extension = os.path.splitext(entry.name)[1] + if entry_extension not in embeddings_extensions: + continue + + embedding_name_regex = regex.compile( + r"(^|[\s,])" + regex.escape(os.path.splitext(entry.name)[0]) + r"([+-]*$|[\s,]|[+-]+[\s,])" + ) + if embedding_name_regex.search(req.prompt) or embedding_name_regex.search(req.negative_prompt): + used_embeddings.append(entry.path) + elif entry.is_dir(): + used_embeddings.extend(scan_directory(entry.path)) + return used_embeddings + + used_embeddings = scan_directory(os.path.join(app.MODELS_DIR, "embeddings")) + metadata["use_embedding_models"] = used_embeddings if len(used_embeddings) > 0 else None + # Clean up the metadata if req.init_image is None and "prompt_strength" in metadata: del metadata["prompt_strength"] @@ -221,10 +262,13 @@ def get_printable_request(req: GenerateImageRequest, task_data: TaskData): del metadata["lora_alpha"] if task_data.use_upscale != "latent_upscaler" and 
"latent_upscaler_steps" in metadata: del metadata["latent_upscaler_steps"] + if task_data.use_controlnet_model is None and "control_filter_to_apply" in metadata: + del metadata["control_filter_to_apply"] - app_config = app.getConfig() - if not app_config.get("test_diffusers", False): - for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata): + if not using_diffusers: + for key in ( + x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps", "use_controlnet_model", "control_filter_to_apply"] if x in metadata + ): del metadata[key] return metadata diff --git a/ui/index.html b/ui/index.html index 0c4386de..7cedc0e5 100644 --- a/ui/index.html +++ b/ui/index.html @@ -16,6 +16,8 @@ + + @@ -30,7 +32,7 @@
- Click to learn more about custom models + Click to learn more about custom models | |||
+ | + + + | +||
- | + | - Click to learn more about Clip Skip + Click to learn more about Clip Skip + | +|
+ |
+
+
+
+
+
+
+ Click to learn more about ControlNets
+
+
+
+
+ + + + |
||
- Click to learn more about VAEs + Click to learn more about VAEs | |||
- Click to learn more about samplers + Click to learn more about samplers | |||
+ | |||
+ Swap width and height
+
+
+
+ Custom size:
+ + + × + + + Enlarge: + + + Recently used: +
+
+ Small image sizes can cause bad image quality
| |||
| |||
- - | |||
+ | |||
- -2 2
- + + |
+ + + | ||
@@ -239,15 +331,18 @@ | |
||
- - Click to learn more about Seamless Tiling - | |||
+ | + + Click to learn more about Seamless Tiling + | +||