diff --git a/CHANGES.md b/CHANGES.md index 9fe2cff0..b13083dc 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -4,16 +4,17 @@ ### Major Changes - **Nearly twice as fast** - significantly faster speed of image generation. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast - **Mac M1/M2 support** - Experimental support for Mac M1/M2. Thanks @michaelgallacher, @JeLuf and vishae. +- **AMD support for Linux** - Experimental support for AMD GPUs on Linux. Thanks @DianaNites and @JeLuf. - **Full support for Stable Diffusion 2.1 (including CPU)** - supports loading v1.4 or v2.0 or v2.1 models seamlessly. No need to enable "Test SD2", and no need to add `sd2_` to your SD 2.0 model file names. Works on CPU as well. - **Memory optimized Stable Diffusion 2.1** - you can now use Stable Diffusion 2.1 models, with the same low VRAM optimizations that we've always had for SD 1.4. Please note, the SD 2.0 and 2.1 models require more GPU and System RAM, as compared to the SD 1.4 and 1.5 models. - **11 new samplers!** - explore the new samplers, some of which can generate great images in less than 10 inference steps! We've added the Karras and UniPC samplers. Thanks @Schorny for the UniPC samplers. -- **Model Merging** - You can now merge two models (`.ckpt` or `.safetensors`) and output `.ckpt` or `.safetensors` models, optionally in `fp16` precision. Details: https://github.com/cmdr2/stable-diffusion-ui/wiki/Model-Merging . Thanks @JeLuf. +- **Model Merging** - You can now merge two models (`.ckpt` or `.safetensors`) and output `.ckpt` or `.safetensors` models, optionally in `fp16` precision. Details: https://github.com/easydiffusion/easydiffusion/wiki/Model-Merging . Thanks @JeLuf. - **Fast loading/unloading of VAEs** - No longer needs to reload the entire Stable Diffusion model, each time you change the VAE - **Database of known models** - automatically picks the right configuration for known models. E.g. we automatically detect and apply "v" parameterization (required for some SD 2.0 models), and "fp32" attention precision (required for some SD 2.1 models). - **Color correction for img2img** - an option to preserve the color profile (histogram) of the initial image. This is especially useful if you're getting red-tinted images after inpainting/masking. - **Three GPU Memory Usage Settings** - `High` (fastest, maximum VRAM usage), `Balanced` (default - almost as fast, significantly lower VRAM usage), `Low` (slowest, very low VRAM usage). The `Low` setting is applied automatically for GPUs with less than 4 GB of VRAM. - **Find models in sub-folders** - This allows you to organize your models into sub-folders inside `models/stable-diffusion`, instead of keeping them all in a single folder. Thanks @patriceac and @ogmaresca. -- **Custom Modifier Categories** - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). Details: https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Modifiers . Thanks @ogmaresca. +- **Custom Modifier Categories** - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). Details: https://github.com/easydiffusion/easydiffusion/wiki/Custom-Modifiers . Thanks @ogmaresca. - **Embed metadata, or save as TXT/JSON** - You can now embed the metadata directly into the images, or save them as text or json files (choose in the Settings tab). Thanks @patriceac. 
- **Major rewrite of the code** - Most of the codebase has been reorganized and rewritten, to make it more manageable and easier for new developers to contribute features. We've separated our core engine into a new project called `sdkit`, which allows anyone to easily integrate Stable Diffusion (and related modules like GFPGAN etc) into their programming projects (via a simple `pip install sdkit`): https://github.com/easydiffusion/sdkit/ - **Name change** - Last, and probably the least, the UI is now called "Easy Diffusion". It indicates the focus of this project - an easy way for people to play with Stable Diffusion. @@ -21,6 +22,30 @@ Our focus continues to remain on an easy installation experience, and an easy user-interface. While still remaining pretty powerful, in terms of features and speed. ### Detailed changelog +* 2.5.41 - 24 Jun 2023 - (beta-only) Fix broken inpainting in low VRAM usage mode. +* 2.5.41 - 24 Jun 2023 - (beta-only) Fix a recent regression where the LoRA would not get applied when changing SD models. +* 2.5.41 - 23 Jun 2023 - Fix a regression where latent upscaler stopped working on PCs without a graphics card. +* 2.5.41 - 20 Jun 2023 - Automatically fix black images if fp32 attention precision is required in diffusers. +* 2.5.41 - 19 Jun 2023 - Another fix for multi-gpu rendering (in all VRAM usage modes). +* 2.5.41 - 13 Jun 2023 - Fix multi-gpu bug with "low" VRAM usage mode while generating images. +* 2.5.41 - 12 Jun 2023 - Fix multi-gpu bug with CodeFormer. +* 2.5.41 - 6 Jun 2023 - Allow changing the strength of CodeFormer, and slightly improved styling of the CodeFormer options. +* 2.5.41 - 5 Jun 2023 - Allow sharing an Easy Diffusion instance via https://try.cloudflare.com/ . You can find this option at the bottom of the Settings tab. Thanks @JeLuf. +* 2.5.41 - 5 Jun 2023 - Show an option to download for tiled images. Shows a button on the generated image. Creates larger images by tiling them with the image generated by Easy Diffusion. Thanks @JeLuf. +* 2.5.41 - 5 Jun 2023 - (beta-only) Allow LoRA strengths between -2 and 2. Thanks @ogmaresca. +* 2.5.40 - 5 Jun 2023 - Reduce the VRAM usage of Latent Upscaling when using "balanced" VRAM usage mode. +* 2.5.40 - 5 Jun 2023 - Fix the "realesrgan" key error when using CodeFormer with more than 1 image in a batch. +* 2.5.40 - 3 Jun 2023 - Added CodeFormer as another option for fixing faces and eyes. CodeFormer tends to perform better than GFPGAN for many images. Thanks @patriceac for the implementation, and for contacting the CodeFormer team (who were supportive of it being integrated into Easy Diffusion). +* 2.5.39 - 25 May 2023 - (beta-only) Seamless Tiling - make seamlessly tiled images, e.g. rock and grass textures. Thanks @JeLuf. +* 2.5.38 - 24 May 2023 - Better reporting of errors, and show an explanation if the user cannot disable the "Use CPU" setting. +* 2.5.38 - 23 May 2023 - Add Latent Upscaler as another option for upscaling images. Thanks @JeLuf for the implementation of the Latent Upscaler model. +* 2.5.37 - 19 May 2023 - (beta-only) Two more samplers: DDPM and DEIS. Also disables the samplers that aren't working yet in the Diffusers version. Thanks @ogmaresca. +* 2.5.37 - 19 May 2023 - (beta-only) Support CLIP-Skip. You can set this option under the models dropdown. Thanks @JeLuf. +* 2.5.37 - 19 May 2023 - (beta-only) More VRAM optimizations for all modes in diffusers. The VRAM usage for diffusers in "low" and "balanced" should now be equal or less than the non-diffusers version. 
Performs softmax in half precision, like sdkit does. +* 2.5.36 - 16 May 2023 - (beta-only) More VRAM optimizations for "balanced" VRAM usage mode. +* 2.5.36 - 11 May 2023 - (beta-only) More VRAM optimizations for "low" VRAM usage mode. +* 2.5.36 - 10 May 2023 - (beta-only) Bug fix for "meta" error when using a LoRA in 'low' VRAM usage mode. +* 2.5.35 - 8 May 2023 - Allow dragging a zoomed-in image (after opening an image with the "expand" button). Thanks @ogmaresca. * 2.5.35 - 3 May 2023 - (beta-only) First round of VRAM Optimizations for the "Test Diffusers" version. This change significantly reduces the amount of VRAM used by the diffusers version during image generation. The VRAM usage is still not equal to the "non-diffusers" version, but more optimizations are coming soon. * 2.5.34 - 22 Apr 2023 - Don't start the browser in an incognito new profile (on Windows). Thanks @JeLuf. * 2.5.33 - 21 Apr 2023 - Install PyTorch 2.0 on new installations (on Windows and Linux). @@ -47,7 +72,7 @@ Our focus continues to remain on an easy installation experience, and an easy us * 2.5.24 - 11 Mar 2023 - Button to load an image mask from a file. * 2.5.24 - 10 Mar 2023 - Logo change. Image credit: @lazlo_vii. * 2.5.23 - 8 Mar 2023 - Experimental support for Mac M1/M2. Thanks @michaelgallacher, @JeLuf and vishae! -* 2.5.23 - 8 Mar 2023 - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). More details - https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Modifiers . Thanks @ogmaresca. +* 2.5.23 - 8 Mar 2023 - Ability to create custom modifiers with thumbnails, and custom categories (and hierarchy of categories). More details - https://github.com/easydiffusion/easydiffusion/wiki/Custom-Modifiers . Thanks @ogmaresca. * 2.5.22 - 28 Feb 2023 - Minor styling changes to UI buttons, and the models dropdown. * 2.5.22 - 28 Feb 2023 - Lots of UI-related bug fixes. Thanks @patriceac. * 2.5.21 - 22 Feb 2023 - An option to control the size of the image thumbnails. You can use the `Display options` in the top-right corner to change this. Thanks @JeLuf. @@ -72,7 +97,7 @@ Our focus continues to remain on an easy installation experience, and an easy us * 2.5.14 - 3 Feb 2023 - Fix the 'Make Similar Images' button, which was producing incorrect images (weren't very similar). * 2.5.13 - 1 Feb 2023 - Fix the remaining GPU memory leaks, including a better fix (more comprehensive) for the change in 2.5.12 (27 Jan). * 2.5.12 - 27 Jan 2023 - Fix a memory leak, which made the UI unresponsive after an out-of-memory error. The allocated memory is now freed-up after an error. -* 2.5.11 - 25 Jan 2023 - UI for Merging Models. Thanks @JeLuf. More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Model-Merging +* 2.5.11 - 25 Jan 2023 - UI for Merging Models. Thanks @JeLuf. More info: https://github.com/easydiffusion/easydiffusion/wiki/Model-Merging * 2.5.10 - 24 Jan 2023 - Reduce the VRAM usage for img2img in 'balanced' mode (without reducing the rendering speed), to make it similar to v2.4 of this UI. * 2.5.9 - 23 Jan 2023 - Fix a bug where img2img would produce poorer-quality images for the same settings, as compared to version 2.4 of this UI. * 2.5.9 - 23 Jan 2023 - Reduce the VRAM usage for 'balanced' mode (without reducing the rendering speed), to make it similar to v2.4 of the UI. 
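The `sdkit` engine mentioned in the Major Changes above can also be driven directly from Python after a `pip install sdkit`. Below is a minimal sketch; the model path and prompt are placeholders, and parameter names such as `sampler_name` are assumptions based on sdkit's public README rather than anything stated in this changelog.

```python
# Minimal sketch of using sdkit directly (hypothetical paths/values; parameter
# names such as sampler_name are assumptions based on sdkit's README).
from sdkit import Context
from sdkit.models import load_model
from sdkit.generate import generate_images

context = Context()

# point sdkit at a Stable Diffusion checkpoint on disk (hypothetical path)
context.model_paths["stable-diffusion"] = "models/stable-diffusion/sd-v1-4.ckpt"
load_model(context, "stable-diffusion")

# generate one image; "euler_a" is one of the samplers listed in the README
images = generate_images(
    context,
    prompt="Photograph of an astronaut riding a horse",
    seed=42,
    width=512,
    height=512,
    sampler_name="euler_a",
)
images[0].save("astronaut.png")  # assumes generate_images returns PIL images
```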
@@ -101,8 +126,8 @@ Our focus continues to remain on an easy installation experience, and an easy us - **Automatic scanning for malicious model files** - using `picklescan`, and support for `safetensor` model format. Thanks @JeLuf - **Image Editor** - for drawing simple images for guiding the AI. Thanks @mdiller - **Use pre-trained hypernetworks** - for improving the quality of images. Thanks @C0bra5 -- **Support for custom VAE models**. You can place your VAE files in the `models/vae` folder, and refresh the browser page to use them. More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder -- **Experimental support for multiple GPUs!** It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. For e.g. open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, otherwise let it automatically pick the best GPUs. Thanks @madrang . More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Run-on-Multiple-GPUs +- **Support for custom VAE models**. You can place your VAE files in the `models/vae` folder, and refresh the browser page to use them. More info: https://github.com/easydiffusion/easydiffusion/wiki/VAE-Variational-Auto-Encoder +- **Experimental support for multiple GPUs!** It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. For e.g. open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, otherwise let it automatically pick the best GPUs. Thanks @madrang . More info: https://github.com/easydiffusion/easydiffusion/wiki/Run-on-Multiple-GPUs - **Cleaner UI design** - Show settings and help in new tabs, instead of dropdown popups (which were buggy). Thanks @mdiller - **Progress bar.** Thanks @mdiller - **Custom Image Modifiers** - You can now save your custom image modifiers! Your saved modifiers can include special characters like `{}, (), [], |` diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c01d489a..bb6408c8 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,6 +1,6 @@ Hi there, these instructions are meant for the developers of this project. -If you only want to use the Stable Diffusion UI, you've downloaded the wrong file. In that case, please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation +If you only want to use the Stable Diffusion UI, you've downloaded the wrong file. In that case, please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation Thanks @@ -13,7 +13,7 @@ If you would like to contribute to this project, there is a discord for discussi This is in-flux, but one way to get a development environment running for editing the UI of this project is: (swap `.sh` or `.bat` in instructions depending on your environment, and be sure to adjust any paths to match where you're working) -1) Install the project to a new location using the [usual installation process](https://github.com/cmdr2/stable-diffusion-ui#installation), e.g. to `/projects/stable-diffusion-ui-archive` +1) Install the project to a new location using the [usual installation process](https://github.com/easydiffusion/easydiffusion#installation), e.g. 
to `/projects/stable-diffusion-ui-archive` 2) Start the newly installed project, and check that you can view and generate images on `localhost:9000` 3) Next, please clone the project repository using `git clone` (e.g. to `/projects/stable-diffusion-ui-repo`) 4) Close the server (started in step 2), and edit `/projects/stable-diffusion-ui-archive/scripts/on_env_start.sh` (or `on_env_start.bat`) diff --git a/How to install and run.txt b/How to install and run.txt index e48d217c..af783b64 100644 --- a/How to install and run.txt +++ b/How to install and run.txt @@ -1,6 +1,6 @@ Congrats on downloading Stable Diffusion UI, version 2! -If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/cmdr2/stable-diffusion-ui#installation +If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/easydiffusion/easydiffusion#installation After downloading, to install please follow these instructions: @@ -16,9 +16,9 @@ To start the UI in the future, please run the same command mentioned above. If you have any problems, please: -1. Try the troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting +1. Try the troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting 2. Or, seek help from the community at https://discord.com/invite/u9yhsFmEkB -3. Or, file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues +3. Or, file an issue at https://github.com/easydiffusion/easydiffusion/issues Thanks cmdr2 (and contributors to the project) \ No newline at end of file diff --git a/PRIVACY.md b/PRIVACY.md new file mode 100644 index 00000000..543a167d --- /dev/null +++ b/PRIVACY.md @@ -0,0 +1,9 @@ +// placeholder until a more formal and legal-sounding privacy policy document is written. But the information below is true. + +This is a summary of whether Easy Diffusion uses your data or tracks you: +* The short answer is - Easy Diffusion does *not* use your data, and does *not* track you. +* Easy Diffusion does not send your prompts or usage or analytics to anyone. There is no tracking. We don't even know how many people use Easy Diffusion, let alone their prompts. +* Easy Diffusion fetches updates to the code whenever it starts up. It does this by contacting GitHub directly, via SSL (secure connection). Only your computer and GitHub and [this repository](https://github.com/easydiffusion/easydiffusion) are involved, and no third party is involved. Some countries intercept SSL connections; that's not something we can do much about. GitHub does *not* share statistics (even with me) about how many people fetched code updates. +* Easy Diffusion fetches the models from huggingface.co and github.com, if they don't exist on your PC. For example, if the safety checker (NSFW) model doesn't exist, it'll try to download it. +* Easy Diffusion fetches code packages from pypi.org, which is the standard hosting service for all Python projects. That's where packages installed via `pip install` are stored. +* Occasionally, antivirus software is known to *incorrectly* flag and delete some model files, which will result in Easy Diffusion re-downloading `pytorch_model.bin`.
This *incorrect deletion* affects other Stable Diffusion UIs as well, like Invoke AI - https://itch.io/post/7509488 diff --git a/README BEFORE YOU RUN THIS.txt b/README BEFORE YOU RUN THIS.txt index e9f81544..a989b835 100644 --- a/README BEFORE YOU RUN THIS.txt +++ b/README BEFORE YOU RUN THIS.txt @@ -3,6 +3,6 @@ Hi there, What you have downloaded is meant for the developers of this project, not for users. If you only want to use the Stable Diffusion UI, you've downloaded the wrong file. -Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation +Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation Thanks \ No newline at end of file diff --git a/README.md b/README.md index 3cb0bf8e..b97c35d1 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community. -[Installation guide](#installation) | [Troubleshooting guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) | [](https://discord.com/invite/u9yhsFmEkB) (for support queries, and development discussions) +[Installation guide](#installation) | [Troubleshooting guide](https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting) | [](https://discord.com/invite/u9yhsFmEkB) (for support queries, and development discussions)  @@ -11,15 +11,17 @@ Does not require technical knowledge, does not require pre-installed software. 1 Click the download button for your operating system:
**Hardware requirements:** -- **Windows:** NVIDIA graphics card, or run on your CPU -- **Linux:** NVIDIA or AMD graphics card, or run on your CPU -- **Mac:** M1 or M2, or run on your CPU +- **Windows:** NVIDIA graphics card (minimum 2 GB RAM), or run on your CPU. +- **Linux:** NVIDIA or AMD graphics card (minimum 2 GB RAM), or run on your CPU. +- **Mac:** M1 or M2, or run on your CPU. +- Minimum 8 GB of system RAM. +- At least 25 GB of space on the hard disk. The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance. @@ -58,7 +60,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages. ### Image generation - **Supports**: "*Text to Image*" and "*Image to Image*". -- **19 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`. +- **21 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `ddpm`, `deis`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`. - **In-Painting**: Specify areas of your image to paint into. - **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program. - **Face Correction (GFPGAN)** @@ -68,6 +70,7 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages. - **Attention/Emphasis**: () in the prompt increases the model's attention to enclosed words, and [] decreases it. - **Weighted Prompts**: Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`. - **Prompt Matrix**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`. +- **Prompt Set**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut on the {moon,earth}`. - **1-click Upscale/Face Correction**: Upscale or correct an image after it has been generated. - **Make Similar Images**: Click to generate multiple variations of a generated image. - **NSFW Setting**: A setting in the UI to control *NSFW content*. @@ -80,11 +83,11 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages. - **Use custom VAE models** - **Use pre-trained Hypernetworks** - **Use custom GFPGAN models** -- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Plugins), or write your own plugin to add features to the project! +- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/easydiffusion/easydiffusion/wiki/UI-Plugins), or write your own plugin to add features to the project! ### Performance and security - **Fast**: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB. -- **Low Memory Usage**: Create 512x512 images with less than 3 GB of GPU RAM, and 768x768 images with less than 4 GB of GPU RAM! +- **Low Memory Usage**: Create 512x512 images with less than 2 GB of GPU RAM, and 768x768 images with less than 3 GB of GPU RAM! - **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
- **Multi-GPU support**: Automatically spreads your tasks across multiple GPUs (if available), for faster performance! - **Auto scan for malicious models**: Uses picklescan to prevent malicious models. @@ -113,21 +116,13 @@ Useful for judging (and stopping) an image quickly, without waiting for it to fi  - -# System Requirements -1. Windows 10/11, or Linux. Experimental support for Mac is coming soon. -2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode". -3. Minimum 8 GB of RAM and 25GB of disk space. - -You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed. - ---- # How to use? -Please refer to our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI. +Please refer to our [guide](https://github.com/easydiffusion/easydiffusion/wiki/How-to-Use) to understand how to use the features in this UI. # Bugs reports and code contributions welcome -If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues). +If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/easydiffusion/easydiffusion/issues). We could really use help on these aspects (click to view tasks that need your help): * [User Interface](https://github.com/users/cmdr2/projects/1/views/1) diff --git a/build.bat b/build.bat index 6e3f3f81..dc9e622f 100644 --- a/build.bat +++ b/build.bat @@ -2,7 +2,7 @@ @echo "Hi there, what you are running is meant for the developers of this project, not for users." & echo. @echo "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file." -@echo "Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation" & echo. +@echo "Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation" & echo. @echo "If you are actually a developer of this project, please type Y and press enter" & echo. set /p answer=Are you a developer of this project (Y/N)? diff --git a/build.sh b/build.sh index f4538e5c..bddf3c49 100755 --- a/build.sh +++ b/build.sh @@ -2,7 +2,7 @@ printf "Hi there, what you are running is meant for the developers of this project, not for users.\n\n" printf "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file.\n" -printf "Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation\n\n" +printf "Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation \n\n" printf "If you are actually a developer of this project, please type Y and press enter\n\n" read -p "Are you a developer of this project (Y/N) " yn diff --git a/scripts/Developer Console.cmd b/scripts/Developer Console.cmd index 921a9dca..0efbda13 100644 --- a/scripts/Developer Console.cmd +++ b/scripts/Developer Console.cmd @@ -2,6 +2,8 @@ echo "Opening Stable Diffusion UI - Developer Console.." & echo. +cd /d %~dp0 + set PATH=C:\Windows\System32;%PATH% @rem set legacy and new installer's PATH, if they exist @@ -21,6 +23,8 @@ call git --version call where conda call conda --version +echo. +echo COMSPEC=%COMSPEC% echo. 
@rem activate the legacy environment (if present) and set PYTHONPATH diff --git a/scripts/Start Stable Diffusion UI.cmd b/scripts/Start Stable Diffusion UI.cmd index 4f8555ea..9a4a6303 100644 --- a/scripts/Start Stable Diffusion UI.cmd +++ b/scripts/Start Stable Diffusion UI.cmd @@ -36,8 +36,9 @@ call git --version call where conda call conda --version +echo . +echo COMSPEC=%COMSPEC% @rem Download the rest of the installer and UI call scripts\on_env_start.bat - @pause diff --git a/scripts/bootstrap.bat b/scripts/bootstrap.bat index d3cdd19f..8c1069c8 100644 --- a/scripts/bootstrap.bat +++ b/scripts/bootstrap.bat @@ -11,7 +11,7 @@ setlocal enabledelayedexpansion set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba set INSTALL_ENV_DIR=%cd%\installer_files\env set LEGACY_INSTALL_ENV_DIR=%cd%\installer -set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe +set MICROMAMBA_DOWNLOAD_URL=https://github.com/easydiffusion/easydiffusion/releases/download/v1.1/micromamba.exe set umamba_exists=F set OLD_APPDATA=%APPDATA% diff --git a/scripts/check_models.py b/scripts/check_models.py deleted file mode 100644 index 4b8d68c6..00000000 --- a/scripts/check_models.py +++ /dev/null @@ -1,101 +0,0 @@ -# this script runs inside the legacy "stable-diffusion" folder - -from sdkit.models import download_model, get_model_info_from_db -from sdkit.utils import hash_file_quick - -import os -import shutil -from glob import glob -import traceback - -models_base_dir = os.path.abspath(os.path.join("..", "models")) - -models_to_check = { - "stable-diffusion": [ - {"file_name": "sd-v1-4.ckpt", "model_id": "1.4"}, - ], - "gfpgan": [ - {"file_name": "GFPGANv1.4.pth", "model_id": "1.4"}, - ], - "realesrgan": [ - {"file_name": "RealESRGAN_x4plus.pth", "model_id": "x4plus"}, - {"file_name": "RealESRGAN_x4plus_anime_6B.pth", "model_id": "x4plus_anime_6"}, - ], - "vae": [ - {"file_name": "vae-ft-mse-840000-ema-pruned.ckpt", "model_id": "vae-ft-mse-840000-ema-pruned"}, - ], -} -MODEL_EXTENSIONS = { # copied from easydiffusion/model_manager.py - "stable-diffusion": [".ckpt", ".safetensors"], - "vae": [".vae.pt", ".ckpt", ".safetensors"], - "hypernetwork": [".pt", ".safetensors"], - "gfpgan": [".pth"], - "realesrgan": [".pth"], - "lora": [".ckpt", ".safetensors"], -} - - -def download_if_necessary(model_type: str, file_name: str, model_id: str): - model_path = os.path.join(models_base_dir, model_type, file_name) - expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"] - - other_models_exist = any_model_exists(model_type) - known_model_exists = os.path.exists(model_path) - known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash - - if known_model_is_corrupt or (not other_models_exist and not known_model_exists): - print("> download", model_type, model_id) - download_model(model_type, model_id, download_base_dir=models_base_dir) - - -def init(): - migrate_legacy_model_location() - - for model_type, models in models_to_check.items(): - for model in models: - try: - download_if_necessary(model_type, model["file_name"], model["model_id"]) - except: - traceback.print_exc() - fail(model_type) - - print(model_type, "model(s) found.") - - -### utilities -def any_model_exists(model_type: str) -> bool: - extensions = MODEL_EXTENSIONS.get(model_type, []) - for ext in extensions: - if any(glob(f"{models_base_dir}/{model_type}/**/*{ext}", recursive=True)): - return True - - return False - - -def 
migrate_legacy_model_location(): - 'Move the models inside the legacy "stable-diffusion" folder, to their respective folders' - - for model_type, models in models_to_check.items(): - for model in models: - file_name = model["file_name"] - if os.path.exists(file_name): - dest_dir = os.path.join(models_base_dir, model_type) - os.makedirs(dest_dir, exist_ok=True) - shutil.move(file_name, os.path.join(dest_dir, file_name)) - - -def fail(model_name): - print( - f"""Error downloading the {model_name} model. Sorry about that, please try to: -1. Run this installer again. -2. If that doesn't fix it, please try to download the file manually. The address to download from, and the destination to save to are printed above this message. -3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB -4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues -Thanks!""" - ) - exit(1) - - -### start - -init() diff --git a/scripts/check_modules.py b/scripts/check_modules.py index 031f7d66..4634adb3 100644 --- a/scripts/check_modules.py +++ b/scripts/check_modules.py @@ -18,13 +18,15 @@ os_name = platform.system() modules_to_check = { "torch": ("1.11.0", "1.13.1", "2.0.0"), "torchvision": ("0.12.0", "0.14.1", "0.15.1"), - "sdkit": "1.0.87", + "sdkit": "1.0.112", "stable-diffusion-sdkit": "2.1.4", "rich": "12.6.0", "uvicorn": "0.19.0", "fastapi": "0.85.1", + "pycloudflared": "0.2.0", # "xformers": "0.0.16", } +modules_to_log = ["torch", "torchvision", "sdkit", "stable-diffusion-sdkit"] def version(module_name: str) -> str: @@ -89,7 +91,8 @@ def init(): traceback.print_exc() fail(module_name) - print(f"{module_name}: {version(module_name)}") + if module_name in modules_to_log: + print(f"{module_name}: {version(module_name)}") ### utilities @@ -130,10 +133,13 @@ def include_cuda_versions(module_versions: tuple) -> tuple: def is_amd_on_linux(): if os_name == "Linux": - with open("/proc/bus/pci/devices", "r") as f: - device_info = f.read() - if "amdgpu" in device_info and "nvidia" not in device_info: - return True + try: + with open("/proc/bus/pci/devices", "r") as f: + device_info = f.read() + if "amdgpu" in device_info and "nvidia" not in device_info: + return True + except: + return False return False @@ -142,9 +148,9 @@ def fail(module_name): print( f"""Error installing {module_name}. Sorry about that, please try to: 1. Run this installer again. -2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting +2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB -4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues +4. 
If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues Thanks!""" ) exit(1) diff --git a/scripts/developer_console.sh b/scripts/developer_console.sh index 73972568..57846eeb 100755 --- a/scripts/developer_console.sh +++ b/scripts/developer_console.sh @@ -39,6 +39,8 @@ if [ "$0" == "bash" ]; then export PYTHONPATH="$(pwd)/stable-diffusion/env/lib/python3.8/site-packages" fi + export PYTHONNOUSERSITE=y + which python python --version diff --git a/scripts/functions.sh b/scripts/functions.sh index 495e9950..477b7743 100644 --- a/scripts/functions.sh +++ b/scripts/functions.sh @@ -15,9 +15,9 @@ fail() { Error downloading Stable Diffusion UI. Sorry about that, please try to: 1. Run this installer again. - 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting + 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB - 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues + 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues Thanks! diff --git a/scripts/get_config.py b/scripts/get_config.py index 02523364..9cdfb2fe 100644 --- a/scripts/get_config.py +++ b/scripts/get_config.py @@ -1,5 +1,6 @@ import os import argparse +import sys # The config file is in the same directory as this script config_directory = os.path.dirname(__file__) @@ -21,16 +22,16 @@ if os.path.isfile(config_yaml): try: config = yaml.safe_load(configfile) except Exception as e: - print(e) - exit() + print(e, file=sys.stderr) + config = {} elif os.path.isfile(config_json): import json with open(config_json, 'r') as configfile: try: config = json.load(configfile) except Exception as e: - print(e) - exit() + print(e, file=sys.stderr) + config = {} else: config = {} diff --git a/scripts/on_env_start.bat b/scripts/on_env_start.bat index 44144cfa..0871973f 100644 --- a/scripts/on_env_start.bat +++ b/scripts/on_env_start.bat @@ -55,10 +55,10 @@ if "%update_branch%"=="" ( @echo. & echo "Downloading Easy Diffusion..." & echo. @echo "Using the %update_branch% channel" & echo. - @call git clone -b "%update_branch%" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files && ( + @call git clone -b "%update_branch%" https://github.com/easydiffusion/easydiffusion.git sd-ui-files && ( @echo sd_ui_git_cloned >> scripts\install_status.txt ) || ( - @echo "Error downloading Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" + @echo "Error downloading Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. 
If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" pause @exit /b ) @@ -67,7 +67,6 @@ if "%update_branch%"=="" ( @xcopy sd-ui-files\ui ui /s /i /Y /q @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y @copy sd-ui-files\scripts\check_modules.py scripts\ /Y -@copy sd-ui-files\scripts\check_models.py scripts\ /Y @copy sd-ui-files\scripts\get_config.py scripts\ /Y @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y @copy "sd-ui-files\scripts\Developer Console.cmd" . /Y diff --git a/scripts/on_env_start.sh b/scripts/on_env_start.sh index 30465975..d936924e 100755 --- a/scripts/on_env_start.sh +++ b/scripts/on_env_start.sh @@ -38,7 +38,7 @@ else printf "\n\nDownloading Easy Diffusion..\n\n" printf "Using the $update_branch channel\n\n" - if git clone -b "$update_branch" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then + if git clone -b "$update_branch" https://github.com/easydiffusion/easydiffusion.git sd-ui-files ; then echo sd_ui_git_cloned >> scripts/install_status.txt else fail "git clone failed" @@ -50,7 +50,6 @@ cp -Rf sd-ui-files/ui . cp sd-ui-files/scripts/on_sd_start.sh scripts/ cp sd-ui-files/scripts/bootstrap.sh scripts/ cp sd-ui-files/scripts/check_modules.py scripts/ -cp sd-ui-files/scripts/check_models.py scripts/ cp sd-ui-files/scripts/get_config.py scripts/ cp sd-ui-files/scripts/start.sh . cp sd-ui-files/scripts/developer_console.sh . diff --git a/scripts/on_sd_start.bat b/scripts/on_sd_start.bat index ba205c9e..860361d4 100644 --- a/scripts/on_sd_start.bat +++ b/scripts/on_sd_start.bat @@ -5,7 +5,6 @@ @copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y @copy sd-ui-files\scripts\check_modules.py scripts\ /Y -@copy sd-ui-files\scripts\check_models.py scripts\ /Y @copy sd-ui-files\scripts\get_config.py scripts\ /Y if exist "%cd%\profile" ( @@ -27,7 +26,7 @@ if exist "%cd%\stable-diffusion\env" ( @rem activate the installer env call conda activate @if "%ERRORLEVEL%" NEQ "0" ( - @echo. & echo "Error activating conda for Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. + @echo. & echo "Error activating conda for Easy Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" & echo. 
pause exit /b ) @@ -69,7 +68,7 @@ if "%ERRORLEVEL%" NEQ "0" ( call WHERE uvicorn > .tmp @>nul findstr /m "uvicorn" .tmp @if "%ERRORLEVEL%" NEQ "0" ( - @echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. + @echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" & echo. pause exit /b ) @@ -79,13 +78,6 @@ call WHERE uvicorn > .tmp @echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt ) -@rem Download the required models -call python ..\scripts\check_models.py -if "%ERRORLEVEL%" NEQ "0" ( - pause - exit /b -) - @>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt @if "%ERRORLEVEL%" NEQ "0" ( @echo sd_weights_downloaded >> ..\scripts\install_status.txt diff --git a/scripts/on_sd_start.sh b/scripts/on_sd_start.sh index 820c36ed..be5161d4 100755 --- a/scripts/on_sd_start.sh +++ b/scripts/on_sd_start.sh @@ -4,7 +4,6 @@ cp sd-ui-files/scripts/functions.sh scripts/ cp sd-ui-files/scripts/on_env_start.sh scripts/ cp sd-ui-files/scripts/bootstrap.sh scripts/ cp sd-ui-files/scripts/check_modules.py scripts/ -cp sd-ui-files/scripts/check_models.py scripts/ cp sd-ui-files/scripts/get_config.py scripts/ source ./scripts/functions.sh @@ -51,12 +50,6 @@ if ! command -v uvicorn &> /dev/null; then fail "UI packages not found!" fi -# Download the required models -if ! python ../scripts/check_models.py; then - read -p "Press any key to continue" - exit 1 -fi - if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then echo sd_weights_downloaded >> ../scripts/install_status.txt echo sd_install_complete >> ../scripts/install_status.txt diff --git a/ui/easydiffusion/app.py b/ui/easydiffusion/app.py index b6318f01..99810e75 100644 --- a/ui/easydiffusion/app.py +++ b/ui/easydiffusion/app.py @@ -10,6 +10,8 @@ import warnings from easydiffusion import task_manager from easydiffusion.utils import log from rich.logging import RichHandler +from rich.console import Console +from rich.panel import Panel from sdkit.utils import log as sdkit_log # hack, so we can overwrite the log config # Remove all handlers associated with the root logger object. 
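The rich imports added in the hunk above relate to the comment about removing all handlers from the root logger. A rough, hypothetical sketch of that general pattern (not the project's exact configuration) is resetting the root logger and routing it through `RichHandler`:

```python
# Hypothetical sketch of the "remove all handlers, then log via rich" pattern;
# Easy Diffusion's actual logging configuration may differ.
import logging
from rich.logging import RichHandler

# drop any handlers that imported libraries attached to the root logger
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

# re-configure logging so all output renders through rich
logging.basicConfig(level=logging.INFO, format="%(message)s", handlers=[RichHandler()])
logging.getLogger("easydiffusion").info("logging is now routed through rich")
```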
@@ -88,8 +90,8 @@ def init(): os.makedirs(USER_SERVER_PLUGINS_DIR, exist_ok=True) # https://pytorch.org/docs/stable/storage.html - warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated') - + warnings.filterwarnings("ignore", category=UserWarning, message="TypedStorage is deprecated") + load_server_plugins() update_render_threads() @@ -213,11 +215,48 @@ def open_browser(): ui = config.get("ui", {}) net = config.get("net", {}) port = net.get("listen_port", 9000) + if ui.get("open_browser_on_start", True): import webbrowser webbrowser.open(f"http://localhost:{port}") + Console().print( + Panel( + "\n" + + "[white]Easy Diffusion is ready to serve requests.\n\n" + + "A new browser tab should have been opened by now.\n" + + f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n", + title="Easy Diffusion is ready", + style="bold yellow on blue", + ) + ) + + +def fail_and_die(fail_type: str, data: str): + suggestions = [ + "Run this installer again.", + "If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB", + "If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues", + ] + + if fail_type == "model_download": + fail_label = f"Error downloading the {data} model" + suggestions.insert( + 1, + "If that doesn't fix it, please try to download the file manually. The address to download from, and the destination to save to are printed above this message.", + ) + else: + fail_label = "Error while installing Easy Diffusion" + + msg = [f"{fail_label}. Sorry about that, please try to:"] + for i, suggestion in enumerate(suggestions): + msg.append(f"{i+1}. 
{suggestion}") + msg.append("Thanks!") + + print("\n".join(msg)) + exit(1) + def get_image_modifiers(): modifiers_json_path = os.path.join(SD_UI_DIR, "modifiers.json") diff --git a/ui/easydiffusion/device_manager.py b/ui/easydiffusion/device_manager.py index 59c07ea3..dc705927 100644 --- a/ui/easydiffusion/device_manager.py +++ b/ui/easydiffusion/device_manager.py @@ -98,8 +98,8 @@ def auto_pick_devices(currently_active_devices): continue mem_free, mem_total = torch.cuda.mem_get_info(device) - mem_free /= float(10 ** 9) - mem_total /= float(10 ** 9) + mem_free /= float(10**9) + mem_total /= float(10**9) device_name = torch.cuda.get_device_name(device) log.debug( f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb" @@ -165,6 +165,7 @@ def needs_to_force_full_precision(context): and ( " 1660" in device_name or " 1650" in device_name + or " 1630" in device_name or " t400" in device_name or " t550" in device_name or " t600" in device_name @@ -181,7 +182,7 @@ def get_max_vram_usage_level(device): else: return "high" - mem_total /= float(10 ** 9) + mem_total /= float(10**9) if mem_total < 4.5: return "low" elif mem_total < 6.5: @@ -223,10 +224,10 @@ def is_device_compatible(device): # Memory check try: _, mem_total = torch.cuda.mem_get_info(device) - mem_total /= float(10 ** 9) - if mem_total < 3.0: + mem_total /= float(10**9) + if mem_total < 1.9: if is_device_compatible.history.get(device) == None: - log.warn(f"GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion") + log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion") is_device_compatible.history[device] = 1 return False except RuntimeError as e: diff --git a/ui/easydiffusion/model_manager.py b/ui/easydiffusion/model_manager.py index 7bf56575..de2c10ac 100644 --- a/ui/easydiffusion/model_manager.py +++ b/ui/easydiffusion/model_manager.py @@ -1,10 +1,14 @@ import os +import shutil +from glob import glob +import traceback from easydiffusion import app from easydiffusion.types import TaskData from easydiffusion.utils import log from sdkit import Context -from sdkit.models import load_model, scan_model, unload_model +from sdkit.models import load_model, scan_model, unload_model, download_model, get_model_info_from_db +from sdkit.utils import hash_file_quick KNOWN_MODEL_TYPES = [ "stable-diffusion", @@ -13,6 +17,7 @@ KNOWN_MODEL_TYPES = [ "gfpgan", "realesrgan", "lora", + "codeformer", ] MODEL_EXTENSIONS = { "stable-diffusion": [".ckpt", ".safetensors"], @@ -21,14 +26,22 @@ MODEL_EXTENSIONS = { "gfpgan": [".pth"], "realesrgan": [".pth"], "lora": [".ckpt", ".safetensors"], + "codeformer": [".pth"], } DEFAULT_MODELS = { - "stable-diffusion": [ # needed to support the legacy installations - "custom-model", # only one custom model file was supported initially, creatively named 'custom-model' - "sd-v1-4", # Default fallback. 
+ "stable-diffusion": [ + {"file_name": "sd-v1-4.ckpt", "model_id": "1.4"}, + ], + "gfpgan": [ + {"file_name": "GFPGANv1.4.pth", "model_id": "1.4"}, + ], + "realesrgan": [ + {"file_name": "RealESRGAN_x4plus.pth", "model_id": "x4plus"}, + {"file_name": "RealESRGAN_x4plus_anime_6B.pth", "model_id": "x4plus_anime_6"}, + ], + "vae": [ + {"file_name": "vae-ft-mse-840000-ema-pruned.ckpt", "model_id": "vae-ft-mse-840000-ema-pruned"}, ], - "gfpgan": ["GFPGANv1.3"], - "realesrgan": ["RealESRGAN_x4plus"], } MODELS_TO_LOAD_ON_START = ["stable-diffusion", "vae", "hypernetwork", "lora"] @@ -37,6 +50,8 @@ known_models = {} def init(): make_model_folders() + migrate_legacy_model_location() # if necessary + download_default_models_if_necessary() getModels() # run this once, to cache the picklescan results @@ -45,7 +60,7 @@ def load_default_models(context: Context): # init default model paths for model_type in MODELS_TO_LOAD_ON_START: - context.model_paths[model_type] = resolve_model_to_use(model_type=model_type) + context.model_paths[model_type] = resolve_model_to_use(model_type=model_type, fail_if_not_found=False) try: load_model( context, @@ -53,23 +68,34 @@ def load_default_models(context: Context): scan_model=context.model_paths[model_type] != None and not context.model_paths[model_type].endswith(".safetensors"), ) + if model_type in context.model_load_errors: + del context.model_load_errors[model_type] except Exception as e: log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]") - log.exception(e) + if "DefaultCPUAllocator: not enough memory" in str(e): + log.error( + f"[red]Your PC is low on system RAM. Please add some virtual memory (or swap space) by following the instructions at this link: https://www.ibm.com/docs/en/opw/8.2.0?topic=tuning-optional-increasing-paging-file-size-windows-computers[/red]" + ) + else: + log.exception(e) del context.model_paths[model_type] + context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks + def unload_all(context: Context): for model_type in KNOWN_MODEL_TYPES: unload_model(context, model_type) + if model_type in context.model_load_errors: + del context.model_load_errors[model_type] -def resolve_model_to_use(model_name: str = None, model_type: str = None): +def resolve_model_to_use(model_name: str = None, model_type: str = None, fail_if_not_found: bool = True): model_extensions = MODEL_EXTENSIONS.get(model_type, []) default_models = DEFAULT_MODELS.get(model_type, []) config = app.getConfig() - model_dirs = [os.path.join(app.MODELS_DIR, model_type), app.SD_DIR] + model_dir = os.path.join(app.MODELS_DIR, model_type) if not model_name: # When None try user configured model. 
# config = getConfig() if "model" in config and model_type in config["model"]: @@ -77,42 +103,42 @@ def resolve_model_to_use(model_name: str = None, model_type: str = None): if model_name: # Check models directory - models_dir_path = os.path.join(app.MODELS_DIR, model_type, model_name) + model_path = os.path.join(model_dir, model_name) + if os.path.exists(model_path): + return model_path for model_extension in model_extensions: - if os.path.exists(models_dir_path + model_extension): - return models_dir_path + model_extension + if os.path.exists(model_path + model_extension): + return model_path + model_extension if os.path.exists(model_name + model_extension): return os.path.abspath(model_name + model_extension) - # Default locations - if model_name in default_models: - default_model_path = os.path.join(app.SD_DIR, model_name) - for model_extension in model_extensions: - if os.path.exists(default_model_path + model_extension): - return default_model_path + model_extension - # Can't find requested model, check the default paths. - for default_model in default_models: - for model_dir in model_dirs: - default_model_path = os.path.join(model_dir, default_model) - for model_extension in model_extensions: - if os.path.exists(default_model_path + model_extension): - if model_name is not None: - log.warn( - f"Could not find the configured custom model {model_name}{model_extension}. Using the default one: {default_model_path}{model_extension}" - ) - return default_model_path + model_extension + if model_type == "stable-diffusion" and not fail_if_not_found: + for default_model in default_models: + default_model_path = os.path.join(model_dir, default_model["file_name"]) + if os.path.exists(default_model_path): + if model_name is not None: + log.warn( + f"Could not find the configured custom model {model_name}. Using the default one: {default_model_path}" + ) + return default_model_path - return None + if model_name and fail_if_not_found: + raise Exception(f"Could not find the desired model {model_name}! 
Is it present in the {model_dir} folder?")


 def reload_models_if_necessary(context: Context, task_data: TaskData):
+    face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else ""
+    upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else ""
+
     model_paths_in_req = {
         "stable-diffusion": task_data.use_stable_diffusion_model,
         "vae": task_data.use_vae_model,
         "hypernetwork": task_data.use_hypernetwork_model,
-        "gfpgan": task_data.use_face_correction,
-        "realesrgan": task_data.use_upscale,
+        "codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None,
+        "gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None,
+        "realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None,
+        "latent_upscaler": True if "latent_upscaler" in upscale_lower else None,
         "nsfw_checker": True if task_data.block_nsfw else None,
         "lora": task_data.use_lora_model,
     }
@@ -122,14 +148,28 @@ def reload_models_if_necessary(context: Context, task_data: TaskData):
         if context.model_paths.get(model_type) != path
     }

-    if set_vram_optimizations(context):  # reload SD
+    if task_data.codeformer_upscale_faces:
+        if "realesrgan" not in models_to_reload and "realesrgan" not in context.models:
+            default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
+            models_to_reload["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
+        elif "realesrgan" in models_to_reload and models_to_reload["realesrgan"] is None:
+            del models_to_reload["realesrgan"]  # don't unload realesrgan
+
+    if set_vram_optimizations(context) or set_clip_skip(context, task_data):  # reload SD
         models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"]

     for model_type, model_path_in_req in models_to_reload.items():
         context.model_paths[model_type] = model_path_in_req

         action_fn = unload_model if context.model_paths[model_type] is None else load_model
-        action_fn(context, model_type, scan_model=False)  # we've scanned them already
+        try:
+            action_fn(context, model_type, scan_model=False)  # we've scanned them already
+            if model_type in context.model_load_errors:
+                del context.model_load_errors[model_type]
+        except Exception as e:
+            log.exception(e)
+            if action_fn == load_model:
+                context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks


 def resolve_model_paths(task_data: TaskData):
@@ -141,11 +181,49 @@ def resolve_model_paths(task_data: TaskData):
     task_data.use_lora_model = resolve_model_to_use(task_data.use_lora_model, model_type="lora")

     if task_data.use_face_correction:
-        task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, "gfpgan")
-    if task_data.use_upscale:
+        if "gfpgan" in task_data.use_face_correction.lower():
+            model_type = "gfpgan"
+        elif "codeformer" in task_data.use_face_correction.lower():
+            model_type = "codeformer"
+            download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0")
+
+        task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, model_type)
+    if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
         task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan")


+def fail_if_models_did_not_load(context: Context):
+    for model_type in KNOWN_MODEL_TYPES:
+        if model_type in context.model_load_errors:
+            e = context.model_load_errors[model_type]
+            raise Exception(f"Could not load the {model_type} model! Reason: " + e)
+
+
+def download_default_models_if_necessary():
+    for model_type, models in DEFAULT_MODELS.items():
+        for model in models:
+            try:
+                download_if_necessary(model_type, model["file_name"], model["model_id"])
+            except:
+                traceback.print_exc()
+                app.fail_and_die(fail_type="model_download", data=model_type)
+
+        print(model_type, "model(s) found.")
+
+
+def download_if_necessary(model_type: str, file_name: str, model_id: str):
+    model_path = os.path.join(app.MODELS_DIR, model_type, file_name)
+    expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]
+
+    other_models_exist = any_model_exists(model_type)
+    known_model_exists = os.path.exists(model_path)
+    known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
+
+    if known_model_is_corrupt or (not other_models_exist and not known_model_exists):
+        print("> download", model_type, model_id)
+        download_model(model_type, model_id, download_base_dir=app.MODELS_DIR)
+
+
 def set_vram_optimizations(context: Context):
     config = app.getConfig()
     vram_usage_level = config.get("vram_usage_level", "balanced")
@@ -157,6 +235,36 @@ def set_vram_optimizations(context: Context):
     return False


+def migrate_legacy_model_location():
+    'Move the models inside the legacy "stable-diffusion" folder, to their respective folders'
+
+    for model_type, models in DEFAULT_MODELS.items():
+        for model in models:
+            file_name = model["file_name"]
+            legacy_path = os.path.join(app.SD_DIR, file_name)
+            if os.path.exists(legacy_path):
+                shutil.move(legacy_path, os.path.join(app.MODELS_DIR, model_type, file_name))
+
+
+def any_model_exists(model_type: str) -> bool:
+    extensions = MODEL_EXTENSIONS.get(model_type, [])
+    for ext in extensions:
+        if any(glob(f"{app.MODELS_DIR}/{model_type}/**/*{ext}", recursive=True)):
+            return True
+
+    return False
+
+
+def set_clip_skip(context: Context, task_data: TaskData):
+    clip_skip = task_data.clip_skip
+
+    if clip_skip != context.clip_skip:
+        context.clip_skip = clip_skip
+        return True
+
+    return False
+
+
 def make_model_folders():
     for model_type in KNOWN_MODEL_TYPES:
         model_dir_path = os.path.join(app.MODELS_DIR, model_type)
@@ -204,17 +312,12 @@ def is_malicious_model(file_path):
 def getModels():
     models = {
-        "active": {
-            "stable-diffusion": "sd-v1-4",
-            "vae": "",
-            "hypernetwork": "",
-            "lora": "",
-        },
         "options": {
             "stable-diffusion": ["sd-v1-4"],
             "vae": [],
             "hypernetwork": [],
             "lora": [],
+            "codeformer": ["codeformer"],
         },
     }

@@ -275,9 +378,4 @@ def getModels():
     if models_scanned > 0:
         log.info(f"[green]Scanned {models_scanned} models. Nothing infected[/]")

-    # legacy
-    custom_weight_path = os.path.join(app.SD_DIR, "custom-model.ckpt")
-    if os.path.exists(custom_weight_path):
-        models["options"]["stable-diffusion"].append("custom-model")
-
     return models
diff --git a/ui/easydiffusion/renderer.py b/ui/easydiffusion/renderer.py
index f685d038..a57dfc6c 100644
--- a/ui/easydiffusion/renderer.py
+++ b/ui/easydiffusion/renderer.py
@@ -7,10 +7,12 @@ from easydiffusion import device_manager
 from easydiffusion.types import GenerateImageRequest
 from easydiffusion.types import Image as ResponseImage
 from easydiffusion.types import Response, TaskData, UserInitiatedStop
+from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use
 from easydiffusion.utils import get_printable_request, log, save_images_to_disk
 from sdkit import Context
 from sdkit.filter import apply_filters
 from sdkit.generate import generate_images
+from sdkit.models import load_model
 from sdkit.utils import (
     diffusers_latent_samples_to_images,
     gc,
@@ -33,6 +35,8 @@ def init(device):
     context.stop_processing = False
     context.temp_images = {}
     context.partial_x_samples = None
+    context.model_load_errors = {}
+    context.enable_codeformer = True

     from easydiffusion import app

@@ -72,7 +76,7 @@ def make_images(


 def print_task_info(req: GenerateImageRequest, task_data: TaskData):
-    req_str = pprint.pformat(get_printable_request(req)).replace("[", "\[")
+    req_str = pprint.pformat(get_printable_request(req, task_data)).replace("[", "\[")
     task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
     log.info(f"request: {req_str}")
     log.info(f"task data: {task_str}")
@@ -95,7 +99,7 @@ def make_images_internal(
         task_data.stream_image_progress_interval,
     )
     gc(context)
-    filtered_images = filter_images(task_data, images, user_stopped)
+    filtered_images = filter_images(req, task_data, images, user_stopped)

     if task_data.save_to_disk_path is not None:
         save_images_to_disk(images, filtered_images, req, task_data)
@@ -151,22 +155,55 @@ def generate_images_internal(
     return images, user_stopped


-def filter_images(task_data: TaskData, images: list, user_stopped):
+def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped):
     if user_stopped:
         return images

-    filters_to_apply = []
     if task_data.block_nsfw:
-        filters_to_apply.append("nsfw_checker")
-    if task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
-        filters_to_apply.append("gfpgan")
-    if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
-        filters_to_apply.append("realesrgan")
+        images = apply_filters(context, "nsfw_checker", images)

-    if len(filters_to_apply) == 0:
-        return images
+    if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower():
+        default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
+        prev_realesrgan_path = None
+        if task_data.codeformer_upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]:
+            prev_realesrgan_path = context.model_paths["realesrgan"]
+            context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
+            load_model(context, "realesrgan")

-    return apply_filters(context, filters_to_apply, images, scale=task_data.upscale_amount)
+        try:
+            images = apply_filters(
+                context,
+                "codeformer",
+                images,
+                upscale_faces=task_data.codeformer_upscale_faces,
+                codeformer_fidelity=task_data.codeformer_fidelity,
+            )
+        finally:
+            if prev_realesrgan_path:
+                context.model_paths["realesrgan"] = prev_realesrgan_path
+                load_model(context, "realesrgan")
+    elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
+        images = apply_filters(context, "gfpgan", images)
+
+    if task_data.use_upscale:
+        if "realesrgan" in task_data.use_upscale.lower():
+            images = apply_filters(context, "realesrgan", images, scale=task_data.upscale_amount)
+        elif task_data.use_upscale == "latent_upscaler":
+            images = apply_filters(
+                context,
+                "latent_upscaler",
+                images,
+                scale=task_data.upscale_amount,
+                latent_upscaler_options={
+                    "prompt": req.prompt,
+                    "negative_prompt": req.negative_prompt,
+                    "seed": req.seed,
+                    "num_inference_steps": task_data.latent_upscaler_steps,
+                    "guidance_scale": 0,
+                },
+            )
+
+    return images


 def construct_response(images: list, seeds: list, task_data: TaskData, base_seed: int):
diff --git a/ui/easydiffusion/server.py b/ui/easydiffusion/server.py
index a1aab6c0..d8940bb5 100644
--- a/ui/easydiffusion/server.py
+++ b/ui/easydiffusion/server.py
@@ -15,6 +15,7 @@ from fastapi import FastAPI, HTTPException
 from fastapi.staticfiles import StaticFiles
 from pydantic import BaseModel, Extra
 from starlette.responses import FileResponse, JSONResponse, StreamingResponse
+from pycloudflared import try_cloudflare

 log.info(f"started in {app.SD_DIR}")
 log.info(f"started at {datetime.datetime.now():%x %X}")
@@ -113,6 +114,14 @@ def init():
     def get_image(task_id: int, img_id: int):
         return get_image_internal(task_id, img_id)

+    @server_api.post("/tunnel/cloudflare/start")
+    def start_cloudflare_tunnel(req: dict):
+        return start_cloudflare_tunnel_internal(req)
+
+    @server_api.post("/tunnel/cloudflare/stop")
+    def stop_cloudflare_tunnel(req: dict):
+        return stop_cloudflare_tunnel_internal(req)
+
     @server_api.get("/")
     def read_root():
         return FileResponse(os.path.join(app.SD_UI_DIR, "index.html"), headers=NOCACHE_HEADERS)
@@ -211,6 +220,8 @@ def ping_internal(session_id: str = None):
         session = task_manager.get_cached_session(session_id, update_ttl=True)
         response["tasks"] = {id(t): t.status for t in session.tasks}
     response["devices"] = task_manager.get_devices()
+    if cloudflare.address != None:
+        response["cloudflare"] = cloudflare.address
     return JSONResponse(response, headers=NOCACHE_HEADERS)

@@ -322,3 +333,47 @@ def get_image_internal(task_id: int, img_id: int):
         return StreamingResponse(img_data, media_type="image/jpeg")
     except KeyError as e:
         raise HTTPException(status_code=500, detail=str(e))
+
+#---- Cloudflare Tunnel ----
+class CloudflareTunnel:
+    def __init__(self):
+        config = app.getConfig()
+        self.urls = None
+        self.port = config.get("net", {}).get("listen_port")
+
+    def start(self):
+        if self.port:
+            self.urls = try_cloudflare(self.port)
+
+    def stop(self):
+        if self.urls:
+            try_cloudflare.terminate(self.port)
+            self.urls = None
+
+    @property
+    def address(self):
+        if self.urls:
+            return self.urls.tunnel
+        else:
+            return None
+
+cloudflare = CloudflareTunnel()
+
+def start_cloudflare_tunnel_internal(req: dict):
+    try:
+        cloudflare.start()
+        log.info(f"- Started cloudflare tunnel. Using address: {cloudflare.address}")
+        return JSONResponse({"address":cloudflare.address})
+    except Exception as e:
+        log.error(str(e))
+        log.error(traceback.format_exc())
+        return HTTPException(status_code=500, detail=str(e))
+
+def stop_cloudflare_tunnel_internal(req: dict):
+    try:
+        cloudflare.stop()
+    except Exception as e:
+        log.error(str(e))
+        log.error(traceback.format_exc())
+        return HTTPException(status_code=500, detail=str(e))
+
diff --git a/ui/easydiffusion/task_manager.py b/ui/easydiffusion/task_manager.py
index c11acbec..a91cd9c6 100644
--- a/ui/easydiffusion/task_manager.py
+++ b/ui/easydiffusion/task_manager.py
@@ -336,6 +336,7 @@ def thread_render(device):
             current_state = ServerStates.LoadingModel
             model_manager.resolve_model_paths(task.task_data)
             model_manager.reload_models_if_necessary(renderer.context, task.task_data)
+            model_manager.fail_if_models_did_not_load(renderer.context)

             current_state = ServerStates.Rendering
             task.response = renderer.make_images(
diff --git a/ui/easydiffusion/types.py b/ui/easydiffusion/types.py
index 7462355f..abf8db29 100644
--- a/ui/easydiffusion/types.py
+++ b/ui/easydiffusion/types.py
@@ -23,6 +23,7 @@ class GenerateImageRequest(BaseModel):
     sampler_name: str = None  # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
     hypernetwork_strength: float = 0
     lora_alpha: float = 0
+    tiling: str = "none"  # "none", "x", "y", "xy"


 class TaskData(BaseModel):
@@ -32,8 +33,9 @@ class TaskData(BaseModel):
     vram_usage_level: str = "balanced"  # or "low" or "medium"

     use_face_correction: str = None  # or "GFPGANv1.3"
-    use_upscale: str = None  # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
+    use_upscale: str = None  # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" or "latent_upscaler"
     upscale_amount: int = 4  # or 2
+    latent_upscaler_steps: int = 10
     use_stable_diffusion_model: str = "sd-v1-4"
     # use_stable_diffusion_config: str = "v1-inference"
     use_vae_model: str = None
@@ -48,6 +50,9 @@ class TaskData(BaseModel):
     metadata_output_format: str = "txt"  # or "json"
     stream_image_progress: bool = False
     stream_image_progress_interval: int = 5
+    clip_skip: bool = False
+    codeformer_upscale_faces: bool = False
+    codeformer_fidelity: float = 0.5


 class MergeRequest(BaseModel):
diff --git a/ui/easydiffusion/utils/save_utils.py b/ui/easydiffusion/utils/save_utils.py
index a7043f27..ff2906a6 100644
--- a/ui/easydiffusion/utils/save_utils.py
+++ b/ui/easydiffusion/utils/save_utils.py
@@ -15,23 +15,26 @@ img_number_regex = re.compile("([0-9]{5,})")

 # keep in sync with `ui/media/js/dnd.js`
 TASK_TEXT_MAPPING = {
     "prompt": "Prompt",
+    "negative_prompt": "Negative Prompt",
+    "seed": "Seed",
+    "use_stable_diffusion_model": "Stable Diffusion model",
+    "clip_skip": "Clip Skip",
+    "use_vae_model": "VAE model",
+    "sampler_name": "Sampler",
     "width": "Width",
     "height": "Height",
-    "seed": "Seed",
     "num_inference_steps": "Steps",
     "guidance_scale": "Guidance Scale",
     "prompt_strength": "Prompt Strength",
+    "use_lora_model": "LoRA model",
+    "lora_alpha": "LoRA Strength",
+    "use_hypernetwork_model": "Hypernetwork model",
+    "hypernetwork_strength": "Hypernetwork Strength",
+    "tiling": "Seamless Tiling",
     "use_face_correction": "Use Face Correction",
     "use_upscale": "Use Upscaling",
     "upscale_amount": "Upscale By",
-    "sampler_name": "Sampler",
-    "negative_prompt": "Negative Prompt",
-    "use_stable_diffusion_model": "Stable Diffusion model",
-    "use_vae_model": "VAE model",
-    "use_hypernetwork_model": "Hypernetwork model",
-    "hypernetwork_strength": "Hypernetwork Strength",
-    "use_lora_model": "LoRA model",
-    "lora_alpha": "LoRA Strength",
+    "latent_upscaler_steps": "Latent Upscaler Steps"
 }

 time_placeholders = {
@@ -168,41 +171,23 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR
             output_quality=task_data.output_quality,
             output_lossless=task_data.output_lossless,
         )
-        if task_data.metadata_output_format.lower() in ["json", "txt", "embed"]:
-            save_dicts(
-                metadata_entries,
-                save_dir_path,
-                file_name=make_filter_filename,
-                output_format=task_data.metadata_output_format,
-                file_format=task_data.output_format,
-            )
+        if task_data.metadata_output_format:
+            for metadata_output_format in task_data.metadata_output_format.split(","):
+                if metadata_output_format.lower() in ["json", "txt", "embed"]:
+                    save_dicts(
+                        metadata_entries,
+                        save_dir_path,
+                        file_name=make_filter_filename,
+                        output_format=task_data.metadata_output_format,
+                        file_format=task_data.output_format,
+                    )


 def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
-    metadata = get_printable_request(req)
-    metadata.update(
-        {
-            "use_stable_diffusion_model": task_data.use_stable_diffusion_model,
-            "use_vae_model": task_data.use_vae_model,
-            "use_hypernetwork_model": task_data.use_hypernetwork_model,
-            "use_lora_model": task_data.use_lora_model,
-            "use_face_correction": task_data.use_face_correction,
-            "use_upscale": task_data.use_upscale,
-        }
-    )
-    if metadata["use_upscale"] is not None:
-        metadata["upscale_amount"] = task_data.upscale_amount
-    if task_data.use_hypernetwork_model is None:
-        del metadata["hypernetwork_strength"]
-    if task_data.use_lora_model is None:
-        if "lora_alpha" in metadata:
-            del metadata["lora_alpha"]
-    app_config = app.getConfig()
-    if not app_config.get("test_diffusers", False) and "use_lora_model" in metadata:
-        del metadata["use_lora_model"]
+    metadata = get_printable_request(req, task_data)

     # if text, format it in the text format expected by the UI
-    is_txt_format = task_data.metadata_output_format.lower() == "txt"
+    is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",")
     if is_txt_format:
         metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING}

@@ -213,12 +198,35 @@ def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskD
     return entries


-def get_printable_request(req: GenerateImageRequest):
-    metadata = req.dict()
-    del metadata["init_image"]
-    del metadata["init_image_mask"]
-    if req.init_image is None:
+def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
+    req_metadata = req.dict()
+    task_data_metadata = task_data.dict()
+
+    # Save the metadata in the order defined in TASK_TEXT_MAPPING
+    metadata = {}
+    for key in TASK_TEXT_MAPPING.keys():
+        if key in req_metadata:
+            metadata[key] = req_metadata[key]
+        elif key in task_data_metadata:
+            metadata[key] = task_data_metadata[key]
+
+    # Clean up the metadata
+    if req.init_image is None and "prompt_strength" in metadata:
         del metadata["prompt_strength"]
+    if task_data.use_upscale is None and "upscale_amount" in metadata:
+        del metadata["upscale_amount"]
+    if task_data.use_hypernetwork_model is None and "hypernetwork_strength" in metadata:
+        del metadata["hypernetwork_strength"]
+    if task_data.use_lora_model is None and "lora_alpha" in metadata:
+        del metadata["lora_alpha"]
+    if task_data.use_upscale != "latent_upscaler" and "latent_upscaler_steps" in metadata:
+        del metadata["latent_upscaler_steps"]
+
+    app_config = app.getConfig()
+    if not app_config.get("test_diffusers", False):
+        for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata):
+            del metadata[key]
+
     return metadata
diff --git a/ui/index.html b/ui/index.html
index 412e92bb..77337505 100644
--- a/ui/index.html
+++ b/ui/index.html
@@ -31,7 +31,7 @@