Compare commits

...

235 Commits
beta ... beta

Author SHA1 Message Date
8de2536982 Disable Forge's behavior of renaming models to corrupted on any exception (while loading) 2025-08-08 15:27:40 +05:30
4fb876c393 Support python paths in legacy installations in the dev console 2025-07-25 14:49:19 +05:30
cce6a205a6 Use the launched python executable for installing new packages 2025-07-25 14:43:26 +05:30
afcf85c3f7 Fix a conflict where the system-wide python gets picked up on some Windows PCs 2025-07-25 14:29:36 +05:30
0daf5ea9f6 WebUI: Temporarily apply the zombie process fix only on Windows. It seems to break on Linux (possibly Mac too) 2025-07-19 00:14:54 +05:30
491ee9ef1e Fix a bug where 'Use these settings' wouldn't reset the text encoder field (from non-Flux/Chroma tasks) 2025-07-18 16:37:23 +05:30
1adf7d1a6e v3.5.9 2025-07-18 16:15:12 +05:30
36f5ee97f8 WebUI: Stability fix, to avoid zombie processes hanging around occasionally after closing Easy Diffusion 2025-07-18 16:13:47 +05:30
ffaae89e7f WebUI: Force it to start at the specified port, or fail if something's already running at that port. Avoids Gradio's silent behavior of automatically starting at the next available port, which throws the system into an inconsistent state if Forge is already running (as a user choice, or as a zombie process) 2025-07-18 13:36:41 +05:30
ec638b4343 Fix typo that broke copy/paste settings 2025-07-18 10:26:17 +05:30
3977565cfa Allow custom CLI args for WebUI 2025-07-14 19:40:05 +05:30
09f7250454 Log the WebUI server error 2025-07-14 19:11:07 +05:30
4e07966e54 Log the WebUI server error 2025-07-14 19:10:39 +05:30
5548e422e9 Better logging for webui server errors 2025-07-14 18:54:39 +05:30
efd6dfaca5 Stability fix for Forge - garbage collect forcibly before reloading big models, which occasionally caused a process crash 2025-07-14 18:40:15 +05:30
8da94496ed Allow gguf text encoders and vae 2025-07-14 18:03:39 +05:30
78538f5cfb Remove temporary logs 2025-07-14 13:25:23 +05:30
279ce2e263 v3.5.8 2025-07-14 13:21:29 +05:30
889a070e62 Support custom text encoders and Flux VAEs in the UI 2025-07-14 13:20:26 +05:30
497b996ce9 Stability patches for the WebUI backend 2025-07-14 12:59:04 +05:30
83b8028e0a Ignore pickle scanning for .sft and .gguf files 2025-06-27 16:43:41 +05:30
7315584904 v3.5.7 - chroma support 2025-06-27 15:32:18 +05:30
9d86291b13 webui: chroma support; update Forge to the latest commit 2025-06-27 15:21:09 +05:30
2c359e0e39 webui: logging changes 2025-06-27 15:20:30 +05:30
a7f0568bff Remove an unnecessary check, the installer already checks for this 2025-06-27 14:06:42 +05:30
275897fcd4 Fix an install warning check - this field isn't saved anymore 2025-06-27 14:05:32 +05:30
9a71f23709 Potential fix for #1942 - basicsr wasn't installing on mac 2025-06-27 13:58:18 +05:30
c845dd32a8 Add support for AMD 9060/9070 2025-06-13 10:45:35 +05:30
193a8dc7c5 Auto-fix cpu-only torch installations on NVIDIA 5060 2025-05-30 18:34:54 +05:30
712770d0f8 Early support for AMD 9070 (and Navi 4x series) 2025-05-30 13:07:56 +05:30
d5e4fc2e2f Recognize 5060 Ti and a few other recent GPUs 2025-05-30 10:58:10 +05:30
58c6d02c41 Support more GPUs (workstation and mining); Fix torchruntime test 2025-04-29 10:40:08 +05:30
e4dd2e26ab Fix torchruntime to not force cu128 on older NVIDIA GPUs (which don't support it) 2025-04-29 10:08:24 +05:30
40801de615 Tell the user to reinstall ED if they're trying to use a 50xx GPU with Python 3.8-based Easy Diffusion 2025-04-27 21:28:56 +05:30
adfe24fdd0 Fix FileNotFoundError when installing torch on Windows PCs without powershell in their system PATH variable 2025-04-25 20:45:14 +05:30
1ed27e63f9 Use pytorch 2.7 (with cuda 12.8) on new installations with NVIDIA gpus 2025-04-24 15:50:55 +05:30
0e3b6a8609 Auto-upgrade torch for NVIDIA 50xx series. Fix for #1918 2025-04-23 15:22:59 +05:30
16c76886fa Fix for incorrect upgrade logic for torchruntime. Affects the workaround in #1918 2025-04-23 15:22:30 +05:30
5c7625c425 Fix for blackwell GPUs on new installations 2025-03-29 14:34:17 +05:30
4044d696e6 Use torchruntime with Forge 2025-03-11 11:03:48 +05:30
5285e8f5e8 Use safetensors as the default model instead of ckpt 2025-03-11 10:24:55 +05:30
7a9d25fb4f Fix nvidia 50xx support 2025-03-10 06:20:46 +05:30
0b24ef71a1 Fix compatibility with python 3.9 and directml 2025-03-07 10:39:38 +05:30
24e374228d Check for half-precision on GeForce MX450 2025-03-07 10:28:07 +05:30
9c7f84bd55 Fix Granite Ridge APU device type 2025-03-06 12:48:18 +05:30
5df082a98f Recognize some more cards (5070, Granite Ridge) via a torchruntime upgrade 2025-03-06 12:06:13 +05:30
895801b667 Use 64MB dict size for NSIS compressor 2025-03-06 10:18:55 +05:30
f0671d407d Use python 3.9 by default 2025-03-04 15:27:57 +05:30
64be622285 sdkit 2.0.22.7/2.0.15.16 - python 3.9 compatibility 2025-03-04 14:58:34 +05:30
80a4c6f295 torchruntime upgrade - fix bug with multiple GPUs of the same model 2025-03-04 11:27:17 +05:30
7a383d7bc4 sdkit upgrade - fixes loras with numpy arrays 2025-02-19 14:56:23 +05:30
5cddfe78b2 Potential fix for #1902 2025-02-18 11:12:46 +05:30
1ec4547a68 v3.5.6 2025-02-17 14:53:00 +05:30
827ec785e1 Fix broken model merge 2025-02-17 14:52:13 +05:30
40bcbad797 Recognize Phoenix3 and Phoenix4 AMD APUs 2025-02-13 15:21:52 +05:30
03c49d5b47 Hotfix for missing torchruntime on new installations 2025-02-10 19:31:05 +05:30
e9667fefa9 Temporary fix for #1899 2025-02-10 18:14:43 +05:30
19c6e10eda Use the correct size of the image when used as the input. Code credit: @AvidGameFan 2025-02-10 12:49:38 +05:30
55daf647a0 v3.5.5 / v3.0.13 - torchruntime 2025-02-10 10:36:43 +05:30
077c6c3ac9 Use torchruntime for installing torch/torchvision on the users' PC, instead of the custom logic used here. torchruntime was built from the custom logic used here, and covers a lot more scenarios and supports more devices 2025-02-10 10:33:57 +05:30
d0f45f1f51 Another fix for mps backend 2025-02-10 10:04:55 +05:30
7a17adc46b Fix broken mps backend 2025-02-10 09:55:08 +05:30
3380b58c18 torchruntime 1.9.2 - fixes a bug on cpu platforms 2025-02-08 16:14:48 +05:30
7577a1f66c v3.5.4 2025-02-08 11:27:32 +05:30
a8b1bbe441 Fix a bug where the inpainting mask wasn't resized to the image size when using the WebUI/v3.5 backend. Thanks @AvidGameFan for their help in investigating and fixing this! 2025-02-08 11:26:03 +05:30
a1fb9bc65c Revert "Merge pull request #1874 from AvidGameFan/inpaint_mask_size"
This reverts commit f737921eaa, reversing
changes made to b12a6b9537.
2025-02-07 10:46:03 +05:30
f737921eaa Merge pull request #1874 from AvidGameFan/inpaint_mask_size
Remove inpaint size limit
2025-02-07 10:27:44 +05:30
b12a6b9537 changelog 2025-02-06 13:08:58 +05:30
2efef8043e Remove hardcoded torch.cuda references from the webui backend code 2025-02-06 13:07:10 +05:30
9e21d681a0 Remove hardcoded references to torch.cuda; Use torchruntime and sdkit's device utilities instead 2025-02-06 13:06:51 +05:30
964aef6bc3 Fix a bug preventing the installation of the WebUI backend 2025-02-04 17:57:08 +05:30
07105d7cfd Debug logging for webui root dir 2025-02-04 17:47:58 +05:30
7aa4fe9c4b sdkit 2.0.22.3 or 2.0.15.12 - fixes a regression on mac 2025-02-03 21:29:48 +05:30
03a5108cdd sdkit 2.0.22.2 or 2.0.15.11 - install torchruntime 2025-02-03 16:35:48 +05:30
c248231181 sdkit 2.0.22 or 2.0.15.9 2025-01-31 18:35:29 +05:30
35f752b36d Move the half precision bug check logic to sdkit 2025-01-31 16:28:12 +05:30
d5a7c1bdf6 [sdkit update] v2.0.22 (and v2.0.15.8 for LTS) 2025-01-31 16:22:02 +05:30
d082ac3519 changelog 2025-01-28 10:34:27 +05:30
e57599c01e Fix for conda accidental jailbreak when using WebUI 2025-01-28 10:33:25 +05:30
fd76a160ac changelog 2025-01-28 09:53:28 +05:30
5337aa1a6e Temporarily remove torch 2.5 from the list, since it doesn't work with Python 3.8. More on this in future commits 2025-01-28 09:51:18 +05:30
89ada5bfa9 Skip sdkit/diffusers install if it's in developer mode 2025-01-27 18:46:19 +05:30
88b8e54ad8 Don't need wmic in webui_console.py 2025-01-09 17:15:19 +05:30
be7adcc367 Even older rocm versions 2025-01-09 16:39:21 +05:30
00fc1a81e0 Allow older rocm versions 2025-01-09 15:59:28 +05:30
5023619676 Upgrade the version of torch used for rocm for Navi 30+, and point to the broader torch URL 2025-01-07 10:32:02 +05:30
52aaef5e39 Update the index url for AMD ROCm torch install 2025-01-06 19:48:44 +05:30
493035df16 Extend the list of supported torch, CUDA and ROCm versions 2025-01-06 19:30:18 +05:30
4a62d4e76e Workaround for when the context doesn't have a model_load_errors field; Not sure why it doesn't have it 2025-01-04 18:04:43 +05:30
0af3fa8e98 version bump for wmic deprecation 2025-01-04 13:28:50 +05:30
74b25bdcb1 Replace the use of wmic (deprecated) with a powershell call 2025-01-04 13:25:53 +05:30
e084a35ffc Allow image editor to grow
Limiting the editor size seems to cause failures in inpainting, probably due to a mismatch with the requested image size.
2024-12-27 22:10:14 -05:00
ad06e345c9 Allow new resolutions with Use As Input
When Use As Input provides unlisted resolutions, add them first so that they'll be recognized as valid.
2024-12-24 23:36:12 -05:00
572b0329cf Prevent invalid size settings for Use As Input
Images larger than the window size show up as blank sizes when setting the field.  Retain previous (presumably valid) size rather than set an invalid one.
2024-12-23 21:46:25 -05:00
33ca04b916 Use as Input sets size
Force the generate size to match the image size when clicking Use as Input.
2024-12-22 21:22:37 -05:00
8cd6ca6269 Remove inpaint size limit
With the limit removed, inpaint can work with image sizes larger than 768 pixels.
2024-12-22 17:36:01 -05:00
49488ded01 v3.5.1 2024-12-17 18:29:39 +05:30
2f9b907a6b Update Forge to the latest commit 2024-12-17 18:28:47 +05:30
d5277cd38c Potential fix for #1869 2024-12-17 14:35:41 +05:30
fe2443ec0c Annual 2024-12-13 15:55:19 +05:30
b3136e5738 typo 2024-12-12 00:27:03 +05:30
b302764265 2024-12-12 00:26:16 +05:30
6121f580e9 winter is coming 2024-12-12 00:15:31 +05:30
45a882731a Pin wandb 2024-12-11 11:59:09 +05:30
d38512d841 Merge pull request #1867 from tjcomserv/tjcomserv-patch-1-pydantic
Tjcomserv patch 1 pydantic
2024-12-11 11:20:17 +05:30
5a03b61aef Merge branch 'beta' into tjcomserv-patch-1-pydantic 2024-12-11 11:19:58 +05:30
TJ ca0dca4a0f Update check_modules.py
Use later version of FastApi that is compatible with pydantic v2

Set allowed version of setuptools to 69.5.1 to clear error on startup
2024-12-10 01:54:56 +00:00
TJ 4228ec0df8 Update types.py
Pydantic Error:

pydantic.errors.PydanticUserError: A non-annotated attribute was detected: `preserve_init_image_color_profile = False`. All model fields require a type annotation; if `preserve_init_image_color_profile` is not meant to be a field, you may be able to resolve this error by annotating it as a `ClassVar` or updating `model_config['ignored_types']`.
2024-12-10 01:52:28 +00:00
3c9ffcf7ca Update check_modules.py 2024-11-13 21:50:52 +05:30
b3a961fc82 Update check_modules.py 2024-11-13 21:49:17 +05:30
0c8410c371 Pin huggingface-hub to 0.23.2 to fix broken deployments 2024-11-13 21:39:01 +05:30
96ec3ed270 Pin huggingface-hub to 0.23.2 to fix broken deployments 2024-11-13 21:36:35 +05:30
d0be4edf1d Update to the latest forge commit - b592142f3b46852263747a4efd0d244ad17b5bb3 2024-10-28 18:34:36 +05:30
d4ea34a013 Increase the webui health-check timeout to 30 seconds (from 1 second); Also trap ReadTimeout in the collection of TimeoutError 2024-10-28 18:34:00 +05:30
459bfd4280 Temp patch for missing attribute 2024-10-18 08:30:29 +05:30
f751070c7f Warn for schedulers and steps with Flux models 2024-10-17 12:14:01 +05:30
3f03432580 Get the Controlnet filters from the backend, instead of hardcoding to sdkit 2024-10-17 11:23:52 +05:30
d5edbfee8b Remember the Face Fix and Codeformer settings in the UI 2024-10-17 10:49:26 +05:30
7e0d5893cd Merge pull request #1849 from easydiffusion/forge
Include wbem in env PATH (to find wmic)
2024-10-17 10:29:39 +05:30
5d0e5e96d6 Include wbem in env PATH (to find wmic) 2024-10-17 10:28:53 +05:30
d8c3d7cf92 Merge pull request #1847 from easydiffusion/forge
ED 3.5 - Forge as a new backend
2024-10-12 12:52:16 +05:30
7f9394b621 Refactor force-fp32 to not crash on PCs without cuda 2024-10-12 12:45:48 +05:30
76c8e18fcf [webui] Force full-precision on NVIDIA 16xx graphics cards 2024-10-12 12:23:45 +05:30
f514fe6c11 Different way to use python path in dev shell 2024-10-11 19:46:36 +05:30
c947004ec9 Don't use shell=True in dev console 2024-10-11 19:36:05 +05:30
154b550e0e Handle connection error during ping 2024-10-11 19:18:35 +05:30
739ad3a964 Remove fixes for condabin, since we copy that for linux/mac now 2024-10-11 19:14:07 +05:30
648187d2aa use condabin's conda instead of bin/conda 2024-10-11 19:13:02 +05:30
79cfee0447 debugging 2024-10-11 19:05:37 +05:30
7e00e3c260 typo 2024-10-11 19:04:01 +05:30
4ab9d7aebb Expose the CONDA_BASEPATH env variable 2024-10-11 19:03:10 +05:30
3ff9d9c3bb Attempt 2 at fixing conda basepath 2024-10-11 18:57:49 +05:30
02a2dce049 Use condabin's conda binary on linux and mac. Works around a bug in conda 4.14 (very old version) 2024-10-11 18:48:48 +05:30
72095a6d97 Copy webui_console.py in on_sd_start script 2024-10-11 17:34:23 +05:30
3580fa5e32 update changelog for 3.5 2024-10-11 17:21:08 +05:30
926c55cf69 update changelog for 3.5 2024-10-11 17:19:58 +05:30
8bf8d0b1a1 update changelog for 3.5 2024-10-11 17:18:26 +05:30
b3f714923f changelog 2024-10-11 15:13:42 +05:30
7e208fb682 Remove temp hack for debugging 2024-10-11 13:38:45 +05:30
d0cd340cbd Fix bug with post-install browser open 2024-10-11 13:37:26 +05:30
696b65049f Merge branch 'forge' of github.com:cmdr2/stable-diffusion-ui into forge 2024-10-11 13:34:45 +05:30
cabf4a4f07 Error reporting while starting the post-installation browser; Temp hack 2024-10-11 13:34:23 +05:30
f6e6b1ae5f Copy webui_console.py in sh 2024-10-11 12:58:28 +05:30
ec18bae5e4 Copy webui_console.py in bat 2024-10-11 12:57:58 +05:30
e804247acb WebUI dev console script 2024-10-11 12:55:35 +05:30
07483891b6 Tweak message for 'still installing' 2024-10-11 12:55:24 +05:30
dfa552585e Don't open the browser while webui is still installing. Wait until the webui actually starts up. 2024-10-11 12:28:20 +05:30
05cf4be89b Check for context attr 2024-10-11 12:05:29 +05:30
e8ee9275bc Don't parse if no filters are applied 2024-10-11 11:39:13 +05:30
c46f29abb0 Include scheduler in drag-and-drop 2024-10-09 21:18:15 +05:30
8bd5eb5ce4 Remove unnecessary patch for sdkit's model downloader. ED now handles both kinds of model folder names 2024-10-09 18:19:21 +05:30
78dcc7cb03 Bring back the earlier model scanning logic 2024-10-09 18:16:15 +05:30
e32b34e2f4 Show version 3.0.10 for ED 2 and 3.0 users, and 3.5.0 for webui users 2024-10-09 14:33:04 +05:30
16463431dd case 2024-10-09 14:19:28 +05:30
9761b172de Use a monitor thread for restarting the webui process if it has crashed. Sometimes it would crash but the process would be a zombie. An external monitor ensures that it gets killed and restarted 2024-10-09 13:47:06 +05:30
d6adb17746 Fix a bug where the vae wouldn't be unloaded in webui 2024-10-09 13:41:56 +05:30
3327244da2 Scheduler selection in the UI; Store Scheduler and Distilled Guidance in metadata 2024-10-09 11:43:45 +05:30
d283fb0776 Option to select Distilled Guidance Scale; Show warnings for Euler Ancestral sampler with Flux 2024-10-09 11:12:28 +05:30
90bc1456c9 Suggest guidance value for Flux and non-Flux models 2024-10-08 18:54:06 +05:30
84c8284a90 Proxy webui api 2024-10-08 18:35:08 +05:30
c1193377b6 Use vram_usage_level while starting webui 2024-10-07 13:33:25 +05:30
5a5d37ba52 Enable rendering on only the CPU 2024-10-07 12:59:19 +05:30
b6ba782c35 Support both WebUI and ED folder names for models 2024-10-07 12:48:27 +05:30
9abc76482c Wait until the webui backend responds with a 200 OK to ping requests 2024-10-07 12:39:07 +05:30
6f4e2017f4 Install Forge automatically by creating a conda environment and cloning the forge repo 2024-10-07 11:28:28 +05:30
6ea7dd36da Fix a bug where the config file wasn't actually read on linux/mac 2024-10-02 15:30:29 +05:30
391e12e20d Require onnxruntime for nsfw checking 2024-10-01 16:41:26 +05:30
4e3a5cb6d9 Automatically restart webui if it stops/crashes 2024-10-01 16:25:56 +05:30
f51ab909ff Reset VAE upon restart 2024-10-01 16:25:24 +05:30
754a5f5e52 Case-insensitive model directories 2024-10-01 13:55:35 +05:30
9a12a8618c First working version of dynamic backends, with Forge and ed_diffusers (v3) and ed_classic (v2). Does not auto-install Forge yet 2024-10-01 10:54:58 +05:30
2eb0c9106a Temporarily disable the auto-selection of the appropriate controlnet model 2024-09-24 17:43:34 +05:30
a0de0b5814 Minor refactoring of SD dir variables 2024-09-24 17:40:00 +05:30
6559c41b2e minor log import change 2024-09-24 17:27:50 +05:30
dfb8313d1a Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2024-09-09 18:49:07 +05:30
b7d46be530 Use SD 1.4 instead of 1.5 during installation 2024-09-09 18:48:38 +05:30
5fe3acd44b Use 1.4 by default, instead of 1.5 2024-09-09 18:34:32 +05:30
1cc7c1afa0 Merge pull request #1823 from d8ahazard/main
Don't break if we can't write a file
2024-07-24 09:48:30 +05:30
89f5e07619 Log the exception while trying to create an extension info file 2024-07-24 09:47:35 +05:30
45f350239e Don't break if we can't write a file 2024-07-23 17:56:35 -05:00
716f30fecb Merge pull request #1810 from easydiffusion/beta
Beta
2024-06-14 09:49:27 +05:30
364902f8a1 Ignore text in the version string when comparing them 2024-06-14 09:48:49 +05:30
a261a2d47d Fix #1779 - add to PATH only if it isn't present, to avoid exploding the PATH variable each time the function is called 2024-06-14 09:43:30 +05:30
dea962dc89 Merge pull request #1808 from easydiffusion/beta
temp hotfix for rocm torch
2024-06-13 14:06:14 +05:30
d062c2149a temp hotfix for rocm torch 2024-06-13 14:05:11 +05:30
7d49dc105e Merge pull request #1806 from easydiffusion/beta
Don't crash if psutil fails to get cpu or memory usage
2024-06-11 18:28:07 +05:30
fcdc3f2dd0 Don't crash if psutil fails to get cpu or memory usage 2024-06-11 18:27:21 +05:30
d17b167a81 Merge pull request #1803 from easydiffusion/beta
Support legacy installations with torch 1.11, as well as an option for people to upgrade to the latest sdkit+diffusers
2024-06-07 10:33:56 +05:30
1fa83eda0e Merge pull request #1800 from siakc/uvicorn-run-programmatically
Enhancement - using uvicorn.run() instead of os.system()
2024-06-06 17:51:06 +05:30
969751a195 Use uvicorn.run since it's clearer to read 2024-06-06 17:50:41 +05:30
1ae8675487 typo 2024-06-06 16:20:04 +05:30
05f0bfebba Upgrade torch if using the newer sdkit versions 2024-06-06 16:18:11 +05:30
91ad53cd94 Enhancement - using uvicorn.run() instead of os.system() 2024-06-06 10:28:01 +03:30
de680dfd09 Print diffusers' version 2024-06-05 18:53:45 +05:30
4edeb14e94 Allow a user to opt-in to the latest sdkit+diffusers version, while keeping existing 2.0.15.x users restricted to diffusers 0.21.4. This avoids a lengthy upgrade for existing users, while allowing users to opt-in to the latest version. More to come. 2024-06-05 18:46:22 +05:30
e64cf9c9eb Merge pull request #1796 from easydiffusion/beta
Another typo
2024-06-01 09:02:08 +05:30
66d0c4726e Another typo 2024-06-01 09:01:35 +05:30
c923b44f56 Merge pull request #1795 from easydiffusion/beta
typo
2024-05-31 19:30:50 +05:30
b9c343195b typo 2024-05-31 19:30:29 +05:30
4427e8d3dd Merge pull request #1794 from easydiffusion/beta
Generalize the hotfix for missing sdkit dependencies. This is still a…
2024-05-31 19:28:32 +05:30
87c8fe2758 Generalize the hotfix for missing sdkit dependencies. This is still a temporary hotfix, but will ensure that missing packages are installed, not assume that having picklescan means everything's good 2024-05-31 19:27:30 +05:30
70acde7809 Merge pull request #1792 from easydiffusion/beta
Hotfix - sdkit's dependencies aren't getting pulled for some reason
2024-05-31 10:57:43 +05:30
c4b938f132 Hotfix - sdkit's dependencies aren't getting pulled for some reason 2024-05-31 10:56:43 +05:30
d6fdb8d5a9 Merge pull request #1788 from easydiffusion/beta
Hotfix for older accelerate version in the Windows installer
2024-05-30 17:51:32 +05:30
54ac1f7169 Hotfix for older accelerate version in the Windows installer 2024-05-30 17:50:36 +05:30
deebfc6850 Merge pull request #1787 from easydiffusion/beta
Controlnet Strength and SDXL Controlnet support for img2img and inpainting
2024-05-30 13:22:12 +05:30
21644adbe1 sdkit 2.0.15.6 - typo that prevented 0 controlnet strength 2024-05-29 10:01:04 +05:30
fe3c648a24 sdkit 2.0.15.5 - minor null check 2024-05-28 19:46:59 +05:30
05f3523364 Set the controlnet alpha correctly from older exports; Fix a bug with null lora model in exports 2024-05-28 19:16:48 +05:30
4d9b023378 changelog 2024-05-28 18:48:23 +05:30
44789bf16b sdkit 2.0.15.4 - Controlnet strength slider 2024-05-28 18:45:08 +05:30
ad649a8050 sdkit 2.0.15.3 - disable watermarking on SDXL ControlNets to avoid visual artifacts 2024-05-28 09:00:57 +05:30
723304204e diffusers 0.21.4 2024-05-27 15:26:09 +05:30
ddf54d589e v3.0.8 - use sdkit 2.0.15.1, to enable SDXL Controlnets for img2img and inpainting, using diffusers 0.21.4 2024-05-27 15:18:14 +05:30
a5c9c44e53 Merge pull request #1784 from easydiffusion/beta
Another hotfix for setuptools version on Windows and Linux/mac
2024-05-27 10:58:06 +05:30
4d28c78fcc Another hotfix for setuptools version on Windows and Linux/mac 2024-05-27 10:57:20 +05:30
7dc01370ea Merge pull request #1783 from easydiffusion/beta
Pin setuptools to 0.59
2024-05-27 10:45:49 +05:30
21ff109632 Pin setuptools to 0.59 2024-05-27 10:45:19 +05:30
9b0a654d32 Merge pull request #1782 from easydiffusion/beta
Hotfix to pin setuptools to 0.69 - for #1781
2024-05-27 10:38:41 +05:30
fb749dbe24 Potential hotfix for #1781 - pin setuptools to a specific version, until clip is upgraded 2024-05-27 10:37:09 +05:30
17ef1e04f7 Roll back sdkit 2.0.18 (again) 2024-03-19 20:06:49 +05:30
a5b9eefcf9 v3.0.8 - update diffusers to v0.26.3 2024-03-19 19:17:10 +05:30
e5519cda37 sdkit 2.0.18 (diffusers 0.26.3) 2024-03-19 19:12:00 +05:30
d1bd9e2a16 Prev version 2024-03-13 19:53:14 +05:30
5924d01789 Temporarily revert to sdkit 2.0.15 2024-03-13 19:05:55 +05:30
47432fe54e diffusers 0.26.3 2024-03-13 19:01:09 +05:30
8660a79ccd v3.0.8 - update diffusers to v0.26.3 2024-03-13 18:44:17 +05:30
dfb26ed781 Merge pull request #1702 from easydiffusion/beta
Beta
2023-12-12 18:10:46 +05:30
25272ce083 Kofi only 2023-12-12 12:48:13 +05:30
212fa77b47 Merge pull request #1700 from easydiffusion/beta
Beta
2023-12-12 12:47:24 +05:30
6489cd785d Merge pull request #1648 from michaelachrisco/main
Fix Sampler learn more link
2023-11-05 19:16:14 +05:30
a4e651e27e Click to learn more about samplers should go to wiki page 2023-10-28 23:40:20 -07:00
bedf176e62 Merge pull request #1630 from easydiffusion/beta
Beta
2023-10-12 10:06:42 +05:30
824e057d7b Merge pull request #1624 from easydiffusion/beta
sdkit 2.0.15 - fix for gfgpan/realesrgan in parallel threads
2023-10-06 09:54:40 +05:30
307b00cc05 Merge pull request #1622 from easydiffusion/beta
Beta
2023-10-03 19:38:11 +05:30
50 changed files with 3379 additions and 1649 deletions

.github/FUNDING.yml

@ -1,4 +1,3 @@
# These are supported funding model platforms
ko_fi: easydiffusion
patreon: easydiffusion


@ -1,5 +1,33 @@
# What's new?
## v3.5 (preview)
### Major Changes
- **Chroma** - support for the Chroma model, including quantized bnb and nf4 models.
- **Flux** - full support for the Flux model, including quantized bnb and nf4 models.
- **LyCORIS** - including `LoCon`, `Hada`, `IA3` and `Lokr`.
- **11 new samplers** - `DDIM CFG++`, `DPM Fast`, `DPM++ 2m SDE Heun`, `DPM++ 3M SDE`, `Restart`, `Heun PP2`, `IPNDM`, `IPNDM_V`, `LCM`, `[Forge] Flux Realistic`, `[Forge] Flux Realistic (Slow)`.
- **15 new schedulers** - `Uniform`, `Karras`, `Exponential`, `Polyexponential`, `SGM Uniform`, `KL Optimal`, `Align Your Steps`, `Normal`, `DDIM`, `Beta`, `Turbo`, `Align Your Steps GITS`, `Align Your Steps 11`, `Align Your Steps 32`.
- **42 new Controlnet filters, and support for lots of new ControlNet models** (including QR ControlNets).
- **5 upscalers** - `SwinIR`, `ScuNET`, `Nearest`, `Lanczos`, `ESRGAN`.
- **Faster than v3.0**
- **Major rewrite of the code** - We've switched to `Forge WebUI` under the hood, which brings a lot of new features, faster image generation, and support for all the extensions in the Forge/Automatic1111 community. This allows Easy Diffusion to stay up-to-date with the latest features, and focus on making the UI and installation experience even easier.
v3.5 is currently an optional upgrade, and you can switch between the v3.0 (diffusers) engine and the v3.5 (webui) engine using the `Settings` tab in the UI.
### Detailed changelog
* 3.5.9 - 18 Jul 2025 - Stability fix for the Forge backend. Prevents unused Forge processes from hanging around even after closing Easy Diffusion.
* 3.5.8 - 14 Jul 2025 - Support custom Text Encoders and Flux VAEs in the UI.
* 3.5.7 - 27 Jun 2025 - Support for the Chroma model. Update Forge to the latest commit.
* 3.5.6 - 17 Feb 2025 - Fix broken model merging.
* 3.5.5 - 10 Feb 2025 - (Internal code change) Use `torchruntime` for installing torch/torchvision, instead of custom logic. This supports a lot more GPUs on various platforms, and was built using Easy Diffusion's torch-installation code.
* 3.5.4 - 8 Feb 2025 - Fix a bug where the inpainting mask wasn't resized to the image size when using the WebUI/v3.5 backend. Thanks @AvidGameFan for their help in investigating and fixing this!
* 3.5.3 - 6 Feb 2025 - (Internal code change) Remove hardcoded references to `torch.cuda`, and replace with torchruntime's device utilities.
* 3.5.2 - 28 Jan 2025 - Fix for accidental jailbreak when using conda with WebUI - fixes the `type not subscriptable` error when using WebUI.
* 3.5.2 - 28 Jan 2025 - Fix a bug affecting older versions of Easy Diffusion, which tried to upgrade to an incompatible version of PyTorch.
* 3.5.2 - 4 Jan 2025 - Replace the use of WMIC (deprecated) with a powershell call.
* 3.5.1 - 17 Dec 2024 - Update Forge to the latest commit.
* 3.5.0 - 11 Oct 2024 - **Preview release** of the new v3.5 engine, powered by Forge WebUI (a fork of Automatic1111). This enables Flux, SD3, LyCORIS and lots of new features, while using the same familiar Easy Diffusion interface.
## v3.0
### Major Changes
- **ControlNet** - Full support for ControlNet, with native integration of the common ControlNet models. Just select a control image, then choose the ControlNet filter/model and run. No additional configuration or download necessary. Supports custom ControlNets as well.
@ -17,6 +45,12 @@
- **Major rewrite of the code** - We've switched to using diffusers under-the-hood, which allows us to release new features faster, and focus on making the UI and installer even easier to use.
### Detailed changelog
* 3.0.13 - 10 Feb 2025 - (Internal code change) Use `torchruntime` for installing torch/torchvision, instead of custom logic. This supports a lot more GPUs on various platforms, and was built using Easy Diffusion's torch-installation code.
* 3.0.12 - 6 Feb 2025 - (Internal code change) Remove hardcoded references to `torch.cuda`, and replace with torchruntime's device utilities.
* 3.0.11 - 4 Jan 2025 - Replace the use of WMIC (deprecated) with a powershell call.
* 3.0.10 - 11 Oct 2024 - **Major Update** - An option to upgrade to v3.5, which enables Flux, Stable Diffusion 3, LyCORIS models and lots more.
* 3.0.9 - 28 May 2024 - Slider for controlling the strength of controlnets.
* 3.0.8 - 27 May 2024 - SDXL ControlNets for Img2Img and Inpainting.
* 3.0.7 - 11 Dec 2023 - Setting to enable/disable VAE tiling (in the Image Settings panel). Sometimes VAE tiling reduces the quality of the image, so this setting will help control that.
* 3.0.6 - 18 Sep 2023 - Add thumbnails to embeddings from the UI, using the new `Upload Thumbnail` button in the Embeddings popup. Thanks @JeLuf.
* 3.0.6 - 15 Sep 2023 - Fix broken embeddings dialog when LoRA information couldn't be fetched.
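
A note for readers following the diff: the engine/backend choice described above is persisted in `config.yaml` as a `backend` key (the older `use_v3_engine` boolean is migrated onto it in the `app.py` changes further down). A minimal sketch of flipping it outside the UI, assuming Easy Diffusion's install root as the working directory and `ruamel.yaml` (which `app.py` already imports); the names shown are the two bundled backends from this diff:

from ruamel.yaml import YAML

yaml = YAML()
with open("config.yaml") as f:
    config = yaml.load(f) or {}

config["backend"] = "ed_classic"  # or "ed_diffusers"; must match a backend module name

with open("config.yaml", "w") as f:
    yaml.dump(config, f)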


@ -3,6 +3,7 @@
Target amd64-unicode
Unicode True
SetCompressor /FINAL lzma
SetCompressorDictSize 64
RequestExecutionLevel user
!AddPluginDir /amd64-unicode "."
; HM NIS Edit Wizard helper defines
@ -235,8 +236,8 @@ Section "MainSection" SEC01
CreateDirectory "$SMPROGRAMS\Easy Diffusion"
CreateShortCut "$SMPROGRAMS\Easy Diffusion\Easy Diffusion.lnk" "$INSTDIR\Start Stable Diffusion UI.cmd" "" "$INSTDIR\installer_files\cyborg_flower_girl.ico"
DetailPrint 'Downloading the Stable Diffusion 1.5 model...'
NScurl::http get "https://github.com/easydiffusion/sdkit-test-data/releases/download/assets/sd-v1-5.safetensors" "$INSTDIR\models\stable-diffusion\sd-v1-5.safetensors" /CANCEL /INSIST /END
DetailPrint 'Downloading the Stable Diffusion 1.4 model...'
NScurl::http get "https://github.com/easydiffusion/sdkit-test-data/releases/download/assets/sd-v1-4.safetensors" "$INSTDIR\models\stable-diffusion\sd-v1-4.safetensors" /CANCEL /INSIST /END
DetailPrint 'Downloading the GFPGAN model...'
NScurl::http get "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth" "$INSTDIR\models\gfpgan\GFPGANv1.4.pth" /CANCEL /INSIST /END


@ -4,7 +4,7 @@ echo "Opening Stable Diffusion UI - Developer Console.." & echo.
cd /d %~dp0
set PATH=C:\Windows\System32;%PATH%
set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
@rem set legacy and new installer's PATH, if they exist
if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
@ -26,18 +26,23 @@ call conda --version
echo.
echo COMSPEC=%COMSPEC%
echo.
powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
@rem activate the legacy environment (if present) and set PYTHONPATH
if exist "installer_files\env" (
set PYTHONPATH=%cd%\installer_files\env\lib\site-packages
set PYTHON=%cd%\installer_files\env\python.exe
echo PYTHON=%PYTHON%
)
if exist "stable-diffusion\env" (
call conda activate .\stable-diffusion\env
set PYTHONPATH=%cd%\stable-diffusion\env\lib\site-packages
set PYTHON=%cd%\stable-diffusion\env\python.exe
echo PYTHON=%PYTHON%
)
call where python
call python --version
@REM call where python
call "%PYTHON%" --version
echo PYTHONPATH=%PYTHONPATH%


@ -3,7 +3,7 @@
cd /d %~dp0
echo Install dir: %~dp0
set PATH=C:\Windows\System32;%PATH%
set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
set PYTHONHOME=
if exist "on_sd_start.bat" (
@ -39,7 +39,7 @@ call where conda
call conda --version
echo .
echo COMSPEC=%COMSPEC%
wmic path win32_VideoController get name,AdapterRAM,DriverDate,DriverVersion
powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
@rem Download the rest of the installer and UI
call scripts\on_env_start.bat


@ -24,7 +24,7 @@ set USERPROFILE=%cd%\profile
@rem figure out whether git and conda needs to be installed
if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
set PACKAGES_TO_INSTALL=git python=3.8.5
set PACKAGES_TO_INSTALL=git python=3.9
if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" (
if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda


@ -46,7 +46,7 @@ if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
PACKAGES_TO_INSTALL=""
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.8.5"; fi
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.9"; fi
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi

File diff suppressed because it is too large


@ -6,6 +6,7 @@ import shutil
# The config file is in the same directory as this script
config_directory = os.path.dirname(__file__)
config_yaml = os.path.join(config_directory, "..", "config.yaml")
config_yaml = os.path.abspath(config_yaml)
config_json = os.path.join(config_directory, "config.json")
parser = argparse.ArgumentParser(description='Get values from config file')


@ -26,19 +26,19 @@ if "%update_branch%"=="" (
set update_branch=main
)
@>nul findstr /m "conda_sd_ui_deps_installed" scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
echo. & echo "!!!! WARNING !!!!" & echo.
echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
echo "Not placing this folder at the top of a drive can cause errors on some computers."
echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
@REM @>nul findstr /m "sd_install_complete" scripts\install_status.txt
@REM @if "%ERRORLEVEL%" NEQ "0" (
@REM for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
@REM echo. & echo "!!!! WARNING !!!!" & echo.
@REM echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
@REM echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
@REM echo "Not placing this folder at the top of a drive can cause errors on some computers."
@REM echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
@REM echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
pause
)
)
@REM pause
@REM )
@REM )
@>nul findstr /m "sd_ui_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@ -71,6 +71,7 @@ if "%update_branch%"=="" (
@copy sd-ui-files\scripts\check_modules.py scripts\ /Y
@copy sd-ui-files\scripts\get_config.py scripts\ /Y
@copy sd-ui-files\scripts\config.yaml.sample scripts\ /Y
@copy sd-ui-files\scripts\webui_console.py scripts\ /Y
@copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
@copy "sd-ui-files\scripts\Developer Console.cmd" . /Y


@ -54,6 +54,7 @@ cp sd-ui-files/scripts/bootstrap.sh scripts/
cp sd-ui-files/scripts/check_modules.py scripts/
cp sd-ui-files/scripts/get_config.py scripts/
cp sd-ui-files/scripts/config.yaml.sample scripts/
cp sd-ui-files/scripts/webui_console.py scripts/
cp sd-ui-files/scripts/start.sh .
cp sd-ui-files/scripts/developer_console.sh .
cp sd-ui-files/scripts/functions.sh scripts/


@ -7,6 +7,7 @@
@copy sd-ui-files\scripts\check_modules.py scripts\ /Y
@copy sd-ui-files\scripts\get_config.py scripts\ /Y
@copy sd-ui-files\scripts\config.yaml.sample scripts\ /Y
@copy sd-ui-files\scripts\webui_console.py scripts\ /Y
if exist "%cd%\profile" (
set HF_HOME=%cd%\profile\.cache\huggingface
@ -66,11 +67,17 @@ set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
echo PYTHONPATH=%PYTHONPATH%
@rem Download the required packages
call where python
call python --version
set PYTHON=%INSTALL_ENV_DIR%\python.exe
echo PYTHON=%PYTHON%
call python scripts\check_modules.py --launch-uvicorn
@rem Download the required packages
@REM call where python
call "%PYTHON%" --version
@rem this is outside check_modules.py to ensure that the required version of torchruntime is present
call "%PYTHON%" -m pip install -q "torchruntime>=1.19.1"
call "%PYTHON%" scripts\check_modules.py --launch-uvicorn
pause
exit /b


@ -6,16 +6,20 @@ cp sd-ui-files/scripts/bootstrap.sh scripts/
cp sd-ui-files/scripts/check_modules.py scripts/
cp sd-ui-files/scripts/get_config.py scripts/
cp sd-ui-files/scripts/config.yaml.sample scripts/
cp sd-ui-files/scripts/webui_console.py scripts/
source ./scripts/functions.sh
# activate the installer env
CONDA_BASEPATH=$(conda info --base)
export CONDA_BASEPATH=$(conda info --base)
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # avoids the 'shell not initialized' error
conda activate || fail "Failed to activate conda"
# hack to fix conda 4.14 on older installations
cp $CONDA_BASEPATH/condabin/conda $CONDA_BASEPATH/bin/conda
# remove the old version of the dev console script, if it's still present
if [ -e "open_dev_console.sh" ]; then
rm "open_dev_console.sh"
@ -46,6 +50,9 @@ fi
if [ -e "src" ]; then mv src src-old; fi
if [ -e "ldm" ]; then mv ldm ldm-old; fi
# this is outside check_modules.py to ensure that the required version of torchruntime is present
python -m pip install -q "torchruntime>=1.19.1"
cd ..
# Download the required packages
python scripts/check_modules.py --launch-uvicorn

scripts/webui_console.py

@ -0,0 +1,101 @@
import os
import platform
import subprocess


def configure_env(dir):
    env_entries = {
        "PATH": [
            f"{dir}",
            f"{dir}/bin",
            f"{dir}/Library/bin",
            f"{dir}/Scripts",
            f"{dir}/usr/bin",
        ],
        "PYTHONPATH": [
            f"{dir}",
            f"{dir}/lib/site-packages",
            f"{dir}/lib/python3.10/site-packages",
        ],
        "PYTHONHOME": [],
        "PY_LIBS": [
            f"{dir}/Scripts/Lib",
            f"{dir}/Scripts/Lib/site-packages",
            f"{dir}/lib",
            f"{dir}/lib/python3.10/site-packages",
        ],
        "PY_PIP": [f"{dir}/Scripts", f"{dir}/bin"],
    }

    if platform.system() == "Windows":
        env_entries["PATH"].append("C:/Windows/System32")
        env_entries["PATH"].append("C:/Windows/System32/wbem")
        env_entries["PYTHONNOUSERSITE"] = ["1"]
        env_entries["PYTHON"] = [f"{dir}/python"]
        env_entries["GIT"] = [f"{dir}/Library/bin/git"]
    else:
        env_entries["PATH"].append("/bin")
        env_entries["PATH"].append("/usr/bin")
        env_entries["PATH"].append("/usr/sbin")
        env_entries["PYTHONNOUSERSITE"] = ["y"]
        env_entries["PYTHON"] = [f"{dir}/bin/python"]
        env_entries["GIT"] = [f"{dir}/bin/git"]

    env = {}
    for key, paths in env_entries.items():
        paths = [p.replace("/", os.path.sep) for p in paths]
        paths = os.pathsep.join(paths)
        os.environ[key] = paths

    return env


def print_env_info():
    which_cmd = "where" if platform.system() == "Windows" else "which"
    python = "python"

    def locate_python():
        nonlocal python

        python = subprocess.getoutput(f"{which_cmd} python")
        python = python.split("\n")
        python = python[0].strip()

        print("python: ", python)

    locate_python()

    def run(cmd):
        with subprocess.Popen(cmd) as p:
            p.wait()

    run([which_cmd, "git"])
    run(["git", "--version"])
    run([which_cmd, "python"])
    run([python, "--version"])

    print(f"PATH={os.environ['PATH']}")

    # if platform.system() == "Windows":
    #     print(f"COMSPEC={os.environ['COMSPEC']}")
    #     print("")
    #     run("wmic path win32_VideoController get name,AdapterRAM,DriverDate,DriverVersion".split(" "))

    print(f"PYTHONPATH={os.environ['PYTHONPATH']}")
    print("")


def open_dev_shell():
    if platform.system() == "Windows":
        subprocess.Popen("cmd").communicate()
    else:
        subprocess.Popen("bash").communicate()


if __name__ == "__main__":
    env_dir = os.path.abspath(os.path.join("webui", "system"))

    configure_env(env_dir)
    print_env_info()
    open_dev_shell()
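
A usage note on the script above: `configure_env` writes the joined path lists into `os.environ` (the returned `env` dict is left empty), so anything spawned afterwards, including the `cmd`/`bash` shell from `open_dev_shell`, inherits the WebUI conda environment. A minimal sketch of reusing it for a one-off command, assuming it runs from the Easy Diffusion install root and that `scripts/webui_console.py` is importable as `webui_console`:

import os
import subprocess

from webui_console import configure_env

configure_env(os.path.abspath(os.path.join("webui", "system")))
subprocess.run(["python", "--version"])  # resolves to webui/system's python via the PATH set above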


@ -11,7 +11,7 @@ from ruamel.yaml import YAML
import urllib
import warnings
from easydiffusion import task_manager
from easydiffusion import task_manager, backend_manager
from easydiffusion.utils import log
from rich.logging import RichHandler
from rich.console import Console
@ -36,10 +36,10 @@ ROOT_DIR = os.path.abspath(os.path.join(SD_DIR, ".."))
SD_UI_DIR = os.getenv("SD_UI_PATH", None)
CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, "..", "scripts"))
BUCKET_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "bucket"))
CONFIG_DIR = os.path.abspath(os.path.join(ROOT_DIR, "scripts"))
BUCKET_DIR = os.path.abspath(os.path.join(ROOT_DIR, "bucket"))
USER_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "plugins"))
USER_PLUGINS_DIR = os.path.abspath(os.path.join(ROOT_DIR, "plugins"))
CORE_PLUGINS_DIR = os.path.abspath(os.path.join(SD_UI_DIR, "plugins"))
USER_UI_PLUGINS_DIR = os.path.join(USER_PLUGINS_DIR, "ui")
@ -54,13 +54,12 @@ OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
PRESERVE_CONFIG_VARS = ["FORCE_FULL_PRECISION"]
TASK_TTL = 15 * 60 # Discard last session's task timeout
APP_CONFIG_DEFAULTS = {
# auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
"render_devices": "auto", # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
"render_devices": "auto",
"update_branch": "main",
"ui": {
"open_browser_on_start": True,
},
"use_v3_engine": True,
"backend": "ed_diffusers",
}
IMAGE_EXTENSIONS = [
@ -77,7 +76,7 @@ IMAGE_EXTENSIONS = [
".avif",
".svg",
]
CUSTOM_MODIFIERS_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "modifiers"))
CUSTOM_MODIFIERS_DIR = os.path.abspath(os.path.join(ROOT_DIR, "modifiers"))
CUSTOM_MODIFIERS_PORTRAIT_EXTENSIONS = [
".portrait",
"_portrait",
@ -91,7 +90,7 @@ CUSTOM_MODIFIERS_LANDSCAPE_EXTENSIONS = [
"-landscape",
]
MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "models"))
MODELS_DIR = os.path.abspath(os.path.join(ROOT_DIR, "models"))
def init():
@ -105,9 +104,11 @@ def init():
config = getConfig()
config_models_dir = config.get("models_dir", None)
if (config_models_dir is not None and config_models_dir != ""):
if config_models_dir is not None and config_models_dir != "":
MODELS_DIR = config_models_dir
backend_manager.start_backend()
def init_render_threads():
load_server_plugins()
@ -117,6 +118,7 @@ def init_render_threads():
def getConfig(default_val=APP_CONFIG_DEFAULTS):
config_yaml_path = os.path.join(CONFIG_DIR, "..", "config.yaml")
config_yaml_path = os.path.abspath(config_yaml_path)
# migrate the old config yaml location
config_legacy_yaml = os.path.join(CONFIG_DIR, "config.yaml")
@ -124,9 +126,9 @@ def getConfig(default_val=APP_CONFIG_DEFAULTS):
shutil.move(config_legacy_yaml, config_yaml_path)
def set_config_on_startup(config: dict):
if getConfig.__use_v3_engine_on_startup is None:
getConfig.__use_v3_engine_on_startup = config.get("use_v3_engine", True)
config["config_on_startup"] = {"use_v3_engine": getConfig.__use_v3_engine_on_startup}
if getConfig.__use_backend_on_startup is None:
getConfig.__use_backend_on_startup = config.get("backend", "ed_diffusers")
config["config_on_startup"] = {"backend": getConfig.__use_backend_on_startup}
if os.path.isfile(config_yaml_path):
try:
@ -144,6 +146,15 @@ def getConfig(default_val=APP_CONFIG_DEFAULTS):
else:
config["net"]["listen_to_network"] = True
if "backend" not in config:
if "use_v3_engine" in config:
config["backend"] = "ed_diffusers" if config["use_v3_engine"] else "ed_classic"
else:
config["backend"] = "ed_diffusers"
# this default will need to be smarter when WebUI becomes the main backend, but needs to maintain backwards
# compatibility with existing ED 3.0 installations that haven't opted into the WebUI backend, and haven't
# set a "use_v3_engine" flag in their config
set_config_on_startup(config)
return config
@ -174,7 +185,7 @@ def getConfig(default_val=APP_CONFIG_DEFAULTS):
return default_val
getConfig.__use_v3_engine_on_startup = None
getConfig.__use_backend_on_startup = None
def setConfig(config):
@ -307,28 +318,43 @@ def getIPConfig():
def open_browser():
from easydiffusion.backend_manager import backend
config = getConfig()
ui = config.get("ui", {})
net = config.get("net", {})
port = net.get("listen_port", 9000)
if ui.get("open_browser_on_start", True):
import webbrowser
if backend.is_installed():
if ui.get("open_browser_on_start", True):
import webbrowser
log.info("Opening browser..")
log.info("Opening browser..")
webbrowser.open(f"http://localhost:{port}")
webbrowser.open(f"http://localhost:{port}")
Console().print(
Panel(
"\n"
+ "[white]Easy Diffusion is ready to serve requests.\n\n"
+ "A new browser tab should have been opened by now.\n"
+ f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
title="Easy Diffusion is ready",
style="bold yellow on blue",
Console().print(
Panel(
"\n"
+ "[white]Easy Diffusion is ready to serve requests.\n\n"
+ "A new browser tab should have been opened by now.\n"
+ f"If not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
title="Easy Diffusion is ready",
style="bold yellow on blue",
)
)
else:
backend_name = config["backend"]
Console().print(
Panel(
"\n"
+ f"[white]Backend: {backend_name} is still installing..\n\n"
+ "A new browser tab will open automatically after it finishes.\n"
+ f"If it does not, please open your web browser and navigate to [bold yellow underline]http://localhost:{port}/\n",
title=f"Backend engine is installing",
style="bold yellow on blue",
)
)
)
def fail_and_die(fail_type: str, data: str):
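
The backwards-compatibility logic in the `getConfig` hunk above is compact enough to restate on its own: legacy installs only carry a boolean `use_v3_engine`, which is mapped onto the new string-valued `backend` key. A standalone sketch of that mapping (the wrapper function is illustrative; the key names and defaults come from the diff):

def migrate_backend_setting(config: dict) -> dict:
    # mirrors the getConfig logic: derive "backend" for configs that predate it
    if "backend" not in config:
        if "use_v3_engine" in config:
            config["backend"] = "ed_diffusers" if config["use_v3_engine"] else "ed_classic"
        else:
            config["backend"] = "ed_diffusers"
    return config

# e.g. an ED 3.0 install that had opted out of the v3 engine:
assert migrate_backend_setting({"use_v3_engine": False})["backend"] == "ed_classic"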


@ -0,0 +1,105 @@
import os
import ast
import sys
import importlib.util
import traceback

from easydiffusion.utils import log

backend = None
curr_backend_name = None


def is_valid_backend(file_path):
    with open(file_path, "r", encoding="utf-8") as file:
        node = ast.parse(file.read())

    # Check for presence of a dictionary named 'ed_info'
    for item in node.body:
        if isinstance(item, ast.Assign):
            for target in item.targets:
                if isinstance(target, ast.Name) and target.id == "ed_info":
                    return True

    return False


def find_valid_backends(root_dir) -> dict:
    backends_path = os.path.join(root_dir, "backends")
    valid_backends = {}

    if not os.path.exists(backends_path):
        return valid_backends

    for item in os.listdir(backends_path):
        item_path = os.path.join(backends_path, item)

        if os.path.isdir(item_path):
            init_file = os.path.join(item_path, "__init__.py")
            if os.path.exists(init_file) and is_valid_backend(init_file):
                valid_backends[item] = item_path
        elif item.endswith(".py"):
            if is_valid_backend(item_path):
                backend_name = os.path.splitext(item)[0]  # strip the .py extension
                valid_backends[backend_name] = item_path

    return valid_backends


def load_backend_module(backend_name, backend_dict):
    if backend_name not in backend_dict:
        raise ValueError(f"Backend '{backend_name}' not found.")

    module_path = backend_dict[backend_name]
    mod_dir = os.path.dirname(module_path)

    sys.path.insert(0, mod_dir)

    # If it's a package (directory), add its parent directory to sys.path
    if os.path.isdir(module_path):
        module_path = os.path.join(module_path, "__init__.py")

    spec = importlib.util.spec_from_file_location(backend_name, module_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    if mod_dir in sys.path:
        sys.path.remove(mod_dir)

    log.info(f"Loaded backend: {module}")

    return module


def start_backend():
    global backend, curr_backend_name

    from easydiffusion.app import getConfig, ROOT_DIR

    curr_dir = os.path.dirname(__file__)

    backends = find_valid_backends(curr_dir)
    plugin_backends = find_valid_backends(ROOT_DIR)
    backends.update(plugin_backends)

    config = getConfig()
    backend_name = config["backend"]

    if backend_name not in backends:
        raise RuntimeError(
            f"Couldn't find the backend configured in config.yaml: {backend_name}. Please check the name!"
        )

    if backend is not None and backend_name != curr_backend_name:
        try:
            backend.stop_backend()
        except:
            log.exception(traceback.format_exc())

    log.info(f"Loading backend: {backend_name}")
    backend = load_backend_module(backend_name, backends)

    try:
        backend.start_backend()
    except:
        log.exception(traceback.format_exc())
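
Because `find_valid_backends` accepts any module or package under a `backends/` folder whose top level assigns an `ed_info` dict, a third-party backend can be dropped in as a plugin. A minimal sketch of such a module (the function set mirrors what the bundled backends export; treat it as illustrative rather than a documented contract):

# backends/my_backend.py -- discovered via the module-level ed_info dict
ed_info = {
    "name": "Example do-nothing backend",
    "version": (0, 0, 1),
    "type": "backend",
}

def install_backend():
    pass

def start_backend():
    print("example backend started")

def stop_backend():
    pass

def is_installed():
    return True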


@ -0,0 +1,28 @@
from sdkit_common import (
    start_backend,
    stop_backend,
    install_backend,
    uninstall_backend,
    is_installed,
    create_sdkit_context,
    ping,
    load_model,
    unload_model,
    set_options,
    generate_images,
    filter_images,
    get_url,
    stop_rendering,
    refresh_models,
    list_controlnet_filters,
)

ed_info = {
    "name": "Classic backend for Easy Diffusion v2",
    "version": (1, 0, 0),
    "type": "backend",
}


def create_context():
    return create_sdkit_context(use_diffusers=False)


@ -0,0 +1,28 @@
from sdkit_common import (
    start_backend,
    stop_backend,
    install_backend,
    uninstall_backend,
    is_installed,
    create_sdkit_context,
    ping,
    load_model,
    unload_model,
    set_options,
    generate_images,
    filter_images,
    get_url,
    stop_rendering,
    refresh_models,
    list_controlnet_filters,
)

ed_info = {
    "name": "Diffusers Backend for Easy Diffusion v3",
    "version": (1, 0, 0),
    "type": "backend",
}


def create_context():
    return create_sdkit_context(use_diffusers=True)
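
Taken together with `ed_classic` above, the two bundled sdkit backends are thin wrappers: both re-export the shared implementation from `sdkit_common` (shown next) and differ only in the flag handed to `create_sdkit_context`. A small sketch of the observable difference, assuming both modules are importable:

import ed_classic
import ed_diffusers

# test_diffusers is the switch sdkit_common uses to pick between code paths
assert ed_classic.create_context().test_diffusers is False
assert ed_diffusers.create_context().test_diffusers is True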


@ -0,0 +1,246 @@
from sdkit import Context

from easydiffusion.types import UserInitiatedStop

from sdkit.utils import (
    diffusers_latent_samples_to_images,
    gc,
    img_to_base64_str,
    latent_samples_to_images,
)

opts = {}


def install_backend():
    pass


def start_backend():
    print("Started sdkit backend")


def stop_backend():
    pass


def uninstall_backend():
    pass


def is_installed():
    return True


def create_sdkit_context(use_diffusers):
    c = Context()
    c.test_diffusers = use_diffusers
    return c


def ping(timeout=1):
    return True


def load_model(context, model_type, **kwargs):
    from sdkit.models import load_model

    load_model(context, model_type, **kwargs)


def unload_model(context, model_type, **kwargs):
    from sdkit.models import unload_model

    unload_model(context, model_type, **kwargs)


def set_options(context, **kwargs):
    if "vae_tiling" in kwargs and context.test_diffusers:
        pipe = context.models["stable-diffusion"]["default"]
        vae_tiling = kwargs["vae_tiling"]
        if vae_tiling:
            if hasattr(pipe, "enable_vae_tiling"):
                pipe.enable_vae_tiling()
        else:
            if hasattr(pipe, "disable_vae_tiling"):
                pipe.disable_vae_tiling()

    for key in (
        "output_format",
        "output_quality",
        "output_lossless",
        "stream_image_progress",
        "stream_image_progress_interval",
    ):
        if key in kwargs:
            opts[key] = kwargs[key]


def generate_images(
    context: Context,
    callback=None,
    controlnet_filter=None,
    distilled_guidance_scale: float = 3.5,
    scheduler_name: str = "simple",
    output_type="pil",
    **req,
):
    from sdkit.generate import generate_images

    if req["init_image"] is not None and not context.test_diffusers:
        req["sampler_name"] = "ddim"

    gc(context)

    context.stop_processing = False

    if req["control_image"] and controlnet_filter:
        controlnet_filter = convert_ED_controlnet_filter_name(controlnet_filter)
        req["control_image"] = filter_images(context, req["control_image"], controlnet_filter)[0]

    callback = make_step_callback(context, callback)

    try:
        images = generate_images(context, callback=callback, **req)
    except UserInitiatedStop:
        images = []
        if context.partial_x_samples is not None:
            if context.test_diffusers:
                images = diffusers_latent_samples_to_images(context, context.partial_x_samples)
            else:
                images = latent_samples_to_images(context, context.partial_x_samples)
    finally:
        if hasattr(context, "partial_x_samples") and context.partial_x_samples is not None:
            if not context.test_diffusers:
                del context.partial_x_samples
            context.partial_x_samples = None

    gc(context)

    if output_type == "base64":
        output_format = opts.get("output_format", "jpeg")
        output_quality = opts.get("output_quality", 75)
        output_lossless = opts.get("output_lossless", False)
        images = [img_to_base64_str(img, output_format, output_quality, output_lossless) for img in images]

    return images


def filter_images(context: Context, images, filters, filter_params={}, input_type="pil"):
    gc(context)

    if "nsfw_checker" in filters:
        filters.remove("nsfw_checker")  # handled by ED directly
        if len(filters) == 0:
            return images

    images = _filter_images(context, images, filters, filter_params)

    if input_type == "base64":
        output_format = opts.get("output_format", "jpg")
        output_quality = opts.get("output_quality", 75)
        output_lossless = opts.get("output_lossless", False)
        images = [img_to_base64_str(img, output_format, output_quality, output_lossless) for img in images]

    return images


def _filter_images(context, images, filters, filter_params={}):
    from sdkit.filter import apply_filters

    filters = filters if isinstance(filters, list) else [filters]
    filters = convert_ED_controlnet_filter_name(filters)

    for filter_name in filters:
        params = filter_params.get(filter_name, {})

        previous_state = before_filter(context, filter_name, params)

        try:
            images = apply_filters(context, filter_name, images, **params)
        finally:
            after_filter(context, filter_name, params, previous_state)

    return images


def before_filter(context, filter_name, filter_params):
    if filter_name == "codeformer":
        from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use

        default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
        prev_realesrgan_path = None

        upscale_faces = filter_params.get("upscale_faces", False)
        if upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]:
            prev_realesrgan_path = context.model_paths.get("realesrgan")
            context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
            load_model(context, "realesrgan")

        return prev_realesrgan_path


def after_filter(context, filter_name, filter_params, previous_state):
    if filter_name == "codeformer":
        prev_realesrgan_path = previous_state
        if prev_realesrgan_path:
            context.model_paths["realesrgan"] = prev_realesrgan_path
            load_model(context, "realesrgan")


def get_url():
    pass


def stop_rendering(context):
    context.stop_processing = True


def refresh_models():
    pass


def list_controlnet_filters():
    from sdkit.models.model_loader.controlnet_filters import filters as cn_filters

    return cn_filters


def make_step_callback(context, callback):
    def on_step(x_samples, i, *args):
        stream_image_progress = opts.get("stream_image_progress", False)
        stream_image_progress_interval = opts.get("stream_image_progress_interval", 3)

        if context.test_diffusers:
            context.partial_x_samples = (x_samples, args[0])
        else:
            context.partial_x_samples = x_samples

        if stream_image_progress and stream_image_progress_interval > 0 and i % stream_image_progress_interval == 0:
            if context.test_diffusers:
                images = diffusers_latent_samples_to_images(context, context.partial_x_samples)
            else:
                images = latent_samples_to_images(context, context.partial_x_samples)
        else:
            images = None

        if callback:
            callback(images, i, *args)

        if context.stop_processing:
            raise UserInitiatedStop("User requested that we stop processing")

    return on_step


def convert_ED_controlnet_filter_name(filter):
    def cn(n):
        if n.startswith("controlnet_"):
            return n[len("controlnet_") :]
        return n

    if isinstance(filter, list):
        return [cn(f) for f in filter]

    return cn(filter)
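
One usage note on the API above: `make_step_callback` wraps the caller's callback so that it is invoked as `callback(images, i, *args)`, with `images` populated only on streaming steps. A hedged caller sketch (assumptions: the backends folder is on `sys.path`, a stable-diffusion model has already been loaded into the context, and the request fields are placeholders; `init_image` and `control_image` are passed explicitly because `generate_images` reads them unconditionally):

import sdkit_common as backend

context = backend.create_sdkit_context(use_diffusers=True)
backend.set_options(context, stream_image_progress=True, stream_image_progress_interval=5)

# a model must be loaded first, e.g.:
# context.model_paths["stable-diffusion"] = "/path/to/model.safetensors"
# backend.load_model(context, "stable-diffusion")

def on_progress(images, step, *args):
    # images is None except on every 5th step, per the interval set above
    print(f"step {step}: preview={'yes' if images else 'no'}")

images = backend.generate_images(
    context,
    callback=on_progress,
    prompt="a photograph of an astronaut riding a horse",
    init_image=None,
    control_image=None,
)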


@ -0,0 +1,455 @@
import os
import platform
import subprocess
import threading
from threading import local
import psutil
import time
import shutil
import atexit
from easydiffusion.app import ROOT_DIR, getConfig
from easydiffusion.model_manager import get_model_dirs
from easydiffusion.utils import log
from torchruntime.utils import get_device, get_device_name, get_installed_torch_platform
from sdkit.utils import is_cpu_device
from . import impl
from .impl import (
ping,
load_model,
unload_model,
flush_model_changes,
set_options,
generate_images,
filter_images,
get_url,
stop_rendering,
refresh_models,
list_controlnet_filters,
)
ed_info = {
"name": "WebUI backend for Easy Diffusion",
"version": (1, 0, 0),
"type": "backend",
}
WEBUI_REPO = "https://github.com/lllyasviel/stable-diffusion-webui-forge.git"
WEBUI_COMMIT = "dfdcbab685e57677014f05a3309b48cc87383167"
BACKEND_DIR = os.path.abspath(os.path.join(ROOT_DIR, "webui"))
SYSTEM_DIR = os.path.join(BACKEND_DIR, "system")
WEBUI_DIR = os.path.join(BACKEND_DIR, "webui")
OS_NAME = platform.system()
MODELS_TO_OVERRIDE = {
"stable-diffusion": "--ckpt-dir",
"vae": "--vae-dir",
"hypernetwork": "--hypernetwork-dir",
"gfpgan": "--gfpgan-models-path",
"realesrgan": "--realesrgan-models-path",
"lora": "--lora-dir",
"codeformer": "--codeformer-models-path",
"embeddings": "--embeddings-dir",
"controlnet": "--controlnet-dir",
"text-encoder": "--text-encoder-dir",
}
WEBUI_PATCHES = [
"forge_exception_leak_patch.patch",
"forge_model_crash_recovery.patch",
"forge_api_refresh_text_encoders.patch",
"forge_loader_force_gc.patch",
"forge_monitor_parent_process.patch",
"forge_disable_corrupted_model_renaming.patch",
]
backend_process = None
conda = "conda"
def locate_conda():
global conda
which = "where" if OS_NAME == "Windows" else "which"
conda = subprocess.getoutput(f"{which} conda")
conda = conda.split("\n")
conda = conda[0].strip()
print("conda: ", conda)
locate_conda()
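
Note that `where`/`which` can print several matches, one per line; the code keeps only the first. A small illustration (the paths are hypothetical):

out = "C:\\miniconda3\\Scripts\\conda.exe\nC:\\miniconda3\\condabin\\conda.bat"
print(out.split("\n")[0].strip())  # -> C:\miniconda3\Scripts\conda.exe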
def install_backend():
print("Installing the WebUI backend..")
# create the conda env
run([conda, "create", "-y", "--prefix", SYSTEM_DIR], cwd=ROOT_DIR)
print("Installing packages..")
# install python 3.10 and git in the conda env
run([conda, "install", "-y", "--prefix", SYSTEM_DIR, "-c", "conda-forge", "python=3.10", "git"], cwd=ROOT_DIR)
env = dict(os.environ)
env.update(get_env())
# print info
run_in_conda(["git", "--version"], cwd=ROOT_DIR, env=env)
run_in_conda(["python", "--version"], cwd=ROOT_DIR, env=env)
# clone webui
run_in_conda(["git", "clone", WEBUI_REPO, WEBUI_DIR], cwd=ROOT_DIR, env=env)
# install the appropriate version of torch using torchruntime
run_in_conda(["python", "-m", "pip", "install", "torchruntime"], cwd=WEBUI_DIR, env=env)
run_in_conda(["python", "-m", "torchruntime", "install", "torch", "torchvision"], cwd=WEBUI_DIR, env=env)
def start_backend():
config = getConfig()
backend_config = config.get("backend_config", {})
log.info(f"Expected WebUI backend dir: {BACKEND_DIR}")
if not os.path.exists(BACKEND_DIR):
install_backend()
env = dict(os.environ)
env.update(get_env())
was_still_installing = not is_installed()
if backend_config.get("auto_update", True):
run_in_conda(["git", "add", "-A", "."], cwd=WEBUI_DIR, env=env)
run_in_conda(["git", "stash"], cwd=WEBUI_DIR, env=env)
run_in_conda(["git", "reset", "--hard"], cwd=WEBUI_DIR, env=env)
run_in_conda(["git", "fetch"], cwd=WEBUI_DIR, env=env)
run_in_conda(["git", "-c", "advice.detachedHead=false", "checkout", WEBUI_COMMIT], cwd=WEBUI_DIR, env=env)
# patch forge for various stability-related fixes
for patch in WEBUI_PATCHES:
patch_path = os.path.join(os.path.dirname(__file__), patch)
log.info(f"Applying WebUI patch: {patch_path}")
run_in_conda(["git", "apply", patch_path], cwd=WEBUI_DIR, env=env)
# workaround for installations that broke out of conda and used ED's Python 3.8 instead of the WebUI conda env's Python 3.10
run_in_conda(["python", "-m", "pip", "install", "-q", "--upgrade", "urllib3==2.2.3"], cwd=WEBUI_DIR, env=env)
# hack to prevent webui-macos-env.sh from overwriting the COMMANDLINE_ARGS env variable
mac_webui_file = os.path.join(WEBUI_DIR, "webui-macos-env.sh")
if os.path.exists(mac_webui_file):
os.remove(mac_webui_file)
impl.WEBUI_HOST = backend_config.get("host", "localhost")
impl.WEBUI_PORT = backend_config.get("port", "7860")
def restart_if_webui_dies_after_starting():
has_started = False
while True:
try:
impl.ping(timeout=30)
is_first_start = not has_started
has_started = True
if was_still_installing and is_first_start:
ui = config.get("ui", {})
net = config.get("net", {})
port = net.get("listen_port", 9000)
if ui.get("open_browser_on_start", True):
import webbrowser
log.info("Opening browser..")
webbrowser.open(f"http://localhost:{port}")
except (TimeoutError, ConnectionError):
if has_started: # process probably died
print("######################## WebUI probably died. Restarting...")
stop_backend()
backend_thread = threading.Thread(target=target)
backend_thread.start()
break
except Exception:
import traceback
log.exception(traceback.format_exc())
time.sleep(1)
def target():
global backend_process
cmd = "webui.bat" if OS_NAME == "Windows" else "./webui.sh"
print("starting", cmd, WEBUI_DIR)
backend_process = run_in_conda([cmd], cwd=WEBUI_DIR, env=env, wait=False, output_prefix="[WebUI] ")
# atexit.register isn't 100% reliable, so we also apply `forge_monitor_parent_process.patch`,
# which makes Forge kill itself if the parent pid passed to it is no longer valid.
atexit.register(backend_process.terminate)
restart_if_dead_thread = threading.Thread(target=restart_if_webui_dies_after_starting)
restart_if_dead_thread.start()
backend_process.wait()
backend_thread = threading.Thread(target=target)
backend_thread.start()
start_proxy()
def start_proxy():
# proxy
from easydiffusion.server import server_api
from fastapi import FastAPI, Request
from fastapi.responses import Response
import json
URI_PREFIX = "/webui"
webui_proxy = FastAPI(root_path=f"{URI_PREFIX}", docs_url="/swagger")
@webui_proxy.get("{uri:path}")
def proxy_get(uri: str, req: Request):
if uri == "/openapi-proxy.json":
uri = "/openapi.json"
res = impl.webui_get(uri, headers=req.headers)
content = res.content
headers = dict(res.headers)
if uri == "/docs":
content = res.text.replace("url: '/openapi.json'", f"url: '{URI_PREFIX}/openapi-proxy.json'")
elif uri == "/openapi.json":
content = res.json()
content["paths"] = {f"{URI_PREFIX}{k}": v for k, v in content["paths"].items()}
content = json.dumps(content)
if isinstance(content, str):
content = bytes(content, encoding="utf-8")
headers["content-length"] = str(len(content))
# Return the same response back to the client
return Response(content=content, status_code=res.status_code, headers=headers)
@webui_proxy.post("{uri:path}")
async def proxy_post(uri: str, req: Request):
body = await req.body()
res = impl.webui_post(uri, data=body, headers=req.headers)
# Return the same response back to the client
return Response(content=res.content, status_code=res.status_code, headers=dict(res.headers))
server_api.mount(f"{URI_PREFIX}", webui_proxy)
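
Once mounted, every Forge route becomes reachable through Easy Diffusion's own server. A hedged usage example, assuming ED is listening on its default port 9000 (per the config lookup above):

import requests

# GET requests go through proxy_get; POST bodies are forwarded verbatim by proxy_post
res = requests.get("http://localhost:9000/webui/sdapi/v1/options")
print(res.status_code)

res = requests.post("http://localhost:9000/webui/sdapi/v1/interrupt")
print(res.status_code)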
def stop_backend():
global backend_process
if backend_process:
try:
kill(backend_process.pid)
except:
pass
backend_process = None
def uninstall_backend():
shutil.rmtree(BACKEND_DIR)
def is_installed():
if not os.path.exists(BACKEND_DIR) or not os.path.exists(SYSTEM_DIR) or not os.path.exists(WEBUI_DIR):
return True
env = dict(os.environ)
env.update(get_env())
try:
out = check_output_in_conda(["python", "-m", "pip", "show", "torch"], env=env)
return "Version" in out.decode()
except subprocess.CalledProcessError:
pass
return False
def read_output(pipe, prefix=""):
while True:
output = pipe.readline()
if output:
print(f"{prefix}{output.decode('utf-8')}", end="")
else:
break # Pipe is closed, subprocess has likely exited
def run(cmds: list, cwd=None, env=None, stream_output=True, wait=True, output_prefix=""):
p = subprocess.Popen(cmds, cwd=cwd, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if stream_output:
output_thread = threading.Thread(target=read_output, args=(p.stdout, output_prefix))
output_thread.start()
if wait:
p.wait()
return p
def run_in_conda(cmds: list, *args, **kwargs):
cmds = [conda, "run", "--no-capture-output", "--prefix", SYSTEM_DIR] + cmds
return run(cmds, *args, **kwargs)
def check_output_in_conda(cmds: list, cwd=None, env=None):
cmds = [conda, "run", "--no-capture-output", "--prefix", SYSTEM_DIR] + cmds
return subprocess.check_output(cmds, cwd=cwd, env=env, stderr=subprocess.PIPE)
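
Both conda helpers prepend the same `conda run` prefix, so every command executes inside the backend's private environment. A sketch of the composed command line (the `--prefix` path is hypothetical):

cmds = ["python", "--version"]
full_cmd = ["conda", "run", "--no-capture-output", "--prefix", "/path/to/webui/system"] + cmds
print(" ".join(full_cmd))
# -> conda run --no-capture-output --prefix /path/to/webui/system python --version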
def create_context():
context = local()
# temp hack - reading an attribute that was never set on a threading.local raises AttributeError otherwise
context.torch_device = get_device(0)
context.device = f"{context.torch_device.type}:{context.torch_device.index}"
context.half_precision = True
context.vram_usage_level = None
context.models = {}
context.model_paths = {}
context.model_configs = {}
context.device_name = None
context.vram_optimizations = set()
context.vram_usage_level = "balanced"
context.test_diffusers = False
context.enable_codeformer = False
return context
def get_env():
dir = os.path.abspath(SYSTEM_DIR)
if not os.path.exists(dir):
raise RuntimeError("The system folder is missing!")
config = getConfig()
backend_config = config.get("backend_config", {})
models_dir = config.get("models_dir", os.path.join(ROOT_DIR, "models"))
model_path_args = get_model_path_args()
env_entries = {
"PATH": [
f"{dir}",
f"{dir}/bin",
f"{dir}/Library/bin",
f"{dir}/Scripts",
f"{dir}/usr/bin",
],
"PYTHONPATH": [
f"{dir}",
f"{dir}/lib/site-packages",
f"{dir}/lib/python3.10/site-packages",
],
"PYTHONHOME": [],
"PY_LIBS": [
f"{dir}/Scripts/Lib",
f"{dir}/Scripts/Lib/site-packages",
f"{dir}/lib",
f"{dir}/lib/python3.10/site-packages",
],
"PY_PIP": [f"{dir}/Scripts", f"{dir}/bin"],
"PIP_INSTALLER_LOCATION": [], # [f"{dir}/python/get-pip.py"],
"TRANSFORMERS_CACHE": [f"{dir}/transformers-cache"],
"HF_HUB_DISABLE_SYMLINKS_WARNING": ["true"],
"COMMANDLINE_ARGS": [
f'--api --models-dir "{models_dir}" {model_path_args} --skip-torch-cuda-test --disable-gpu-warning --port {impl.WEBUI_PORT}'
],
"SKIP_VENV": ["1"],
"SD_WEBUI_RESTARTING": ["1"],
}
if OS_NAME == "Windows":
env_entries["PATH"].append("C:/Windows/System32")
env_entries["PATH"].append("C:/Windows/System32/wbem")
env_entries["PYTHONNOUSERSITE"] = ["1"]
env_entries["PYTHON"] = [f"{dir}/python"]
env_entries["GIT"] = [f"{dir}/Library/bin/git"]
env_entries["COMMANDLINE_ARGS"][0] += f" --parent-pid {os.getpid()}"
else:
env_entries["PATH"].append("/bin")
env_entries["PATH"].append("/usr/bin")
env_entries["PATH"].append("/usr/sbin")
env_entries["PYTHONNOUSERSITE"] = ["y"]
env_entries["PYTHON"] = [f"{dir}/bin/python"]
env_entries["GIT"] = [f"{dir}/bin/git"]
env_entries["venv_dir"] = ["-"]
if OS_NAME == "Darwin":
# based on https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/e26abf87ecd1eefd9ab0a198eee56f9c643e4001/webui-macos-env.sh
# hack - have to define these here, otherwise webui-macos-env.sh will overwrite COMMANDLINE_ARGS
env_entries["COMMANDLINE_ARGS"][0] += " --upcast-sampling --no-half-vae --use-cpu interrogate"
env_entries["PYTORCH_ENABLE_MPS_FALLBACK"] = ["1"]
cpu_name = str(subprocess.check_output(["sysctl", "-n", "machdep.cpu.brand_string"]))
if "Intel" in cpu_name:
env_entries["TORCH_COMMAND"] = ["pip install torch==2.1.2 torchvision==0.16.2"]
else:
env_entries["TORCH_COMMAND"] = ["pip install torch==2.3.1 torchvision==0.18.1"]
else:
from easydiffusion.device_manager import needs_to_force_full_precision
torch_platform_name = get_installed_torch_platform()[0]
vram_usage_level = config.get("vram_usage_level", "balanced")
if config.get("render_devices", "auto") == "cpu" or is_cpu_device(torch_platform_name):
env_entries["COMMANDLINE_ARGS"][0] += " --always-cpu"
else:
device = get_device(0)
if needs_to_force_full_precision(get_device_name(device)):
env_entries["COMMANDLINE_ARGS"][0] += " --no-half --precision full"
if vram_usage_level == "low":
env_entries["COMMANDLINE_ARGS"][0] += " --always-low-vram"
elif vram_usage_level == "high":
env_entries["COMMANDLINE_ARGS"][0] += " --always-high-vram"
cli_args = backend_config.get("COMMANDLINE_ARGS")
if cli_args:
env_entries["COMMANDLINE_ARGS"][0] += " " + cli_args
env = {}
for key, paths in env_entries.items():
paths = [p.replace("/", os.path.sep) for p in paths]
paths = os.pathsep.join(paths)
env[key] = paths
return env
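
The final loop turns each list into a single `os.pathsep`-joined string with normalized separators. A self-contained sketch with made-up values:

import os

entries = {"PATH": ["/sys", "/sys/bin"], "SKIP_VENV": ["1"], "PYTHONHOME": []}
env = {k: os.pathsep.join(p.replace("/", os.path.sep) for p in v) for k, v in entries.items()}
print(env)  # on Linux: {'PATH': '/sys:/sys/bin', 'SKIP_VENV': '1', 'PYTHONHOME': ''}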
# https://stackoverflow.com/a/25134985
def kill(proc_pid):
process = psutil.Process(proc_pid)
for proc in process.children(recursive=True):
proc.kill()
process.kill()
def get_model_path_args():
args = []
for model_type, flag in MODELS_TO_OVERRIDE.items():
model_dir = get_model_dirs(model_type)[0]
args.append(f'{flag} "{model_dir}"')
return " ".join(args)

View File

@@ -0,0 +1,25 @@
diff --git a/modules/api/api.py b/modules/api/api.py
index 9754be03..26d9eb9b 100644
--- a/modules/api/api.py
+++ b/modules/api/api.py
@@ -234,6 +234,7 @@ class Api:
self.add_api_route("/sdapi/v1/refresh-embeddings", self.refresh_embeddings, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-vae", self.refresh_vae, methods=["POST"])
+ self.add_api_route("/sdapi/v1/refresh-vae-and-text-encoders", self.refresh_vae_and_text_encoders, methods=["POST"])
self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=models.CreateResponse)
self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=models.MemoryResponse)
@@ -779,6 +780,12 @@ class Api:
with self.queue_lock:
shared_items.refresh_vae_list()
+ def refresh_vae_and_text_encoders(self):
+ from modules_forge.main_entry import refresh_models
+
+ with self.queue_lock:
+ refresh_models()
+
def create_embedding(self, args: dict):
try:
shared.state.begin(job="create_embedding")
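
With the patch applied, the combined refresh can be triggered over HTTP. A hedged example against a local Forge instance (localhost:7860 is the default used by this backend):

import requests

res = requests.post("http://localhost:7860/sdapi/v1/refresh-vae-and-text-encoders")
print(res.status_code)  # 200 once Forge has re-scanned its model lists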

View File

@@ -0,0 +1,35 @@
diff --git a/modules_forge/patch_basic.py b/modules_forge/patch_basic.py
index 822e2838..5893efad 100644
--- a/modules_forge/patch_basic.py
+++ b/modules_forge/patch_basic.py
@@ -39,18 +39,18 @@ def build_loaded(module, loader_name):
except Exception as e:
result = None
exp = str(e) + '\n'
- for path in list(args) + list(kwargs.values()):
- if isinstance(path, str):
- if os.path.exists(path):
- exp += f'File corrupted: {path} \n'
- corrupted_backup_file = path + '.corrupted'
- if os.path.exists(corrupted_backup_file):
- os.remove(corrupted_backup_file)
- os.replace(path, corrupted_backup_file)
- if os.path.exists(path):
- os.remove(path)
- exp += f'Forge has tried to move the corrupted file to {corrupted_backup_file} \n'
- exp += f'You may try again now and Forge will download models again. \n'
+ # for path in list(args) + list(kwargs.values()):
+ # if isinstance(path, str):
+ # if os.path.exists(path):
+ # exp += f'File corrupted: {path} \n'
+ # corrupted_backup_file = path + '.corrupted'
+ # if os.path.exists(corrupted_backup_file):
+ # os.remove(corrupted_backup_file)
+ # os.replace(path, corrupted_backup_file)
+ # if os.path.exists(path):
+ # os.remove(path)
+ # exp += f'Forge has tried to move the corrupted file to {corrupted_backup_file} \n'
+ # exp += f'You may try again now and Forge will download models again. \n'
raise ValueError(exp)
return result

View File

@@ -0,0 +1,20 @@
diff --git a/modules_forge/main_thread.py b/modules_forge/main_thread.py
index eb3e7889..0cfcadd1 100644
--- a/modules_forge/main_thread.py
+++ b/modules_forge/main_thread.py
@@ -33,8 +33,8 @@ class Task:
except Exception as e:
traceback.print_exc()
print(e)
- self.exception = e
- last_exception = e
+ self.exception = str(e)
+ last_exception = str(e)
def loop():
@@ -74,4 +74,3 @@ def run_and_wait_result(func, *args, **kwargs):
with lock:
finished_list.remove(finished_task)
return finished_task.result
-
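
The switch to `str(e)` matters because an exception object carries `__traceback__`, which pins every frame of the failed call chain - and any large locals such as model tensors - in memory. A self-contained illustration of the leak and the fix:

import gc, weakref

class Big:  # stands in for a large tensor
    pass

def fail():
    big = Big()
    fail.ref = weakref.ref(big)  # watch the object without keeping it alive
    raise ValueError("boom")

kept = None
try:
    fail()
except ValueError as e:
    kept = e  # e.__traceback__ pins fail()'s frame, so `big` stays reachable

print(fail.ref() is not None)  # True: the stored exception keeps the frame alive

kept = str(kept)  # what the patch does: keep only the message
gc.collect()
print(fail.ref() is None)  # True: the frame (and `big`) can now be freed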

View File

@@ -0,0 +1,27 @@
diff --git a/backend/loader.py b/backend/loader.py
index a090adab..53fdb26d 100644
--- a/backend/loader.py
+++ b/backend/loader.py
@@ -1,4 +1,5 @@
import os
+import gc
import torch
import logging
import importlib
@@ -447,6 +448,8 @@ def preprocess_state_dict(sd):
def split_state_dict(sd, additional_state_dicts: list = None):
+ gc.collect()
+
sd = load_torch_file(sd)
sd = preprocess_state_dict(sd)
guess = huggingface_guess.guess(sd)
@@ -456,6 +459,7 @@ def split_state_dict(sd, additional_state_dicts: list = None):
asd = load_torch_file(asd)
sd = replace_state_dict(sd, asd, guess)
del asd
+ gc.collect()
guess.clip_target = guess.clip_target(sd)
guess.model_type = guess.model_type(sd)

View File

@@ -0,0 +1,12 @@
diff --git a/modules/sd_models.py b/modules/sd_models.py
index 76940ecc..b07d84b6 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -482,6 +482,7 @@ def forge_model_reload():
if model_data.sd_model:
model_data.sd_model = None
+ model_data.forge_hash = None
memory_management.unload_all_models()
memory_management.soft_empty_cache()
gc.collect()

View File

@@ -0,0 +1,85 @@
diff --git a/launch.py b/launch.py
index c0568c7b..3919f7dd 100644
--- a/launch.py
+++ b/launch.py
@@ -2,6 +2,7 @@
# faulthandler.enable()
from modules import launch_utils
+from modules import parent_process_monitor
args = launch_utils.args
python = launch_utils.python
@@ -28,6 +29,10 @@ start = launch_utils.start
def main():
+ if args.parent_pid != -1:
+ print(f"Monitoring parent process for termination. Parent PID: {args.parent_pid}")
+ parent_process_monitor.start_monitor_thread(args.parent_pid)
+
if args.dump_sysinfo:
filename = launch_utils.dump_sysinfo()
diff --git a/modules/cmd_args.py b/modules/cmd_args.py
index fcd8a50f..7f684bec 100644
--- a/modules/cmd_args.py
+++ b/modules/cmd_args.py
@@ -148,3 +148,6 @@ parser.add_argument(
help="Path to directory with annotator model directories",
default=None,
)
+
+# Easy Diffusion arguments
+parser.add_argument("--parent-pid", type=int, default=-1, help='parent process id, if running webui as a sub-process')
diff --git a/modules/parent_process_monitor.py b/modules/parent_process_monitor.py
new file mode 100644
index 00000000..cc3e2049
--- /dev/null
+++ b/modules/parent_process_monitor.py
@@ -0,0 +1,45 @@
+# monitors and kills itself when the parent process dies. required when running Forge as a sub-process.
+# modified version of https://stackoverflow.com/a/23587108
+
+import sys
+import os
+import threading
+import platform
+import time
+
+
+def _monitor_parent_posix(parent_pid):
+ print(f"Monitoring parent pid: {parent_pid}")
+ while True:
+ if os.getppid() != parent_pid:
+ os._exit(0)
+ time.sleep(1)
+
+
+def _monitor_parent_windows(parent_pid):
+ from ctypes import WinDLL, WinError
+ from ctypes.wintypes import DWORD, BOOL, HANDLE
+
+ SYNCHRONIZE = 0x00100000 # Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
+ kernel32 = WinDLL("kernel32.dll")
+ kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
+ kernel32.OpenProcess.restype = HANDLE
+
+ handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
+ if not handle:
+ raise WinError()
+
+ # Wait until parent exits
+ from ctypes import windll
+
+ print(f"Monitoring parent pid: {parent_pid}")
+ windll.kernel32.WaitForSingleObject(handle, -1)
+ os._exit(0)
+
+
+def start_monitor_thread(parent_pid):
+ if platform.system() == "Windows":
+ t = threading.Thread(target=_monitor_parent_windows, args=(parent_pid,), daemon=True)
+ else:
+ t = threading.Thread(target=_monitor_parent_posix, args=(parent_pid,), daemon=True)
+ t.start()
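
The POSIX branch relies on orphan re-parenting: once the parent dies, `os.getppid()` stops returning the original pid. A minimal POSIX-only sketch of that mechanism (uses `os.fork`, so it won't run on Windows):

import os, time

pid = os.fork()  # POSIX only
if pid == 0:  # child: poll the parent pid, just like _monitor_parent_posix does
    parent = os.getppid()
    while os.getppid() == parent:
        time.sleep(0.1)
    print("parent gone; exiting")
    os._exit(0)
time.sleep(0.5)  # parent exits here; the child notices the re-parenting and exits too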

View File

@@ -0,0 +1,661 @@
import os
import requests
from requests.exceptions import ConnectTimeout, ConnectionError, ReadTimeout
from typing import Union, List
from threading import local as Context
from threading import Thread
import uuid
import time
from copy import deepcopy
from sdkit.utils import base64_str_to_img, img_to_base64_str, log
WEBUI_HOST = "localhost"
WEBUI_PORT = "7860"
DEFAULT_WEBUI_OPTIONS = {
"show_progress_every_n_steps": 3,
"show_progress_grid": True,
"live_previews_enable": False,
"forge_additional_modules": [],
}
webui_opts: dict = None
curr_models = {
"stable-diffusion": None,
"vae": None,
"text-encoder": None,
}
def set_options(context, **kwargs):
changed_opts = {}
opts_mapping = {
"stream_image_progress": ("live_previews_enable", bool),
"stream_image_progress_interval": ("show_progress_every_n_steps", int),
"clip_skip": ("CLIP_stop_at_last_layers", int),
"clip_skip_sdxl": ("sdxl_clip_l_skip", bool),
"output_format": ("samples_format", str),
}
for ed_key, webui_key in opts_mapping.items():
webui_key, webui_type = webui_key
if ed_key in kwargs and (webui_opts is None or webui_opts.get(webui_key, False) != webui_type(kwargs[ed_key])):
changed_opts[webui_key] = webui_type(kwargs[ed_key])
if changed_opts:
changed_opts["sd_model_checkpoint"] = curr_models["stable-diffusion"]
print(f"Got options: {kwargs}. Sending options: {changed_opts}")
try:
res = webui_post("/sdapi/v1/options", json=changed_opts)
if res.status_code != 200:
raise Exception(res.text)
webui_opts.update(changed_opts)
except Exception as e:
print(f"Error setting options: {e}")
def ping(timeout=1):
"timeout (in seconds)"
global webui_opts
try:
res = webui_get("/internal/ping", timeout=timeout)
if res.status_code != 200:
raise ConnectTimeout(res.text)
if webui_opts is None:
try:
res = webui_post("/sdapi/v1/options", json=DEFAULT_WEBUI_OPTIONS)
if res.status_code != 200:
raise Exception(res.text)
except Exception as e:
print(f"Error setting options: {e}")
try:
res = webui_get("/sdapi/v1/options")
if res.status_code != 200:
raise Exception(res.text)
webui_opts = res.json()
except Exception as e:
print(f"Error getting options: {e}")
return True
except (ConnectTimeout, ConnectionError, ReadTimeout) as e:
raise TimeoutError(e)
def load_model(context, model_type, **kwargs):
from easydiffusion.app import ROOT_DIR, getConfig
config = getConfig()
models_dir = config.get("models_dir", os.path.join(ROOT_DIR, "models"))
model_path = context.model_paths[model_type]
if model_type == "stable-diffusion":
base_dir = os.path.join(models_dir, model_type)
model_path = os.path.relpath(model_path, base_dir)
# print(f"load model: {model_type=} {model_path=} {curr_models=}")
curr_models[model_type] = model_path
def unload_model(context, model_type, **kwargs):
# print(f"unload model: {model_type=} {curr_models=}")
curr_models[model_type] = None
def flush_model_changes(context):
if webui_opts is None:
print("Server not ready, can't set the model")
return
modules = []
for model_type in ("vae", "text-encoder"):
if curr_models[model_type]:
model_paths = curr_models[model_type]
model_paths = [model_paths] if not isinstance(model_paths, list) else model_paths
modules += model_paths
opts = {"sd_model_checkpoint": curr_models["stable-diffusion"], "forge_additional_modules": modules}
print("Setting backend models", opts)
try:
res = webui_post("/sdapi/v1/options", json=opts)
print("got res", res.status_code)
if res.status_code != 200:
raise Exception(res.text)
except Exception as e:
raise RuntimeError(
"The engine failed to set the required options. Please check the logs in the command-line window for more details."
) from e
def generate_images(
context: Context,
prompt: str = "",
negative_prompt: str = "",
seed: int = 42,
width: int = 512,
height: int = 512,
num_outputs: int = 1,
num_inference_steps: int = 25,
guidance_scale: float = 7.5,
distilled_guidance_scale: float = 3.5,
init_image=None,
init_image_mask=None,
control_image=None,
control_alpha=1.0,
controlnet_filter=None,
prompt_strength: float = 0.8,
preserve_init_image_color_profile=False,
strict_mask_border=False,
sampler_name: str = "euler_a",
scheduler_name: str = "simple",
hypernetwork_strength: float = 0,
tiling=None,
lora_alpha: Union[float, List[float]] = 0,
sampler_params={},
callback=None,
output_type="pil",
):
task_id = str(uuid.uuid4())
sampler_name = convert_ED_sampler_names(sampler_name)
controlnet_filter = convert_ED_controlnet_filter_name(controlnet_filter)
cmd = {
"force_task_id": task_id,
"prompt": prompt,
"negative_prompt": negative_prompt,
"sampler_name": sampler_name,
"scheduler": scheduler_name,
"steps": num_inference_steps,
"seed": seed,
"cfg_scale": guidance_scale,
"distilled_cfg_scale": distilled_guidance_scale,
"batch_size": num_outputs,
"width": width,
"height": height,
}
if init_image:
cmd["init_images"] = [init_image]
cmd["denoising_strength"] = prompt_strength
if init_image_mask:
cmd["mask"] = init_image_mask if isinstance(init_image_mask, str) else img_to_base64_str(init_image_mask)
cmd["include_init_images"] = True
cmd["inpainting_fill"] = 1
cmd["initial_noise_multiplier"] = 1
cmd["inpaint_full_res"] = 1
if context.model_paths.get("lora"):
lora_model = context.model_paths["lora"]
lora_model = lora_model if isinstance(lora_model, list) else [lora_model]
lora_alpha = lora_alpha if isinstance(lora_alpha, list) else [lora_alpha]
for lora, alpha in zip(lora_model, lora_alpha):
lora = os.path.basename(lora)
lora = os.path.splitext(lora)[0]
cmd["prompt"] += f" <lora:{lora}:{alpha}>"
if controlnet_filter and control_image and context.model_paths.get("controlnet"):
controlnet_model = context.model_paths["controlnet"]
model_hash = auto1111_hash(controlnet_model)
controlnet_model = os.path.basename(controlnet_model)
controlnet_model = os.path.splitext(controlnet_model)[0]
print(f"setting controlnet model: {controlnet_model}")
controlnet_model = f"{controlnet_model} [{model_hash}]"
cmd["alwayson_scripts"] = {
"controlnet": {
"args": [
{
"image": control_image,
"weight": control_alpha,
"module": controlnet_filter,
"model": controlnet_model,
"resize_mode": "Crop and Resize",
"threshold_a": 50,
"threshold_b": 130,
}
]
}
}
operation_to_apply = "img2img" if init_image else "txt2img"
stream_image_progress = webui_opts.get("live_previews_enable", False)
progress_thread = Thread(
target=image_progress_thread, args=(task_id, callback, stream_image_progress, num_outputs, num_inference_steps)
)
progress_thread.start()
print(f"task id: {task_id}")
print_request(operation_to_apply, cmd)
res = webui_post(f"/sdapi/v1/{operation_to_apply}", json=cmd)
if res.status_code == 200:
res = res.json()
else:
if res.status_code == 500:
res = res.json()
log.error(f"Server error: {res}")
raise Exception(f"{res['message']}. Please check the logs in the command-line window for more details.")
raise Exception(
f"HTTP Status {res.status_code}. The engine failed while generating this image. Please check the logs in the command-line window for more details."
)
import json
print(json.loads(res["info"])["infotexts"])
images = res["images"]
if output_type == "pil":
images = [base64_str_to_img(img) for img in images]
elif output_type == "base64":
images = [base64_buffer_to_base64_img(img) for img in images]
return images
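
For reference, the payload assembled above maps directly onto Forge's stock txt2img route. A stripped-down equivalent call (prompt, host, and port are placeholders):

import requests

cmd = {"prompt": "a lighthouse at dusk", "steps": 25, "width": 512, "height": 512,
       "seed": 42, "batch_size": 1, "sampler_name": "Euler a", "scheduler": "simple"}
res = requests.post("http://localhost:7860/sdapi/v1/txt2img", json=cmd)
res.raise_for_status()
print(len(res.json()["images"]), "image(s) returned as base64 strings")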
def filter_images(context: Context, images, filters, filter_params={}, input_type="pil"):
"""
* context: Context
* images: str or PIL.Image or list of str/PIL.Image - image to filter. if a string is passed, it needs to be a base64-encoded image
* filters: filter_type (string) or list of strings
* filter_params: dict
returns: [PIL.Image] - list of filtered images
"""
images = images if isinstance(images, list) else [images]
filters = filters if isinstance(filters, list) else [filters]
if "nsfw_checker" in filters:
filters.remove("nsfw_checker") # handled by ED directly
args = {}
controlnet_filters = []
print(filter_params)
for filter_name in filters:
params = filter_params.get(filter_name, {})
if filter_name == "gfpgan":
args["gfpgan_visibility"] = 1
if filter_name in ("realesrgan", "esrgan_4x", "lanczos", "nearest", "scunet", "swinir"):
args["upscaler_1"] = params.get("upscaler", "RealESRGAN_x4plus")
args["upscaling_resize"] = params.get("scale", 4)
if args["upscaler_1"] == "RealESRGAN_x4plus":
args["upscaler_1"] = "R-ESRGAN 4x+"
elif args["upscaler_1"] == "RealESRGAN_x4plus_anime_6B":
args["upscaler_1"] = "R-ESRGAN 4x+ Anime6B"
if filter_name == "codeformer":
args["codeformer_visibility"] = 1
args["codeformer_weight"] = params.get("codeformer_fidelity", 0.5)
if filter_name.startswith("controlnet_"):
filter_name = convert_ED_controlnet_filter_name(filter_name)
controlnet_filters.append(filter_name)
print(f"filtering {len(images)} images with {args}. {controlnet_filters=}")
if len(filters) > len(controlnet_filters):
filtered_images = extra_batch_images(images, input_type=input_type, **args)
else:
filtered_images = images
for filter_name in controlnet_filters:
filtered_images = controlnet_filter(filtered_images, module=filter_name, input_type=input_type)
return filtered_images
def get_url():
return f"//{WEBUI_HOST}:{WEBUI_PORT}/?__theme=dark"
def stop_rendering(context):
try:
res = webui_post("/sdapi/v1/interrupt")
if res.status_code != 200:
raise Exception(res.text)
except Exception as e:
print(f"Error interrupting webui: {e}")
def refresh_models():
def make_refresh_call(type):
try:
webui_post(f"/sdapi/v1/refresh-{type}")
except:
pass
try:
for type in ("checkpoints", "vae-and-text-encoders"):
t = Thread(target=make_refresh_call, args=(type,))
t.start()
except Exception as e:
print(f"Error refreshing models: {e}")
def list_controlnet_filters():
return [
"openpose",
"openpose_face",
"openpose_faceonly",
"openpose_hand",
"openpose_full",
"animal_openpose",
"densepose_parula (black bg & blue torso)",
"densepose (pruple bg & purple torso)",
"dw_openpose_full",
"mediapipe_face",
"instant_id_face_keypoints",
"InsightFace+CLIP-H (IPAdapter)",
"InsightFace (InstantID)",
"canny",
"mlsd",
"scribble_hed",
"scribble_hedsafe",
"scribble_pidinet",
"scribble_pidsafe",
"scribble_xdog",
"softedge_hed",
"softedge_hedsafe",
"softedge_pidinet",
"softedge_pidsafe",
"softedge_teed",
"normal_bae",
"depth_midas",
"normal_midas",
"depth_zoe",
"depth_leres",
"depth_leres++",
"depth_anything_v2",
"depth_anything",
"depth_hand_refiner",
"depth_marigold",
"lineart_coarse",
"lineart_realistic",
"lineart_anime",
"lineart_standard (from white bg & black line)",
"lineart_anime_denoise",
"reference_adain",
"reference_only",
"reference_adain+attn",
"tile_colorfix",
"tile_resample",
"tile_colorfix+sharp",
"CLIP-ViT-H (IPAdapter)",
"CLIP-G (Revision)",
"CLIP-G (Revision ignore prompt)",
"CLIP-ViT-bigG (IPAdapter)",
"InsightFace+CLIP-H (IPAdapter)",
"inpaint_only",
"inpaint_only+lama",
"inpaint_global_harmonious",
"seg_ufade20k",
"seg_ofade20k",
"seg_anime_face",
"seg_ofcoco",
"shuffle",
"segment",
"invert (from white bg & black line)",
"threshold",
"t2ia_sketch_pidi",
"t2ia_color_grid",
"recolor_intensity",
"recolor_luminance",
"blur_gaussian",
]
def controlnet_filter(images, module="none", processor_res=512, threshold_a=64, threshold_b=64, input_type="pil"):
if input_type == "pil":
images = [img_to_base64_str(x) for x in images]
payload = {
"controlnet_module": module,
"controlnet_input_images": images,
"controlnet_processor_res": processor_res,
"controlnet_threshold_a": threshold_a,
"controlnet_threshold_b": threshold_b,
}
res = webui_post("/controlnet/detect", json=payload)
res = res.json()
filtered_images = res["images"]
if input_type == "pil":
filtered_images = [base64_str_to_img(img) for img in filtered_images]
elif input_type == "base64":
filtered_images = [base64_buffer_to_base64_img(img) for img in filtered_images]
return filtered_images
def image_progress_thread(task_id, callback, stream_image_progress, total_images, total_steps):
from PIL import Image
last_preview_id = -1
EMPTY_IMAGE = Image.new("RGB", (1, 1))
while True:
res = webui_post(
f"/internal/progress",
json={"id_task": task_id, "live_preview": stream_image_progress, "id_live_preview": last_preview_id},
)
if res.status_code == 200:
res = res.json()
else:
raise RuntimeError(f"Unexpected progress response. Status code: {res.status_code}. Res: {res.text}")
last_preview_id = res["id_live_preview"]
if res["progress"] is not None:
step_num = int(res["progress"] * total_steps)
if res["live_preview"] is not None:
img = res["live_preview"]
img = base64_str_to_img(img)
images = [EMPTY_IMAGE] * total_images
images[0] = img
else:
images = None
callback(images, step_num)
if res["completed"] == True:
print("Complete!")
break
time.sleep(0.5)
def webui_get(uri, *args, **kwargs):
url = f"http://{WEBUI_HOST}:{WEBUI_PORT}{uri}"
return requests.get(url, *args, **kwargs)
def webui_post(uri, *args, **kwargs):
url = f"http://{WEBUI_HOST}:{WEBUI_PORT}{uri}"
return requests.post(url, *args, **kwargs)
def print_request(operation_to_apply, args):
args = deepcopy(args)
if "init_images" in args:
args["init_images"] = ["img" for _ in args["init_images"]]
if "mask" in args:
args["mask"] = "mask_img"
controlnet_args = args.get("alwayson_scripts", {}).get("controlnet", {}).get("args", [])
if controlnet_args:
controlnet_args[0]["image"] = "control_image"
print(f"operation: {operation_to_apply}, args: {args}")
def auto1111_hash(file_path):
import hashlib
with open(file_path, "rb") as f:
f.seek(0x100000)
b = f.read(0x10000)
return hashlib.sha256(b).hexdigest()[:8]
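
This matches AUTOMATIC1111's legacy short-hash scheme - SHA-256 over the 64 KiB slice starting at offset 1 MiB, truncated to 8 hex digits - which is why the controlnet model is referenced as `name [hash]` above. A self-contained check on a throwaway file:

import hashlib, os, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(0x110000))  # must be larger than the 1 MiB seek offset

with open(f.name, "rb") as fh:
    fh.seek(0x100000)
    print(hashlib.sha256(fh.read(0x10000)).hexdigest()[:8])  # e.g. "3f2a9c1b"

os.remove(f.name)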
def extra_batch_images(
images, # list of PIL images
name_list=None, # list of image names
resize_mode=0,
show_extras_results=True,
gfpgan_visibility=0,
codeformer_visibility=0,
codeformer_weight=0,
upscaling_resize=2,
upscaling_resize_w=512,
upscaling_resize_h=512,
upscaling_crop=True,
upscaler_1="None",
upscaler_2="None",
extras_upscaler_2_visibility=0,
upscale_first=False,
use_async=False,
input_type="pil",
):
if name_list is not None:
if len(name_list) != len(images):
raise RuntimeError("len(images) != len(name_list)")
else:
name_list = [f"image{i + 1:05}" for i in range(len(images))]
if input_type == "pil":
images = [img_to_base64_str(x) for x in images]
image_list = []
for name, image in zip(name_list, images):
image_list.append({"data": image, "name": name})
payload = {
"resize_mode": resize_mode,
"show_extras_results": show_extras_results,
"gfpgan_visibility": gfpgan_visibility,
"codeformer_visibility": codeformer_visibility,
"codeformer_weight": codeformer_weight,
"upscaling_resize": upscaling_resize,
"upscaling_resize_w": upscaling_resize_w,
"upscaling_resize_h": upscaling_resize_h,
"upscaling_crop": upscaling_crop,
"upscaler_1": upscaler_1,
"upscaler_2": upscaler_2,
"extras_upscaler_2_visibility": extras_upscaler_2_visibility,
"upscale_first": upscale_first,
"imageList": image_list,
}
res = webui_post("/sdapi/v1/extra-batch-images", json=payload)
if res.status_code == 200:
res = res.json()
else:
raise Exception(
"The engine failed while filtering this image. Please check the logs in the command-line window for more details."
)
images = res["images"]
if input_type == "pil":
images = [base64_str_to_img(img) for img in images]
elif input_type == "base64":
images = [base64_buffer_to_base64_img(img) for img in images]
return images
def base64_buffer_to_base64_img(img):
output_format = webui_opts.get("samples_format", "jpeg")
mime_type = f"image/{output_format.lower()}"
return f"data:{mime_type};base64," + img
def convert_ED_sampler_names(sampler_name):
name_mapping = {
"dpmpp_2m": "DPM++ 2M",
"dpmpp_sde": "DPM++ SDE",
"dpmpp_2m_sde": "DPM++ 2M SDE",
"dpmpp_2m_sde_heun": "DPM++ 2M SDE Heun",
"dpmpp_2s_a": "DPM++ 2S a",
"dpmpp_3m_sde": "DPM++ 3M SDE",
"euler_a": "Euler a",
"euler": "Euler",
"lms": "LMS",
"heun": "Heun",
"dpm2": "DPM2",
"dpm2_a": "DPM2 a",
"dpm_fast": "DPM fast",
"dpm_adaptive": "DPM adaptive",
"restart": "Restart",
"heun_pp2": "HeunPP2",
"ipndm": "IPNDM",
"ipndm_v": "IPNDM_V",
"deis": "DEIS",
"ddim": "DDIM",
"ddim_cfgpp": "DDIM CFG++",
"plms": "PLMS",
"unipc": "UniPC",
"lcm": "LCM",
"ddpm": "DDPM",
"forge_flux_realistic": "[Forge] Flux Realistic",
"forge_flux_realistic_slow": "[Forge] Flux Realistic (Slow)",
# deprecated samplers in 3.5
"dpm_solver_stability": None,
"unipc_snr": None,
"unipc_tu": None,
"unipc_snr_2": None,
"unipc_tu_2": None,
"unipc_tq": None,
}
return name_mapping.get(sampler_name)
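
A usage sketch: known names map to Forge's display names, deprecated ones come back as `None`, and so do unknown names (the mapping excerpt is copied from the table above):

name_mapping = {"euler_a": "Euler a", "dpmpp_2m": "DPM++ 2M", "dpm_solver_stability": None}  # excerpt

print(name_mapping.get("euler_a"))               # -> Euler a
print(name_mapping.get("dpm_solver_stability"))  # -> None (deprecated in 3.5)
print(name_mapping.get("made_up_name"))          # -> None (unknown names fall through too)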
def convert_ED_controlnet_filter_name(filter):
if filter is None:
return None
def cn(n):
if n.startswith("controlnet_"):
return n[len("controlnet_") :]
return n
mapping = {
"controlnet_scribble_hedsafe": None,
"controlnet_scribble_pidsafe": None,
"controlnet_softedge_pidsafe": "controlnet_softedge_pidisafe",
"controlnet_normal_bae": "controlnet_normalbae",
"controlnet_segment": None,
}
if isinstance(filter, list):
return [cn(mapping.get(f, f)) for f in filter]
return cn(mapping.get(filter, filter))

View File

@@ -6,6 +6,15 @@ import traceback
import torch
from easydiffusion.utils import log
from torchruntime.utils import (
get_installed_torch_platform,
get_device,
get_device_count,
get_device_name,
SUPPORTED_BACKENDS,
)
from sdkit.utils import mem_get_info, is_cpu_device, has_half_precision_bug
"""
Set `FORCE_FULL_PRECISION` in the environment variables, or in `config.bat`/`config.sh` to set full precision (i.e. float32).
Otherwise the models will load at half-precision (i.e. float16).
@@ -22,33 +31,15 @@ mem_free_threshold = 0
def get_device_delta(render_devices, active_devices):
"""
render_devices: 'cpu', or 'auto', or 'mps' or ['cuda:N'...]
active_devices: ['cpu', 'mps', 'cuda:N'...]
render_devices: 'auto' or backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`
active_devices: [backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`]
"""
if render_devices in ("cpu", "auto", "mps"):
render_devices = [render_devices]
elif render_devices is not None:
if isinstance(render_devices, str):
render_devices = [render_devices]
if isinstance(render_devices, list) and len(render_devices) > 0:
render_devices = list(filter(lambda x: x.startswith("cuda:") or x == "mps", render_devices))
if len(render_devices) == 0:
raise Exception(
'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "mps"} or {"render_devices": "auto"}'
)
render_devices = render_devices or "auto"
render_devices = [render_devices] if isinstance(render_devices, str) else render_devices
render_devices = list(filter(lambda x: is_device_compatible(x), render_devices))
if len(render_devices) == 0:
raise Exception(
"Sorry, none of the render_devices configured in config.json are compatible with Stable Diffusion"
)
else:
raise Exception(
'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}'
)
else:
render_devices = ["auto"]
# check for backend support
validate_render_devices(render_devices)
if "auto" in render_devices:
render_devices = auto_pick_devices(active_devices)
@@ -64,47 +55,39 @@ def get_device_delta(render_devices, active_devices):
return devices_to_start, devices_to_stop
def is_mps_available():
return (
platform.system() == "Darwin"
and hasattr(torch.backends, "mps")
and torch.backends.mps.is_available()
and torch.backends.mps.is_built()
)
def validate_render_devices(render_devices):
supported_backends = ("auto",) + SUPPORTED_BACKENDS
unsupported_render_devices = [d for d in render_devices if not d.lower().startswith(supported_backends)]
def is_cuda_available():
return torch.cuda.is_available()
if unsupported_render_devices:
raise ValueError(
f"Invalid render devices in config: {unsupported_render_devices}. Valid render devices: {supported_backends}"
)
def auto_pick_devices(currently_active_devices):
global mem_free_threshold
if is_mps_available():
return ["mps"]
torch_platform_name = get_installed_torch_platform()[0]
if not is_cuda_available():
return ["cpu"]
device_count = torch.cuda.device_count()
if device_count == 1:
return ["cuda:0"] if is_device_compatible("cuda:0") else ["cpu"]
if is_cpu_device(torch_platform_name):
return [torch_platform_name]
device_count = get_device_count()
log.debug("Autoselecting GPU. Using most free memory.")
devices = []
for device in range(device_count):
device = f"cuda:{device}"
if not is_device_compatible(device):
continue
for device_id in range(device_count):
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
device = get_device(device_id)
mem_free, mem_total = torch.cuda.mem_get_info(device)
mem_free, mem_total = mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
device_name = torch.cuda.get_device_name(device)
device_name = get_device_name(device)
log.debug(
f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
f"{device_id} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
)
devices.append({"device": device, "device_name": device_name, "mem_free": mem_free})
devices.append({"device": device_id, "device_name": device_name, "mem_free": mem_free})
devices.sort(key=lambda x: x["mem_free"], reverse=True)
max_mem_free = devices[0]["mem_free"]
@@ -117,69 +100,45 @@ def auto_pick_devices(currently_active_devices):
# always be very low (since their VRAM contains the model).
# These already-running devices probably aren't terrible, since they were picked in the past.
# Worst case, the user can restart the program and that'll get rid of them.
devices = list(
filter(
(lambda x: x["mem_free"] > mem_free_threshold or x["device"] in currently_active_devices),
devices,
)
)
devices = list(map(lambda x: x["device"], devices))
devices = [
x["device"] for x in devices if x["mem_free"] >= mem_free_threshold or x["device"] in currently_active_devices
]
return devices
def device_init(context, device):
"""
This function assumes the 'device' has already been verified to be compatible.
`get_device_delta()` has already filtered out incompatible devices.
"""
def device_init(context, device_id):
context.device = device_id
validate_device_id(device, log_prefix="device_init")
if "cuda" not in device:
context.device = device
if is_cpu_device(context.torch_device):
context.device_name = get_processor_name()
context.half_precision = False
log.debug(f"Render device available as {context.device_name}")
return
else:
context.device_name = get_device_name(context.torch_device)
context.device_name = torch.cuda.get_device_name(device)
context.device = device
# Some graphics cards have bugs in their firmware that prevent image generation at half precision
if needs_to_force_full_precision(context.device_name):
log.warn(f"forcing full precision on this GPU, to avoid corrupted images. GPU: {context.device_name}")
context.half_precision = False
# Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
if needs_to_force_full_precision(context):
log.warn(f"forcing full precision on this GPU, to avoid green images. GPU detected: {context.device_name}")
# Apply force_full_precision now before models are loaded.
context.half_precision = False
log.info(f'Setting {device} as active, with precision: {"half" if context.half_precision else "full"}')
torch.cuda.device(device)
log.info(f'Setting {device_id} as active, with precision: {"half" if context.half_precision else "full"}')
def needs_to_force_full_precision(context):
def needs_to_force_full_precision(device_name):
if "FORCE_FULL_PRECISION" in os.environ:
return True
device_name = context.device_name.lower()
return (
("nvidia" in device_name or "geforce" in device_name or "quadro" in device_name)
and (
" 1660" in device_name
or " 1650" in device_name
or " 1630" in device_name
or " t400" in device_name
or " t550" in device_name
or " t600" in device_name
or " t1000" in device_name
or " t1200" in device_name
or " t2000" in device_name
)
) or ("tesla k40m" in device_name)
return has_half_precision_bug(device_name.lower())
def get_max_vram_usage_level(device):
if "cuda" in device:
_, mem_total = torch.cuda.mem_get_info(device)
else:
"Expects a torch.device as the argument"
if is_cpu_device(device):
return "high"
_, mem_total = mem_get_info(device)
if mem_total < 0.001: # probably a torch platform without a mem_get_info() implementation
return "high"
mem_total /= float(10**9)
@@ -191,51 +150,6 @@ def get_max_vram_usage_level(device):
return "high"
def validate_device_id(device, log_prefix=""):
def is_valid():
if not isinstance(device, str):
return False
if device == "cpu" or device == "mps":
return True
if not device.startswith("cuda:") or not device[5:].isnumeric():
return False
return True
if not is_valid():
raise EnvironmentError(
f"{log_prefix}: device id should be 'cpu', 'mps', or 'cuda:N' (where N is an integer index for the GPU). Got: {device}"
)
def is_device_compatible(device):
"""
Returns True/False, and prints any compatibility errors
"""
# static variable "history".
is_device_compatible.history = getattr(is_device_compatible, "history", {})
try:
validate_device_id(device, log_prefix="is_device_compatible")
except:
log.error(str(e))
return False
if device in ("cpu", "mps"):
return True
# Memory check
try:
_, mem_total = torch.cuda.mem_get_info(device)
mem_total /= float(10**9)
if mem_total < 1.9:
if is_device_compatible.history.get(device) == None:
log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion")
is_device_compatible.history[device] = 1
return False
except RuntimeError as e:
log.error(str(e))
return False
return True
def get_processor_name():
try:
import subprocess
@@ -243,7 +157,8 @@ def get_processor_name():
if platform.system() == "Windows":
return platform.processor()
elif platform.system() == "Darwin":
os.environ["PATH"] = os.environ["PATH"] + os.pathsep + "/usr/sbin"
if "/usr/sbin" not in os.environ["PATH"].split(os.pathsep):
os.environ["PATH"] = os.environ["PATH"] + os.pathsep + "/usr/sbin"
command = "sysctl -n machdep.cpu.brand_string"
return subprocess.check_output(command, shell=True).decode().strip()
elif platform.system() == "Linux":

View File

@@ -33,4 +33,3 @@ class Bucket(BucketBase):
class Config:
orm_mode = True

View File

@@ -3,13 +3,13 @@ import shutil
from glob import glob
import traceback
from typing import Union
from os import path
from easydiffusion import app
from easydiffusion.types import ModelsData
from easydiffusion.utils import log
from sdkit import Context
from sdkit.models import load_model, scan_model, unload_model, download_model, get_model_info_from_db
from sdkit.models.model_loader.controlnet_filters import filters as cn_filters
from sdkit.models import scan_model, download_model, get_model_info_from_db
from sdkit.utils import hash_file_quick
from sdkit.models.model_loader.embeddings import get_embedding_token
@@ -23,17 +23,19 @@ KNOWN_MODEL_TYPES = [
"codeformer",
"embeddings",
"controlnet",
"text-encoder",
]
MODEL_EXTENSIONS = {
"stable-diffusion": [".ckpt", ".safetensors"],
"vae": [".vae.pt", ".ckpt", ".safetensors"],
"hypernetwork": [".pt", ".safetensors"],
"stable-diffusion": [".ckpt", ".safetensors", ".sft", ".gguf"],
"vae": [".vae.pt", ".ckpt", ".safetensors", ".sft", ".gguf"],
"hypernetwork": [".pt", ".safetensors", ".sft"],
"gfpgan": [".pth"],
"realesrgan": [".pth"],
"lora": [".ckpt", ".safetensors", ".pt"],
"lora": [".ckpt", ".safetensors", ".sft", ".pt"],
"codeformer": [".pth"],
"embeddings": [".pt", ".bin", ".safetensors"],
"controlnet": [".pth", ".safetensors"],
"embeddings": [".pt", ".bin", ".safetensors", ".sft"],
"controlnet": [".pth", ".safetensors", ".sft"],
"text-encoder": [".safetensors", ".sft", ".gguf"],
}
DEFAULT_MODELS = {
"stable-diffusion": [
@@ -51,6 +53,17 @@ DEFAULT_MODELS = {
],
}
MODELS_TO_LOAD_ON_START = ["stable-diffusion", "vae", "hypernetwork", "lora"]
ALTERNATE_FOLDER_NAMES = { # for WebUI compatibility
"stable-diffusion": "Stable-diffusion",
"vae": "VAE",
"hypernetwork": "hypernetworks",
"codeformer": "Codeformer",
"gfpgan": "GFPGAN",
"realesrgan": "RealESRGAN",
"lora": "Lora",
"controlnet": "ControlNet",
"text-encoder": "text_encoder",
}
known_models = {}
@@ -63,6 +76,7 @@ def init():
def load_default_models(context: Context):
from easydiffusion import runtime
from easydiffusion.backend_manager import backend
runtime.set_vram_optimizations(context)
@@ -70,13 +84,13 @@ def load_default_models(context: Context):
for model_type in MODELS_TO_LOAD_ON_START:
context.model_paths[model_type] = resolve_model_to_use(model_type=model_type, fail_if_not_found=False)
try:
load_model(
backend.load_model(
context,
model_type,
scan_model=context.model_paths[model_type] != None
and not context.model_paths[model_type].endswith(".safetensors"),
)
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
@@ -88,13 +102,17 @@ def load_default_models(context: Context):
log.exception(e)
del context.model_paths[model_type]
if not hasattr(context, "model_load_errors"):
context.model_load_errors = {}
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
def unload_all(context: Context):
from easydiffusion.backend_manager import backend
for model_type in KNOWN_MODEL_TYPES:
unload_model(context, model_type)
if model_type in context.model_load_errors:
backend.unload_model(context, model_type)
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
@@ -119,33 +137,33 @@ def resolve_model_to_use_single(model_name: str = None, model_type: str = None,
default_models = DEFAULT_MODELS.get(model_type, [])
config = app.getConfig()
model_dir = os.path.join(app.MODELS_DIR, model_type)
if not model_name: # When None try user configured model.
# config = getConfig()
if "model" in config and model_type in config["model"]:
model_name = config["model"][model_type]
if model_name:
# Check models directory
model_path = os.path.join(model_dir, model_name)
if os.path.exists(model_path):
return model_path
for model_extension in model_extensions:
if os.path.exists(model_path + model_extension):
return model_path + model_extension
if os.path.exists(model_name + model_extension):
return os.path.abspath(model_name + model_extension)
for model_dir in get_model_dirs(model_type):
if model_name:
# Check models directory
model_path = os.path.join(model_dir, model_name)
if os.path.exists(model_path):
return model_path
for model_extension in model_extensions:
if os.path.exists(model_path + model_extension):
return model_path + model_extension
if os.path.exists(model_name + model_extension):
return os.path.abspath(model_name + model_extension)
# Can't find requested model, check the default paths.
if model_type == "stable-diffusion" and not fail_if_not_found:
for default_model in default_models:
default_model_path = os.path.join(model_dir, default_model["file_name"])
if os.path.exists(default_model_path):
if model_name is not None:
log.warn(
f"Could not find the configured custom model {model_name}. Using the default one: {default_model_path}"
)
return default_model_path
# Can't find requested model, check the default paths.
if model_type == "stable-diffusion" and not fail_if_not_found:
for default_model in default_models:
default_model_path = os.path.join(model_dir, default_model["file_name"])
if os.path.exists(default_model_path):
if model_name is not None:
log.warn(
f"Could not find the configured custom model {model_name}. Using the default one: {default_model_path}"
)
return default_model_path
if model_name and fail_if_not_found:
raise FileNotFoundError(
@@ -154,6 +172,8 @@ def resolve_model_to_use_single(model_name: str = None, model_type: str = None,
def reload_models_if_necessary(context: Context, models_data: ModelsData, models_to_force_reload: list = []):
from easydiffusion.backend_manager import backend
models_to_reload = {
model_type: path
for model_type, path in models_data.model_paths.items()
@@ -175,39 +195,68 @@ def reload_models_if_necessary(context: Context, models_data: ModelsData, models
for model_type, model_path_in_req in models_to_reload.items():
context.model_paths[model_type] = model_path_in_req
action_fn = unload_model if context.model_paths[model_type] is None else load_model
action_fn = backend.unload_model if context.model_paths[model_type] is None else backend.load_model
extra_params = models_data.model_params.get(model_type, {})
try:
action_fn(context, model_type, scan_model=False, **extra_params) # we've scanned them already
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.exception(e)
if action_fn == load_model:
if action_fn == backend.load_model:
if not hasattr(context, "model_load_errors"):
context.model_load_errors = {}
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
if hasattr(backend, "flush_model_changes"):
backend.flush_model_changes(context)
def resolve_model_paths(models_data: ModelsData):
from easydiffusion.backend_manager import backend
cn_filters = backend.list_controlnet_filters()
model_paths = models_data.model_paths
skip_models = cn_filters + [
"latent_upscaler",
"nsfw_checker",
"esrgan_4x",
"lanczos",
"nearest",
"scunet",
"swinir",
]
for model_type in model_paths:
skip_models = cn_filters + ["latent_upscaler", "nsfw_checker"]
if model_type in skip_models: # doesn't use model paths
continue
if model_type == "codeformer" and model_paths[model_type]:
download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0")
elif model_type == "controlnet" and model_paths[model_type]:
model_id = model_paths[model_type]
model_info = get_model_info_from_db(model_type=model_type, model_id=model_id)
if model_info:
filename = model_info.get("url", "").split("/")[-1]
download_if_necessary("controlnet", filename, model_id, skip_if_others_exist=False)
if model_type in ("vae", "codeformer", "controlnet", "text-encoder") and model_paths[model_type]:
model_ids = model_paths[model_type]
model_ids = model_ids if isinstance(model_ids, list) else [model_ids]
new_model_paths = []
for model_id in model_ids:
# log.info(f"Checking for {model_id=}")
model_info = get_model_info_from_db(model_type=model_type, model_id=model_id)
if model_info:
filename = model_info.get("url", "").split("/")[-1]
download_if_necessary(model_type, filename, model_id, skip_if_others_exist=False)
new_model_paths.append(path.splitext(filename)[0])
else: # not in the model db, probably a regular file
new_model_paths.append(model_id)
model_paths[model_type] = new_model_paths
model_paths[model_type] = resolve_model_to_use(model_paths[model_type], model_type=model_type)
def fail_if_models_did_not_load(context: Context):
for model_type in KNOWN_MODEL_TYPES:
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
e = context.model_load_errors[model_type]
raise Exception(f"Could not load the {model_type} model! Reason: " + e)
@@ -225,16 +274,31 @@ def download_default_models_if_necessary():
def download_if_necessary(model_type: str, file_name: str, model_id: str, skip_if_others_exist=True):
model_path = os.path.join(app.MODELS_DIR, model_type, file_name)
from easydiffusion.backend_manager import backend
expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]
other_models_exist = any_model_exists(model_type) and skip_if_others_exist
known_model_exists = os.path.exists(model_path)
known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
if known_model_is_corrupt or (not other_models_exist and not known_model_exists):
print("> download", model_type, model_id)
download_model(model_type, model_id, download_base_dir=app.MODELS_DIR, download_config_if_available=False)
for model_dir in get_model_dirs(model_type):
model_path = os.path.join(model_dir, file_name)
known_model_exists = os.path.exists(model_path)
known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
needs_download = known_model_is_corrupt or (not other_models_exist and not known_model_exists)
# log.info(f"{model_path=} {needs_download=}")
# if known_model_exists:
# log.info(f"{expected_hash=} {hash_file_quick(model_path)=}")
# log.info(f"{known_model_is_corrupt=} {other_models_exist=} {known_model_exists=}")
if not needs_download:
return
print("> download", model_type, model_id)
download_model(model_type, model_id, download_base_dir=app.MODELS_DIR, download_config_if_available=False)
backend.refresh_models()
def migrate_legacy_model_location():
@@ -245,21 +309,23 @@ def migrate_legacy_model_location():
file_name = model["file_name"]
legacy_path = os.path.join(app.SD_DIR, file_name)
if os.path.exists(legacy_path):
shutil.move(legacy_path, os.path.join(app.MODELS_DIR, model_type, file_name))
model_dir = get_model_dirs(model_type)[0]
shutil.move(legacy_path, os.path.join(model_dir, file_name))
def any_model_exists(model_type: str) -> bool:
extensions = MODEL_EXTENSIONS.get(model_type, [])
for ext in extensions:
if any(glob(f"{app.MODELS_DIR}/{model_type}/**/*{ext}", recursive=True)):
return True
for model_dir in get_model_dirs(model_type):
for ext in extensions:
if any(glob(f"{model_dir}/**/*{ext}", recursive=True)):
return True
return False
def make_model_folders():
for model_type in KNOWN_MODEL_TYPES:
model_dir_path = os.path.join(app.MODELS_DIR, model_type)
model_dir_path = get_model_dirs(model_type)[0]
try:
os.makedirs(model_dir_path, exist_ok=True)
@@ -282,14 +348,16 @@ def make_model_folders():
help_file_name = f"Place your {model_type} model files here.txt"
help_file_contents = f'Supported extensions: {" or ".join(MODEL_EXTENSIONS.get(model_type))}'
with open(os.path.join(model_dir_path, help_file_name), "w", encoding="utf-8") as f:
f.write(help_file_contents)
try:
with open(os.path.join(model_dir_path, help_file_name), "w", encoding="utf-8") as f:
f.write(help_file_contents)
except Exception as e:
log.exception(e)
def is_malicious_model(file_path):
try:
if file_path.endswith(".safetensors"):
if file_path.endswith((".safetensors", ".sft", ".gguf")):
return False
scan_result = scan_model(file_path)
if scan_result.issues_count > 0 or scan_result.infected_files > 0:
@@ -320,28 +388,37 @@ def is_malicious_model(file_path):
def getModels(scan_for_malicious: bool = True):
from easydiffusion.backend_manager import backend
backend.refresh_models()
models = {
"options": {
"stable-diffusion": [],
"vae": [],
"vae": [{"ae": "ae (Flux VAE fp16)"}],
"hypernetwork": [],
"lora": [],
"codeformer": [{"codeformer": "CodeFormer"}],
"embeddings": [],
"controlnet": [
{"control_v11p_sd15_canny": "Canny (*)"},
{"control_v11p_sd15_openpose": "OpenPose (*)"},
{"control_v11p_sd15_normalbae": "Normal BAE (*)"},
{"control_v11f1p_sd15_depth": "Depth (*)"},
{"control_v11p_sd15_scribble": "Scribble"},
{"control_v11p_sd15_softedge": "Soft Edge"},
{"control_v11p_sd15_inpaint": "Inpaint"},
{"control_v11p_sd15_lineart": "Line Art"},
{"control_v11p_sd15s2_lineart_anime": "Line Art Anime"},
{"control_v11p_sd15_mlsd": "Straight Lines"},
{"control_v11p_sd15_seg": "Segment"},
{"control_v11e_sd15_shuffle": "Shuffle"},
{"control_v11f1e_sd15_tile": "Tile"},
# {"control_v11p_sd15_canny": "Canny (*)"},
# {"control_v11p_sd15_openpose": "OpenPose (*)"},
# {"control_v11p_sd15_normalbae": "Normal BAE (*)"},
# {"control_v11f1p_sd15_depth": "Depth (*)"},
# {"control_v11p_sd15_scribble": "Scribble"},
# {"control_v11p_sd15_softedge": "Soft Edge"},
# {"control_v11p_sd15_inpaint": "Inpaint"},
# {"control_v11p_sd15_lineart": "Line Art"},
# {"control_v11p_sd15s2_lineart_anime": "Line Art Anime"},
# {"control_v11p_sd15_mlsd": "Straight Lines"},
# {"control_v11p_sd15_seg": "Segment"},
# {"control_v11e_sd15_shuffle": "Shuffle"},
# {"control_v11f1e_sd15_tile": "Tile"},
],
"text-encoder": [
{"t5xxl_fp16": "T5 XXL fp16"},
{"clip_l": "CLIP L"},
{"clip_g": "CLIP G"},
],
},
}
@ -356,6 +433,9 @@ def getModels(scan_for_malicious: bool = True):
tree = list(default_entries)
if not os.path.exists(directory):
return tree
for entry in sorted(
os.scandir(directory),
key=lambda entry: (entry.is_file() == directoriesFirst, entry.name.lower()),
@ -378,6 +458,8 @@ def getModels(scan_for_malicious: bool = True):
model_id = entry.name[: -len(matching_suffix)]
if callable(nameFilter):
model_id = nameFilter(model_id)
if model_id is None:
continue
model_exists = False
for m in tree: # allows default "named" models, like CodeFormer and known ControlNet models
@ -398,17 +480,18 @@ def getModels(scan_for_malicious: bool = True):
nonlocal models_scanned
model_extensions = MODEL_EXTENSIONS.get(model_type, [])
models_dir = os.path.join(app.MODELS_DIR, model_type)
if not os.path.exists(models_dir):
os.makedirs(models_dir)
models_dirs = get_model_dirs(model_type)
if not os.path.exists(models_dirs[0]):
os.makedirs(models_dirs[0])
try:
default_tree = models["options"].get(model_type, [])
models["options"][model_type] = scan_directory(
models_dir, model_extensions, default_entries=default_tree, nameFilter=nameFilter
)
except MaliciousModelException as e:
models["scan-error"] = str(e)
for model_dir in models_dirs:
try:
default_tree = models["options"].get(model_type, [])
models["options"][model_type] = scan_directory(
model_dir, model_extensions, default_entries=default_tree, nameFilter=nameFilter
)
except MaliciousModelException as e:
models["scan-error"] = str(e)
if scan_for_malicious:
log.info(f"[green]Scanning all model folders for models...[/]")
@ -416,12 +499,30 @@ def getModels(scan_for_malicious: bool = True):
listModels(model_type="stable-diffusion")
listModels(model_type="vae")
listModels(model_type="hypernetwork")
listModels(model_type="gfpgan")
listModels(model_type="gfpgan", nameFilter=lambda x: (x if "gfpgan" in x.lower() else None))
listModels(model_type="lora")
listModels(model_type="embeddings", nameFilter=get_embedding_token)
listModels(model_type="controlnet")
listModels(model_type="text-encoder")
if scan_for_malicious and models_scanned > 0:
log.info(f"[green]Scanned {models_scanned} models. Nothing infected[/]")
return models
def get_model_dirs(model_type: str, base_dir=None):
"Returns the possible model directory paths for the given model type. Mainly used for WebUI compatibility"
if base_dir is None:
base_dir = app.MODELS_DIR
dirs = [os.path.join(base_dir, model_type)]
if model_type in ALTERNATE_FOLDER_NAMES:
alt_dir = ALTERNATE_FOLDER_NAMES[model_type]
alt_dir = os.path.join(base_dir, alt_dir)
if os.path.exists(alt_dir) and os.path.isdir(alt_dir):
dirs.append(alt_dir)
return dirs
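Editor's note: a usage sketch for get_model_dirs (the import path matches its use elsewhere in this changeset). The first entry is always the canonical Easy Diffusion folder, and a WebUI-style alternate folder is appended only if it already exists on disk, so code that writes should target index 0 while code that scans should iterate over all entries:

from easydiffusion.model_manager import get_model_dirs

# Writers target the canonical folder; scanners walk every returned folder.
write_dir = get_model_dirs("stable-diffusion")[0]
all_dirs = get_model_dirs("stable-diffusion")  # canonical + optional WebUI-style alternate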

View File

@ -3,11 +3,10 @@ import os
import platform
from importlib.metadata import version as pkg_version
from sdkit.utils import log
from easydiffusion import app
# future home of scripts/check_modules.py
# was meant to be a rewrite of scripts/check_modules.py
# but probably dead for now
manifest = {
"tensorrt": {
@ -50,6 +49,8 @@ def is_installed(module_name) -> bool:
def install(module_name):
from easydiffusion.utils import log
if is_installed(module_name):
log.info(f"{module_name} has already been installed!")
return
@ -79,6 +80,8 @@ def install(module_name):
def uninstall(module_name):
from easydiffusion.utils import log
if not is_installed(module_name):
log.info(f"{module_name} hasn't been installed!")
return
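Editor's note: a usage sketch for these helpers, within this module, assuming a key such as "tensorrt" is declared in the manifest above:

if not is_installed("tensorrt"):
    install("tensorrt")  # package spec comes from the manifest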

View File

@ -1,4 +1,5 @@
"""
(OUTDATED DOC)
A runtime that runs on a specific device (in a thread).
It can run various tasks like image generation, image filtering, model merging etc. by using that thread-local context.
@ -6,45 +7,38 @@ It can run various tasks like image generation, image filtering, model merge etc
This creates an `sdkit.Context` that's bound to the device specified while calling the `init()` function.
"""
from easydiffusion import device_manager
from easydiffusion.utils import log
from sdkit import Context
from sdkit.utils import get_device_usage
context = Context() # thread-local
"""
runtime data (bound locally to this thread), e.g. device, references to loaded models, optimization flags, etc.
"""
context = None
def init(device):
"""
Initializes the fields that will be bound to this runtime's context, and sets the current torch device
"""
global context
from easydiffusion import device_manager
from easydiffusion.backend_manager import backend
from easydiffusion.app import getConfig
context = backend.create_context()
context.stop_processing = False
context.temp_images = {}
context.partial_x_samples = None
context.model_load_errors = {}
context.enable_codeformer = True
from easydiffusion import app
app_config = app.getConfig()
context.test_diffusers = app_config.get("use_v3_engine", True)
log.info("Device usage during initialization:")
get_device_usage(device, log_info=True, process_usage_only=False)
device_manager.device_init(context, device)
def set_vram_optimizations(context: Context):
from easydiffusion import app
def set_vram_optimizations(context):
from easydiffusion.app import getConfig
config = app.getConfig()
config = getConfig()
vram_usage_level = config.get("vram_usage_level", "balanced")
if vram_usage_level != context.vram_usage_level:
if hasattr(context, "vram_usage_level") and vram_usage_level != context.vram_usage_level:
context.vram_usage_level = vram_usage_level
return True

View File

@ -2,6 +2,7 @@
Notes:
async endpoints always run on the main thread. Otherwise they run on the thread pool.
"""
import datetime
import mimetypes
import os
@ -20,6 +21,7 @@ from easydiffusion.types import (
OutputFormatData,
SaveToDiskData,
convert_legacy_render_req_to_new,
convert_legacy_controlnet_filter_name,
)
from easydiffusion.utils import log
from fastapi import FastAPI, HTTPException
@ -67,7 +69,9 @@ class SetAppConfigRequest(BaseModel, extra=Extra.allow):
listen_to_network: bool = None
listen_port: int = None
use_v3_engine: bool = True
backend: str = "ed_diffusers"
models_dir: str = None
vram_usage_level: str = "balanced"
def init():
@ -155,6 +159,12 @@ def init():
def shutdown_event(): # Signal render thread to close on shutdown
task_manager.current_state_error = SystemExit("Application shutting down.")
@server_api.on_event("startup")
def start_event():
from easydiffusion.app import open_browser
open_browser()
# API implementations
def set_app_config_internal(req: SetAppConfigRequest):
@ -176,8 +186,10 @@ def set_app_config_internal(req: SetAppConfigRequest):
config["net"] = {}
config["net"]["listen_port"] = int(req.listen_port)
config["use_v3_engine"] = req.use_v3_engine
config["use_v3_engine"] = req.backend == "ed_diffusers"
config["backend"] = req.backend
config["models_dir"] = req.models_dir
config["vram_usage_level"] = req.vram_usage_level
for property, property_value in req.dict().items():
if property_value is not None and property not in req.__fields__ and property not in PROTECTED_CONFIG_KEYS:
@ -196,11 +208,13 @@ def set_app_config_internal(req: SetAppConfigRequest):
def update_render_devices_in_config(config, render_devices):
if render_devices not in ("cpu", "auto") and not render_devices.startswith("cuda:"):
raise HTTPException(status_code=400, detail=f"Invalid render device requested: {render_devices}")
from easydiffusion.device_manager import validate_render_devices
if render_devices.startswith("cuda:"):
try:
render_devices = render_devices.split(",")
validate_render_devices(render_devices)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
config["render_devices"] = render_devices
@ -216,6 +230,8 @@ def read_web_data_internal(key: str = None, **kwargs):
return JSONResponse(config, headers=NOCACHE_HEADERS)
elif key == "system_info":
from easydiffusion.backend_manager import backend
config = app.getConfig()
output_dir = config.get("force_save_path", os.path.join(os.path.expanduser("~"), app.OUTPUT_DIRNAME))
@ -226,6 +242,7 @@ def read_web_data_internal(key: str = None, **kwargs):
"default_output_dir": output_dir,
"enforce_output_dir": ("force_save_path" in config),
"enforce_output_metadata": ("force_save_metadata" in config),
"backend_url": backend.get_url(),
}
system_info["devices"]["config"] = config.get("render_devices", "auto")
return JSONResponse(system_info, headers=NOCACHE_HEADERS)
@ -309,6 +326,15 @@ def filter_internal(req: dict):
output_format: OutputFormatData = OutputFormatData.parse_obj(req)
save_data: SaveToDiskData = SaveToDiskData.parse_obj(req)
filter_req.filter = convert_legacy_controlnet_filter_name(filter_req.filter)
for model_name in ("realesrgan", "esrgan_4x", "lanczos", "nearest", "scunet", "swinir"):
if models_data.model_paths.get(model_name):
if model_name not in filter_req.filter_params:
filter_req.filter_params[model_name] = {}
filter_req.filter_params[model_name]["upscaler"] = models_data.model_paths[model_name]
# enqueue the task
task = FilterTask(filter_req, task_data, models_data, output_format, save_data)
return enqueue_task(task)
@ -342,15 +368,13 @@ def model_merge_internal(req: dict):
mergeReq: MergeRequest = MergeRequest.parse_obj(req)
sd_model_dir = model_manager.get_model_dirs("stable-diffusion")[0]
merge_models(
model_manager.resolve_model_to_use(mergeReq.model0, "stable-diffusion"),
model_manager.resolve_model_to_use(mergeReq.model1, "stable-diffusion"),
mergeReq.ratio,
os.path.join(
app.MODELS_DIR,
"stable-diffusion",
filename_regex.sub("_", mergeReq.out_path),
),
os.path.join(sd_model_dir, filename_regex.sub("_", mergeReq.out_path)),
mergeReq.use_fp16,
)
return JSONResponse({"status": "OK"}, headers=NOCACHE_HEADERS)
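Editor's note: a hypothetical request body for this endpoint, based on the MergeRequest fields shown later in this changeset (model names are placeholders):

req = {
    "model0": "sd-v1-4",
    "model1": "my-custom-model",  # hypothetical model name
    "ratio": 0.5,                 # 50/50 blend of the two checkpoints
    "out_path": "mix",            # sanitized by filename_regex before saving
    "use_fp16": True,
}
model_merge_internal(req)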

View File

@ -4,6 +4,7 @@ Notes:
Use weak_thread_data to store all other data using weak keys.
This will allow for garbage collection after the thread dies.
"""
import json
import traceback
@ -19,7 +20,9 @@ import torch
from easydiffusion import device_manager
from easydiffusion.tasks import Task
from easydiffusion.utils import log
from sdkit.utils import gc
from torchruntime.utils import get_device_count, get_device, get_device_name, get_installed_torch_platform
from sdkit.utils import is_cpu_device, mem_get_info
THREAD_NAME_PREFIX = ""
ERR_LOCK_FAILED = " failed to acquire lock within timeout."
@ -233,6 +236,8 @@ def thread_render(device):
global current_state, current_state_error
from easydiffusion import model_manager, runtime
from easydiffusion.backend_manager import backend
from requests import ConnectionError
try:
runtime.init(device)
@ -244,8 +249,17 @@ def thread_render(device):
}
current_state = ServerStates.LoadingModel
model_manager.load_default_models(runtime.context)
while True:
try:
if backend.ping(timeout=1):
break
time.sleep(1)
except (TimeoutError, ConnectionError):
time.sleep(1)
model_manager.load_default_models(runtime.context)
current_state = ServerStates.Online
except Exception as e:
log.error(traceback.format_exc())
@ -291,7 +305,6 @@ def thread_render(device):
task.buffer_queue.put(json.dumps(task.response))
log.error(traceback.format_exc())
finally:
gc(runtime.context)
task.lock.release()
keep_task_alive(task)
@ -329,34 +342,33 @@ def get_devices():
"active": {},
}
def get_device_info(device):
if device in ("cpu", "mps"):
def get_device_info(device_id):
if is_cpu_device(device_id):
return {"name": device_manager.get_processor_name()}
mem_free, mem_total = torch.cuda.mem_get_info(device)
device = get_device(device_id)
mem_free, mem_total = mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
return {
"name": torch.cuda.get_device_name(device),
"name": get_device_name(device),
"mem_free": mem_free,
"mem_total": mem_total,
"max_vram_usage_level": device_manager.get_max_vram_usage_level(device),
}
# list the compatible devices
cuda_count = torch.cuda.device_count()
for device in range(cuda_count):
device = f"cuda:{device}"
if not device_manager.is_device_compatible(device):
continue
torch_platform_name = get_installed_torch_platform()[0]
device_count = get_device_count()
for device_id in range(device_count):
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
devices["all"].update({device: get_device_info(device)})
devices["all"].update({device_id: get_device_info(device_id)})
if device_manager.is_mps_available():
devices["all"].update({"mps": get_device_info("mps")})
devices["all"].update({"cpu": get_device_info("cpu")})
if torch_platform_name != "cpu":
devices["all"].update({"cpu": get_device_info("cpu")})
# list the activated devices
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
@ -368,8 +380,8 @@ def get_devices():
weak_data = weak_thread_data.get(rthread)
if not weak_data or not "device" in weak_data or not "device_name" in weak_data:
continue
device = weak_data["device"]
devices["active"].update({device: get_device_info(device)})
device_id = weak_data["device"]
devices["active"].update({device_id: get_device_info(device_id)})
finally:
manager_lock.release()
@ -427,12 +439,6 @@ def start_render_thread(device):
def stop_render_thread(device):
try:
device_manager.validate_device_id(device, log_prefix="stop_render_thread")
except:
log.error(traceback.format_exc())
return False
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
raise Exception("stop_render_thread" + ERR_LOCK_FAILED)
log.info(f"Stopping Rendering Thread on device: {device}")

View File

@ -5,9 +5,7 @@ import time
from numpy import base_repr
from sdkit.filter import apply_filters
from sdkit.models import load_model
from sdkit.utils import img_to_base64_str, get_image, log, save_images
from sdkit.utils import img_to_base64_str, log, save_images, base64_str_to_img
from easydiffusion import model_manager, runtime
from easydiffusion.types import (
@ -19,6 +17,7 @@ from easydiffusion.types import (
TaskData,
GenerateImageRequest,
)
from easydiffusion.utils import filter_nsfw
from easydiffusion.utils.save_utils import format_folder_name
from .task import Task
@ -47,7 +46,9 @@ class FilterTask(Task):
# convert to multi-filter format, if necessary
if isinstance(req.filter, str):
req.filter_params = {req.filter: req.filter_params}
if req.filter not in req.filter_params:
req.filter_params = {req.filter: req.filter_params}
req.filter = [req.filter]
if not isinstance(req.image, list):
@ -57,6 +58,7 @@ class FilterTask(Task):
"Runs the image filtering task on the assigned thread"
from easydiffusion import app
from easydiffusion.backend_manager import backend
context = runtime.context
@ -66,15 +68,24 @@ class FilterTask(Task):
print_task_info(self.request, self.models_data, self.output_format, self.save_data)
if isinstance(self.request.image, list):
images = [get_image(img) for img in self.request.image]
else:
images = get_image(self.request.image)
images = filter_images(context, images, self.request.filter, self.request.filter_params)
has_nsfw_filter = "nsfw_filter" in self.request.filter
output_format = self.output_format
backend.set_options(
context,
output_format=output_format.output_format,
output_quality=output_format.output_quality,
output_lossless=output_format.output_lossless,
)
images = backend.filter_images(
context, self.request.image, self.request.filter, self.request.filter_params, input_type="base64"
)
if has_nsfw_filter:
images = filter_nsfw(images)
if self.save_data.save_to_disk_path is not None:
app_config = app.getConfig()
folder_format = app_config.get("folder_format", "$id")
@ -85,8 +96,9 @@ class FilterTask(Task):
save_dir_path = os.path.join(
self.save_data.save_to_disk_path, format_folder_name(folder_format, dummy_req, self.task_data)
)
images_pil = [base64_str_to_img(img) for img in images]
save_images(
images,
images_pil,
save_dir_path,
file_name=img_id,
output_format=output_format.output_format,
@ -94,13 +106,6 @@ class FilterTask(Task):
output_lossless=output_format.output_lossless,
)
images = [
img_to_base64_str(
img, output_format.output_format, output_format.output_quality, output_format.output_lossless
)
for img in images
]
res = FilterImageResponse(self.request, self.models_data, images=images)
res = res.json()
self.buffer_queue.put(json.dumps(res))
@ -110,46 +115,6 @@ class FilterTask(Task):
self.response = res
def filter_images(context, images, filters, filter_params={}):
filters = filters if isinstance(filters, list) else [filters]
for filter_name in filters:
params = filter_params.get(filter_name, {})
previous_state = before_filter(context, filter_name, params)
try:
images = apply_filters(context, filter_name, images, **params)
finally:
after_filter(context, filter_name, params, previous_state)
return images
def before_filter(context, filter_name, filter_params):
if filter_name == "codeformer":
from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use
default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
prev_realesrgan_path = None
upscale_faces = filter_params.get("upscale_faces", False)
if upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]:
prev_realesrgan_path = context.model_paths.get("realesrgan")
context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
load_model(context, "realesrgan")
return prev_realesrgan_path
def after_filter(context, filter_name, filter_params, previous_state):
if filter_name == "codeformer":
prev_realesrgan_path = previous_state
if prev_realesrgan_path:
context.model_paths["realesrgan"] = prev_realesrgan_path
load_model(context, "realesrgan")
def print_task_info(
req: FilterImageRequest, models_data: ModelsData, output_format: OutputFormatData, save_data: SaveToDiskData
):

View File

@ -2,26 +2,23 @@ import json
import pprint
import queue
import time
from PIL import Image
from easydiffusion import model_manager, runtime
from easydiffusion.types import GenerateImageRequest, ModelsData, OutputFormatData, SaveToDiskData
from easydiffusion.types import Image as ResponseImage
from easydiffusion.types import GenerateImageResponse, RenderTaskData, UserInitiatedStop
from easydiffusion.utils import get_printable_request, log, save_images_to_disk
from sdkit.generate import generate_images
from easydiffusion.types import GenerateImageResponse, RenderTaskData
from easydiffusion.utils import get_printable_request, log, save_images_to_disk, filter_nsfw
from sdkit.utils import (
diffusers_latent_samples_to_images,
gc,
img_to_base64_str,
base64_str_to_img,
img_to_buffer,
latent_samples_to_images,
resize_img,
get_image,
log,
)
from .task import Task
from .filter_images import filter_images
class RenderTask(Task):
@ -51,15 +48,13 @@ class RenderTask(Task):
"Runs the image generation task on the assigned thread"
from easydiffusion import task_manager, app
from easydiffusion.backend_manager import backend
context = runtime.context
config = app.getConfig()
if config.get("block_nsfw", False): # override if set on the server
self.task_data.block_nsfw = True
if "nsfw_checker" not in self.task_data.filters:
self.task_data.filters.append("nsfw_checker")
self.models_data.model_paths["nsfw_checker"] = "nsfw_checker"
def step_callback():
task_manager.keep_task_alive(self)
@ -68,7 +63,7 @@ class RenderTask(Task):
if isinstance(task_manager.current_state_error, (SystemExit, StopAsyncIteration)) or isinstance(
self.error, StopAsyncIteration
):
context.stop_processing = True
backend.stop_rendering(context)
if isinstance(task_manager.current_state_error, StopAsyncIteration):
self.error = task_manager.current_state_error
task_manager.current_state_error = None
@ -78,11 +73,7 @@ class RenderTask(Task):
model_manager.resolve_model_paths(self.models_data)
models_to_force_reload = []
if (
runtime.set_vram_optimizations(context)
or self.has_param_changed(context, "clip_skip")
or self.trt_needs_reload(context)
):
if runtime.set_vram_optimizations(context) or self.has_param_changed(context, "clip_skip"):
models_to_force_reload.append("stable-diffusion")
model_manager.reload_models_if_necessary(context, self.models_data, models_to_force_reload)
@ -99,10 +90,11 @@ class RenderTask(Task):
self.buffer_queue,
self.temp_images,
step_callback,
self,
)
def has_param_changed(self, context, param_name):
if not context.test_diffusers:
if not getattr(context, "test_diffusers", False):
return False
if "stable-diffusion" not in context.models or "params" not in context.models["stable-diffusion"]:
return True
@ -111,29 +103,6 @@ class RenderTask(Task):
new_val = self.models_data.model_params.get("stable-diffusion", {}).get(param_name, False)
return model["params"].get(param_name) != new_val
def trt_needs_reload(self, context):
if not context.test_diffusers:
return False
if "stable-diffusion" not in context.models or "params" not in context.models["stable-diffusion"]:
return True
model = context.models["stable-diffusion"]
# curr_convert_to_trt = model["params"].get("convert_to_tensorrt")
new_convert_to_trt = self.models_data.model_params.get("stable-diffusion", {}).get("convert_to_tensorrt", False)
pipe = model["default"]
is_trt_loaded = hasattr(pipe.unet, "_allocate_trt_buffers") or hasattr(
pipe.unet, "_allocate_trt_buffers_backup"
)
if new_convert_to_trt and not is_trt_loaded:
return True
curr_build_config = model["params"].get("trt_build_config")
new_build_config = self.models_data.model_params.get("stable-diffusion", {}).get("trt_build_config", {})
return new_convert_to_trt and curr_build_config != new_build_config
def make_images(
context,
@ -145,12 +114,21 @@ def make_images(
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
task,
):
context.stop_processing = False
print_task_info(req, task_data, models_data, output_format, save_data)
images, seeds = make_images_internal(
context, req, task_data, models_data, output_format, save_data, data_queue, task_temp_images, step_callback
context,
req,
task_data,
models_data,
output_format,
save_data,
data_queue,
task_temp_images,
step_callback,
task,
)
res = GenerateImageResponse(
@ -170,7 +148,9 @@ def print_task_info(
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
req_str = pprint.pformat(get_printable_request(req, task_data, models_data, output_format, save_data)).replace("[", "\[")
req_str = pprint.pformat(get_printable_request(req, task_data, models_data, output_format, save_data)).replace(
"[", "\["
)
task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
models_data = pprint.pformat(models_data.dict()).replace("[", "\[")
output_format = pprint.pformat(output_format.dict()).replace("[", "\[")
@ -178,7 +158,7 @@ def print_task_info(
log.info(f"request: {req_str}")
log.info(f"task data: {task_str}")
# log.info(f"models data: {models_data}")
log.info(f"models data: {models_data}")
log.info(f"output format: {output_format}")
log.info(f"save data: {save_data}")
@ -193,26 +173,41 @@ def make_images_internal(
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
task,
):
images, user_stopped = generate_images_internal(
from easydiffusion.backend_manager import backend
# prep the nsfw_filter
if task_data.block_nsfw:
filter_nsfw([Image.new("RGB", (1, 1))]) # hack - ensures that the model is available
images = generate_images_internal(
context,
req,
task_data,
models_data,
output_format,
data_queue,
task_temp_images,
step_callback,
task_data.stream_image_progress,
task_data.stream_image_progress_interval,
)
gc(context)
user_stopped = isinstance(task.error, StopAsyncIteration)
filters, filter_params = task_data.filters, task_data.filter_params
filtered_images = filter_images(context, images, filters, filter_params) if not user_stopped else images
if len(filters) > 0 and not user_stopped:
filtered_images = backend.filter_images(context, images, filters, filter_params, input_type="base64")
else:
filtered_images = images
if task_data.block_nsfw:
filtered_images = filter_nsfw(filtered_images)
if save_data.save_to_disk_path is not None:
save_images_to_disk(images, filtered_images, req, task_data, models_data, output_format, save_data)
images_pil = [base64_str_to_img(img) for img in images]
filtered_images_pil = [base64_str_to_img(img) for img in filtered_images]
save_images_to_disk(images_pil, filtered_images_pil, req, task_data, models_data, output_format, save_data)
seeds = [*range(req.seed, req.seed + len(images))]
if task_data.show_only_filtered_image or filtered_images is images:
@ -226,97 +221,47 @@ def generate_images_internal(
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
stream_image_progress: bool,
stream_image_progress_interval: int,
):
context.temp_images.clear()
from easydiffusion.backend_manager import backend
callback = make_step_callback(
callback = make_step_callback(context, req, task_data, data_queue, task_temp_images, step_callback)
req.width, req.height = map(lambda x: x - x % 8, (req.width, req.height)) # clamp to 8
if req.control_image and task_data.control_filter_to_apply:
req.controlnet_filter = task_data.control_filter_to_apply
if req.init_image is not None and int(req.num_inference_steps * req.prompt_strength) == 0:
req.prompt_strength = 1 / req.num_inference_steps if req.num_inference_steps > 0 else 1
if req.init_image_mask:
req.init_image_mask = get_image(req.init_image_mask)
req.init_image_mask = resize_img(req.init_image_mask.convert("RGB"), req.width, req.height, clamp_to_8=True)
backend.set_options(
context,
req,
task_data,
data_queue,
task_temp_images,
step_callback,
stream_image_progress,
stream_image_progress_interval,
output_format=output_format.output_format,
output_quality=output_format.output_quality,
output_lossless=output_format.output_lossless,
vae_tiling=task_data.enable_vae_tiling,
stream_image_progress=stream_image_progress,
stream_image_progress_interval=stream_image_progress_interval,
clip_skip=2 if task_data.clip_skip else 1,
)
try:
if req.init_image is not None and not context.test_diffusers:
req.sampler_name = "ddim"
images = backend.generate_images(context, callback=callback, output_type="base64", **req.dict())
req.width, req.height = map(lambda x: x - x % 8, (req.width, req.height)) # clamp to 8
if req.control_image and task_data.control_filter_to_apply:
req.control_image = get_image(req.control_image)
req.control_image = resize_img(req.control_image.convert("RGB"), req.width, req.height, clamp_to_8=True)
req.control_image = filter_images(context, req.control_image, task_data.control_filter_to_apply)[0]
if req.init_image is not None and int(req.num_inference_steps * req.prompt_strength) == 0:
req.prompt_strength = 1 / req.num_inference_steps if req.num_inference_steps > 0 else 1
if context.test_diffusers:
pipe = context.models["stable-diffusion"]["default"]
if hasattr(pipe.unet, "_allocate_trt_buffers_backup"):
setattr(pipe.unet, "_allocate_trt_buffers", pipe.unet._allocate_trt_buffers_backup)
delattr(pipe.unet, "_allocate_trt_buffers_backup")
if hasattr(pipe.unet, "_allocate_trt_buffers"):
convert_to_trt = models_data.model_params["stable-diffusion"].get("convert_to_tensorrt", False)
if convert_to_trt:
pipe.unet.forward = pipe.unet._trt_forward
# pipe.vae.decoder.forward = pipe.vae.decoder._trt_forward
log.info(f"Setting unet.forward to TensorRT")
else:
log.info(f"Not using TensorRT for unet.forward")
pipe.unet.forward = pipe.unet._non_trt_forward
# pipe.vae.decoder.forward = pipe.vae.decoder._non_trt_forward
setattr(pipe.unet, "_allocate_trt_buffers_backup", pipe.unet._allocate_trt_buffers)
delattr(pipe.unet, "_allocate_trt_buffers")
if task_data.enable_vae_tiling:
if hasattr(pipe, "enable_vae_tiling"):
pipe.enable_vae_tiling()
else:
if hasattr(pipe, "disable_vae_tiling"):
pipe.disable_vae_tiling()
images = generate_images(context, callback=callback, **req.dict())
user_stopped = False
except UserInitiatedStop:
images = []
user_stopped = True
if context.partial_x_samples is not None:
if context.test_diffusers:
images = diffusers_latent_samples_to_images(context, context.partial_x_samples)
else:
images = latent_samples_to_images(context, context.partial_x_samples)
finally:
if hasattr(context, "partial_x_samples") and context.partial_x_samples is not None:
if not context.test_diffusers:
del context.partial_x_samples
context.partial_x_samples = None
return images, user_stopped
return images
def construct_response(images: list, seeds: list, output_format: OutputFormatData):
return [
ResponseImage(
data=img_to_base64_str(
img,
output_format.output_format,
output_format.output_quality,
output_format.output_lossless,
),
seed=seed,
)
for img, seed in zip(images, seeds)
]
return [ResponseImage(data=img, seed=seed) for img, seed in zip(images, seeds)]
def make_step_callback(
@ -326,53 +271,44 @@ def make_step_callback(
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
stream_image_progress: bool,
stream_image_progress_interval: int,
):
from easydiffusion.backend_manager import backend
n_steps = req.num_inference_steps if req.init_image is None else int(req.num_inference_steps * req.prompt_strength)
last_callback_time = -1
def update_temp_img(x_samples, task_temp_images: list):
def update_temp_img(images, task_temp_images: list):
partial_images = []
if context.test_diffusers:
images = diffusers_latent_samples_to_images(context, x_samples)
else:
images = latent_samples_to_images(context, x_samples)
if images is None:
return []
if task_data.block_nsfw:
images = filter_images(context, images, "nsfw_checker")
images = filter_nsfw(images, print_log=False)
for i, img in enumerate(images):
img = img.convert("RGB")
img = resize_img(img, req.width, req.height)
buf = img_to_buffer(img, output_format="JPEG")
context.temp_images[f"{task_data.request_id}/{i}"] = buf
task_temp_images[i] = buf
partial_images.append({"path": f"/image/tmp/{task_data.request_id}/{i}"})
del images
return partial_images
def on_image_step(x_samples, i, *args):
def on_image_step(images, i, *args):
nonlocal last_callback_time
if context.test_diffusers:
context.partial_x_samples = (x_samples, args[0])
else:
context.partial_x_samples = x_samples
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "step_time": step_time, "total_steps": n_steps}
if stream_image_progress and stream_image_progress_interval > 0 and i % stream_image_progress_interval == 0:
progress["output"] = update_temp_img(context.partial_x_samples, task_temp_images)
if images is not None:
progress["output"] = update_temp_img(images, task_temp_images)
data_queue.put(json.dumps(progress))
step_callback()
if context.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return on_image_step
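Editor's note: each message this callback pushes to data_queue is a JSON-encoded dict of roughly this shape (values illustrative); the "output" key appears only on steps where image progress is streamed:

progress = {
    "step": 12,         # current step index
    "step_time": 0.41,  # seconds since the previous step, -1 on the first
    "total_steps": 25,
    # present only on streamed steps:
    "output": [{"path": "/image/tmp/<request_id>/0"}],
}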

View File

@ -14,16 +14,19 @@ class GenerateImageRequest(BaseModel):
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
distilled_guidance_scale: float = 3.5
init_image: Any = None
init_image_mask: Any = None
control_image: Any = None
control_alpha: Union[float, List[float]] = None
controlnet_filter: str = None
prompt_strength: float = 0.8
preserve_init_image_color_profile = False
strict_mask_border = False
preserve_init_image_color_profile: bool = False
strict_mask_border: bool = False
sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
scheduler_name: str = None
hypernetwork_strength: float = 0
lora_alpha: Union[float, List[float]] = 0
tiling: str = None # None, "x", "y", "xy"
@ -77,6 +80,7 @@ class RenderTaskData(TaskData):
latent_upscaler_steps: int = 10
use_stable_diffusion_model: Union[str, List[str]] = "sd-v1-4"
use_vae_model: Union[str, List[str]] = None
use_text_encoder_model: Union[str, List[str]] = None
use_hypernetwork_model: Union[str, List[str]] = None
use_lora_model: Union[str, List[str]] = None
use_controlnet_model: Union[str, List[str]] = None
@ -100,7 +104,7 @@ class MergeRequest(BaseModel):
model1: str = None
ratio: float = None
out_path: str = "mix"
use_fp16 = True
use_fp16: bool = True
class Image:
@ -208,27 +212,25 @@ def convert_legacy_render_req_to_new(old_req: dict):
# move the model info
model_paths["stable-diffusion"] = old_req.get("use_stable_diffusion_model")
model_paths["vae"] = old_req.get("use_vae_model")
model_paths["text-encoder"] = old_req.get("use_text_encoder_model")
model_paths["hypernetwork"] = old_req.get("use_hypernetwork_model")
model_paths["lora"] = old_req.get("use_lora_model")
model_paths["controlnet"] = old_req.get("use_controlnet_model")
model_paths["embeddings"] = old_req.get("use_embeddings_model")
model_paths["gfpgan"] = old_req.get("use_face_correction", "")
model_paths["gfpgan"] = model_paths["gfpgan"] if "gfpgan" in model_paths["gfpgan"].lower() else None
## ensure that the model name is in the model path
for model_name in ("gfpgan", "codeformer"):
model_paths[model_name] = old_req.get("use_face_correction", "")
model_paths[model_name] = model_paths[model_name] if model_name in model_paths[model_name].lower() else None
model_paths["codeformer"] = old_req.get("use_face_correction", "")
model_paths["codeformer"] = model_paths["codeformer"] if "codeformer" in model_paths["codeformer"].lower() else None
for model_name in ("realesrgan", "latent_upscaler", "esrgan_4x", "lanczos", "nearest", "scunet", "swinir"):
model_paths[model_name] = old_req.get("use_upscale", "")
model_paths[model_name] = model_paths[model_name] if model_name in model_paths[model_name].lower() else None
model_paths["realesrgan"] = old_req.get("use_upscale", "")
model_paths["realesrgan"] = model_paths["realesrgan"] if "realesrgan" in model_paths["realesrgan"].lower() else None
model_paths["latent_upscaler"] = old_req.get("use_upscale", "")
model_paths["latent_upscaler"] = (
model_paths["latent_upscaler"] if "latent_upscaler" in model_paths["latent_upscaler"].lower() else None
)
if "control_filter_to_apply" in old_req:
filter_model = old_req["control_filter_to_apply"]
model_paths[filter_model] = filter_model
old_req["control_filter_to_apply"] = convert_legacy_controlnet_filter_name(old_req["control_filter_to_apply"])
if old_req.get("block_nsfw"):
model_paths["nsfw_checker"] = "nsfw_checker"
@ -244,8 +246,12 @@ def convert_legacy_render_req_to_new(old_req: dict):
}
# move the filter params
if model_paths["realesrgan"]:
filter_params["realesrgan"] = {"scale": int(old_req.get("upscale_amount", 4))}
for model_name in ("realesrgan", "esrgan_4x", "lanczos", "nearest", "scunet", "swinir"):
if model_paths[model_name]:
filter_params[model_name] = {
"upscaler": model_paths[model_name],
"scale": int(old_req.get("upscale_amount", 4)),
}
if model_paths["latent_upscaler"]:
filter_params["latent_upscaler"] = {
"prompt": old_req["prompt"],
@ -264,14 +270,31 @@ def convert_legacy_render_req_to_new(old_req: dict):
if old_req.get("block_nsfw"):
filters.append("nsfw_checker")
if model_paths["codeformer"]:
filters.append("codeformer")
elif model_paths["gfpgan"]:
filters.append("gfpgan")
for model_name in ("gfpgan", "codeformer"):
if model_paths[model_name]:
filters.append(model_name)
break
if model_paths["realesrgan"]:
filters.append("realesrgan")
elif model_paths["latent_upscaler"]:
filters.append("latent_upscaler")
for model_name in ("realesrgan", "latent_upscaler", "esrgan_4x", "lanczos", "nearest", "scunet", "swinir"):
if model_paths[model_name]:
filters.append(model_name)
break
return new_req
def convert_legacy_controlnet_filter_name(filter):
from easydiffusion.backend_manager import backend
if filter is None:
return None
controlnet_filter_names = backend.list_controlnet_filters()
def apply(f):
return f"controlnet_{f}" if f in controlnet_filter_names else f
if isinstance(filter, list):
return [apply(f) for f in filter]
return apply(filter)
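Editor's note: illustrative calls, assuming the active backend reports "canny" among its ControlNet filters; non-ControlNet filter names pass through unchanged:

convert_legacy_controlnet_filter_name("canny")              # -> "controlnet_canny"
convert_legacy_controlnet_filter_name(["canny", "gfpgan"])  # -> ["controlnet_canny", "gfpgan"]
convert_legacy_controlnet_filter_name(None)                 # -> None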

View File

@ -7,6 +7,8 @@ from .save_utils import (
save_images_to_disk,
get_printable_request,
)
from .nsfw_checker import filter_nsfw
def sha256sum(filename):
sha256 = hashlib.sha256()
@ -18,4 +20,3 @@ def sha256sum(filename):
sha256.update(data)
return sha256.hexdigest()

View File

@ -0,0 +1,80 @@
# possibly move this to sdkit in the future
import os
# mirror of https://huggingface.co/AdamCodd/vit-base-nsfw-detector/blob/main/onnx/model_quantized.onnx
NSFW_MODEL_URL = (
"https://github.com/easydiffusion/sdkit-test-data/releases/download/assets/vit-base-nsfw-detector-quantized.onnx"
)
MODEL_HASH_QUICK = "220123559305b1b07b7a0894c3471e34dccd090d71cdf337dd8012f9e40d6c28"
nsfw_check_model = None
def filter_nsfw(images, blur_radius: float = 75, print_log=True):
global nsfw_check_model
from easydiffusion.model_manager import get_model_dirs
from sdkit.utils import base64_str_to_img, img_to_base64_str, download_file, log, hash_file_quick
import onnxruntime as ort
from PIL import ImageFilter
import numpy as np
if nsfw_check_model is None:
model_dir = get_model_dirs("nsfw-checker")[0]
model_path = os.path.join(model_dir, "vit-base-nsfw-detector-quantized.onnx")
os.makedirs(model_dir, exist_ok=True)
if not os.path.exists(model_path) or hash_file_quick(model_path) != MODEL_HASH_QUICK:
download_file(NSFW_MODEL_URL, model_path)
nsfw_check_model = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
# Preprocess the input image
def preprocess_image(img):
img = img.convert("RGB")
# config based on https://huggingface.co/AdamCodd/vit-base-nsfw-detector/blob/main/onnx/preprocessor_config.json
# Resize the image
img = img.resize((384, 384))
# Normalize the image
img = np.array(img) / 255.0 # Scale pixel values to [0, 1]
mean = np.array([0.5, 0.5, 0.5])
std = np.array([0.5, 0.5, 0.5])
img = (img - mean) / std
# Transpose to match input shape (batch_size, channels, height, width)
img = np.transpose(img, (2, 0, 1)).astype(np.float32)
# Add batch dimension
img = np.expand_dims(img, axis=0)
return img
# Run inference
input_name = nsfw_check_model.get_inputs()[0].name
output_name = nsfw_check_model.get_outputs()[0].name
if print_log:
log.info("Running NSFW checker (onnx)")
results = []
for img in images:
is_base64 = isinstance(img, str)
input_img = base64_str_to_img(img) if is_base64 else img
result = nsfw_check_model.run([output_name], {input_name: preprocess_image(input_img)})
is_nsfw = [np.argmax(arr) == 1 for arr in result][0]
if is_nsfw:
output_img = input_img.filter(ImageFilter.GaussianBlur(blur_radius))
output_img = img_to_base64_str(output_img) if is_base64 else output_img
else:
output_img = img
results.append(output_img)
return results
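Editor's note: a usage sketch. The function accepts either PIL images or base64 strings, returns outputs of the same type as each input, and blurs only the images flagged as NSFW (the input file name below is hypothetical):

from PIL import Image

images = [Image.open("result.png")]  # hypothetical input image
safe_images = filter_nsfw(images, blur_radius=75)
safe_images[0].save("result_safe.png")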

View File

@ -31,12 +31,15 @@ TASK_TEXT_MAPPING = {
"clip_skip": "Clip Skip",
"use_controlnet_model": "ControlNet model",
"control_filter_to_apply": "ControlNet Filter",
"control_alpha": "ControlNet Strength",
"use_vae_model": "VAE model",
"sampler_name": "Sampler",
"scheduler_name": "Scheduler",
"width": "Width",
"height": "Height",
"num_inference_steps": "Steps",
"guidance_scale": "Guidance Scale",
"distilled_guidance_scale": "Distilled Guidance",
"prompt_strength": "Prompt Strength",
"use_lora_model": "LoRA model",
"lora_alpha": "LoRA Strength",
@ -246,7 +249,7 @@ def get_printable_request(
task_data_metadata.update(save_data.dict())
app_config = app.getConfig()
using_diffusers = app_config.get("use_v3_engine", True)
using_diffusers = app_config.get("backend", "ed_diffusers") in ("ed_diffusers", "webui")
# Save the metadata in the order defined in TASK_TEXT_MAPPING
metadata = {}

View File

@ -35,7 +35,13 @@
<h1>
<img id="logo_img" src="/media/images/icon-512x512.png" >
Easy Diffusion
<small><span id="version">v3.0.7</span> <span id="updateBranchLabel"></span></small>
<small>
<span id="version">
<span class="gated-feature" data-feature-keys="backend_ed_classic backend_ed_diffusers">v3.0.13</span>
<span class="gated-feature" data-feature-keys="backend_webui">v3.5.9</span>
</span> <span id="updateBranchLabel"></span>
<div id="engine-logo" class="gated-feature" data-feature-keys="backend_webui">(Powered by <a id="backend-url" href="https://github.com/lllyasviel/stable-diffusion-webui-forge" target="_blank">Stable Diffusion WebUI Forge</a>)</div>
</small>
</h1>
</div>
<div id="server-status">
@ -73,7 +79,7 @@
</div>
<div id="prompt-toolbar-right" class="toolbar-right">
<button id="image-modifier-dropdown" class="tertiaryButton smallButton">+ Image Modifiers</button>
<button id="embeddings-button" class="tertiaryButton smallButton displayNone">+ Embedding</button>
<button id="embeddings-button" class="tertiaryButton smallButton gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">+ Embedding</button>
</div>
</div>
<textarea id="prompt" class="col-free">a photograph of an astronaut riding a horse</textarea>
@ -83,7 +89,7 @@
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top">Click to learn more about Negative Prompts</span></i></a>
<small>(optional)</small>
</label>
<button id="negative-embeddings-button" class="tertiaryButton smallButton displayNone">+ Negative Embedding</button>
<button id="negative-embeddings-button" class="tertiaryButton smallButton gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">+ Negative Embedding</button>
<div class="collapsible-content">
<textarea id="negative_prompt" name="negative_prompt" placeholder="list the things to remove from the image (e.g. fog, green)"></textarea>
</div>
@ -174,14 +180,14 @@
<!-- <label><small>Takes up to 20 mins the first time</small></label> -->
</td>
</tr>
<tr class="pl-5 displayNone" id="clip_skip_config">
<tr class="pl-5 gated-feature" id="clip_skip_config" data-feature-keys="backend_ed_diffusers backend_webui">
<td><label for="clip_skip">Clip Skip:</label></td>
<td class="diffusers-restart-needed">
<input id="clip_skip" name="clip_skip" type="checkbox">
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Clip-Skip" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Clip Skip</span></i></a>
</td>
</tr>
<tr id="controlnet_model_container" class="pl-5">
<tr id="controlnet_model_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">
<td><label for="controlnet_model">ControlNet Image:</label></td>
<td class="diffusers-restart-needed">
<div id="control_image_wrapper" class="preview_image_wrapper">
@ -201,40 +207,94 @@
<option value="openpose_faceonly">OpenPose face-only</option>
<option value="openpose_hand">OpenPose hand</option>
<option value="openpose_full">OpenPose full</option>
<option value="animal_openpose" class="gated-feature" data-feature-keys="backend_webui">animal_openpose</option>
<option value="densepose_parula (black bg & blue torso)" class="gated-feature" data-feature-keys="backend_webui">densepose_parula (black bg & blue torso)</option>
<option value="densepose (pruple bg & purple torso)" class="gated-feature" data-feature-keys="backend_webui">densepose (pruple bg & purple torso)</option>
<option value="dw_openpose_full" class="gated-feature" data-feature-keys="backend_webui">dw_openpose_full</option>
<option value="mediapipe_face" class="gated-feature" data-feature-keys="backend_webui">mediapipe_face</option>
<option value="instant_id_face_keypoints" class="gated-feature" data-feature-keys="backend_webui">instant_id_face_keypoints</option>
<option value="InsightFace+CLIP-H (IPAdapter)" class="gated-feature" data-feature-keys="backend_webui">InsightFace+CLIP-H (IPAdapter)</option>
<option value="InsightFace (InstantID)" class="gated-feature" data-feature-keys="backend_webui">InsightFace (InstantID)</option>
</optgroup>
<optgroup label="Outline">
<option value="canny">Canny (*)</option>
<option value="mlsd">Straight lines</option>
<option value="scribble_hed">Scribble hed (*)</option>
<option value="scribble_hedsafe">Scribble hedsafe</option>
<option value="scribble_hedsafe" class="gated-feature" data-feature-keys="backend_diffusers">Scribble hedsafe</option>
<option value="scribble_pidinet">Scribble pidinet</option>
<option value="scribble_pidsafe">Scribble pidsafe</option>
<option value="scribble_pidsafe" class="gated-feature" data-feature-keys="backend_diffusers">Scribble pidsafe</option>
<option value="scribble_xdog" class="gated-feature" data-feature-keys="backend_webui">scribble_xdog</option>
<option value="softedge_hed">Softedge hed</option>
<option value="softedge_hedsafe">Softedge hedsafe</option>
<option value="softedge_pidinet">Softedge pidinet</option>
<option value="softedge_pidsafe">Softedge pidsafe</option>
<option value="softedge_teed" class="gated-feature" data-feature-keys="backend_webui">softedge_teed</option>
</optgroup>
<optgroup label="Depth">
<option value="normal_bae">Normal bae (*)</option>
<option value="depth_midas">Depth midas</option>
<option value="normal_midas" class="gated-feature" data-feature-keys="backend_webui">normal_midas</option>
<option value="depth_zoe">Depth zoe</option>
<option value="depth_leres">Depth leres</option>
<option value="depth_leres++">Depth leres++</option>
<option value="depth_anything_v2" class="gated-feature" data-feature-keys="backend_webui">depth_anything_v2</option>
<option value="depth_anything" class="gated-feature" data-feature-keys="backend_webui">depth_anything</option>
<option value="depth_hand_refiner" class="gated-feature" data-feature-keys="backend_webui">depth_hand_refiner</option>
<option value="depth_marigold" class="gated-feature" data-feature-keys="backend_webui">depth_marigold</option>
</optgroup>
<optgroup label="Line art">
<option value="lineart_coarse">Lineart coarse</option>
<option value="lineart_realistic">Lineart realistic</option>
<option value="lineart_anime">Lineart anime</option>
<option value="lineart_standard (from white bg & black line)" class="gated-feature" data-feature-keys="backend_webui">lineart_standard (from white bg & black line)</option>
<option value="lineart_anime_denoise" class="gated-feature" data-feature-keys="backend_webui">lineart_anime_denoise</option>
</optgroup>
<optgroup label="Reference" class="gated-feature" data-feature-keys="backend_webui">
<option value="reference_adain">reference_adain</option>
<option value="reference_only">reference_only</option>
<option value="reference_adain+attn">reference_adain+attn</option>
</optgroup>
<optgroup label="Tile" class="gated-feature" data-feature-keys="backend_webui">
<option value="tile_colorfix">tile_colorfix</option>
<option value="tile_resample">tile_resample</option>
<option value="tile_colorfix+sharp">tile_colorfix+sharp</option>
</optgroup>
<optgroup label="CLIP (IPAdapter)" class="gated-feature" data-feature-keys="backend_webui">
<option value="CLIP-ViT-H (IPAdapter)">CLIP-ViT-H (IPAdapter)</option>
<option value="CLIP-G (Revision)">CLIP-G (Revision)</option>
<option value="CLIP-G (Revision ignore prompt)">CLIP-G (Revision ignore prompt)</option>
<option value="CLIP-ViT-bigG (IPAdapter)">CLIP-ViT-bigG (IPAdapter)</option>
<option value="InsightFace+CLIP-H (IPAdapter)">InsightFace+CLIP-H (IPAdapter)</option>
</optgroup>
<optgroup label="Inpaint" class="gated-feature" data-feature-keys="backend_webui">
<option value="inpaint_only">inpaint_only</option>
<option value="inpaint_only+lama">inpaint_only+lama</option>
<option value="inpaint_global_harmonious">inpaint_global_harmonious</option>
</optgroup>
<optgroup label="Segment" class="gated-feature" data-feature-keys="backend_webui">
<option value="seg_ufade20k">seg_ufade20k</option>
<option value="seg_ofade20k">seg_ofade20k</option>
<option value="seg_anime_face">seg_anime_face</option>
<option value="seg_ofcoco">seg_ofcoco</option>
</optgroup>
<optgroup label="Misc">
<option value="shuffle">Shuffle</option>
<option value="segment">Segment</option>
<option value="segment" class="gated-feature" data-feature-keys="backend_diffusers">Segment</option>
<option value="invert (from white bg & black line)" class="gated-feature" data-feature-keys="backend_webui">invert (from white bg & black line)</option>
<option value="threshold" class="gated-feature" data-feature-keys="backend_webui">threshold</option>
<option value="t2ia_sketch_pidi" class="gated-feature" data-feature-keys="backend_webui">t2ia_sketch_pidi</option>
<option value="t2ia_color_grid" class="gated-feature" data-feature-keys="backend_webui">t2ia_color_grid</option>
<option value="recolor_intensity" class="gated-feature" data-feature-keys="backend_webui">recolor_intensity</option>
<option value="recolor_luminance" class="gated-feature" data-feature-keys="backend_webui">recolor_luminance</option>
<option value="blur_gaussian" class="gated-feature" data-feature-keys="backend_webui">blur_gaussian</option>
</optgroup>
</select>
<br/>
<label for="controlnet_model"><small>Model:</small></label> <input id="controlnet_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<!-- <br/>
<label><small>Will download the necessary models, the first time.</small></label> -->
<br/>
<label><small>Will download the necessary models the first time.</small></label>
<label for="controlnet_alpha_slider"><small>Strength:</small></label> <input id="controlnet_alpha_slider" name="controlnet_alpha_slider" class="editor-slider" value="10" type="range" min="0" max="10"> <input id="controlnet_alpha" name="controlnet_alpha" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal">
</div>
</td>
</tr>
@ -242,32 +302,73 @@
<input id="vae_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<a href="https://github.com/easydiffusion/easydiffusion/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about VAEs</span></i></a>
</td></tr>
<tr id="text_encoder_model_container" class="pl-5 gated-feature" data-feature-keys="backend_webui">
<td>
<label for="text_encoder_model">Text Encoder:</label>
</td>
<td>
<div id="text_encoder_model" data-path=""></div>
</td>
</tr>
<tr id="samplerSelection" class="pl-5"><td><label for="sampler_name">Sampler:</label></td><td>
<select id="sampler_name" name="sampler_name">
<option value="plms">PLMS</option>
<option value="ddim">DDIM</option>
<option value="ddim_cfgpp" class="gated-feature" data-feature-keys="backend_webui">DDIM CFG++</option>
<option value="heun">Heun</option>
<option value="euler">Euler</option>
<option value="euler_a" selected>Euler Ancestral</option>
<option value="dpm2">DPM2</option>
<option value="dpm2_a">DPM2 Ancestral</option>
<option value="dpm_fast" class="gated-feature" data-feature-keys="backend_webui">DPM Fast</option>
<option value="dpm_adaptive" class="gated-feature" data-feature-keys="backend_ed_classic backend_webui">DPM Adaptive</option>
<option value="lms">LMS</option>
<option value="dpm_solver_stability">DPM Solver (Stability AI)</option>
<option value="dpm_solver_stability" class="gated-feature" data-feature-keys="backend_ed_classic backend_ed_diffusers">DPM Solver (Stability AI)</option>
<option value="dpmpp_2s_a">DPM++ 2s Ancestral (Karras)</option>
<option value="dpmpp_2m">DPM++ 2m (Karras)</option>
<option value="dpmpp_2m_sde" class="diffusers-only">DPM++ 2m SDE (Karras)</option>
<option value="dpmpp_2m_sde" class="gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">DPM++ 2m SDE</option>
<option value="dpmpp_2m_sde_heun" class="gated-feature" data-feature-keys="backend_webui">DPM++ 2m SDE Heun</option>
<option value="dpmpp_3m_sde" class="gated-feature" data-feature-keys="backend_webui">DPM++ 3M SDE</option>
<option value="dpmpp_sde">DPM++ SDE (Karras)</option>
<option value="dpm_adaptive" class="k_diffusion-only">DPM Adaptive (Karras)</option>
<option value="ddpm" class="diffusers-only">DDPM</option>
<option value="deis" class="diffusers-only">DEIS</option>
<option value="unipc_snr" class="k_diffusion-only">UniPC SNR</option>
<option value="unipc_tu">UniPC TU</option>
<option value="unipc_snr_2" class="k_diffusion-only">UniPC SNR 2</option>
<option value="unipc_tu_2" class="k_diffusion-only">UniPC TU 2</option>
<option value="unipc_tq" class="k_diffusion-only">UniPC TQ</option>
<option value="restart" class="gated-feature" data-feature-keys="backend_webui">Restart</option>
<option value="heun_pp2" class="gated-feature" data-feature-keys="backend_webui">Heun PP2</option>
<option value="ipndm" class="gated-feature" data-feature-keys="backend_webui">IPNDM</option>
<option value="ipndm_v" class="gated-feature" data-feature-keys="backend_webui">IPNDM_V</option>
<option value="ddpm" class="gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">DDPM</option>
<option value="deis" class="gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">DEIS</option>
<option value="lcm" class="gated-feature" data-feature-keys="backend_webui">LCM</option>
<option value="forge_flux_realistic" class="gated-feature" data-feature-keys="backend_webui">[Forge] Flux Realistic</option>
<option value="forge_flux_realistic_slow" class="gated-feature" data-feature-keys="backend_webui">[Forge] Flux Realistic (Slow)</option>
<option value="unipc_snr" class="gated-feature" data-feature-keys="backend_ed_classic">UniPC SNR</option>
<option value="unipc_tu" class="gated-feature" data-feature-keys="backend_ed_classic backend_ed_diffusers">UniPC TU</option>
<option value="unipc_snr_2" class="gated-feature" data-feature-keys="backend_ed_classic">UniPC SNR 2</option>
<option value="unipc_tu_2" class="gated-feature" data-feature-keys="backend_ed_classic">UniPC TU 2</option>
<option value="unipc_tq" class="gated-feature" data-feature-keys="backend_ed_classic">UniPC TQ</option>
</select>
<a href="https://github.com/easydiffusion/easydiffusion/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
</td></tr>
<tr class="pl-5 warning-label displayNone" id="fluxSamplerWarning"><td>Tip:</td><td>This sampler does not work well with Flux!</td></tr>
<tr id="schedulerSelection" class="pl-5 gated-feature" data-feature-keys="backend_webui"><td><label for="scheduler_name">Scheduler:</label></td><td>
<select id="scheduler_name" name="scheduler_name">
<option value="automatic">Automatic</option>
<option value="uniform">Uniform</option>
<option value="karras">Karras</option>
<option value="exponential">Exponential</option>
<option value="polyexponential">Polyexponential</option>
<option value="sgm_uniform">SGM Uniform</option>
<option value="kl_optimal">KL Optimal</option>
<option value="simple" selected>Simple</option>
<option value="normal">Normal</option>
<option value="ddim">DDIM</option>
<option value="beta">Beta</option>
<option value="turbo">Turbo</option>
<option value="align_your_steps">Align Your Steps</option>
<option value="align_your_steps_GITS">Align Your Steps GITS</option>
<option value="align_your_steps_11">Align Your Steps 11</option>
<option value="align_your_steps_32">Align Your Steps 32</option>
</select>
</td></tr>
<tr class="pl-5 warning-label displayNone" id="fluxSchedulerWarning"><td>Tip:</td><td>This scheduler does not work well with Flux!</td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td id="image-size-options">
<select id="width" name="width" value="512">
<option value="128">128</option>
@ -342,12 +443,15 @@
</div>
</div>
</div>
<div id="small_image_warning" class="displayNone">Small image sizes can cause bad image quality</div>
<div id="small_image_warning" class="displayNone warning-label">Small image sizes can cause bad image quality</div>
</td></tr>
<tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" type="number" min="1" step="1" style="width: 42pt" value="25" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"></td></tr>
<tr class="pl-5 warning-label displayNone" id="fluxSchedulerStepsWarning"><td>Tip:</td><td>This scheduler needs 15 steps or more</td></tr>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="11" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
<tr class="pl-5 displayNone warning-label" id="guidanceWarning"><td></td><td id="guidanceWarningText"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"><br/></td></tr>
<tr id="lora_model_container" class="pl-5">
<tr id="distilled_guidance_scale_container" class="pl-5 gated-feature" data-feature-keys="backend_webui"><td><label for="distilled_guidance_scale_slider">Distilled Guidance:</label></td><td> <input id="distilled_guidance_scale_slider" name="distilled_guidance_scale_slider" class="editor-slider" value="35" type="range" min="11" max="500"> <input id="distilled_guidance_scale" name="distilled_guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
<tr id="lora_model_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">
<td>
<label for="lora_model">LoRA:</label>
</td>
@ -355,14 +459,14 @@
<div id="lora_model" data-path=""></div>
</td>
</tr>
<tr id="hypernetwork_model_container" class="pl-5"><td><label for="hypernetwork_model">Hypernetwork:</label></td><td>
<tr id="hypernetwork_model_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_classic"><td><label for="hypernetwork_model">Hypernetwork:</label></td><td>
<input id="hypernetwork_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
</td></tr>
<tr id="hypernetwork_strength_container" class="pl-5">
<tr id="hypernetwork_strength_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_classic">
<td><label for="hypernetwork_strength_slider">Hypernetwork Strength:</label></td>
<td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"><br/></td>
</tr>
<tr id="tiling_container" class="pl-5">
<tr id="tiling_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers">
<td><label for="tiling">Seamless Tiling:</label></td>
<td class="diffusers-restart-needed">
<select id="tiling" name="tiling">
@ -387,7 +491,7 @@
<tr class="pl-5" id="output_quality_row"><td><label for="output_quality">Image Quality:</label></td><td>
<input id="output_quality_slider" name="output_quality" class="editor-slider" value="75" type="range" min="10" max="95"> <input id="output_quality" name="output_quality" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="numeric">
</td></tr>
<tr class="pl-5">
<tr class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers">
<td><label for="tiling">Enable VAE Tiling:</label></td>
<td class="diffusers-restart-needed">
<input id="enable_vae_tiling" name="enable_vae_tiling" type="checkbox" checked>
@ -403,7 +507,7 @@
<input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes</label> <div style="display:inline-block;"><input id="gfpgan_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" /></div>
<table id="codeformer_settings" class="displayNone sub-settings">
<tr class="pl-5"><td><label for="codeformer_fidelity_slider">Strength:</label></td><td><input id="codeformer_fidelity_slider" name="codeformer_fidelity_slider" class="editor-slider" value="5" type="range" min="0" max="10"> <input id="codeformer_fidelity" name="codeformer_fidelity" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
<tr class="pl-5"><td><label for="codeformer_upscale_faces">Upscale Faces:</label></td><td><input id="codeformer_upscale_faces" name="codeformer_upscale_faces" type="checkbox" checked> <label><small>(improves the resolution of faces)</small></label></td></tr>
<tr class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers"><td><label for="codeformer_upscale_faces">Upscale Faces:</label></td><td><input id="codeformer_upscale_faces" name="codeformer_upscale_faces" type="checkbox" checked> <label><small>(improves the resolution of faces)</small></label></td></tr>
</table>
</li>
<li class="pl-5">
@ -416,7 +520,13 @@
<select id="upscale_model" name="upscale_model">
<option value="RealESRGAN_x4plus" selected>RealESRGAN_x4plus</option>
<option value="RealESRGAN_x4plus_anime_6B">RealESRGAN_x4plus_anime_6B</option>
<option value="latent_upscaler">Latent Upscaler 2x</option>
<option value="ESRGAN_4x" class="pl-5 gated-feature" data-feature-keys="backend_webui">ESRGAN_4x</option>
<option value="Lanczos" class="pl-5 gated-feature" data-feature-keys="backend_webui">Lanczos</option>
<option value="Nearest" class="pl-5 gated-feature" data-feature-keys="backend_webui">Nearest</option>
<option value="ScuNET" class="pl-5 gated-feature" data-feature-keys="backend_webui">ScuNET GAN</option>
<option value="ScuNET PSNR" class="pl-5 gated-feature" data-feature-keys="backend_webui">ScuNET PSNR</option>
<option value="SwinIR_4x" class="pl-5 gated-feature" data-feature-keys="backend_webui">SwinIR 4x</option>
<option value="latent_upscaler" class="pl-5 gated-feature" data-feature-keys="backend_ed_classic backend_ed_diffusers">Latent Upscaler 2x</option>
</select>
<table id="latent_upscaler_settings" class="displayNone sub-settings">
<tr class="pl-5"><td><label for="latent_upscaler_steps_slider">Upscaling Steps:</label></td><td><input id="latent_upscaler_steps_slider" name="latent_upscaler_steps_slider" class="editor-slider" value="10" type="range" min="1" max="50"> <input id="latent_upscaler_steps" name="latent_upscaler_steps" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"></td></tr>
@ -823,7 +933,8 @@
<p>The license of this software forbids you from sharing any content that violates any laws, produces harm to a person, disseminates any personal information intended for harm, <br/>spreads misinformation, or targets vulnerable groups. For the full list of restrictions, please read <a href="https://github.com/easydiffusion/easydiffusion/blob/main/LICENSE" target="_blank">the license</a>.</p>
<p>By using this software, you consent to the terms and conditions of the license.</p>
</div>
<input id="test_diffusers" type="checkbox" style="display: none" checked />
<input id="test_diffusers" type="checkbox" style="display: none" checked /> <!-- for backwards compatibility -->
<input id="use_v3_engine" type="checkbox" style="display: none" checked /> <!-- for backwards compatibility -->
</div>
</div>
</body>

View File

@ -9,6 +9,3 @@ server.init()
model_manager.init()
app.init_render_threads()
bucket_manager.init()
# start the browser ui
app.open_browser()

View File

@ -79,6 +79,7 @@
}
.parameters-table .fa-fire,
.parameters-table .fa-bolt {
.parameters-table .fa-bolt,
.parameters-table .fa-robot {
color: #F7630C;
}

View File

@ -36,6 +36,15 @@ code {
transform: translateY(4px);
cursor: pointer;
}
#engine-logo {
font-size: 8pt;
padding-left: 10pt;
color: var(--small-label-color);
}
#engine-logo a {
text-decoration: none;
/* color: var(--small-label-color); */
}
#prompt {
width: 100%;
height: 65pt;
@ -541,7 +550,7 @@ div.img-preview img {
position: relative;
background: var(--background-color4);
display: flex;
padding: 12px 0 0;
padding: 6px 0 0;
}
.tab .icon {
padding-right: 4pt;
@ -657,6 +666,15 @@ div.img-preview img {
display: block;
}
.gated-feature {
display: none;
}
.warning-label {
font-size: smaller;
color: var(--status-orange);
}
.display-settings {
float: right;
position: relative;
@ -1459,11 +1477,6 @@ div.top-right {
margin-top: 6px;
}
#small_image_warning {
font-size: smaller;
color: var(--status-orange);
}
button#save-system-settings-btn {
padding: 4pt 8pt;
}

View File

@ -16,10 +16,12 @@ const SETTINGS_IDS_LIST = [
"clip_skip",
"vae_model",
"sampler_name",
"scheduler_name",
"width",
"height",
"num_inference_steps",
"guidance_scale",
"distilled_guidance_scale",
"prompt_strength",
"tiling",
"output_format",
@ -29,6 +31,8 @@ const SETTINGS_IDS_LIST = [
"stream_image_progress",
"use_face_correction",
"gfpgan_model",
"codeformer_fidelity",
"codeformer_upscale_faces",
"use_upscale",
"upscale_amount",
"latent_upscaler_steps",
@ -56,7 +60,9 @@ const SETTINGS_IDS_LIST = [
"extract_lora_from_prompt",
"embedding-card-size-selector",
"lora_model",
"text_encoder_model",
"enable_vae_tiling",
"controlnet_alpha",
]
const IGNORE_BY_DEFAULT = ["prompt"]
@ -177,23 +183,6 @@ function loadSettings() {
}
})
CURRENTLY_LOADING_SETTINGS = false
} else if (localStorage.length < 2) {
// localStorage is too short for OldSettings
// So this is likely the first time Easy Diffusion is running.
// Initialize vram_usage_level based on the available VRAM
function initGPUProfile(event) {
if (
"detail" in event &&
"active" in event.detail &&
"cuda:0" in event.detail.active &&
event.detail.active["cuda:0"].mem_total < 4.5
) {
vramUsageLevelField.value = "low"
vramUsageLevelField.dispatchEvent(new Event("change"))
}
document.removeEventListener("system_info_update", initGPUProfile)
}
document.addEventListener("system_info_update", initGPUProfile)
} else {
CURRENTLY_LOADING_SETTINGS = true
tryLoadOldSettings()

View File

@ -131,6 +131,15 @@ const TASK_MAPPING = {
readUI: () => parseFloat(guidanceScaleField.value),
parse: (val) => parseFloat(val),
},
distilled_guidance_scale: {
name: "Distilled Guidance",
setUI: (distilled_guidance_scale) => {
distilledGuidanceScaleField.value = distilled_guidance_scale
updateDistilledGuidanceScaleSlider()
},
readUI: () => parseFloat(distilledGuidanceScaleField.value),
parse: (val) => parseFloat(val),
},
prompt_strength: {
name: "Prompt Strength",
setUI: (prompt_strength) => {
@ -242,6 +251,14 @@ const TASK_MAPPING = {
readUI: () => samplerField.value,
parse: (val) => val,
},
scheduler_name: {
name: "Scheduler",
setUI: (scheduler_name) => {
schedulerField.value = scheduler_name
},
readUI: () => schedulerField.value,
parse: (val) => val,
},
use_stable_diffusion_model: {
name: "Stable Diffusion model",
setUI: (use_stable_diffusion_model) => {
@ -309,10 +326,21 @@ const TASK_MAPPING = {
readUI: () => controlImageFilterField.value,
parse: (val) => val,
},
control_alpha: {
name: "ControlNet Strength",
setUI: (control_alpha) => {
control_alpha = control_alpha ?? 1.0 // '??' rather than '||', so an explicit 0 strength isn't overridden
controlAlphaField.value = control_alpha
updateControlAlphaSlider()
},
readUI: () => parseFloat(controlAlphaField.value),
parse: (val) => val === null ? 1.0 : parseFloat(val),
},
use_lora_model: {
name: "LoRA model",
setUI: (use_lora_model) => {
let modelPaths = []
use_lora_model = use_lora_model === null ? "" : use_lora_model
use_lora_model = Array.isArray(use_lora_model) ? use_lora_model : [use_lora_model]
use_lora_model.forEach((m) => {
if (m.includes("models\\lora\\")) {
@ -366,6 +394,45 @@ const TASK_MAPPING = {
return val
},
},
use_text_encoder_model: {
name: "Text Encoder model",
setUI: (use_text_encoder_model) => {
let modelPaths = []
use_text_encoder_model = use_text_encoder_model === null ? "" : use_text_encoder_model
use_text_encoder_model = Array.isArray(use_text_encoder_model) ? use_text_encoder_model : [use_text_encoder_model]
use_text_encoder_model.forEach((m) => {
if (m.includes("models\\text-encoder\\")) {
m = m.split("models\\text-encoder\\")[1]
} else if (m.includes("models\\\\text-encoder\\\\")) {
m = m.split("models\\\\text-encoder\\\\")[1]
} else if (m.includes("models/text-encoder/")) {
m = m.split("models/text-encoder/")[1]
}
m = m.replaceAll("\\\\", "/")
m = getModelPath(m, [".safetensors", ".sft"])
modelPaths.push(m)
})
textEncoderModelField.modelNames = modelPaths
},
readUI: () => {
return textEncoderModelField.modelNames
},
parse: (val) => {
val = !val || val === "None" ? "" : val
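// a comma-separated string is coerced into a JSON array: split, trim,
// escape backslashes, strip quotes, re-quote each entry, then JSON.parse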
if (typeof val === "string" && val.includes(",")) {
val = val.split(",")
val = val.map((v) => v.trim())
val = val.map((v) => v.replaceAll("\\", "\\\\"))
val = val.map((v) => v.replaceAll('"', ""))
val = val.map((v) => v.replaceAll("'", ""))
val = val.map((v) => '"' + v + '"')
val = "[" + val + "]"
val = JSON.parse(val)
}
val = Array.isArray(val) ? val : [val]
return val
},
},
use_hypernetwork_model: {
name: "Hypernetwork model",
setUI: (use_hypernetwork_model) => {
@ -529,6 +596,11 @@ function restoreTaskToUI(task, fieldsToSkip) {
// listen for inpainter loading event, which happens AFTER the main image loads (which reloads the inpai
controlImagePreview.src = task.reqBody.control_image
}
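// tasks saved before control_alpha existed won't carry it; reset the strength UI to the default 1.0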
if ("use_controlnet_model" in task.reqBody && task.reqBody.use_controlnet_model && !("control_alpha" in task.reqBody)) {
controlAlphaField.value = 1.0
updateControlAlphaSlider()
}
}
function readUI() {
const reqBody = {}
@ -574,19 +646,23 @@ const TASK_TEXT_MAPPING = {
seed: "Seed",
num_inference_steps: "Steps",
guidance_scale: "Guidance Scale",
distilled_guidance_scale: "Distilled Guidance",
prompt_strength: "Prompt Strength",
use_face_correction: "Use Face Correction",
use_upscale: "Use Upscaling",
upscale_amount: "Upscale By",
sampler_name: "Sampler",
scheduler_name: "Scheduler",
negative_prompt: "Negative Prompt",
use_stable_diffusion_model: "Stable Diffusion model",
use_hypernetwork_model: "Hypernetwork model",
hypernetwork_strength: "Hypernetwork Strength",
use_lora_model: "LoRA model",
lora_alpha: "LoRA Strength",
use_text_encoder_model: "Text Encoder model",
use_controlnet_model: "ControlNet model",
control_filter_to_apply: "ControlNet Filter",
control_alpha: "ControlNet Strength",
tiling: "Seamless Tiling",
}
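// parseTaskFromText() (below) reads plain-text metadata of the form "Label: value",
// one label per line, using the mapping above. Illustrative lines (values assumed):
//   Scheduler: simple
//   Distilled Guidance: 3.5
//   ControlNet Strength: 1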
function parseTaskFromText(str) {

View File

@ -12,8 +12,16 @@ const taskConfigSetup = {
seed: { value: ({ seed }) => seed, label: "Seed" },
dimensions: { value: ({ reqBody }) => `${reqBody?.width}x${reqBody?.height}`, label: "Dimensions" },
sampler_name: "Sampler",
scheduler_name: {
label: "Scheduler",
visible: ({ reqBody }) => reqBody?.scheduler_name,
},
num_inference_steps: "Inference Steps",
guidance_scale: "Guidance Scale",
distilled_guidance_scale: {
label: "Distilled Guidance Scale",
visible: ({ reqBody }) => reqBody?.distilled_guidance_scale,
},
use_stable_diffusion_model: "Model",
clip_skip: {
label: "Clip Skip",
@ -46,11 +54,16 @@ const taskConfigSetup = {
label: "Hypernetwork Strength",
visible: ({ reqBody }) => !!reqBody?.use_hypernetwork_model,
},
use_text_encoder_model: { label: "Text Encoder", visible: ({ reqBody }) => !!reqBody?.use_text_encoder_model },
use_lora_model: { label: "Lora Model", visible: ({ reqBody }) => !!reqBody?.use_lora_model },
lora_alpha: { label: "Lora Strength", visible: ({ reqBody }) => !!reqBody?.use_lora_model },
preserve_init_image_color_profile: "Preserve Color Profile",
strict_mask_border: "Strict Mask Border",
use_controlnet_model: "ControlNet Model",
control_alpha: {
label: "ControlNet Strength",
visible: ({ reqBody }) => !!reqBody?.use_controlnet_model,
},
},
pluginTaskConfig: {},
getCSSKey: (key) =>
@ -72,6 +85,8 @@ let numOutputsParallelField = document.querySelector("#num_outputs_parallel")
let numInferenceStepsField = document.querySelector("#num_inference_steps")
let guidanceScaleSlider = document.querySelector("#guidance_scale_slider")
let guidanceScaleField = document.querySelector("#guidance_scale")
let distilledGuidanceScaleSlider = document.querySelector("#distilled_guidance_scale_slider")
let distilledGuidanceScaleField = document.querySelector("#distilled_guidance_scale")
let outputQualitySlider = document.querySelector("#output_quality_slider")
let outputQualityField = document.querySelector("#output_quality")
let outputQualityRow = document.querySelector("#output_quality_row")
@ -99,6 +114,8 @@ let controlImagePreview = document.querySelector("#control_image_preview")
let controlImageClearBtn = document.querySelector(".control_image_clear")
let controlImageContainer = document.querySelector("#control_image_wrapper")
let controlImageFilterField = document.querySelector("#control_image_filter")
let controlAlphaSlider = document.querySelector("#controlnet_alpha_slider")
let controlAlphaField = document.querySelector("#controlnet_alpha")
let applyColorCorrectionField = document.querySelector("#apply_color_correction")
let strictMaskBorderField = document.querySelector("#strict_mask_border")
let colorCorrectionSetting = document.querySelector("#apply_color_correction_setting")
@ -107,6 +124,8 @@ let promptStrengthSlider = document.querySelector("#prompt_strength_slider")
let promptStrengthField = document.querySelector("#prompt_strength")
let samplerField = document.querySelector("#sampler_name")
let samplerSelectionContainer = document.querySelector("#samplerSelection")
let schedulerField = document.querySelector("#scheduler_name")
let schedulerSelectionContainer = document.querySelector("#schedulerSelection")
let useFaceCorrectionField = document.querySelector("#use_face_correction")
let gfpganModelField = new ModelDropdown(document.querySelector("#gfpgan_model"), ["gfpgan", "codeformer"], "", false)
let useUpscalingField = document.querySelector("#use_upscale")
@ -123,6 +142,7 @@ let tilingField = document.querySelector("#tiling")
let controlnetModelField = new ModelDropdown(document.querySelector("#controlnet_model"), "controlnet", "None", false)
let vaeModelField = new ModelDropdown(document.querySelector("#vae_model"), "vae", "None")
let loraModelField = new MultiModelSelector(document.querySelector("#lora_model"), "lora", "LoRA", 0.5, 0.02)
let textEncoderModelField = new MultiModelSelector(document.querySelector("#text_encoder_model"), "text-encoder", "Text Encoder", 0.5, 0.02, false)
let hypernetworkModelField = new ModelDropdown(document.querySelector("#hypernetwork_model"), "hypernetwork", "None")
let hypernetworkStrengthSlider = document.querySelector("#hypernetwork_strength_slider")
let hypernetworkStrengthField = document.querySelector("#hypernetwork_strength")
@ -605,6 +625,13 @@ function onUseAsInputClick(req, img) {
initImagePreview.src = imgData
maskSetting.checked = false
// Force the image settings' size to match the input image, as inpainting currently only
// works correctly when the input image and the generated image sizes match.
addImageSizeOption(img.naturalWidth)
addImageSizeOption(img.naturalHeight)
widthField.value = img.naturalWidth
heightField.value = img.naturalHeight
}
function onUseForControlnetClick(req, img) {
@ -975,7 +1002,20 @@ function onRedoFilter(req, img, e, tools) {
function onUpscaleClick(req, img, e, tools) {
let path = upscaleModelField.value
let scale = parseInt(upscaleAmountField.value)
let filterName = path.toLowerCase().includes("realesrgan") ? "realesrgan" : "latent_upscaler"
let filterName = null
const FILTERS = ["realesrgan", "latent_upscaler", "esrgan_4x", "lanczos", "nearest", "scunet", "swinir"]
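// pick the first filter whose name appears in the model path; bail out if none match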
for (const f of FILTERS) {
if (path.toLowerCase().includes(f)) {
filterName = f
break
}
}
if (!filterName) {
return
}
let statusText = "Upscaling by " + scale + "x using " + filterName
applyInlineFilter(filterName, path, { scale: scale }, img, statusText, tools)
}
@ -1032,6 +1072,9 @@ function makeImage() {
if (guidanceScaleField.value == "") {
guidanceScaleField.value = guidanceScaleSlider.value / 10
}
if (distilledGuidanceScaleField.value == "") {
distilledGuidanceScaleField.value = distilledGuidanceScaleSlider.value / 10
}
if (hypernetworkStrengthField.value == "") {
hypernetworkStrengthField.value = hypernetworkStrengthSlider.value / 100
}
@ -1355,6 +1398,7 @@ function getCurrentUserRequest() {
newTask.reqBody.hypernetwork_strength = parseFloat(hypernetworkStrengthField.value)
}
if (testDiffusers.checked) {
// lora
let loraModelData = loraModelField.value
let modelNames = loraModelData["modelNames"]
let modelStrengths = loraModelData["modelWeights"]
@ -1367,6 +1411,18 @@ function getCurrentUserRequest() {
newTask.reqBody.lora_alpha = modelStrengths
}
// text encoder
let textEncoderModelNames = textEncoderModelField.modelNames
if (textEncoderModelNames.length > 0) {
textEncoderModelNames = textEncoderModelNames.length == 1 ? textEncoderModelNames[0] : textEncoderModelNames
newTask.reqBody.use_text_encoder_model = textEncoderModelNames
} else {
newTask.reqBody.use_text_encoder_model = ""
}
// vae tiling
if (tilingField.value !== "none") {
newTask.reqBody.tiling = tilingField.value
}
@ -1395,10 +1451,17 @@ function getCurrentUserRequest() {
if (controlnetModelField.value !== "" && IMAGE_REGEX.test(controlImagePreview.src)) {
newTask.reqBody.use_controlnet_model = controlnetModelField.value
newTask.reqBody.control_image = controlImagePreview.src
newTask.reqBody.control_alpha = parseFloat(controlAlphaField.value)
if (controlImageFilterField.value !== "") {
newTask.reqBody.control_filter_to_apply = controlImageFilterField.value
}
}
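// Flux models take a separate distilled guidance value; detected by model filename for now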
if (stableDiffusionModelField.value.toLowerCase().includes("flux")) {
newTask.reqBody.distilled_guidance_scale = parseFloat(distilledGuidanceScaleField.value)
}
if (schedulerSelectionContainer.style.display !== "none") {
newTask.reqBody.scheduler_name = schedulerField.value
}
return newTask
}
@ -1838,36 +1901,162 @@ controlImagePreview.addEventListener("load", onControlnetModelChange)
controlImagePreview.addEventListener("unload", onControlnetModelChange)
onControlnetModelChange()
function onControlImageFilterChange() {
let filterId = controlImageFilterField.value
if (filterId.includes("openpose")) {
controlnetModelField.value = "control_v11p_sd15_openpose"
} else if (filterId === "canny") {
controlnetModelField.value = "control_v11p_sd15_canny"
} else if (filterId === "mlsd") {
controlnetModelField.value = "control_v11p_sd15_mlsd"
} else if (filterId === "mlsd") {
controlnetModelField.value = "control_v11p_sd15_mlsd"
} else if (filterId.includes("scribble")) {
controlnetModelField.value = "control_v11p_sd15_scribble"
} else if (filterId.includes("softedge")) {
controlnetModelField.value = "control_v11p_sd15_softedge"
} else if (filterId === "normal_bae") {
controlnetModelField.value = "control_v11p_sd15_normalbae"
} else if (filterId.includes("depth")) {
controlnetModelField.value = "control_v11f1p_sd15_depth"
} else if (filterId === "lineart_anime") {
controlnetModelField.value = "control_v11p_sd15s2_lineart_anime"
} else if (filterId.includes("lineart")) {
controlnetModelField.value = "control_v11p_sd15_lineart"
} else if (filterId === "shuffle") {
controlnetModelField.value = "control_v11e_sd15_shuffle"
} else if (filterId === "segment") {
controlnetModelField.value = "control_v11p_sd15_seg"
document.addEventListener("refreshModels", function() {
onFixFaceModelChange()
onControlnetModelChange()
})
// utilities for Flux and Chroma
let sdModelField = document.querySelector("#stable_diffusion_model")
// function checkAndSetDependentModels() {
// let sdModel = sdModelField.value.toLowerCase()
// let isFlux = sdModel.includes("flux")
// let isChroma = sdModel.includes("chroma")
// if (isFlux || isChroma) {
// vaeModelField.value = "ae"
// if (isFlux) {
// textEncoderModelField.modelNames = ["t5xxl_fp16", "clip_l"]
// } else {
// textEncoderModelField.modelNames = ["t5xxl_fp16"]
// }
// } else {
// if (vaeModelField.value == "ae") {
// vaeModelField.value = ""
// }
// textEncoderModelField.modelNames = []
// }
// }
// sdModelField.addEventListener("change", checkAndSetDependentModels)
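// warn when the guidance value doesn't suit the selected model family (Flux expects ~1, others 2+)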
function checkGuidanceValue() {
let guidance = parseFloat(guidanceScaleField.value)
let guidanceWarning = document.querySelector("#guidanceWarning")
let guidanceWarningText = document.querySelector("#guidanceWarningText")
if (sdModelField.value.toLowerCase().includes("flux")) {
if (guidance > 1.5) {
guidanceWarningText.innerText = "Flux recommends a 'Guidance Scale' of 1"
guidanceWarning.classList.remove("displayNone")
} else {
guidanceWarning.classList.add("displayNone")
}
} else {
if (guidance < 2) {
guidanceWarningText.innerText = "A higher 'Guidance Scale' is recommended!"
guidanceWarning.classList.remove("displayNone")
} else {
guidanceWarning.classList.add("displayNone")
}
}
}
controlImageFilterField.addEventListener("change", onControlImageFilterChange)
onControlImageFilterChange()
sdModelField.addEventListener("change", checkGuidanceValue)
guidanceScaleField.addEventListener("change", checkGuidanceValue)
guidanceScaleSlider.addEventListener("change", checkGuidanceValue)
// disabling until we can detect flux models more reliably
// function checkGuidanceScaleVisibility() {
// let guidanceScaleContainer = document.querySelector("#distilled_guidance_scale_container")
// if (sdModelField.value.toLowerCase().includes("flux")) {
// guidanceScaleContainer.classList.remove("displayNone")
// } else {
// guidanceScaleContainer.classList.add("displayNone")
// }
// }
// sdModelField.addEventListener("change", checkGuidanceScaleVisibility)
function checkFluxSampler() {
let samplerWarning = document.querySelector("#fluxSamplerWarning")
if (sdModelField.value.toLowerCase().includes("flux")) {
if (samplerField.value == "euler_a") {
samplerWarning.classList.remove("displayNone")
} else {
samplerWarning.classList.add("displayNone")
}
} else {
samplerWarning.classList.add("displayNone")
}
}
function checkFluxScheduler() {
const badSchedulers = ["automatic", "uniform", "turbo", "align_your_steps", "align_your_steps_GITS", "align_your_steps_11", "align_your_steps_32"]
let schedulerWarning = document.querySelector("#fluxSchedulerWarning")
if (sdModelField.value.toLowerCase().includes("flux")) {
if (badSchedulers.includes(schedulerField.value)) {
schedulerWarning.classList.remove("displayNone")
} else {
schedulerWarning.classList.add("displayNone")
}
} else {
schedulerWarning.classList.add("displayNone")
}
}
function checkFluxSchedulerSteps() {
const problematicSchedulers = ["karras", "exponential", "polyexponential"]
let schedulerWarning = document.querySelector("#fluxSchedulerStepsWarning")
if (sdModelField.value.toLowerCase().includes("flux") && parseInt(numInferenceStepsField.value) < 15) {
if (problematicSchedulers.includes(schedulerField.value)) {
schedulerWarning.classList.remove("displayNone")
} else {
schedulerWarning.classList.add("displayNone")
}
} else {
schedulerWarning.classList.add("displayNone")
}
}
sdModelField.addEventListener("change", checkFluxSampler)
samplerField.addEventListener("change", checkFluxSampler)
sdModelField.addEventListener("change", checkFluxScheduler)
schedulerField.addEventListener("change", checkFluxScheduler)
sdModelField.addEventListener("change", checkFluxSchedulerSteps)
schedulerField.addEventListener("change", checkFluxSchedulerSteps)
numInferenceStepsField.addEventListener("change", checkFluxSchedulerSteps)
document.addEventListener("refreshModels", function() {
// checkAndSetDependentModels()
checkGuidanceValue()
// checkGuidanceScaleVisibility()
checkFluxSampler()
checkFluxScheduler()
checkFluxSchedulerSteps()
})
// function onControlImageFilterChange() {
// let filterId = controlImageFilterField.value
// if (filterId.includes("openpose")) {
// controlnetModelField.value = "control_v11p_sd15_openpose"
// } else if (filterId === "canny") {
// controlnetModelField.value = "control_v11p_sd15_canny"
// } else if (filterId === "mlsd") {
// controlnetModelField.value = "control_v11p_sd15_mlsd"
// } else if (filterId === "mlsd") {
// controlnetModelField.value = "control_v11p_sd15_mlsd"
// } else if (filterId.includes("scribble")) {
// controlnetModelField.value = "control_v11p_sd15_scribble"
// } else if (filterId.includes("softedge")) {
// controlnetModelField.value = "control_v11p_sd15_softedge"
// } else if (filterId === "normal_bae") {
// controlnetModelField.value = "control_v11p_sd15_normalbae"
// } else if (filterId.includes("depth")) {
// controlnetModelField.value = "control_v11f1p_sd15_depth"
// } else if (filterId === "lineart_anime") {
// controlnetModelField.value = "control_v11p_sd15s2_lineart_anime"
// } else if (filterId.includes("lineart")) {
// controlnetModelField.value = "control_v11p_sd15_lineart"
// } else if (filterId === "shuffle") {
// controlnetModelField.value = "control_v11e_sd15_shuffle"
// } else if (filterId === "segment") {
// controlnetModelField.value = "control_v11p_sd15_seg"
// }
// }
// controlImageFilterField.addEventListener("change", onControlImageFilterChange)
// onControlImageFilterChange()
upscaleModelField.disabled = !useUpscalingField.checked
upscaleAmountField.disabled = !useUpscalingField.checked
@ -1966,6 +2155,27 @@ guidanceScaleSlider.addEventListener("input", updateGuidanceScale)
guidanceScaleField.addEventListener("input", updateGuidanceScaleSlider)
updateGuidanceScale()
/********************* Distilled Guidance **************************/
function updateDistilledGuidanceScale() {
distilledGuidanceScaleField.value = distilledGuidanceScaleSlider.value / 10
distilledGuidanceScaleField.dispatchEvent(new Event("change"))
}
function updateDistilledGuidanceScaleSlider() {
if (distilledGuidanceScaleField.value < 0) {
distilledGuidanceScaleField.value = 0
} else if (distilledGuidanceScaleField.value > 50) {
distilledGuidanceScaleField.value = 50
}
distilledGuidanceScaleSlider.value = distilledGuidanceScaleField.value * 10
distilledGuidanceScaleSlider.dispatchEvent(new Event("change"))
}
distilledGuidanceScaleSlider.addEventListener("input", updateDistilledGuidanceScale)
distilledGuidanceScaleField.addEventListener("input", updateDistilledGuidanceScaleSlider)
updateDistilledGuidanceScale()
/********************* Prompt Strength *******************/
function updatePromptStrength() {
promptStrengthField.value = promptStrengthSlider.value / 100
@ -2015,6 +2225,27 @@ function updateHypernetworkStrengthContainer() {
hypernetworkModelField.addEventListener("change", updateHypernetworkStrengthContainer)
updateHypernetworkStrengthContainer()
/********************* Controlnet Alpha **************************/
function updateControlAlpha() {
controlAlphaField.value = controlAlphaSlider.value / 10
controlAlphaField.dispatchEvent(new Event("change"))
}
function updateControlAlphaSlider() {
if (controlAlphaField.value < 0) {
controlAlphaField.value = 0
} else if (controlAlphaField.value > 10) {
controlAlphaField.value = 10
}
controlAlphaSlider.value = controlAlphaField.value * 10
controlAlphaSlider.dispatchEvent(new Event("change"))
}
controlAlphaSlider.addEventListener("input", updateControlAlpha)
controlAlphaField.addEventListener("input", updateControlAlphaSlider)
updateControlAlpha()
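// the slider<->field sync above repeats the same scale-and-clamp pattern as the guidance
// and distilled-guidance blocks; a shared helper could express it once. Untested sketch,
// not wired in (names are hypothetical):
// function bindScaledSlider(slider, field, scale, min, max) {
//     slider.addEventListener("input", () => {
//         field.value = slider.value / scale
//         field.dispatchEvent(new Event("change"))
//     })
//     field.addEventListener("input", () => {
//         field.value = Math.min(max, Math.max(min, parseFloat(field.value) || 0))
//         slider.value = field.value * scale
//         slider.dispatchEvent(new Event("change"))
//     })
// }
// e.g. bindScaledSlider(controlAlphaSlider, controlAlphaField, 10, 0, 10)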
/********************* JPEG/WEBP Quality **********************/
function updateOutputQuality() {
outputQualityField.value = 0 | outputQualitySlider.value

View File

@ -10,6 +10,7 @@ class MultiModelSelector {
root
modelType
modelNameFriendly
showWeights
defaultWeight
weightStep
@ -35,13 +36,13 @@ class MultiModelSelector {
if (typeof modelData !== "object") {
throw new Error("Multi-model selector expects an object containing modelNames and modelWeights as keys!")
}
if (!("modelNames" in modelData) || !("modelWeights" in modelData)) {
if (!("modelNames" in modelData) || (this.showWeights && !("modelWeights" in modelData))) {
throw new Error("modelNames or modelWeights not present in the data passed to the multi-model selector")
}
let newModelNames = modelData["modelNames"]
let newModelWeights = modelData["modelWeights"]
if (newModelNames.length !== newModelWeights.length) {
if (newModelWeights && newModelNames.length !== newModelWeights.length) {
throw new Error("Need to pass an equal number of modelNames and modelWeights!")
}
@ -50,7 +51,7 @@ class MultiModelSelector {
// the root of all this unholiness is because searchable-models automatically dispatches an update event
// as soon as the value is updated via JS, which is against the DOM pattern of not dispatching an event automatically
// unless the caller explicitly dispatches the event.
this.modelWeights = newModelWeights
this.modelWeights = newModelWeights || []
this.modelNames = newModelNames
}
get disabled() {
@ -91,10 +92,11 @@ class MultiModelSelector {
}
}
constructor(root, modelType, modelNameFriendly = undefined, defaultWeight = 0.5, weightStep = 0.02) {
constructor(root, modelType, modelNameFriendly = undefined, defaultWeight = 0.5, weightStep = 0.02, showWeights = true) {
this.root = root
this.modelType = modelType
this.modelNameFriendly = modelNameFriendly || modelType
this.showWeights = showWeights
this.defaultWeight = defaultWeight
this.weightStep = weightStep
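// e.g. a weight-less selector (mirrors the text-encoder field in main.js):
//   new MultiModelSelector(document.querySelector("#text_encoder_model"), "text-encoder", "Text Encoder", 0.5, 0.02, false)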
@ -135,10 +137,13 @@ class MultiModelSelector {
const modelElement = document.createElement("div")
modelElement.className = "model_entry"
modelElement.innerHTML = `
<input id="${this.modelType}_${idx}" class="model_name model-filter" type="text" spellcheck="false" autocomplete="off" data-path="" />
<input class="model_weight" type="number" step="${this.weightStep}" value="${this.defaultWeight}" pattern="^-?[0-9]*\.?[0-9]*$" onkeypress="preventNonNumericalInput(event)">
`
let html = `<input id="${this.modelType}_${idx}" class="model_name model-filter" type="text" spellcheck="false" autocomplete="off" data-path="" />`
if (this.showWeights) {
html += `<input class="model_weight" type="number" step="${this.weightStep}" value="${this.defaultWeight}" pattern="^-?[0-9]*\.?[0-9]*$" onkeypress="preventNonNumericalInput(event)">`
}
modelElement.innerHTML = html
this.modelContainer.appendChild(modelElement)
let modelNameEl = modelElement.querySelector(".model_name")
@ -160,8 +165,8 @@ class MultiModelSelector {
modelNameEl.addEventListener("change", makeUpdateEvent("change"))
modelNameEl.addEventListener("input", makeUpdateEvent("input"))
modelWeightEl.addEventListener("change", makeUpdateEvent("change"))
modelWeightEl.addEventListener("input", makeUpdateEvent("input"))
modelWeightEl?.addEventListener("change", makeUpdateEvent("change"))
modelWeightEl?.addEventListener("input", makeUpdateEvent("input"))
let removeBtn = document.createElement("button")
removeBtn.className = "remove_model_btn"
@ -218,10 +223,14 @@ class MultiModelSelector {
}
get modelWeights() {
return this.getModelElements(true).map((e) => e.weight.value)
return this.getModelElements(true).map((e) => e.weight?.value)
}
set modelWeights(newModelWeights) {
if (!this.showWeights) {
return
}
this.resizeEntryList(newModelWeights.length)
if (newModelWeights.length === 0) {

View File

@ -102,7 +102,7 @@ var PARAMETERS = [
type: ParameterType.custom,
icon: "fa-folder-tree",
label: "Models Folder",
note: "Path to the 'models' folder. Please save and refresh the page after changing this.",
note: "Path to the 'models' folder. Please save and restart Easy Diffusion after changing this.",
saveInAppConfig: true,
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="30">`
@ -161,6 +161,7 @@ var PARAMETERS = [
"<b>Low:</b> slowest, recommended for GPUs with 3 to 4 GB memory",
icon: "fa-forward",
default: "balanced",
saveInAppConfig: true,
options: [
{ value: "balanced", label: "Balanced" },
{ value: "high", label: "High" },
@ -249,14 +250,19 @@ var PARAMETERS = [
default: false,
},
{
id: "use_v3_engine",
type: ParameterType.checkbox,
label: "Use the new v3 engine (diffusers)",
id: "backend",
type: ParameterType.select,
label: "Engine to use",
note:
"Use our new v3 engine, with additional features like LoRA, ControlNet, SDXL, Embeddings, Tiling and lots more! Please press Save, then restart the program after changing this.",
icon: "fa-bolt",
default: true,
"Use our new v3.5 engine (Forge), with additional features like Flux, SD3, Lycoris and lots more! Please press Save, then restart the program after changing this.",
icon: "fa-robot",
saveInAppConfig: true,
default: "ed_diffusers",
options: [
{ value: "webui", label: "v3.5 (latest)" },
{ value: "ed_diffusers", label: "v3.0" },
{ value: "ed_classic", label: "v2.0" },
],
},
{
id: "cloudflare",
@ -432,6 +438,7 @@ let useBetaChannelField = document.querySelector("#use_beta_channel")
let uiOpenBrowserOnStartField = document.querySelector("#ui_open_browser_on_start")
let confirmDangerousActionsField = document.querySelector("#confirm_dangerous_actions")
let testDiffusers = document.querySelector("#use_v3_engine")
let backendEngine = document.querySelector("#backend")
let profileNameField = document.querySelector("#profileName")
let modelsDirField = document.querySelector("#models_dir")
@ -454,6 +461,23 @@ async function changeAppConfig(configDelta) {
}
}
function getDefaultDisplay(element) {
const tag = element.tagName.toLowerCase()
const defaultDisplays = {
div: "block",
span: "inline",
p: "block",
tr: "table-row",
table: "table",
li: "list-item",
ul: "block",
ol: "block",
button: "inline",
// Add more if needed
}
return defaultDisplays[tag] || "block" // Default to 'block' if not listed
}
async function getAppConfig() {
try {
let res = await fetch("/get/app_config")
@ -478,14 +502,16 @@ async function getAppConfig() {
modelsDirField.value = config.models_dir
let testDiffusersEnabled = true
if (config.use_v3_engine === false) {
if (config.backend === "ed_classic") {
testDiffusersEnabled = false
}
testDiffusers.checked = testDiffusersEnabled
backendEngine.value = config.backend
document.querySelector("#test_diffusers").checked = testDiffusers.checked // don't break plugins
document.querySelector("#use_v3_engine").checked = testDiffusers.checked // don't break plugins
if (config.config_on_startup) {
if (config.config_on_startup?.use_v3_engine) {
if (config.config_on_startup?.backend !== "ed_classic") {
document.body.classList.add("diffusers-enabled-on-startup")
document.body.classList.remove("diffusers-disabled-on-startup")
} else {
@ -494,37 +520,27 @@ async function getAppConfig() {
}
}
if (!testDiffusersEnabled) {
document.querySelector("#lora_model_container").style.display = "none"
document.querySelector("#tiling_container").style.display = "none"
document.querySelector("#controlnet_model_container").style.display = "none"
document.querySelector("#hypernetwork_model_container").style.display = ""
document.querySelector("#hypernetwork_strength_container").style.display = ""
document.querySelector("#negative-embeddings-button").style.display = "none"
document.querySelectorAll("#sampler_name option.diffusers-only").forEach((option) => {
option.style.display = "none"
})
if (config.backend === "ed_classic") {
IMAGE_STEP_SIZE = 64
customWidthField.step = IMAGE_STEP_SIZE
customHeightField.step = IMAGE_STEP_SIZE
} else {
document.querySelector("#lora_model_container").style.display = ""
document.querySelector("#tiling_container").style.display = ""
document.querySelector("#controlnet_model_container").style.display = ""
document.querySelector("#hypernetwork_model_container").style.display = "none"
document.querySelector("#hypernetwork_strength_container").style.display = "none"
document.querySelectorAll("#sampler_name option.k_diffusion-only").forEach((option) => {
option.style.display = "none"
})
document.querySelector("#clip_skip_config").classList.remove("displayNone")
document.querySelector("#embeddings-button").classList.remove("displayNone")
IMAGE_STEP_SIZE = 8
customWidthField.step = IMAGE_STEP_SIZE
customHeightField.step = IMAGE_STEP_SIZE
}
customWidthField.step = IMAGE_STEP_SIZE
customHeightField.step = IMAGE_STEP_SIZE
const currentBackendKey = "backend_" + config.backend
document.querySelectorAll(".gated-feature").forEach((element) => {
const featureKeys = element.getAttribute("data-feature-keys").split(" ")
if (featureKeys.includes(currentBackendKey)) {
element.style.display = getDefaultDisplay(element)
} else {
element.style.display = "none"
}
})
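// each gated element declares the backends it supports via data-feature-keys, e.g.:
//   <tr id="lora_model_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">
// elements not matching the current backend keep the CSS default (display: none)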
if (config.force_save_metadata) {
metadataOutputFormatField.value = config.force_save_metadata
}
@ -642,7 +658,7 @@ function setDeviceInfo(devices) {
function ID_TO_TEXT(d) {
let info = devices.all[d]
if ("mem_free" in info && "mem_total" in info) {
if ("mem_free" in info && "mem_total" in info && info["mem_total"] > 0) {
return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)} GB free / ${info.mem_total.toFixed(
1
)} GB total)</small>`
@ -749,6 +765,11 @@ async function getSystemInfo() {
metadataOutputFormatField.disabled = !saveToDiskField.checked
}
setDiskPath(res["default_output_dir"], force)
// backend info
if (res["backend_url"]) {
document.querySelector("#backend-url").setAttribute("href", res["backend_url"])
}
} catch (e) {
console.log("error fetching devices", e)
}

View File

@ -0,0 +1,80 @@
// Christmas hack, courtesy: https://pajasevi.github.io/CSSnowflakes/
;(function(){
"use strict";
function makeItSnow() {
const styleSheet = document.createElement("style")
styleSheet.textContent = `
/* customizable snowflake styling */
.snowflake {
color: #fff;
font-size: 1em;
font-family: Arial, sans-serif;
text-shadow: 0 0 5px #000;
}
.snowflake,.snowflake .inner{animation-iteration-count:infinite;animation-play-state:running}@keyframes snowflakes-fall{0%{transform:translateY(0)}100%{transform:translateY(110vh)}}@keyframes snowflakes-shake{0%,100%{transform:translateX(0)}50%{transform:translateX(80px)}}.snowflake{position:fixed;top:-10%;z-index:9999;-webkit-user-select:none;user-select:none;cursor:default;animation-name:snowflakes-shake;animation-duration:3s;animation-timing-function:ease-in-out}.snowflake .inner{animation-duration:10s;animation-name:snowflakes-fall;animation-timing-function:linear}.snowflake:nth-of-type(0){left:1%;animation-delay:0s}.snowflake:nth-of-type(0) .inner{animation-delay:0s}.snowflake:first-of-type{left:10%;animation-delay:1s}.snowflake:first-of-type .inner,.snowflake:nth-of-type(8) .inner{animation-delay:1s}.snowflake:nth-of-type(2){left:20%;animation-delay:.5s}.snowflake:nth-of-type(2) .inner,.snowflake:nth-of-type(6) .inner{animation-delay:6s}.snowflake:nth-of-type(3){left:30%;animation-delay:2s}.snowflake:nth-of-type(11) .inner,.snowflake:nth-of-type(3) .inner{animation-delay:4s}.snowflake:nth-of-type(4){left:40%;animation-delay:2s}.snowflake:nth-of-type(10) .inner,.snowflake:nth-of-type(4) .inner{animation-delay:2s}.snowflake:nth-of-type(5){left:50%;animation-delay:3s}.snowflake:nth-of-type(5) .inner{animation-delay:8s}.snowflake:nth-of-type(6){left:60%;animation-delay:2s}.snowflake:nth-of-type(7){left:70%;animation-delay:1s}.snowflake:nth-of-type(7) .inner{animation-delay:2.5s}.snowflake:nth-of-type(8){left:80%;animation-delay:0s}.snowflake:nth-of-type(9){left:90%;animation-delay:1.5s}.snowflake:nth-of-type(9) .inner{animation-delay:3s}.snowflake:nth-of-type(10){left:25%;animation-delay:0s}.snowflake:nth-of-type(11){left:65%;animation-delay:2.5s}
`
document.head.appendChild(styleSheet)
const snowflakes = document.createElement("div")
snowflakes.id = "snowflakes-container"
snowflakes.innerHTML = `
<div class="snowflakes" aria-hidden="true">
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
</div>`
document.body.appendChild(snowflakes)
const script = document.createElement("script")
script.innerHTML = `
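// assumes jQuery ($) is already available on the page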
$(document).ready(function() {
setTimeout(function() {
$("#snowflakes-container").fadeOut("slow", function() {$(this).remove()})
}, 10 * 1000)
})
`
document.body.appendChild(script)
}
let date = new Date()
if (date.getMonth() === 11 && date.getDate() >= 12) {
makeItSnow()
}
})()