Compare commits

...

460 Commits

Author SHA1 Message Date
3836b91ae1 Recognize some more cards (5070, Granite Ridge) via a torchruntime upgrade 2025-03-06 12:07:42 +05:30
72c4e47619 Support Blackwell (NVIDIA 5060/5070/5080/5090) 2025-03-04 16:23:43 +05:30
afae421cee Use python 3.9 by default 2025-03-04 15:28:38 +05:30
0d0ec4ee56 sdkit 2.0.22.7/2.0.15.16 - python 3.9 compatibility 2025-03-04 15:10:17 +05:30
55a31c77e6 torchruntime upgrade - fix bug with multiple GPUs of the same model 2025-03-04 11:28:03 +05:30
43d2642b68 Update README.md 2025-02-21 17:08:30 +05:30
9dc2154027 sdkit upgrade - fixes loras with numpy arrays 2025-02-20 10:12:35 +05:30
fd49ba5dbc Update README.md 2025-02-18 11:59:33 +05:30
3e71054150 Potential fix for #1902 2025-02-18 11:14:42 +05:30
8d6c0de262 Recognize Phoenix3 and Phoenix4 AMD APUs 2025-02-18 11:14:37 +05:30
561fe0cc79 Use torchruntime for installing torch/torchvision on the users' PC, instead of the custom logic used here. torchruntime was built from the custom logic used here, and covers a lot more scenarios and supports more devices 2025-02-13 11:56:24 +05:30
26cbc30407 Use the correct size of the image when used as the input. Code credit: @AvidGameFan 2025-02-13 11:33:29 +05:30
7a1e2c4190 Hotfix for missing torchruntime on new installations 2025-02-10 19:31:26 +05:30
7b0a17a3ab Temporary fix for #1899 2025-02-10 18:15:31 +05:30
302426f5d4 Another fix for mps backend 2025-02-10 10:05:35 +05:30
9dc9ea3825 Fix broken mps backend 2025-02-10 09:55:31 +05:30
2a24a49f6b torchruntime 1.9.2 - fixes a bug on cpu platforms 2025-02-08 16:15:12 +05:30
5e5e39c285 changelog 2025-02-08 15:10:50 +05:30
cd8365558a Remove hardcoded references to torch.cuda; Use torchruntime and sdkit's device utilities instead 2025-02-08 15:08:46 +05:30
2e4623736a sdkit 2.0.22.3 or 2.0.15.12 - fixes a regression on mac 2025-02-03 21:32:06 +05:30
7f3a4383c7 sdkit 2.0.22.2 or 2.0.15.11 - install torchruntime 2025-02-03 21:32:02 +05:30
6d6b528aad sdkit 2.0.22 or 2.0.15.9 2025-02-03 10:58:47 +05:30
76485ab1e7 Move the half precision bug check logic to sdkit 2025-02-03 10:58:43 +05:30
68d67248f4 [sdkit update] v2.0.22 (and v2.0.15.8 for LTS) 2025-02-03 10:58:39 +05:30
81119a5893 Skip sdkit/diffusers install if it's in developer mode 2025-01-28 09:57:04 +05:30
554559f5ce changelog 2025-01-28 09:56:16 +05:30
b3cc415359 Temporarily remove torch 2.5 from the list, since it doesn't work with Python 3.8. More on this in future commits 2025-01-28 09:55:09 +05:30
5ac44de6c7 Even older rocm versions 2025-01-09 16:39:44 +05:30
a7a78a40d0 Allow older rocm versions 2025-01-09 16:00:18 +05:30
fea24cee90 Update README.md 2025-01-07 11:29:20 +05:30
20d77a85a1 Upgrade the version of torch used for rocm for Navi 30+, and point to the broader torch URL 2025-01-07 10:32:25 +05:30
0687e7b020 Update the index url for AMD ROCm torch install 2025-01-06 19:49:06 +05:30
75e4dc25dc Extend the list of supported torch, CUDA and ROCm versions 2025-01-06 19:32:10 +05:30
7e635caec8 version bump for wmic deprecation 2025-01-04 18:12:49 +05:30
dcb1f3351e Replace the use of wmic (deprecated) with a powershell call 2025-01-04 18:09:43 +05:30
8e9a9dda0f Workaround for when the context doesn't have a model_load_errors field; Not sure why it doesn't have it 2025-01-04 18:07:00 +05:30
546fc937b2 Annual 2024-12-13 15:56:10 +05:30
28badd5319 2024-12-12 00:30:08 +05:30
1a1f8f381b winter is coming 2024-12-12 00:16:48 +05:30
c246c7456a Pin wandb 2024-12-11 11:59:26 +05:30
77a9226720 Annotate with types for pydantic 2024-12-11 11:47:18 +05:30
74b05022f4 Merge pull request #1867 from tjcomserv/tjcomserv-patch-1-pydantic
Tjcomserv patch 1 pydantic
2024-12-11 11:45:24 +05:30
b3a961fc82 Update check_modules.py 2024-11-13 21:49:17 +05:30
0c8410c371 Pin huggingface-hub to 0.23.2 to fix broken deployments 2024-11-13 21:39:01 +05:30
5fe3acd44b Use 1.4 by default, instead of 1.5 2024-09-09 18:34:32 +05:30
716f30fecb Merge pull request #1810 from easydiffusion/beta
Beta
2024-06-14 09:49:27 +05:30
364902f8a1 Ignore text in the version string when comparing them 2024-06-14 09:48:49 +05:30
a261a2d47d Fix #1779 - add to PATH only if it isn't present, to avoid exploding the PATH variable each time the function is called 2024-06-14 09:43:30 +05:30
dea962dc89 Merge pull request #1808 from easydiffusion/beta
temp hotfix for rocm torch
2024-06-13 14:06:14 +05:30
d062c2149a temp hotfix for rocm torch 2024-06-13 14:05:11 +05:30
7d49dc105e Merge pull request #1806 from easydiffusion/beta
Don't crash if psutils fails to get cpu or memory usage
2024-06-11 18:28:07 +05:30
fcdc3f2dd0 Don't crash if psutils fails to get cpu or memory usage 2024-06-11 18:27:21 +05:30
d17b167a81 Merge pull request #1803 from easydiffusion/beta
Support legacy installations with torch 1.11, as well as an option for people to upgrade to the latest sdkit+diffusers
2024-06-07 10:33:56 +05:30
1fa83eda0e Merge pull request #1800 from siakc/uvicorn-run-programmatically
Enhancement - using uvicorn.run() instead of os.system()
2024-06-06 17:51:06 +05:30
969751a195 Use uvicorn.run since it's clearer to read 2024-06-06 17:50:41 +05:30
1ae8675487 typo 2024-06-06 16:20:04 +05:30
05f0bfebba Upgrade torch if using the newer sdkit versions 2024-06-06 16:18:11 +05:30
91ad53cd94 Enhancement - using uvicorn.run() instead of os.system() 2024-06-06 10:28:01 +03:30
de680dfd09 Print diffusers' version 2024-06-05 18:53:45 +05:30
4edeb14e94 Allow a user to opt-in to the latest sdkit+diffusers version, while keeping existing 2.0.15.x users restricted to diffusers 0.21.4. This avoids a lengthy upgrade for existing users, while allowing users to opt-in to the latest version. More to come. 2024-06-05 18:46:22 +05:30
e64cf9c9eb Merge pull request #1796 from easydiffusion/beta
Another typo
2024-06-01 09:02:08 +05:30
66d0c4726e Another typo 2024-06-01 09:01:35 +05:30
c923b44f56 Merge pull request #1795 from easydiffusion/beta
typo
2024-05-31 19:30:50 +05:30
b9c343195b typo 2024-05-31 19:30:29 +05:30
4427e8d3dd Merge pull request #1794 from easydiffusion/beta
Generalize the hotfix for missing sdkit dependencies. This is still a…
2024-05-31 19:28:32 +05:30
87c8fe2758 Generalize the hotfix for missing sdkit dependencies. This is still a temporary hotfix, but will ensure that missing packages are installed, not assume that having picklescan means everything's good 2024-05-31 19:27:30 +05:30
70acde7809 Merge pull request #1792 from easydiffusion/beta
Hotfix - sdkit's dependencies aren't getting pulled for some reason
2024-05-31 10:57:43 +05:30
c4b938f132 Hotfix - sdkit's dependencies aren't getting pulled for some reason 2024-05-31 10:56:43 +05:30
d6fdb8d5a9 Merge pull request #1788 from easydiffusion/beta
Hotfix for older accelerate version in the Windows installer
2024-05-30 17:51:32 +05:30
54ac1f7169 Hotfix for older accelerate version in the Windows installer 2024-05-30 17:50:36 +05:30
deebfc6850 Merge pull request #1787 from easydiffusion/beta
Controlnet Strength and SDXL Controlnet support for img2img and inpainting
2024-05-30 13:22:12 +05:30
21644adbe1 sdkit 2.0.15.6 - typo that prevented 0 controlnet strength 2024-05-29 10:01:04 +05:30
fe3c648a24 sdkit 2.0.15.5 - minor null check 2024-05-28 19:46:59 +05:30
05f3523364 Set the controlnet alpha correctly from older exports; Fix a bug with null lora model in exports 2024-05-28 19:16:48 +05:30
4d9b023378 changelog 2024-05-28 18:48:23 +05:30
44789bf16b sdkit 2.0.15.4 - Controlnet strength slider 2024-05-28 18:45:08 +05:30
ad649a8050 sdkit 2.0.15.3 - disable watermarking on SDXL ControlNets to avoid visual artifacts 2024-05-28 09:00:57 +05:30
723304204e diffusers 0.21.4 2024-05-27 15:26:09 +05:30
ddf54d589e v3.0.8 - use sdkit 2.0.15.1, to enable SDXL Controlnets for img2img and inpainting, using diffusers 0.21.4 2024-05-27 15:18:14 +05:30
a5c9c44e53 Merge pull request #1784 from easydiffusion/beta
Another hotfix for setuptools version on Windows and Linux/mac
2024-05-27 10:58:06 +05:30
4d28c78fcc Another hotfix for setuptools version on Windows and Linux/mac 2024-05-27 10:57:20 +05:30
7dc01370ea Merge pull request #1783 from easydiffusion/beta
Pin setuptools to 0.59
2024-05-27 10:45:49 +05:30
21ff109632 Pin setuptools to 0.59 2024-05-27 10:45:19 +05:30
9b0a654d32 Merge pull request #1782 from easydiffusion/beta
Hotfix to pin setuptools to 0.69 - for #1781
2024-05-27 10:38:41 +05:30
fb749dbe24 Potential hotfix for #1781 - pin setuptools to a specific version, until clip is upgraded 2024-05-27 10:37:09 +05:30
17ef1e04f7 Roll back sdkit 2.0.18 (again) 2024-03-19 20:06:49 +05:30
a5b9eefcf9 v3.0.8 - update diffusers to v0.26.3 2024-03-19 19:17:10 +05:30
e5519cda37 sdkit 2.0.18 (diffusers 0.26.3) 2024-03-19 19:12:00 +05:30
d1bd9e2a16 Prev version 2024-03-13 19:53:14 +05:30
5924d01789 Temporarily revert to sdkit 2.0.15 2024-03-13 19:05:55 +05:30
47432fe54e diffusers 0.26.3 2024-03-13 19:01:09 +05:30
8660a79ccd v3.0.8 - update diffusers to v0.26.3 2024-03-13 18:44:17 +05:30
dfb26ed781 Merge pull request #1702 from easydiffusion/beta
Beta
2023-12-12 18:10:46 +05:30
547febafba Autosave the VAE tiling setting 2023-12-12 18:10:04 +05:30
85eaa305cc Hotfix for #1701 - run disable VAE tiling only on pipelines that support it 2023-12-12 18:07:56 +05:30
25272ce083 Kofi only 2023-12-12 12:48:13 +05:30
212fa77b47 Merge pull request #1700 from easydiffusion/beta
Beta
2023-12-12 12:47:24 +05:30
e77629c525 Version and changelog 2023-12-11 22:31:12 +05:30
097780be26 Setting to enable/disable VAE tiling 2023-12-11 22:28:19 +05:30
6489cd785d Merge pull request #1648 from michaelachrisco/main
Fix Sampler learn more link
2023-11-05 19:16:14 +05:30
a4e651e27e Click to learn more about samplers should go to wiki page 2023-10-28 23:40:20 -07:00
bedf176e62 Merge pull request #1630 from easydiffusion/beta
Beta
2023-10-12 10:06:42 +05:30
398a0509d7 Banner change 2023-10-12 10:05:43 +05:30
52cc99bf1f Revert "Revert the support banner experiment"
This reverts commit 45a14a9be9.
2023-10-12 09:58:29 +05:30
824e057d7b Merge pull request #1624 from easydiffusion/beta
sdkit 2.0.15 - fix for gfpgan/realesrgan in parallel threads
2023-10-06 09:54:40 +05:30
9bd4b3a6d0 sdkit 2.0.15 - fix for gfpgan/realesrgan in parallel threads with Stable Diffusion 2023-10-05 19:04:19 +05:30
307b00cc05 Merge pull request #1622 from easydiffusion/beta
Beta
2023-10-03 19:38:11 +05:30
8a98df4673 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-10-03 19:35:36 +05:30
45a14a9be9 Revert the support banner experiment 2023-10-03 19:35:23 +05:30
e419276e34 Merge pull request #1621 from easydiffusion/main
Main
2023-10-03 12:45:36 +05:30
0a92b7b1d5 Merge pull request #1620 from easydiffusion/beta
Use sd 2.1.5
2023-10-03 12:42:11 +05:30
f110168366 Use sd 2.1.5 2023-10-03 12:41:51 +05:30
ce24a05909 Merge pull request #1619 from easydiffusion/beta
Beta
2023-10-03 12:39:29 +05:30
45facf64e5 sdkit 2.0.14 - pin transformers 4.33.2 (via sd 2.1.5) and accelerate 0.23.0, and k-diffusion to 0.0.12 2023-10-02 12:08:58 +05:30
e999832c26 Prevent the user from changing the metadata format if the server has set force_save_metadata 2023-09-30 20:11:28 +05:30
4c8d5a7077 Allow setting the metadata field in the server settings, instead of forcing json whenever force_save_path is set 2023-09-29 20:23:24 +05:30
81643cb3af Merge pull request #1611 from easydiffusion/beta
Fix error if a user doesn't have any LoRA models in the folder
2023-09-28 10:06:23 +05:30
7a9bc883df Fix error if a user doesn't have any LoRA models in the folder 2023-09-27 19:32:21 +05:30
6280a80129 Merge pull request #1608 from easydiffusion/beta
LoRA Manager and Upload Thumbnails
2023-09-27 19:24:41 +05:30
a33908b6de Changelog for LoRA manager and 'Upload thumbnails' 2023-09-27 19:22:35 +05:30
0ea5620413 Multi-gpu GFPGAN not fixed yet 2023-09-27 19:03:45 +05:30
e23eb1fea8 Save metadata as json if using force_save_path 2023-09-26 20:58:27 +05:30
41f2c82eaf Save metadata if force_save_path is enabled. We can make this more flexible later 2023-09-26 20:54:16 +05:30
91e3bfe58f Merge pull request #1604 from flavioislima/fix/rocm_url
FIX: ROCM download URL
2023-09-25 14:06:45 +05:30
83d5519a31 Merge pull request #1605 from JeLuF/hover
Fix 'Swap width&height' tooltip
2023-09-25 14:06:11 +05:30
cc2666b9d6 Fix 'Swap w&h' tooltip 2023-09-24 22:06:30 +02:00
954493fef5 FIX: ROCM download URL 2023-09-23 20:08:19 +01:00
967c3681cd Merge pull request #1598 from JeLuF/loraman3
LoraManager: Remove old plugin file
2023-09-19 11:18:41 +05:30
87c9df5c0d Remove old plugin file 2023-09-18 18:55:34 +02:00
62136768d2 typo 2023-09-18 21:25:39 +05:30
b71b7804fc Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-09-18 21:25:22 +05:30
e8b7751374 typo 2023-09-18 21:25:08 +05:30
54d4433141 Merge pull request #1596 from JeLuF/loraman2
LoraManager: Implement 'Upload thumbnail' button
2023-09-18 10:42:25 +05:30
14dbebbc35 LoraManager: Implement 'Upload thumbnail' button 2023-09-17 22:37:17 +02:00
d6a02a31a7 LoraManager: Implement 'Upload thumbnail' button 2023-09-17 22:36:50 +02:00
86e2ac40ae changelog 2023-09-15 19:09:45 +05:30
a12ed7533b Fix broken embeddings dialog when the lora info couldn't be fetched 2023-09-15 19:09:14 +05:30
9fb0ee2d1b Merge pull request #1588 from JeLuF/loraman1
Loramanager fixes
2023-09-15 19:02:17 +05:30
6311b80474 Loramanager fixes
- avoid console errors in python and JS code
- suppress localhost:9000/null links
2023-09-14 23:15:27 +02:00
c13d1093ee sdkit 2.0.12 - actually use the gfpgan fix. 2.0.11 was bad 2023-09-14 20:01:53 +05:30
dd7deeba53 v3.0.6 2023-09-14 19:53:44 +05:30
338aef3e95 sdkit 2.0.11 - fix for gfpgan when using multiple GPUs in parallel 2023-09-14 19:52:50 +05:30
134c98ccb5 Merge pull request #1565 from JeLuF/loramanager
Lora Manager
2023-09-14 19:05:17 +05:30
d12877987f Merge pull request #1584 from easydiffusion/beta
Beta
2023-09-13 18:14:26 +05:30
676316e5e4 Merge pull request #1583 from JeLuF/poor
🔥 FIX Linux installer: Don't use rich
2023-09-13 18:12:56 +05:30
52761ad88c Update check_modules.py 2023-09-13 13:45:21 +02:00
f5e489ba87 Don't use rich
During the first installation, rich is not yet installed
2023-09-13 13:39:34 +02:00
982af1fff3 Merge pull request #1581 from easydiffusion/main
Main
2023-09-13 13:12:32 +05:30
1cff398c20 Merge pull request #1580 from easydiffusion/beta
Beta
2023-09-13 13:12:14 +05:30
a6271d2c4e Merge pull request #1563 from JeLuF/amdperm
AMD/Linux: Warn about file permissions
2023-09-05 16:32:13 +05:30
60f8cc6883 Merge pull request #1567 from easydiffusion/beta
Beta
2023-09-05 16:31:34 +05:30
ffb8feba6b Merge pull request #1564 from JeLuF/wmic
Windows: Show GPU list and driver versions in log
2023-09-05 16:18:42 +05:30
4aca3c4639 Lora Manager 2023-09-04 01:36:32 +02:00
120f9e567c Windows: Show GPU list and driver versions in log 2023-09-03 13:49:30 +02:00
c0492511df AMD/Linux: Warn about file permissions 2023-09-03 13:44:06 +02:00
1075a5ed93 changelog 2023-09-02 19:30:56 +05:30
58d3507155 sdkit 2.0.10 - SDXL ControlNet support; upgrade to diffusers 0.20.2 2023-09-02 19:30:27 +05:30
ae0c9b6a6b Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-09-02 18:34:33 +05:30
ad1374af1d bring back config print 2023-09-02 18:34:12 +05:30
8436e8a71e Merge pull request #1560 from JeLuF/mdir-err
Error handling for models_dir
2023-09-02 08:23:54 +05:30
ea07483465 Error handling for models_dir 2023-09-01 22:54:03 +02:00
51f857c3f3 Merge pull request #1559 from easydiffusion/beta
Beta
2023-09-01 20:30:54 +05:30
74c0ca0902 changelog 2023-09-01 19:53:51 +05:30
ad5641fa3e Fix incorrect metadata generation of embeddings, by removing duplicated logic. The UI already handles this 2023-09-01 19:52:20 +05:30
b0294f8cbd Support banner 2023-09-01 19:31:46 +05:30
5d4498ff85 changelog 2023-09-01 19:29:54 +05:30
d52fb15746 Merge pull request #1558 from easydiffusion/beta
Revert "Continue using uvicorn directly on windows"
2023-09-01 18:29:43 +05:30
ee6be74e72 Revert "Continue using uvicorn directly on windows"
This reverts commit 3a5e0cb2d2.
2023-09-01 18:29:19 +05:30
4cbc86f945 Merge pull request #1557 from easydiffusion/beta
Continue using uvicorn directly on windows
2023-09-01 17:52:19 +05:30
3a5e0cb2d2 Continue using uvicorn directly on windows 2023-09-01 17:51:15 +05:30
7916b8d26a Merge pull request #1556 from easydiffusion/beta
Ignore unknown AMD GPUs
2023-09-01 17:05:04 +05:30
a0842b4659 Ignore unknown AMD GPUs 2023-09-01 17:04:38 +05:30
14ee87ca80 Merge pull request #1555 from easydiffusion/beta
AMD on Linux
2023-09-01 15:58:35 +05:30
cec1d7d6c9 hide debug log 2023-09-01 13:30:46 +05:30
9aeae4d16e note to self 2023-09-01 13:25:00 +05:30
9c1b741d89 Relative path for src 2023-09-01 13:17:50 +05:30
c71a74f857 Merge pull request #1491 from JeLuF/launcher
Pythonize the uvicorn startup
2023-09-01 13:10:45 +05:30
524612cee5 Different PYTHONPATH for Windows and Linux/Mac 2023-09-01 13:09:01 +05:30
11e47b3871 Merge pull request #1554 from easydiffusion/main
Main
2023-09-01 12:51:47 +05:30
4a1b2be45c Merge pull request #1553 from easydiffusion/beta
Beta
2023-09-01 12:51:23 +05:30
d641aa2f6e Fix ordering of help topics 2023-09-01 11:08:11 +05:30
237c7a5348 3.0.4 2023-09-01 10:42:29 +05:30
19f37907d9 Allow changing the models directory via a setting, to share models with other locations on the disk 2023-09-01 10:40:18 +05:30
b8706da990 Merge pull request #1548 from easydiffusion/beta
Beta
2023-08-31 22:28:03 +05:30
b458d57355 Keep the old test_diffusers id around to prevent broken plugins 2023-08-31 22:24:27 +05:30
a5962dae33 Allow underscore in embeddings path 2023-08-31 22:19:04 +05:30
670768e5b3 Allow hyphens in embeddings 2023-08-31 22:16:48 +05:30
f02b915cd0 Fix typo when using force_save_path 2023-08-31 22:11:42 +05:30
71bbbeb936 Update help topics 2023-08-31 21:25:29 +05:30
e084b78b53 Update README.md 2023-08-31 20:14:06 +05:30
013860e3c0 Merge pull request #1546 from easydiffusion/beta
Use v3 for everyone
2023-08-31 20:03:57 +05:30
7a118eeb15 Rename the test_diffusers config key to upgrade all the existing users to the v3 engine. Users can now opt to disable v3. This upgrades existing users who had maybe tried diffusers many months ago (when it was still unstable) and decided against it (at that time). 2023-08-31 19:20:26 +05:30
df408b25e5 changelog 2023-08-31 15:59:23 +05:30
536082c1a6 Save filtered images to disk if required by the API, e.g. when clicking 'Upscale' or 'Fix Faces' on the image 2023-08-31 15:57:53 +05:30
b986ca3059 Update README.md 2023-08-31 12:44:52 +05:30
4bf9e577b9 Merge pull request #1541 from easydiffusion/beta
Beta
2023-08-31 09:57:11 +05:30
a7c12e61d8 Fix incorrect tiling message in the task info 2023-08-30 19:32:29 +05:30
847d27bffb sdkit 2.0.9 - another fix for torch 2.0 and onnx export 2023-08-30 19:32:09 +05:30
781e812f22 sdkit 2.0.8 - temp hack for allowing onnx export on pytorch 2.0 2023-08-30 18:51:34 +05:30
e49b5e0e6b changelog 2023-08-30 18:24:46 +05:30
8f1c1b128e sdkit 2.0.7 - Allow loading NovelAI-based models 2023-08-30 18:24:23 +05:30
04cbb052d7 bump version 2023-08-30 17:54:19 +05:30
16f0950ebd sdkit 2.0.6 - Fix broken VAE tiling 2023-08-30 17:42:50 +05:30
e959a3d7ab ui 2023-08-30 17:42:32 +05:30
fc9941abaa Merge pull request #1539 from easydiffusion/beta
Server-side setting to block_nsfw
2023-08-30 16:25:04 +05:30
f177011395 changelog 2023-08-30 16:22:08 +05:30
80e47be5a5 Prevent block_nsfw from getting edited via the HTTP api 2023-08-30 16:22:05 +05:30
9a9f6e3559 Server-side config to allow force-blocking of NSFW images 2023-08-30 16:13:10 +05:30
1a6e0234b3 Merge pull request #1538 from easydiffusion/beta
Beta
2023-08-30 15:35:21 +05:30
56bea46e3a Use absolute config path 2023-08-30 15:34:55 +05:30
a09441b2c8 Change the tensorrt installation commands to what NVIDIA suggested over chat 2023-08-30 15:14:05 +05:30
105994d96d Merge pull request #1536 from easydiffusion/beta
Beta
2023-08-30 14:58:24 +05:30
d641647b1e Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-30 14:57:53 +05:30
672574d278 sdkit 2.0.5 - don't download the safety checker unless necessary 2023-08-30 14:57:37 +05:30
f1ded17399 Merge pull request #1535 from easydiffusion/beta
Beta
2023-08-30 14:44:41 +05:30
d254e3e2fd Merge pull request #1534 from easydiffusion/main
Main
2023-08-30 14:44:09 +05:30
ab5450bb27 Don't download codeformer and controlnet if not being used 2023-08-30 14:39:43 +05:30
a2e9e5eb57 Remove old files 2023-08-30 13:18:03 +05:30
8965f11ab4 Update CONTRIBUTING.md 2023-08-30 13:15:19 +05:30
1dd5644e7a Update build.bat and build.sh to create the installers for Windows and Mac/Linux (respectively) 2023-08-30 13:09:12 +05:30
37f813506e Merge pull request #1533 from easydiffusion/beta
Beta
2023-08-29 20:11:44 +05:30
a5d5ed90e6 Merge pull request #1532 from easydiffusion/main
Main
2023-08-29 20:10:26 +05:30
3792a1bc0d sdkit 2.0.4 - use sd 1.5 fp16 by default, if no model is present 2023-08-29 20:06:54 +05:30
fbafa56ecb Use torch 2.0.1 and torchvision 0.15.2 by default on Windows 2023-08-29 18:52:06 +05:30
2f910c69b8 unused file 2023-08-29 17:54:23 +05:30
bf06cc48bb Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-29 17:52:11 +05:30
3ef67ebc73 NSIS - v3, create from an existing installation 2023-08-29 17:51:58 +05:30
0c4318fb31 Update README.md 2023-08-29 17:49:50 +05:30
c55ced93db Update FUNDING.yml 2023-08-29 17:48:50 +05:30
4bd89ab2e1 How to run 2023-08-29 15:27:03 +05:30
807d940001 Merge pull request #1528 from JeLuF/inputmode
inputmode=numeric/decimal for <input> fields
2023-08-29 15:09:13 +05:30
d4427b97ae Merge pull request #1531 from easydiffusion/beta
Unset PYTHONHOME
2023-08-29 15:05:37 +05:30
4f336d9f25 Merge pull request #1260 from JeLuF/patch-25
Unset PYTHONHOME
2023-08-29 14:58:57 +05:30
1565530b0f Merge pull request #1530 from easydiffusion/beta
v3 in scripts
2023-08-29 14:55:30 +05:30
a21b01a0cd Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-29 14:54:43 +05:30
1c7e90576d v3 in scripts 2023-08-29 14:54:31 +05:30
8c27fa136c inputmode=numeric/decimal for <input> fields 2023-08-29 10:02:06 +02:00
c8de1cd49b Merge pull request #1527 from easydiffusion/beta
v3
2023-08-29 12:27:35 +05:30
5eb36e131d Merge branch 'main' into beta 2023-08-29 12:15:17 +05:30
b5d1adaa19 Use latest download link to avoid having to manually update readme (#1519) 2023-08-29 11:00:38 +05:30
b89d152540 Support lora models in subfolders when scanning the <lora> tag (#1521)
* Recursive lora search

* Support lora models in subfolders when scanning the <lora> tag
2023-08-29 10:48:57 +05:30
e49772030d changelog 2023-08-29 10:28:06 +05:30
b1cb03962c Fix embedding extraction for weights, commas, etc. This fixes the recent change where 'world' would match 'rld'. 2023-08-29 10:23:05 +05:30
a7b0858b22 Temporarily hide the support banner until v3 releases to main. Avoids the distraction from handling support/bugs during v3's release, will bring this back soon 2023-08-29 09:43:31 +05:30
ad227ca190 Remove safetensors hack (#1525)
We should upgrade to 0.3.3, since it has wheels for all our supported platforms.
2023-08-29 09:21:44 +05:30
a8360484b2 Remove safetensors hack (#1525)
We should upgrade to 0.3.3, since it has wheels for all our supported platforms.
2023-08-29 09:20:32 +05:30
80c4a50ca1 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-29 09:20:25 +05:30
768b88a0ac sdkit 2.0.3 - use safetensors 0.3.3 2023-08-29 09:19:46 +05:30
82607573fa Update check_modules.py 2023-08-25 21:15:16 +05:30
d07e00cd74 Update check_modules.py 2023-08-25 21:15:06 +05:30
dfdd2b32e0 Update check_modules.py 2023-08-25 19:23:06 +05:30
844edbc865 safetensors 0.3.3 2023-08-25 19:07:26 +05:30
2bc66cc640 Merge branch 'banner' into beta 2023-08-24 18:24:28 +05:30
f9f9aba92d f 2023-08-24 17:39:39 +05:30
3f278cf2ad Pick the right embedding even if it has an underscore 2023-08-24 17:36:49 +05:30
cb7ba96dad Fix handling of embeddings with space in their name (#1402)
* Fix handling of files with space in their name

* Handle embeddings in save files

* Moved get_embedding_token

* Moved get_embedding_token

* Update save_utils.py
2023-08-24 16:32:17 +05:30
31edce4a60 Add ".pt" to the Lora extensions (#1518)
https://discord.com/channels/1014774730907209781/1014774732018683926/1144179143873929288

There seem to be ".pt" LORA files in the wild.
2023-08-24 16:09:42 +05:30
1b6aae9678 Cancel/Stop/Remove task buttons (#1493)
* Cancel/Stop/Remove task
So far, the button to remove a not yet rendered and a completed task was labeled 'Remove'. This can lead to confusions.
This PR changes the label to 'Cancel' for not yet rendered tasks. It also changes the color of the undoable 'Remove' button

* Keep the button color as red
2023-08-24 16:07:58 +05:30
9572ddf1c1 sdkit 2.0.2 - fix broken seamless tiling 2023-08-24 14:59:44 +05:30
3bbce82454 Force safetensors 0.3.2 for sdkit, the newer version has issues during installation 2023-08-23 22:09:49 +05:30
1f44cebd0e Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-23 22:05:52 +05:30
843ea58c15 Force safetensors 0.3.2 for sdkit, the newer version has issues during installation 2023-08-23 22:04:57 +05:30
1e13c4e808 Support banner with kofi, patreon and itch.io links 2023-08-23 19:50:38 +05:30
ab8f10ae4a Update FUNDING.yml 2023-08-23 16:01:54 +05:30
c62161770d v3 readme 2023-08-23 13:52:37 +05:30
15b828b0f5 typo 2023-08-23 13:29:36 +05:30
faa83a87df rename kofi link 2023-08-23 13:21:46 +05:30
796c12bc4c Update index.html 2023-08-23 13:21:10 +05:30
50da182e30 sdkit 2.0.0 - enable diffusers by default 2023-08-23 13:02:40 +05:30
dba573bf1a Show v3 engine selection even without beta 2023-08-23 13:02:20 +05:30
6a0eef3fe4 Show tabs on mobile 2023-08-23 12:19:35 +05:30
98f58e8672 Download button styling on mobile 2023-08-23 12:15:35 +05:30
04274f5839 Fix styling on mobile devices 2023-08-23 12:11:06 +05:30
f387b9f464 changelog 2023-08-23 11:07:38 +05:30
b8f533d0ea v3 changelog summary 2023-08-23 11:06:53 +05:30
5a49818a10 changelog 2023-08-22 19:06:38 +05:30
ad9d9e0b04 sdkit 1.0.185 - full support for custom inpainting models 2023-08-22 18:57:41 +05:30
c92470ff7e sdkit 1.0.184 - fix broken SD2 inpainting model 2023-08-22 17:45:00 +05:30
1cd9c7fdac Show the negative embeddings button only if the negative prompt panel is open 2023-08-22 16:59:40 +05:30
e607035c65 changelog 2023-08-22 16:11:20 +05:30
bde8113414 sdkit 1.0.183 - reduce VRAM usage of controlnet in low vram mode, and allow accelerating controlnets with xformers 2023-08-22 16:10:02 +05:30
1fd011b1be Don't fail if the prompt strength is too low 2023-08-22 15:41:21 +05:30
061380742c version 2023-08-22 15:17:24 +05:30
8f9feb3ed9 changelog 2023-08-22 15:16:46 +05:30
0dc01cb974 sdkit 1.0.182 - improve detection of SD 2.0 and 2.1 models, auto-detect v-parameterization and improve load time by speeding up the black-image test 2023-08-22 15:14:53 +05:30
55af328181 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-22 12:02:42 +05:30
a8c0abfd5d Use document event handling for task events 2023-08-22 12:02:34 +05:30
4807744aa7 sdkit 1.0.181 - fix typo in watermark skip 2023-08-22 11:42:38 +05:30
669d40a9d2 fix embedding parser and use standard embedding variable for metadata (#1516) 2023-08-22 09:08:59 +05:30
18049d529a Fix the lora prompt parser 2023-08-21 19:45:26 +05:30
f2b441d9fc typo 2023-08-21 19:20:23 +05:30
d2078d4dde sdkit 1.0.180 - auto-download the tile controlnet if necessary, by fixing the id in the models db 2023-08-21 19:17:30 +05:30
41d4ad2096 sdkit 1.0.179 - tile controlnet 2023-08-21 14:29:54 +05:30
29ec8291ad Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-21 14:29:05 +05:30
b93a206a48 sdkit 1.0.178 - prevent image size errors when blending mask with strict mask border 2023-08-21 14:04:40 +05:30
be83336cf7 WebP metadata support (#1511)
* WebP metadata support
- Replace piexif.js by Exif-Reader
- Merge plugin html/css to index.html/main.css

* Add webp to tooltip message
2023-08-21 13:28:26 +05:30
19fdba7d73 Send the controlnet filter preview request only when the task is about to start 2023-08-21 11:04:13 +05:30
2c2b3b75d5 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-19 13:18:05 +05:30
47d5cb9e33 Refactor some of the task-related functions into task-manager.js, with callbacks for some UI-related code. This isn't exhaustive, just another step towards breaking up main.js 2023-08-19 13:15:32 +05:30
7b8e1bc919 type=number for number of images fields (#1507) 2023-08-19 09:51:34 +05:30
77aa7a0148 Improvements/Fixes for embeddings UI (#1509)
- Don't show "Use as thumbnail" if no embeddings were used in the prompt
- Fix layout issue on small screens
- use req.use_embeddings_model instead of parsing the prompt
- Add button to upload a thumb" ../../index.html ../css/main.css main.js
2023-08-19 09:50:40 +05:30
bdd7d2599f API to get SHA256 of a model file (#1510)
To be used from javascript to collect metadata from civitai
https://civitai.com/api/v1/model-versions/by-hash/0A35347528
2023-08-19 09:47:59 +05:30
ca8a96f956 Don't show or save hypernetwork info if using v3 (diffusers) 2023-08-18 19:09:14 +05:30
8957250db8 changelog 2023-08-18 19:02:14 +05:30
1b6ec418a1 sdkit 1.0.177 - rotate images if EXIF rotation present 2023-08-18 19:01:16 +05:30
3759d77945 changelog 2023-08-18 18:44:16 +05:30
ab4d34e509 sdkit 1.0.176 - resize control images to the task dimensions, to avoid memory errors with high-res control images 2023-08-18 18:43:23 +05:30
7f878f365b Don't add hypernetwork info in the metadata if using diffusers (v3 engine) 2023-08-18 18:23:21 +05:30
5efabfaea6 Don't include hypernetwork info in 'copy settings' if using diffusers 2023-08-18 18:10:35 +05:30
4cd8ae45e3 changelog 2023-08-18 18:01:15 +05:30
8999f9450f Show control image when 'Use these Settings' is used 2023-08-18 17:59:00 +05:30
1d54943d71 Support drag-and-drop and use-these-settings for controlnet 2023-08-18 17:53:16 +05:30
767d8fc35d Use these Settings work for multi-lora now 2023-08-18 17:18:01 +05:30
894f34678e Some more fixes for multi-lora use-these-settings 2023-08-18 17:08:19 +05:30
1190bedafd Don't include empty lora values in the metadata 2023-08-18 17:03:41 +05:30
e80db71d1c Allow downloading the controlnet preview image 2023-08-18 16:24:17 +05:30
846bb2134e changelog 2023-08-18 16:14:30 +05:30
38b2eec4be Show controlnet preview in the task entry after applying the filter 2023-08-18 16:14:01 +05:30
8dafe486a2 Show controlnet model in the task info 2023-08-18 14:19:50 +05:30
c895a96a43 changelog 2023-08-18 14:16:44 +05:30
67cae9725e Fix drag-and-drop and Use these Settings for LoRA 2023-08-18 14:16:23 +05:30
a2d06f87f6 Use the new lora models component while creating the render request 2023-08-18 13:27:00 +05:30
8e4afc8374 changelog 2023-08-18 13:18:39 +05:30
afd879a692 Auto-save LoRAs 2023-08-18 13:18:06 +05:30
83de2b8de7 formatting 2023-08-17 16:09:47 +05:30
4930f36a1a Remove tensorrt demo settings 2023-08-17 16:09:37 +05:30
fa3f196add Ignore commas while looking for embeddings 2023-08-17 16:09:17 +05:30
95004be0e9 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-17 15:29:16 +05:30
281a849c8f changelog 2023-08-17 15:28:56 +05:30
5d82ce665c sdkit 1.0.175 - Automatically check for SDXL and use the correct yaml file 2023-08-17 15:27:48 +05:30
d632cfcde9 Hide empty folders in embeddings search results (#1506) 2023-08-17 13:23:13 +05:30
07f797a5e4 Negative embedding label 2023-08-17 12:32:50 +05:30
121107dd13 Shadow in dialogs 2023-08-17 12:30:18 +05:30
a2479b74be Show thumbnails for embeddings 2023-08-17 12:26:23 +05:30
7ee1d3cd91 Remove the embeddings-in-low-VRAM warning (again) 2023-08-17 12:05:36 +05:30
4b28ddd691 Typo 2023-08-17 11:53:59 +05:30
7270b5fe0c Thumbnails for Embeddings (#1483)
* sqlalchemy

* sqlalchemy

* sqlalchemy bucket v1

* Bucket API

* Move easydb files to its own folders

* show images

* add croppr

* croppr ui

* profile, thumbnail croppr size limit

* fill list

* add upload

* Use modifiers card, resize thumbs

* Remove debugging code

* remove unused variable
2023-08-17 11:33:05 +05:30
285792f692 Controlnet thumb in taskConfig (#1502) 2023-08-17 11:18:47 +05:30
23a0a48b81 Warn when no controlnet model is chosen (#1503)
* Warn when no controlnet model is chosen

* Update main.js
2023-08-17 11:18:00 +05:30
2baad73bb9 Error messages for SDXL yaml files (#1504)
* Error messages for SDXL embeddings and SDXL yaml files

* Embeddings are supported now with SDXL
2023-08-17 11:08:18 +05:30
097dc99e77 changelog 2023-08-17 11:01:13 +05:30
edd10bcfe7 sdkit 1.0.174 - embedding support for SDXL models, refactor embeddings to use the standard context.models API in sdkit 2023-08-17 10:55:26 +05:30
ac1c65fba1 Move the extraction logic for embeddings-from-prompt, from sdkit to ED's UI 2023-08-17 10:54:47 +05:30
b4cc21ea89 changelog 2023-08-16 16:02:35 +05:30
3dfc3f5ff7 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-16 16:02:03 +05:30
7c012df1d5 sdkit 1.0.173 - SDXL LoRA support 2023-08-16 15:59:07 +05:30
a1854d3734 'Use for Controlnet', Drag'n'Drop (#1501)
* Drop area for Controlnet

* 'Use for Controlnet', DND
2023-08-16 10:23:12 +05:30
074c566826 changelog 2023-08-15 19:06:48 +05:30
a2e7bfb30e sdkit 1.0.172 - fix broken tiling after diffusers 0.19.2 upgrade 2023-08-15 19:05:46 +05:30
01c1c77564 formatting 2023-08-15 16:58:12 +05:30
34de4fe8fe Remove warning about embeddings in low vram mode, works now 2023-08-15 16:57:51 +05:30
4975f8167e changelog 2023-08-15 16:50:21 +05:30
6777459e62 sdkit 1.0.171 - fix embeddings in low vram usage mode 2023-08-15 16:49:51 +05:30
253d0dbd5e changelog 2023-08-15 16:02:10 +05:30
e98bd70871 sdkit 1.0.170 - fix VAE in low vram mode 2023-08-15 16:00:47 +05:30
6a216be5cb Force-clean the local git repo, even if it has some unmerged changes 2023-08-15 13:31:08 +05:30
0adb7831e7 Use the correct nvidia wheels path 2023-08-15 12:53:16 +05:30
30ca98b597 sdkit 1.0.169 - don't clip-skip with SDXL, isn't supported yet 2023-08-14 19:02:56 +05:30
e80001e8c8 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-14 18:52:50 +05:30
b5490f7712 changelog 2023-08-14 18:52:38 +05:30
dc5748624f sdkit 1.0.168 - fail loudly if an embedding fails to load, disable watermarks for sdxl img2img 2023-08-14 18:52:20 +05:30
91fb82e9b6 Mention minimum CUDA HW level (#1472) 2023-08-14 18:27:14 +05:30
84c5a759d4 Resize slider for the advanced image popup (#1490) 2023-08-14 18:26:33 +05:30
ec43aa2f18 Merge pull request #1486 from ManInDark/beta2
Change Cursor type on logo hover
2023-08-14 18:25:05 +05:30
8e7a6077e5 Make it work on Windows 2023-08-13 15:01:32 +02:00
53a79c1a81 Automatically detect whether NAVI1/2 or NAVI3 ROCm versions are needed 2023-08-11 22:36:20 +02:00
e9f54c8bae Launch uvicorn from check_modules.py 2023-08-11 21:31:45 +02:00
c978863e5f Add uvicorn-launch to check_modules.py 2023-08-08 22:16:57 +02:00
12fa08d7a7 Changed cursor on logo hover to indicate that it can be clicked 2023-08-08 21:45:18 +02:00
50dea4cb52 Use --pre for trt installs 2023-08-04 19:48:24 +05:30
20b06db359 Include onnx and polygraphy for TRT, and allow skipping the wheels for TRT 2023-08-04 19:46:37 +05:30
b6e512e65f v3 engine name 2023-08-04 11:21:31 +05:30
7d71c353b2 changelog 2023-08-04 11:19:35 +05:30
2adf43274c Fix regression with new installations not being able to start ED 2023-08-03 21:02:02 +05:30
3216a68d63 Fix regression with new installations not being able to start ED 2023-08-03 21:01:21 +05:30
df518f822c SDXL 2023-08-03 20:11:27 +05:30
abdf0b6719 changelog 2023-08-03 20:02:52 +05:30
2d2a75f23c v2.6.0 (beta) - switch beta to use diffusers by default 2023-08-03 20:01:27 +05:30
fcb59c68d4 sdkit 1.0.167 - fix reversing loras 2023-08-03 19:42:56 +05:30
d47816e7b9 sdkit 1.0.166 - check for lora to SD model compatibility only for text-based loras 2023-08-03 18:47:58 +05:30
21297d98f2 Option to disable LoRA tag parsing 2023-08-03 18:38:25 +05:30
cc7452374d Remove hypernetworks from the UI options in diffusers. Sorry 2023-08-03 17:43:49 +05:30
851aa7aaaf Merge pull request #1465 from easydiffusion/beta
Keep IMAGE_STEP_SIZE synchronized across all the clamps
2023-08-03 17:41:52 +05:30
376d238ad8 Keep IMAGE_STEP_SIZE synchronized across all the clamps 2023-08-03 17:40:26 +05:30
e0998e227f Merge pull request #1463 from easydiffusion/beta
Beta
2023-08-03 17:20:18 +05:30
07b584b3b4 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-03 17:08:38 +05:30
d35a89bb01 Show the controlnet buttons only for diffusers 2023-08-03 17:08:21 +05:30
22a6fe7721 Merge branch 'main' into beta 2023-08-03 16:56:38 +05:30
404329f9b5 Fix for image modifier improvements plugin 2023-08-03 16:33:20 +05:30
3929e88d87 Include the lora parser plugin as a core feature 2023-08-03 16:20:27 +05:30
83a5b5b46f Clamp controlnet images to multiples of 8 2023-08-03 15:51:39 +05:30
b97c906128 Fix a bug where setting an initial image would mess up the width and height field 2023-08-03 15:49:01 +05:30
b8328b6071 sdkit 1.0.165 - warn users about incompatible loras 2023-08-03 15:14:00 +05:30
9a528496a3 Reload the model if the path exists in the request but the model has been unloaded 2023-08-03 15:13:41 +05:30
6a95c602b1 sdkit 1.0.164 - Warn the user if the controlnet isn't compatible with the SD model version 2023-08-03 12:43:30 +05:30
f0f6578b9c Round image sizes to a multiple of 8 2023-08-03 10:22:24 +05:30
83c93eb9ef sdkit 1.0.163 - trt multi-gpu fix 2023-08-02 21:53:11 +05:30
befe8ad24e TRT logging 2023-08-02 18:55:09 +05:30
c5249e6144 TRT styling 2023-08-02 16:45:12 +05:30
9be3297c27 sdkit 1.0.162 - bug fixes for TRT 2023-08-02 16:42:24 +05:30
b6344ef6f9 sdkit 1.0.161 - bug fixes for TRT 2023-08-02 16:37:19 +05:30
76b7e32125 Bug fixes for TRT 2023-08-02 16:37:05 +05:30
801a3dd598 sdkit 1.0.160 - Dynamic load/unload of TensorRT engines 2023-08-02 15:34:55 +05:30
d1fdf1766a Allow batch size ranges again in TRT 2023-08-02 14:03:59 +05:30
35073adc1f Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2023-08-02 12:36:15 +05:30
d76930c7f4 sdkit 1.0.159 - typo in TensorRT forward 2023-08-02 12:35:44 +05:30
7d496f4ad0 Add ControlNet model and filter to metadata (#1454) 2023-08-02 10:13:21 +05:30
53b5ce6e2c typo 2023-08-02 00:10:19 +05:30
38ab5b090f TRT ui changes 2023-08-02 00:08:43 +05:30
fa58996f37 sdkit 1.0.157 - tensorRT build configuration from the UI; clamp images to 8 instead of 64 pixels 2023-08-01 23:53:01 +05:30
56f92ccab0 Don't restrict TRT to batch size 1 2023-08-01 21:24:22 +05:30
4e444b418e sdkit 1.0.156 - missing jsons for controlnet 2023-08-01 18:23:34 +05:30
3d9a9299dc changelog 2023-08-01 17:42:15 +05:30
ae34c9e84b Download known controlnet models if selected; Auto-pick the recommended controlnet model when a filter is selected 2023-08-01 17:39:04 +05:30
eba7bab15e Allow named models in the dropdown 2023-08-01 16:16:38 +05:30
ee6db85768 Initial support for Controlnet 2023-08-01 15:39:15 +05:30
05ed110519 Don't show parallel field for tensorrt demo 2023-08-01 13:02:53 +05:30
9690fd1fa8 sdkit 1.0.154 - restrict tensorrt from 768x768 to 1024x1024, and Unet-only, to avoid going out of memory 2023-08-01 12:45:56 +05:30
4cee1be99c Default settings for TensorRT demo; Don't show splash screen for diffusers 2023-08-01 12:43:23 +05:30
d39e1da183 Fixes for TensorRT 2023-08-01 11:49:30 +05:30
8538a684e7 sdkit 1.0.153 - use TensorRT only if enabled in the UI 2023-07-31 13:19:56 +05:30
47d7513dd8 sdkit 1.0.152 - fix for black images with TensorRT, and enable a timing cache 2023-07-31 12:54:05 +05:30
432fd57581 Use the desired output format and quality while applying the quick filter 2023-07-30 14:06:31 +05:30
9c06e2612a changelog 2023-07-30 13:53:27 +05:30
1d6742f463 sdkit 1.0.151 - An option to use strict mask borders 2023-07-30 13:51:19 +05:30
2e849827d1 Restore width/height dropdown (#1445) 2023-07-30 10:16:04 +05:30
1e2c9ecb41 Use nvidia pypi index url for linux 2023-07-29 22:24:34 +05:30
14679586a8 changelog 2023-07-29 22:04:03 +05:30
11fb83a2a7 sdkit 1.0.148 - fix watermarking which is causing image artifacts in SDXL; fix SDXL long prompts with compel 2.0.1 2023-07-29 22:03:39 +05:30
4d3f55622a Support more image sizes (#1441)
* Support more image sizes
With diffusers, width and height must be a multiple of 8 (instead of 64), allowing more resolution values.

* Add swap button

* Change popup button icon
2023-07-29 21:42:48 +05:30
eedf6f0aad changelog 2023-07-29 21:30:43 +05:30
13592fae1a sdkit 1.0.147 - diffusers 0.19.2 - fix red specs in SDXL images 2023-07-29 21:29:24 +05:30
4dd05d3efe Merge branch 'trt' into beta 2023-07-29 21:10:00 +05:30
2e3059a7c8 UI for TensorRT installation and conversion 2023-07-29 21:09:27 +05:30
3b53b5ebaf sdkit 1.0.144 - use prompts for SDXL refiner; use VAE slicing and VAE tiling for larger images 2023-07-29 12:42:34 +05:30
a9f1000af8 Install button for TensorRT - displayed only if an NVIDIA gpu is active 2023-07-29 11:41:44 +05:30
a9960ded01 Styling 2023-07-29 10:14:52 +05:30
ed84a23f36 Redo button for image filters, limit undo buffer size to 5 2023-07-29 10:07:41 +05:30
8301cafb37 changelog 2023-07-29 09:23:15 +05:30
c906c5d14a Don't rely on old keys to exist in the request 2023-07-29 09:14:00 +05:30
6e52680fa8 Fast in-place upscale and face fix buttons, with an option to undo the operations 2023-07-28 22:48:41 +05:30
7f32c531d7 sdkit 1.0.143 - Fixes for the new beta 2023-07-28 19:14:40 +05:30
17a11b94b2 changelog 2023-07-28 18:59:10 +05:30
e61549e0cd Mega refactor of the task processing and rendering logic; Split filter into a separate task, and add support for running filter tasks individually; Change the format for sending model and filter data from the API, but maintain backwards compatibility for now with the old API 2023-07-28 18:57:28 +05:30
710208f376 Update README.md 2023-07-19 23:33:57 +05:30
788404f66a Update README.md 2023-07-12 10:29:19 +05:30
324226f87d Merge pull request #1379 from easydiffusion/yaml-legacy-path
Handle the legacy yaml config path
2023-06-30 16:31:16 +05:30
3120b593c6 Handle the legacy yaml config path 2023-06-30 16:30:29 +05:30
d98e4772ac Merge pull request #1378 from easydiffusion/yaml-to-json
Allow main to switch back from yaml to json config files
2023-06-30 15:55:02 +05:30
cf87c34bef Allow main to switch back from yaml to json config files 2023-06-30 15:53:54 +05:30
656acafed3 Don't read config.yaml just yet in the main branch 2023-06-30 09:52:10 +05:30
5bc0d1f762 Merge pull request #1366 from easydiffusion/beta
Fix broken save settings
2023-06-26 15:35:34 +05:30
07e30ae4ad Merge pull request #1365 from easydiffusion/beta
Beta
2023-06-26 15:05:40 +05:30
8ced5b7199 Merge pull request #1344 from easydiffusion/beta
Beta
2023-06-13 17:08:46 +05:30
a2856b2b77 Update README.md 2023-06-08 16:47:50 +05:30
3045f5211f Merge pull request #1321 from cmdr2/beta
Tiling and other bug fixes
2023-06-01 16:53:51 +05:30
41ecc822df Merge pull request #1305 from JeLuF/patch-27
Update "How to install and run.txt"
2023-05-26 15:25:31 +05:30
ce2a42ca13 Update "How to install and run.txt" 2023-05-25 20:18:19 +02:00
c0dcf1633c Unset PYTHONHOME in start.sh 2023-05-09 18:07:46 +02:00
d8447ef1a9 Unset PYTHONHOME in Start Stable Diffusion UI.cmd
PYTHONHOME needs to be deleted before conda gets called for the first time.
2023-05-09 18:04:45 +02:00
67 changed files with 7617 additions and 4842 deletions

.github/FUNDING.yml (vendored, 2 changed lines)

@@ -1,3 +1,3 @@
 # These are supported funding model platforms
-ko_fi: cmdr2_stablediffusion_ui
+ko_fi: easydiffusion

.gitignore (vendored, 4 changed lines)

@@ -3,4 +3,6 @@ installer
 installer.tar
 dist
 .idea/*
-node_modules/*
+node_modules/*
+.tmp1
+.tmp2


@@ -712,3 +712,411 @@ FileSaver.js is licensed under the MIT license:
SOFTWARE.
[1]: http://eligrey.com
croppr.js
=========
https://github.com/jamesssooi/Croppr.js
croppr.js is licensed under the MIT license:
MIT License
Copyright (c) 2017 James Ooi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
ExifReader
==========
https://github.com/mattiasw/ExifReader
ExifReader is licensed under the Mozilla Public License:
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at https://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

View File

@ -1,5 +1,61 @@
# What's new?
## v3.0
### Major Changes
- **ControlNet** - Full support for ControlNet, with native integration of the common ControlNet models. Just select a control image, then choose the ControlNet filter/model and run. No additional configuration or download necessary. Supports custom ControlNets as well.
- **SDXL** - Full support for SDXL. No configuration necessary, just put the SDXL model in the `models/stable-diffusion` folder.
- **Multiple LoRAs** - Use multiple LoRAs, including SDXL and SD2-compatible LoRAs. Put them in the `models/lora` folder.
- **Embeddings** - Use textual inversion embeddings easily, by putting them in the `models/embeddings` folder and using their names in the prompt (or by clicking the `+ Embeddings` button to select embeddings visually). Thanks @JeLuf.
- **Seamless Tiling** - Generate repeating textures that can be useful for games and other art projects. Works best in 512x512 resolution. Thanks @JeLuf.
- **Inpainting Models** - Full support for inpainting models, including custom inpainting models. No configuration (or yaml files) necessary.
- **Faster than v2.5** - Nearly 40% faster than Easy Diffusion v2.5, and can be even faster if you enable xFormers.
- **Even less VRAM usage** - Less than 2 GB for 512x512 images on 'low' VRAM usage setting (SD 1.5). Can generate large images with SDXL.
- **WebP images** - Supports saving images in the lossless webp format.
- **Undo/Redo in the UI** - Remove tasks or images from the queue easily, and undo the action if you removed anything accidentally. Thanks @JeLuf.
- **Three new samplers, and latent upscaler** - Added `DEIS`, `DDPM` and `DPM++ 2m SDE` as additional samplers. Thanks @ogmaresca and @rbertus2000.
- **Significantly faster 'Upscale' and 'Fix Faces' buttons on the images**
- **Major rewrite of the code** - We've switched to using diffusers under the hood, which allows us to release new features faster, and focus on making the UI and installer even easier to use.
### Detailed changelog
* 3.0.9c - 6 Feb 2025 - (Internal code change) Remove hardcoded references to `torch.cuda`, and replace with torchruntime's device utilities.
* 3.0.9b - 28 Jan 2025 - Fix a bug affecting older versions of Easy Diffusion, which tried to upgrade to an incompatible version of PyTorch.
* 3.0.9b - 4 Jan 2025 - Replace the use of WMIC (deprecated) with a powershell call.
* 3.0.9 - 28 May 2024 - Slider for controlling the strength of controlnets.
* 3.0.8 - 27 May 2024 - SDXL ControlNets for Img2Img and Inpainting.
* 3.0.7 - 11 Dec 2023 - Setting to enable/disable VAE tiling (in the Image Settings panel). Sometimes VAE tiling reduces the quality of the image, so this setting will help control that.
* 3.0.6 - 18 Sep 2023 - Add thumbnails to embeddings from the UI, using the new `Upload Thumbnail` button in the Embeddings popup. Thanks @JeLuf.
* 3.0.6 - 15 Sep 2023 - Fix broken embeddings dialog when LoRA information couldn't be fetched.
* 3.0.6 - 14 Sep 2023 - UI for adding notes to LoRA files (to help you remember which prompts to use). Also added a button to automatically fetch prompts from Civitai for a LoRA file, using the `Import from Civitai` button. Thanks @JeLuf.
* 3.0.5 - 2 Sep 2023 - Support SDXL ControlNets.
* 3.0.4 - 1 Sep 2023 - Fix incorrect metadata generated for embeddings, when the exact word doesn't match the case, or is part of a larger word.
* 3.0.4 - 1 Sep 2023 - Simplify the installation for AMD users on Linux. Thanks @JeLuf.
* 3.0.4 - 1 Sep 2023 - Allow using a different folder for models. This is useful if you want to share a models folder across different software, or on a different drive. You can change this path in the Settings tab.
* 3.0.3 - 31 Aug 2023 - Auto-save images to disk (if enabled by the user) when upscaling/fixing using the buttons on the image.
* 3.0.3 - 30 Aug 2023 - Allow loading NovelAI-based custom models.
* 3.0.3 - 30 Aug 2023 - Fix broken VAE tiling. This allows you to create larger images with less VRAM usage.
* 3.0.3 - 30 Aug 2023 - Allow blocking NSFW images using a server-side config. This prevents users of the web UI from generating NSFW images or from changing this setting in the browser. Open `config.yaml` in a text editor (e.g. Notepad), add `block_nsfw: true` at the end, and save the file.
* 3.0.2 - 29 Aug 2023 - Fixed incorrect matching of embeddings from prompts.
* 3.0.2 - 24 Aug 2023 - Fix broken seamless tiling.
* 3.0.2 - 23 Aug 2023 - Fix styling on mobile devices.
* 3.0.2 - 22 Aug 2023 - Full support for inpainting models, including custom models. Support SD 1.x and SD 2.x inpainting models. Does not require you to specify a yaml config file.
* 3.0.2 - 22 Aug 2023 - Reduce VRAM consumption of controlnet in 'low' VRAM mode, and allow accelerating controlnets using xformers.
* 3.0.2 - 22 Aug 2023 - Improve auto-detection of SD 2.0 and 2.1 models, removing the need for custom yaml files for SD 2.x models. Improve the model load time by speeding up the black image test.
* 3.0.1 - 18 Aug 2023 - Rotate an image if EXIF rotation is present. For example, this is common in images taken with a smartphone.
* 3.0.1 - 18 Aug 2023 - Resize control images to the task dimensions, to avoid memory errors with high-res control images.
* 3.0.1 - 18 Aug 2023 - Show controlnet filter preview in the task entry.
* 3.0.1 - 18 Aug 2023 - Fix drag-and-drop and 'Use these Settings' for LoRA and ControlNet.
* 3.0.1 - 18 Aug 2023 - Auto-save LoRA models and strengths.
* 3.0.1 - 17 Aug 2023 - Automatically use the correct yaml config file for custom SDXL models, even if a yaml file isn't present in the folder.
* 3.0.1 - 17 Aug 2023 - Fix broken embeddings with SDXL.
* 3.0.1 - 16 Aug 2023 - Fix broken LoRA with SDXL.
* 3.0.1 - 15 Aug 2023 - Fix broken seamless tiling.
* 3.0.1 - 15 Aug 2023 - Fix textual inversion embeddings not working in `low` VRAM usage mode.
* 3.0.1 - 15 Aug 2023 - Fix for custom VAEs not working in `low` VRAM usage mode.
* 3.0.1 - 14 Aug 2023 - Slider to change the image dimensions proportionally (in Image Settings). Thanks @JeLuf.
* 3.0.1 - 14 Aug 2023 - Show an error to the user if an embedding isn't compatible with the model, instead of failing silently without informing the user. Thanks @JeLuf.
* 3.0.1 - 14 Aug 2023 - Disable watermarking for SDXL img2img. Thanks @AvidGameFan.
* 3.0.0 - 3 Aug 2023 - Enabled diffusers for everyone by default. The old v2 engine can be used by disabling the "Use v3 engine" option in the Settings tab.
## v2.5
### Major Changes
- **Nearly twice as fast** - significantly faster speed of image generation. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast
@ -22,6 +78,12 @@
Our focus remains on an easy installation experience and an easy user interface, while still being quite powerful in terms of features and speed.
### Detailed changelog
* 2.5.48 - 1 Aug 2023 - (beta-only) Full support for ControlNets. You can select a control image to guide the AI. You can pick a filter to pre-process the image, and one of the known (or custom) controlnet models. Supports `OpenPose`, `Canny`, `Straight Lines`, `Depth`, `Line Art`, `Scribble`, `Soft Edge`, `Shuffle` and `Segment`.
* 2.5.47 - 30 Jul 2023 - An option to use `Strict Mask Border` while inpainting, to avoid touching areas outside the mask. But this might show a slight outline of the mask, which you will have to touch up separately.
* 2.5.47 - 29 Jul 2023 - (beta-only) Fix long prompts with SDXL.
* 2.5.47 - 29 Jul 2023 - (beta-only) Fix red dots in some SDXL images.
* 2.5.47 - 29 Jul 2023 - Significantly faster `Fix Faces` and `Upscale` buttons (on the image). They no longer need to generate the image from scratch, instead they just upscale/fix the generated image in-place.
* 2.5.47 - 28 Jul 2023 - Lots of internal code reorganization, in preparation for supporting Controlnets. No user-facing changes.
* 2.5.46 - 27 Jul 2023 - (beta-only) Full support for SD-XL models (base and refiner)!
* 2.5.45 - 24 Jul 2023 - (beta-only) Hide the samplers that won't be supported in the new diffusers version.
* 2.5.45 - 22 Jul 2023 - (beta-only) Fix the recently-broken inpainting models.

View File

@ -47,3 +47,5 @@ Build the Windows installer using Windows, and the Linux installer using Linux.
1. Run `build.bat` or `./build.sh` depending on whether you're in Windows or Linux.
2. Make a new GitHub release and upload the Windows and Linux installer builds created inside the `dist` folder.
For NSIS (on Windows), you need to have these plugins in the `nsis/Plugins` folder: `amd64-unicode`, `x86-ansi`, `x86-unicode`

View File

@ -1,18 +1,18 @@
Congrats on downloading Stable Diffusion UI, version 2!
Congrats on downloading Easy Diffusion, version 3!
If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/easydiffusion/easydiffusion#installation
If you haven't downloaded Easy Diffusion yet, please download from https://github.com/easydiffusion/easydiffusion#installation
After downloading, to install please follow these instructions:
For Windows:
- Please double-click the "Start Stable Diffusion UI.cmd" file inside the "stable-diffusion-ui" folder.
- Please double-click the "Easy-Diffusion-Windows.exe" file and follow the instructions.
For Linux:
- Please open a terminal, and go to the "stable-diffusion-ui" directory. Then run ./start.sh
For Linux and Mac:
- Please open a terminal, and go to the "easy-diffusion" directory. Then run ./start.sh
That file will automatically install everything. After that it will start the Stable Diffusion interface in a web browser.
That file will automatically install everything. After that it will start the Easy Diffusion interface in a web browser.
To start the UI in the future, please run the same command mentioned above.
To start Easy Diffusion in the future, please run the same command mentioned above.
If you have any problems, please:
@ -21,4 +21,4 @@ If you have any problems, please:
3. Or, file an issue at https://github.com/easydiffusion/easydiffusion/issues
Thanks
cmdr2 (and contributors to the project)
cmdr2 (and contributors to the project)

Binary file not shown.


View File

@ -1 +0,0 @@
!define EXISTING_INSTALLATION_DIR "D:\path\to\installed\easy-diffusion"

Binary file not shown.


View File

@ -7,9 +7,9 @@ RequestExecutionLevel user
!AddPluginDir /amd64-unicode "."
; HM NIS Edit Wizard helper defines
!define PRODUCT_NAME "Easy Diffusion"
!define PRODUCT_VERSION "2.5"
!define PRODUCT_VERSION "3.0"
!define PRODUCT_PUBLISHER "cmdr2 and contributors"
!define PRODUCT_WEB_SITE "https://stable-diffusion-ui.github.io"
!define PRODUCT_WEB_SITE "https://easydiffusion.github.io"
!define PRODUCT_DIR_REGKEY "Software\Microsoft\Easy Diffusion\App Paths\installer.exe"
; MUI 1.67 compatible ------
@ -165,9 +165,9 @@ FunctionEnd
; MUI Settings
;---------------------------------------------------------------------------------------------------------
!define MUI_ABORTWARNING
!define MUI_ICON "cyborg_flower_girl.ico"
!define MUI_ICON "${EXISTING_INSTALLATION_DIR}\installer_files\cyborg_flower_girl.ico"
!define MUI_WELCOMEFINISHPAGE_BITMAP "cyborg_flower_girl.bmp"
!define MUI_WELCOMEFINISHPAGE_BITMAP "${EXISTING_INSTALLATION_DIR}\installer_files\cyborg_flower_girl.bmp"
; Welcome page
!define MUI_WELCOMEPAGE_TEXT "This installer will guide you through the installation of Easy Diffusion.$\n$\n\
@ -176,8 +176,8 @@ Click Next to continue."
Page custom MediaPackDialog
; License page
!insertmacro MUI_PAGE_LICENSE "..\LICENSE"
!insertmacro MUI_PAGE_LICENSE "..\CreativeML Open RAIL-M License"
!insertmacro MUI_PAGE_LICENSE "${EXISTING_INSTALLATION_DIR}\LICENSE"
!insertmacro MUI_PAGE_LICENSE "${EXISTING_INSTALLATION_DIR}\CreativeML Open RAIL-M License"
; Directory page
!define MUI_PAGE_CUSTOMFUNCTION_LEAVE "DirectoryLeave"
!insertmacro MUI_PAGE_DIRECTORY
@ -210,29 +210,33 @@ ShowInstDetails show
; List of files to be installed
Section "MainSection" SEC01
SetOutPath "$INSTDIR"
File "..\CreativeML Open RAIL-M License"
File "..\How to install and run.txt"
File "..\LICENSE"
File "..\scripts\Start Stable Diffusion UI.cmd"
File "${EXISTING_INSTALLATION_DIR}\CreativeML Open RAIL-M License"
File "${EXISTING_INSTALLATION_DIR}\How to install and run.txt"
File "${EXISTING_INSTALLATION_DIR}\LICENSE"
File "${EXISTING_INSTALLATION_DIR}\Start Stable Diffusion UI.cmd"
File /r "${EXISTING_INSTALLATION_DIR}\installer_files"
File /r "${EXISTING_INSTALLATION_DIR}\profile"
File /r "${EXISTING_INSTALLATION_DIR}\sd-ui-files"
SetOutPath "$INSTDIR\installer_files"
File "cyborg_flower_girl.ico"
SetOutPath "$INSTDIR\scripts"
File "${EXISTING_INSTALLATION_DIR}\scripts\install_status.txt"
File "..\scripts\on_env_start.bat"
File "${EXISTING_INSTALLATION_DIR}\scripts\on_env_start.bat"
File "C:\windows\system32\curl.exe"
CreateDirectory "$INSTDIR\models"
File "${EXISTING_INSTALLATION_DIR}\scripts\config.yaml.sample"
CreateDirectory "$INSTDIR\models\stable-diffusion"
CreateDirectory "$INSTDIR\models\gfpgan"
CreateDirectory "$INSTDIR\models\realesrgan"
CreateDirectory "$INSTDIR\models\vae"
CreateDirectory "$INSTDIR\profile\.cache\huggingface\hub"
SetOutPath "$INSTDIR\profile\.cache\huggingface\hub"
File /r /x pytorch_model.bin "${EXISTING_INSTALLATION_DIR}\profile\.cache\huggingface\hub\models--openai--clip-vit-large-patch14"
CreateDirectory "$SMPROGRAMS\Easy Diffusion"
CreateShortCut "$SMPROGRAMS\Easy Diffusion\Easy Diffusion.lnk" "$INSTDIR\Start Stable Diffusion UI.cmd" "" "$INSTDIR\installer_files\cyborg_flower_girl.ico"
DetailPrint 'Downloading the Stable Diffusion 1.4 model...'
NScurl::http get "https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt" "$INSTDIR\models\stable-diffusion\sd-v1-4.ckpt" /CANCEL /INSIST /END
DetailPrint 'Downloading the Stable Diffusion 1.5 model...'
NScurl::http get "https://github.com/easydiffusion/sdkit-test-data/releases/download/assets/sd-v1-5.safetensors" "$INSTDIR\models\stable-diffusion\sd-v1-5.safetensors" /CANCEL /INSIST /END
DetailPrint 'Downloading the GFPGAN model...'
NScurl::http get "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth" "$INSTDIR\models\gfpgan\GFPGANv1.4.pth" /CANCEL /INSIST /END

View File

@ -1,28 +1,36 @@
# Easy Diffusion 2.5
# Easy Diffusion 3.0
### The easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your computer.
Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community.
[Installation guide](#installation) | [Troubleshooting guide](https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting) | <sub>[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB)</sub> <sup>(for support queries, and development discussions)</sup>
🔥🎉 **New!** Support for Flux has been added in the beta branch (v3.5 engine)!
[Installation guide](#installation) | [Troubleshooting guide](https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting) | [User guide](https://github.com/easydiffusion/easydiffusion/wiki) | <sub>[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB)</sub> <sup>(for support queries, and development discussions)</sup>
---
![262597678-11089485-2514-4a11-88fb-c3acc81fc9ec](https://github.com/easydiffusion/easydiffusion/assets/844287/050b5e15-e909-45bf-8162-a38234830e38)
![t2i](https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/assets/stable-samples/txt2img/768/merged-0006.png)
# Installation
Click the download button for your operating system:
<p float="left">
<a href="https://github.com/easydiffusion/easydiffusion/releases/download/v2.5.24/Easy-Diffusion-Windows.exe"><img src="https://github.com/easydiffusion/easydiffusion/raw/main/media/download-win.png" width="200" /></a>
<a href="https://github.com/easydiffusion/easydiffusion/releases/download/v2.5.24/Easy-Diffusion-Linux.zip"><img src="https://github.com/easydiffusion/easydiffusion/raw/main/media/download-linux.png" width="200" /></a>
<a href="https://github.com/easydiffusion/easydiffusion/releases/download/v2.5.24/Easy-Diffusion-Mac.zip"><img src="https://github.com/easydiffusion/easydiffusion/raw/main/media/download-mac.png" width="200" /></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/releases/latest/download/Easy-Diffusion-Linux.zip"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-linux.png" width="200" /></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/releases/latest/download/Easy-Diffusion-Mac.zip"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-mac.png" width="200" /></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/releases/latest/download/Easy-Diffusion-Windows.exe"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-win.png" width="200" /></a>
</p>
**Hardware requirements:**
- **Windows:** NVIDIA graphics card (minimum 2 GB RAM), or run on your CPU.
- **Linux:** NVIDIA or AMD graphics card (minimum 2 GB RAM), or run on your CPU.
- **Mac:** M1 or M2, or run on your CPU.
- **Windows:** NVIDIA¹ or AMD graphics card (minimum 2 GB RAM), or run on your CPU.
- **Linux:** NVIDIA¹ or AMD² graphics card (minimum 2 GB RAM), or run on your CPU.
- **Mac:** M1/M2/M3/M4 or AMD graphics card (Intel Mac), or run on your CPU.
- Minimum 8 GB of system RAM.
- At least 25 GB of space on the hard disk.
¹) [CUDA Compute capability](https://en.wikipedia.org/wiki/CUDA#GPUs_supported) level of 3.7 or higher required.
²) ROCm 5.2 (or newer) support required.
The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance.
## On Windows:
@ -58,17 +66,19 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
- **UI Themes**: Customize the program to your liking.
- **Searchable models dropdown**: organize your models into sub-folders, and search through them in the UI.
### Image generation
- **Supports**: "*Text to Image*" and "*Image to Image*".
- **21 Samplers**: `ddim`, `plms`, `heun`, `euler`, `euler_a`, `dpm2`, `dpm2_a`, `lms`, `dpm_solver_stability`, `dpmpp_2s_a`, `dpmpp_2m`, `dpmpp_sde`, `dpm_fast`, `dpm_adaptive`, `ddpm`, `deis`, `unipc_snr`, `unipc_tu`, `unipc_tq`, `unipc_snr_2`, `unipc_tu_2`.
- **In-Painting**: Specify areas of your image to paint into.
### Powerful image generation
- **Supports**: "*Text to Image*", "*Image to Image*" and "*InPainting*"
- **ControlNet**: For advanced control over the image, e.g. by setting the pose or drawing the outline for the AI to fill in.
- **16 Samplers**: `PLMS`, `DDIM`, `DEIS`, `Heun`, `Euler`, `Euler Ancestral`, `DPM2`, `DPM2 Ancestral`, `LMS`, `DPM Solver`, `DPM++ 2s Ancestral`, `DPM++ 2m`, `DPM++ 2m SDE`, `DPM++ SDE`, `DDPM`, `UniPC`.
- **Stable Diffusion XL and 2.1**: Generate higher-quality images using the latest Stable Diffusion XL models.
- **Textual Inversion Embeddings**: For guiding the AI strongly towards a particular concept.
- **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program.
- **Face Correction (GFPGAN)**
- **Upscaling (RealESRGAN)**
- **Loopback**: Use the output image as the input image for the next img2img task.
- **Loopback**: Use the output image as the input image for the next image task.
- **Negative Prompt**: Specify aspects of the image to *remove*.
- **Attention/Emphasis**: () in the prompt increases the model's attention to enclosed words, and [] decreases it.
- **Weighted Prompts**: Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`.
- **Attention/Emphasis**: `+` in the prompt increases the model's attention to the preceding words, and `-` decreases it. E.g. `apple++ falling from a tree`.
- **Weighted Prompts**: Use weights for specific words in your prompt to change their importance, e.g. `(red)2.4 (dragon)1.2`.
- **Prompt Matrix**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`.
- **Prompt Set**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut on the {moon,earth}`
- **1-click Upscale/Face Correction**: Upscale or correct an image after it has been generated.
@ -78,10 +88,11 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
### Advanced features
- **Custom Models**: Use your own `.ckpt` or `.safetensors` file, by placing it inside the `models/stable-diffusion` folder!
- **Stable Diffusion 2.1 support**
- **Stable Diffusion XL and 2.1 support**
- **Merge Models**
- **Use custom VAE models**
- **Use pre-trained Hypernetworks**
- **Textual Inversion Embeddings**
- **ControlNet**
- **Use custom GFPGAN models**
- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/easydiffusion/easydiffusion/wiki/UI-Plugins), or write your own plugin to add features to the project!
@ -93,24 +104,14 @@ Just delete the `EasyDiffusion` folder to uninstall all the downloaded packages.
- **Auto scan for malicious models**: Uses picklescan to prevent malicious models.
- **Safetensors support**: Support loading models in the safetensor format, for improved safety.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- **Developer Console**: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
- **Developer Console**: A developer-mode for those who want to modify their Stable Diffusion code, modify packages, and edit the conda environment.
**(and a lot more)**
----
## Easy for new users:
![Screenshot of the initial UI](https://user-images.githubusercontent.com/844287/217043152-29454d15-0387-4228-b70d-9a4b84aeb8ba.png)
## Powerful features for advanced users:
![Screenshot of advanced settings](https://user-images.githubusercontent.com/844287/217042588-fc53c975-bacd-4a9c-af88-37408734ade3.png)
## Live Preview
Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.
![live-512](https://user-images.githubusercontent.com/844287/192097249-729a0a1e-a677-485e-9ccc-16a9e848fabe.gif)
## Easy for new users, powerful features for advanced users:
![image](https://github.com/easydiffusion/easydiffusion/assets/844287/efbbac9f-42ce-4aef-8625-fd23c74a8241)
## Task Queue
![Screenshot of task queue](https://user-images.githubusercontent.com/844287/217043984-0b35f73b-1318-47cb-9eed-a2a91b430490.png)
@ -124,14 +125,17 @@ Please refer to our [guide](https://github.com/easydiffusion/easydiffusion/wiki/
# Bugs reports and code contributions welcome
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/easydiffusion/easydiffusion/issues).
We could really use help on these aspects (click to view tasks that need your help):
* [User Interface](https://github.com/users/cmdr2/projects/1/views/1)
* [Engine](https://github.com/users/cmdr2/projects/3/views/1)
* [Installer](https://github.com/users/cmdr2/projects/4/views/1)
* [Documentation](https://github.com/users/cmdr2/projects/5/views/1)
If you have any code contributions in mind, please feel free to say Hi to us on the [discord server](https://discord.com/invite/u9yhsFmEkB). We use the Discord server for development-related discussions, and for helping users.
# Credits
* Stable Diffusion: https://github.com/Stability-AI/stablediffusion
* CodeFormer: https://github.com/sczhou/CodeFormer (license: https://github.com/sczhou/CodeFormer/blob/master/LICENSE)
* GFPGAN: https://github.com/TencentARC/GFPGAN
* RealESRGAN: https://github.com/xinntao/Real-ESRGAN
* k-diffusion: https://github.com/crowsonkb/k-diffusion
* Code contributors and artists on the cmdr2 UI: https://github.com/cmdr2/stable-diffusion-ui and Discord (https://discord.com/invite/u9yhsFmEkB)
* Lots of contributors on the internet
# Disclaimer
The authors of this project are not responsible for any content generated using this interface.

View File

@ -1,48 +1,78 @@
@echo off
setlocal enabledelayedexpansion
@echo "Hi there, what you are running is meant for the developers of this project, not for users." & echo.
@echo "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file."
@echo "If you only want to use Easy Diffusion, you've downloaded the wrong file."
@echo "Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation" & echo.
@echo "If you are actually a developer of this project, please type Y and press enter" & echo.
set /p answer=Are you a developer of this project (Y/N)?
if /i "%answer:~,1%" NEQ "Y" exit /b
mkdir dist\win\stable-diffusion-ui\scripts
@REM mkdir dist\linux-mac\stable-diffusion-ui\scripts
@rem verify dependencies
call makensis /VERSION >.tmp1 2>.tmp2
if "!ERRORLEVEL!" NEQ "0" (
echo makensis.exe not found! Download it from https://sourceforge.net/projects/nsisbi/files/ and set it on the PATH variable.
pause
exit
)
@rem copy the installer files for Windows
set /p OUT_DIR=Output folder path (will create the installer files inside this, e.g. F:\EasyDiffusion):
copy scripts\on_env_start.bat dist\win\stable-diffusion-ui\scripts\
copy scripts\bootstrap.bat dist\win\stable-diffusion-ui\scripts\
copy scripts\config.yaml.sample dist\win\stable-diffusion-ui\scripts\config.yaml
copy "scripts\Start Stable Diffusion UI.cmd" dist\win\stable-diffusion-ui\
copy LICENSE dist\win\stable-diffusion-ui\
copy "CreativeML Open RAIL-M License" dist\win\stable-diffusion-ui\
copy "How to install and run.txt" dist\win\stable-diffusion-ui\
echo. > dist\win\stable-diffusion-ui\scripts\install_status.txt
mkdir "%OUT_DIR%\scripts"
mkdir "%OUT_DIR%\installer_files"
@rem copy the installer files for Linux and Mac
set BASE_DIR=%cd%
@REM copy scripts\on_env_start.sh dist\linux-mac\stable-diffusion-ui\scripts\
@REM copy scripts\bootstrap.sh dist\linux-mac\stable-diffusion-ui\scripts\
@REM copy scripts\start.sh dist\linux-mac\stable-diffusion-ui\
@REM copy LICENSE dist\linux-mac\stable-diffusion-ui\
@REM copy "CreativeML Open RAIL-M License" dist\linux-mac\stable-diffusion-ui\
@REM copy "How to install and run.txt" dist\linux-mac\stable-diffusion-ui\
@REM echo. > dist\linux-mac\stable-diffusion-ui\scripts\install_status.txt
@rem STEP 1: copy the installer files for Windows
@rem make the zip
cd dist\win
call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-windows.zip
cd ..\..
@REM cd dist\linux-mac
@REM call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-linux.zip
@REM call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-mac.zip
@REM cd ..\..
echo "Build ready. Upload the zip files inside the 'dist' folder."
copy "%BASE_DIR%\scripts\on_env_start.bat" "%OUT_DIR%\scripts\"
copy "%BASE_DIR%\scripts\config.yaml.sample" "%OUT_DIR%\scripts\config.yaml.sample"
copy "%BASE_DIR%\scripts\Start Stable Diffusion UI.cmd" "%OUT_DIR%\"
copy "%BASE_DIR%\LICENSE" "%OUT_DIR%\"
copy "%BASE_DIR%\CreativeML Open RAIL-M License" "%OUT_DIR%\"
copy "%BASE_DIR%\How to install and run.txt" "%OUT_DIR%\"
copy "%BASE_DIR%\NSIS\cyborg_flower_girl.ico" "%OUT_DIR%\installer_files\"
copy "%BASE_DIR%\NSIS\cyborg_flower_girl.bmp" "%OUT_DIR%\installer_files\"
echo. > "%OUT_DIR%\scripts\install_status.txt"
echo ----
echo Basic files ready. Verify the files in %OUT_DIR%, then press Enter to initialize the environment, or close to quit.
echo ----
pause
@rem STEP 2: Initialize the environment with git, python and conda
cd /d "%OUT_DIR%\"
call "%BASE_DIR%\scripts\bootstrap.bat"
echo ----
echo Environment ready. Verify the environment, then press Enter to download the necessary packages, or close to quit.
echo ----
pause
@rem STEP 3: Download the packages and create a working installation
cd /d "%OUT_DIR%\"
start "Install Easy Diffusion" /D "%OUT_DIR%" "Start Stable Diffusion UI.cmd"
echo ----
echo Installation in progress (in a new window). Once complete, verify the installation, then press Enter to create an installer from these files, or close to quit.
echo ----
pause
@rem STEP 4: Build the installer from a working installation
cd /d "%OUT_DIR%\"
echo ^^!define EXISTING_INSTALLATION_DIR "%OUT_DIR%" > nsisconf.nsh
call makensis /NOCD /V4 "%BASE_DIR%\NSIS\sdui.nsi"
echo ----
if "!ERRORLEVEL!" EQU "0" (
echo Installer built successfully at %OUT_DIR%
) else (
echo Installer failed to build at %OUT_DIR%
)
echo ----
pause

View File

@ -1,7 +1,7 @@
#!/bin/bash
printf "Hi there, what you are running is meant for the developers of this project, not for users.\n\n"
printf "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file.\n"
printf "If you only want to use Easy Diffusion, you've downloaded the wrong file.\n"
printf "Please download and follow the instructions at https://github.com/easydiffusion/easydiffusion#installation \n\n"
printf "If you are actually a developer of this project, please type Y and press enter\n\n"
@ -11,40 +11,30 @@ case $yn in
* ) exit;;
esac
# mkdir -p dist/win/stable-diffusion-ui/scripts
mkdir -p dist/linux-mac/stable-diffusion-ui/scripts
# copy the installer files for Windows
# cp scripts/on_env_start.bat dist/win/stable-diffusion-ui/scripts/
# cp scripts/bootstrap.bat dist/win/stable-diffusion-ui/scripts/
# cp "scripts/Start Stable Diffusion UI.cmd" dist/win/stable-diffusion-ui/
# cp LICENSE dist/win/stable-diffusion-ui/
# cp "CreativeML Open RAIL-M License" dist/win/stable-diffusion-ui/
# cp "How to install and run.txt" dist/win/stable-diffusion-ui/
# echo "" > dist/win/stable-diffusion-ui/scripts/install_status.txt
mkdir -p dist/linux-mac/easy-diffusion/scripts
# copy the installer files for Linux and Mac
cp scripts/on_env_start.sh dist/linux-mac/stable-diffusion-ui/scripts/
cp scripts/bootstrap.sh dist/linux-mac/stable-diffusion-ui/scripts/
cp scripts/functions.sh dist/linux-mac/stable-diffusion-ui/scripts/
cp scripts/config.yaml.sample dist/linux-mac/stable-diffusion-ui/scripts/config.yaml
cp scripts/start.sh dist/linux-mac/stable-diffusion-ui/
cp LICENSE dist/linux-mac/stable-diffusion-ui/
cp "CreativeML Open RAIL-M License" dist/linux-mac/stable-diffusion-ui/
cp "How to install and run.txt" dist/linux-mac/stable-diffusion-ui/
echo "" > dist/linux-mac/stable-diffusion-ui/scripts/install_status.txt
cp scripts/on_env_start.sh dist/linux-mac/easy-diffusion/scripts/
cp scripts/bootstrap.sh dist/linux-mac/easy-diffusion/scripts/
cp scripts/functions.sh dist/linux-mac/easy-diffusion/scripts/
cp scripts/config.yaml.sample dist/linux-mac/easy-diffusion/scripts/config.yaml.sample
cp scripts/start.sh dist/linux-mac/easy-diffusion/
cp LICENSE dist/linux-mac/easy-diffusion/
cp "CreativeML Open RAIL-M License" dist/linux-mac/easy-diffusion/
cp "How to install and run.txt" dist/linux-mac/easy-diffusion/
echo "" > dist/linux-mac/easy-diffusion/scripts/install_status.txt
# set the permissions
chmod u+x dist/linux-mac/easy-diffusion/scripts/on_env_start.sh
chmod u+x dist/linux-mac/easy-diffusion/scripts/bootstrap.sh
chmod u+x dist/linux-mac/easy-diffusion/start.sh
# make the zip
# cd dist/win
# zip -r ../stable-diffusion-ui-windows.zip stable-diffusion-ui
# cd ../..
cd dist/linux-mac
zip -r ../stable-diffusion-ui-linux.zip stable-diffusion-ui
zip -r ../stable-diffusion-ui-mac.zip stable-diffusion-ui
zip -r ../Easy-Diffusion-Linux.zip easy-diffusion
zip -r ../Easy-Diffusion-Mac.zip easy-diffusion
cd ../..
echo "Build ready. Upload the zip files inside the 'dist' folder."

View File

@ -4,7 +4,7 @@ echo "Opening Stable Diffusion UI - Developer Console.." & echo.
cd /d %~dp0
set PATH=C:\Windows\System32;%PATH%
set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
@rem set legacy and new installer's PATH, if they exist
if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
@ -26,6 +26,7 @@ call conda --version
echo.
echo COMSPEC=%COMSPEC%
echo.
powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
@rem activate the legacy environment (if present) and set PYTHONPATH
if exist "installer_files\env" (

View File

@ -3,7 +3,8 @@
cd /d %~dp0
echo Install dir: %~dp0
set PATH=C:\Windows\System32;%PATH%
set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
set PYTHONHOME=
if exist "on_sd_start.bat" (
echo ================================================================================
@ -14,7 +15,7 @@ if exist "on_sd_start.bat" (
echo download. This will not work.
echo.
echo Recommended: Please close this window and download the installer from
echo https://stable-diffusion-ui.github.io/docs/installation/
echo https://easydiffusion.github.io/docs/installation/
echo.
echo ================================================================================
echo.
@ -38,6 +39,7 @@ call where conda
call conda --version
echo .
echo COMSPEC=%COMSPEC%
powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
@rem Download the rest of the installer and UI
call scripts\on_env_start.bat

View File

@ -14,6 +14,8 @@ set LEGACY_INSTALL_ENV_DIR=%cd%\installer
set MICROMAMBA_DOWNLOAD_URL=https://github.com/easydiffusion/easydiffusion/releases/download/v1.1/micromamba.exe
set umamba_exists=F
set PYTHONHOME=
set OLD_APPDATA=%APPDATA%
set OLD_USERPROFILE=%USERPROFILE%
set APPDATA=%cd%\installer_files\appdata
@ -22,15 +24,12 @@ set USERPROFILE=%cd%\profile
@rem figure out whether git and conda needs to be installed
if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
set PACKAGES_TO_INSTALL=
set PACKAGES_TO_INSTALL=git python=3.9
if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" (
if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda python=3.8.5
if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda
)
call git --version >.tmp1 2>.tmp2
if "!ERRORLEVEL!" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" --version >.tmp1 2>.tmp2
if "!ERRORLEVEL!" EQU "0" set umamba_exists=T

View File

@ -46,7 +46,7 @@ if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
PACKAGES_TO_INSTALL=""
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.8.5"; fi
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.9"; fi
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi

View File

@ -8,26 +8,37 @@ a custom index URL depending on the platform.
"""
import os
import os, sys
from importlib.metadata import version as pkg_version
import platform
import traceback
import shutil
from pathlib import Path
from pprint import pprint
import re
import torchruntime
os_name = platform.system()
modules_to_check = {
"torch": ("1.11.0", "1.13.1", "2.0.0"),
"torchvision": ("0.12.0", "0.14.1", "0.15.1"),
"sdkit": "1.0.142",
"stable-diffusion-sdkit": "2.1.4",
"setuptools": "69.5.1",
# "sdkit": "2.0.15.6", # checked later
# "diffusers": "0.21.4", # checked later
"stable-diffusion-sdkit": "2.1.5",
"rich": "12.6.0",
"uvicorn": "0.19.0",
"fastapi": "0.85.1",
"fastapi": "0.115.6",
"pycloudflared": "0.2.0",
"ruamel.yaml": "0.17.21",
"sqlalchemy": "2.0.19",
"python-multipart": "0.0.6",
# "xformers": "0.0.16",
"huggingface-hub": "0.21.4",
"wandb": "0.17.2",
"torchruntime": "1.14.1",
"torchsde": "0.2.6",
}
modules_to_log = ["torch", "torchvision", "sdkit", "stable-diffusion-sdkit"]
modules_to_log = ["torchruntime", "torch", "torchvision", "sdkit", "stable-diffusion-sdkit", "diffusers"]
def version(module_name: str) -> str:
@ -37,26 +48,9 @@ def version(module_name: str) -> str:
return None
def install(module_name: str, module_version: str):
if module_name == "xformers" and (os_name == "Darwin" or is_amd_on_linux()):
return
index_url = None
if module_name in ("torch", "torchvision"):
module_version, index_url = apply_torch_install_overrides(module_version)
if is_amd_on_linux(): # hack until AMD works properly on torch 2.0 (avoids black images on some cards)
if module_name == "torch":
module_version = "1.13.1+rocm5.2"
elif module_name == "torchvision":
module_version = "0.14.1+rocm5.2"
elif os_name == "Darwin":
if module_name == "torch":
module_version = "1.13.1"
elif module_name == "torchvision":
module_version = "0.14.1"
def install(module_name: str, module_version: str, index_url=None):
install_cmd = f"python -m pip install --upgrade {module_name}=={module_version}"
if index_url:
install_cmd += f" --index-url {index_url}"
if module_name == "sdkit" and version("sdkit") is not None:
@ -66,24 +60,26 @@ def install(module_name: str, module_version: str):
os.system(install_cmd)
def init():
def update_modules():
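# no torch installed at all: let torchruntime pick a torch/torchvision build suited to the
# detected GPU/platform (this replaces the old hand-rolled per-platform index-URL overrides)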
if version("torch") is None:
torchruntime.install(["torch", "torchvision"])
for module_name, allowed_versions in modules_to_check.items():
if os.path.exists(f"../src/{module_name}"):
if os.path.exists(f"src/{module_name}"):
print(f"Skipping {module_name} update, since it's in developer/editable mode")
continue
allowed_versions, latest_version = get_allowed_versions(module_name, allowed_versions)
requires_install = False
if module_name in ("torch", "torchvision"):
if version(module_name) is None: # allow any torch version
requires_install = True
elif os_name == "Darwin" and ( # force mac to downgrade from torch 2.0
version("torch").startswith("2.") or version("torchvision").startswith("0.15.")
):
requires_install = True
elif version(module_name) not in allowed_versions:
requires_install = True
if module_name == "setuptools":
if os_name == "Windows":
allowed_versions = ("59.8.0",)
latest_version = "59.8.0"
else:
allowed_versions = ("69.5.1",)
latest_version = "69.5.1"
requires_install = version(module_name) not in allowed_versions
if requires_install:
try:
@ -91,60 +87,129 @@ def init():
except:
traceback.print_exc()
fail(module_name)
else:
if version(module_name) != latest_version:
print(
f"WARNING! Tried to install {module_name}=={latest_version}, but the version is still {version(module_name)}!"
)
if module_name in modules_to_log:
print(f"{module_name}: {version(module_name)}")
# different sdkit versions, with the corresponding diffusers
# if sdkit is 2.0.15.x (or lower), then diffusers should be restricted to 0.21.4 (see below for the reason)
# otherwise use the current sdkit version (with the corresponding diffusers version)
expected_sdkit_version_str = "2.0.22.7"
expected_diffusers_version_str = "0.28.2"
legacy_sdkit_version_str = "2.0.15.16"
legacy_diffusers_version_str = "0.21.4"
sdkit_version_str = version("sdkit")
if sdkit_version_str is None: # first install
_install("sdkit", expected_sdkit_version_str)
_install("diffusers", expected_diffusers_version_str)
else:
sdkit_version = version_str_to_tuple(sdkit_version_str)
legacy_sdkit_version = version_str_to_tuple(legacy_sdkit_version_str)
if sdkit_version[:3] <= legacy_sdkit_version[:3]:
# stick to diffusers 0.21.4, since it preserves torch 1.11+ compatibility.
# upgrading beyond this will result in a 2+ GB download of torch on older installations
# and a time-consuming chain of small package updates due to huggingface_hub upgrade.
# for now, the user will need to explicitly upgrade to a newer sdkit, to break this ceiling.
install_pkg_if_necessary("sdkit", legacy_sdkit_version_str)
install_pkg_if_necessary("diffusers", legacy_diffusers_version_str)
else:
torch_version = version_str_to_tuple(version("torch"))
if torch_version < (1, 13):
# install the gpu-compatible torch (if necessary), instead of the default CPU-only one
# from the diffusers dependency chain
torchruntime.install(["--upgrade", "torch", "torchvision"])
install_pkg_if_necessary("sdkit", expected_sdkit_version_str)
install_pkg_if_necessary("diffusers", expected_diffusers_version_str)
# hotfix accelerate
accelerate_version = version("accelerate")
if accelerate_version is None:
install("accelerate", "0.23.0")
else:
accelerate_version = accelerate_version.split(".")
accelerate_version = tuple(map(int, accelerate_version))
if accelerate_version < (0, 23):
install("accelerate", "0.23.0")
# hotfix - 29 May 2024. sdkit has stopped pulling its dependencies for some reason
# temporarily dumping sdkit's requirements here:
if os_name != "Windows":
sdkit_deps = [
"gfpgan",
"piexif",
"realesrgan",
"requests",
"picklescan",
"safetensors==0.3.3",
"k-diffusion==0.0.12",
"compel==2.0.1",
"controlnet-aux==0.0.6",
"invisible-watermark==0.2.0", # required for SD XL
]
for mod in sdkit_deps:
mod_name = mod
mod_force_version_str = None
if "==" in mod:
mod_name, mod_force_version_str = mod.split("==")
curr_mod_version_str = version(mod_name)
if curr_mod_version_str is None:
_install(mod_name, mod_force_version_str)
elif mod_force_version_str is not None:
curr_mod_version = version_str_to_tuple(curr_mod_version_str)
mod_force_version = version_str_to_tuple(mod_force_version_str)
if curr_mod_version != mod_force_version:
_install(mod_name, mod_force_version_str)
for module_name in modules_to_log:
print(f"{module_name}: {version(module_name)}")
def _install(module_name, module_version=None):
if module_version is None:
install_cmd = f"python -m pip install {module_name}"
else:
install_cmd = f"python -m pip install --upgrade {module_name}=={module_version}"
print(">", install_cmd)
os.system(install_cmd)
def install_pkg_if_necessary(pkg_name, required_version):
if os.path.exists(f"src/{pkg_name}"):
print(f"Skipping {pkg_name} update, since it's in developer/editable mode")
return
pkg_version = version(pkg_name)
if pkg_version != required_version:
_install(pkg_name, required_version)
def version_str_to_tuple(ver_str):
ver_str = ver_str.split("+")[0]
ver_str = re.sub("[^0-9.]", "", ver_str)
ver = ver_str.split(".")
return tuple(map(int, ver))
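# for example:
#   version_str_to_tuple("1.13.1+rocm5.2") -> (1, 13, 1)
#   version_str_to_tuple("2.0.15.16")      -> (2, 0, 15, 16)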
### utilities
def get_allowed_versions(module_name: str, allowed_versions: tuple):
allowed_versions = (allowed_versions,) if isinstance(allowed_versions, str) else allowed_versions
latest_version = allowed_versions[-1]
if module_name in ("torch", "torchvision"):
allowed_versions = include_cuda_versions(allowed_versions)
return allowed_versions, latest_version
def apply_torch_install_overrides(module_version: str):
index_url = None
if os_name == "Windows":
module_version += "+cu117"
index_url = "https://download.pytorch.org/whl/cu117"
elif is_amd_on_linux():
index_url = "https://download.pytorch.org/whl/rocm5.2"
return module_version, index_url
def include_cuda_versions(module_versions: tuple) -> tuple:
"Adds CUDA-specific versions to the list of allowed version numbers"
allowed_versions = tuple(module_versions)
allowed_versions += tuple(f"{v}+cu116" for v in module_versions)
allowed_versions += tuple(f"{v}+cu117" for v in module_versions)
allowed_versions += tuple(f"{v}+rocm5.2" for v in module_versions)
allowed_versions += tuple(f"{v}+rocm5.4.2" for v in module_versions)
return allowed_versions
def is_amd_on_linux():
if os_name == "Linux":
try:
with open("/proc/bus/pci/devices", "r") as f:
device_info = f.read()
if "amdgpu" in device_info and "nvidia" not in device_info:
return True
except:
return False
return False
def fail(module_name):
print(
f"""Error installing {module_name}. Sorry about that, please try to:
@ -157,6 +222,100 @@ Thanks!"""
exit(1)
### start
### Launcher
init()
def get_config():
config_directory = os.path.dirname(__file__) # this will be "scripts"
config_yaml = os.path.join(config_directory, "..", "config.yaml")
config_json = os.path.join(config_directory, "config.json")
config = None
# migrate the old config yaml location
config_legacy_yaml = os.path.join(config_directory, "config.yaml")
if os.path.isfile(config_legacy_yaml):
shutil.move(config_legacy_yaml, config_yaml)
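# prefer config.yaml at the install root; fall back to the legacy scripts/config.json if no yaml exists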
if os.path.isfile(config_yaml):
from ruamel.yaml import YAML
yaml = YAML(typ="safe")
with open(config_yaml, "r") as configfile:
try:
config = yaml.load(configfile)
except Exception as e:
print(e, file=sys.stderr)
elif os.path.isfile(config_json):
import json
with open(config_json, "r") as configfile:
try:
config = json.load(configfile)
except Exception as e:
print(e, file=sys.stderr)
if config is None:
config = {}
return config
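# launch_uvicorn() only reads the optional "net" section of the config; a sketch of the keys it understands:
#   net:
#     listen_port: 9000          # default 9000
#     listen_to_network: true    # if true, bind to net.bind_ip (default 0.0.0.0) instead of 127.0.0.1
#     bind_ip: 0.0.0.0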
def launch_uvicorn():
config = get_config()
pprint(config)
with open("scripts/install_status.txt", "a") as f:
f.write("sd_weights_downloaded\n")
f.write("sd_install_complete\n")
print("\n\nEasy Diffusion installation complete, starting the server!\n\n")
torchruntime.configure()
if hasattr(torchruntime, "info"):
torchruntime.info()
if os_name == "Windows":
os.environ["PYTHONPATH"] = str(Path(os.environ["INSTALL_ENV_DIR"], "lib", "site-packages"))
else:
os.environ["PYTHONPATH"] = str(Path(os.environ["INSTALL_ENV_DIR"], "lib", "python3.8", "site-packages"))
os.environ["SD_UI_PATH"] = str(Path(Path.cwd(), "ui"))
print(f"PYTHONPATH={os.environ['PYTHONPATH']}")
print(f"Python: {shutil.which('python')}")
print(f"Version: {platform. python_version()}")
bind_ip = "127.0.0.1"
listen_port = 9000
if "net" in config:
print("Checking network settings")
if "listen_port" in config["net"]:
listen_port = config["net"]["listen_port"]
print("Set listen port to ", listen_port)
if "listen_to_network" in config["net"] and config["net"]["listen_to_network"] == True:
if "bind_ip" in config["net"]:
bind_ip = config["net"]["bind_ip"]
else:
bind_ip = "0.0.0.0"
print("Set bind_ip to ", bind_ip)
os.chdir("stable-diffusion")
print("\nLaunching uvicorn\n")
import uvicorn
uvicorn.run(
"main:server_api",
port=listen_port,
log_level="error",
app_dir=os.environ["SD_UI_PATH"],
host=bind_ip,
access_log=False,
)
update_modules()
if len(sys.argv) > 1 and sys.argv[1] == "--launch-uvicorn":
launch_uvicorn()

View File

@ -1,6 +1,6 @@
@echo off
@echo. & echo "Easy Diffusion - v2" & echo.
@echo. & echo "Easy Diffusion - v3" & echo.
set PATH=C:\Windows\System32;%PATH%
@ -46,6 +46,8 @@ if "%update_branch%"=="" (
@cd sd-ui-files
@call git add -A .
@call git stash
@call git reset --hard
@call git -c advice.detachedHead=false checkout "%update_branch%"
@call git pull

View File

@ -2,7 +2,7 @@
source ./scripts/functions.sh
printf "\n\nEasy Diffusion\n\n"
printf "\n\nEasy Diffusion - v3\n\n"
export PYTHONNOUSERSITE=y
@ -29,6 +29,8 @@ if [ -f "scripts/install_status.txt" ] && [ `grep -c sd_ui_git_cloned scripts/in
cd sd-ui-files
git add -A .
git stash
git reset --hard
git -c advice.detachedHead=false checkout "$update_branch"
git pull

View File

@ -34,6 +34,7 @@ call conda activate
@REM remove the old version of the dev console script, if it's still present
if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
if exist "ui\plugins\ui\merge.plugin.js" del "ui\plugins\ui\merge.plugin.js"
@rem create the stable-diffusion folder, to work with legacy installations
if not exist "stable-diffusion" mkdir stable-diffusion
@ -52,73 +53,26 @@ if exist ldm rename ldm ldm-old
if not exist "%INSTALL_ENV_DIR%\DLLs\libssl-1_1-x64.dll" copy "%INSTALL_ENV_DIR%\Library\bin\libssl-1_1-x64.dll" "%INSTALL_ENV_DIR%\DLLs\"
if not exist "%INSTALL_ENV_DIR%\DLLs\libcrypto-1_1-x64.dll" copy "%INSTALL_ENV_DIR%\Library\bin\libcrypto-1_1-x64.dll" "%INSTALL_ENV_DIR%\DLLs\"
cd ..
@rem set any overrides
set HF_HUB_DISABLE_SYMLINKS_WARNING=true
@rem install or upgrade the required modules
set PATH=C:\Windows\System32;%PATH%
@REM prevent from using packages from the user's home directory, to avoid conflicts
set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
@rem Download the required packages
call python ..\scripts\check_modules.py
if "%ERRORLEVEL%" NEQ "0" (
pause
exit /b
)
call WHERE uvicorn > .tmp
@>nul findstr /m "uvicorn" .tmp
@if "%ERRORLEVEL%" NEQ "0" (
@echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/easydiffusion/easydiffusion/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
)
@>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo sd_weights_downloaded >> ..\scripts\install_status.txt
@echo sd_install_complete >> ..\scripts\install_status.txt
)
@echo. & echo "Easy Diffusion installation complete! Starting the server!" & echo.
@set SD_DIR=%cd%
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
echo PYTHONPATH=%PYTHONPATH%
@rem Download the required packages
call where python
call python --version
@cd ..
@set SD_UI_PATH=%cd%\ui
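@rem check_modules.py imports torchruntime at module load, so make sure it is installed before calling the script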
call python -m pip install -q torchruntime
@FOR /F "tokens=* USEBACKQ" %%F IN (`python scripts\get_config.py --default=9000 net listen_port`) DO (
@SET ED_BIND_PORT=%%F
)
call python scripts\check_modules.py --launch-uvicorn
pause
exit /b
@FOR /F "tokens=* USEBACKQ" %%F IN (`python scripts\get_config.py --default=False net listen_to_network`) DO (
if "%%F" EQU "True" (
@FOR /F "tokens=* USEBACKQ" %%G IN (`python scripts\get_config.py --default=0.0.0.0 net bind_ip`) DO (
@SET ED_BIND_IP=%%G
)
) else (
@SET ED_BIND_IP=127.0.0.1
)
)
@cd stable-diffusion
@rem set any overrides
set HF_HUB_DISABLE_SYMLINKS_WARNING=true
@python -m uvicorn main:server_api --app-dir "%SD_UI_PATH%" --port %ED_BIND_PORT% --host %ED_BIND_IP% --log-level error
@pause

View File

@ -6,6 +6,7 @@ cp sd-ui-files/scripts/bootstrap.sh scripts/
cp sd-ui-files/scripts/check_modules.py scripts/
cp sd-ui-files/scripts/get_config.py scripts/
cp sd-ui-files/scripts/config.yaml.sample scripts/
source ./scripts/functions.sh
@ -20,6 +21,10 @@ if [ -e "open_dev_console.sh" ]; then
rm "open_dev_console.sh"
fi
if [ -e "ui/plugins/ui/merge.plugin.js" ]; then
rm "ui/plugins/ui/merge.plugin.js"
fi
# set the correct installer path (current vs legacy)
if [ -e "installer_files/env" ]; then
export INSTALL_ENV_DIR="$(pwd)/installer_files/env"
@ -41,45 +46,10 @@ fi
if [ -e "src" ]; then mv src src-old; fi
if [ -e "ldm" ]; then mv ldm ldm-old; fi
# Download the required packages
if ! python ../scripts/check_modules.py; then
read -p "Press any key to continue"
exit 1
fi
if ! command -v uvicorn &> /dev/null; then
fail "UI packages not found!"
fi
if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then
echo sd_weights_downloaded >> ../scripts/install_status.txt
echo sd_install_complete >> ../scripts/install_status.txt
fi
printf "\n\nEasy Diffusion installation complete, starting the server!\n\n"
SD_PATH=`pwd`
export PYTORCH_ENABLE_MPS_FALLBACK=1
export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
echo "PYTHONPATH=$PYTHONPATH"
which python
python --version
python -m pip install -q torchruntime
cd ..
export SD_UI_PATH=`pwd`/ui
export ED_BIND_PORT="$( python scripts/get_config.py --default=9000 net listen_port )"
case "$( python scripts/get_config.py --default=False net listen_to_network )" in
"True")
export ED_BIND_IP=$( python scripts/get_config.py --default=0.0.0.0 net bind_ip)
;;
"False")
export ED_BIND_IP=127.0.0.1
;;
esac
cd stable-diffusion
uvicorn main:server_api --app-dir "$SD_UI_PATH" --port "$ED_BIND_PORT" --host "$ED_BIND_IP" --log-level error
# Download the required packages
python scripts/check_modules.py --launch-uvicorn
read -p "Press any key to continue"

View File

@ -11,7 +11,7 @@ if [ -f "on_sd_start.bat" ]; then
echo download. This will not work.
echo
echo Recommended: Please close this window and download the installer from
echo https://stable-diffusion-ui.github.io/docs/installation/
echo https://easydiffusion.github.io/docs/installation/
echo
echo ================================================================================
echo
@ -19,6 +19,7 @@ if [ -f "on_sd_start.bat" ]; then
exit 1
fi
unset PYTHONHOME
# set legacy installer's PATH, if it exists
if [ -e "installer" ]; then export PATH="$(pwd)/installer/bin:$PATH"; fi

View File

@ -32,10 +32,12 @@ logging.basicConfig(
SD_DIR = os.getcwd()
ROOT_DIR = os.path.abspath(os.path.join(SD_DIR, ".."))
SD_UI_DIR = os.getenv("SD_UI_PATH", None)
CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, "..", "scripts"))
MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "models"))
BUCKET_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "bucket"))
USER_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "plugins"))
CORE_PLUGINS_DIR = os.path.abspath(os.path.join(SD_UI_DIR, "plugins"))
@ -52,12 +54,12 @@ OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
PRESERVE_CONFIG_VARS = ["FORCE_FULL_PRECISION"]
TASK_TTL = 15 * 60 # Discard last session's task timeout
APP_CONFIG_DEFAULTS = {
# auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
"render_devices": "auto", # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
"render_devices": "auto",
"update_branch": "main",
"ui": {
"open_browser_on_start": True,
},
"use_v3_engine": True,
}
IMAGE_EXTENSIONS = [
@ -88,14 +90,23 @@ CUSTOM_MODIFIERS_LANDSCAPE_EXTENSIONS = [
"-landscape",
]
MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, "..", "models"))
def init():
global MODELS_DIR
os.makedirs(USER_UI_PLUGINS_DIR, exist_ok=True)
os.makedirs(USER_SERVER_PLUGINS_DIR, exist_ok=True)
# https://pytorch.org/docs/stable/storage.html
warnings.filterwarnings("ignore", category=UserWarning, message="TypedStorage is deprecated")
config = getConfig()
config_models_dir = config.get("models_dir", None)
if (config_models_dir is not None and config_models_dir != ""):
MODELS_DIR = config_models_dir
def init_render_threads():
load_server_plugins()
@ -112,9 +123,9 @@ def getConfig(default_val=APP_CONFIG_DEFAULTS):
shutil.move(config_legacy_yaml, config_yaml_path)
def set_config_on_startup(config: dict):
if getConfig.__test_diffusers_on_startup is None:
getConfig.__test_diffusers_on_startup = config.get("test_diffusers", False)
config["config_on_startup"] = {"test_diffusers": getConfig.__test_diffusers_on_startup}
if getConfig.__use_v3_engine_on_startup is None:
getConfig.__use_v3_engine_on_startup = config.get("use_v3_engine", True)
config["config_on_startup"] = {"use_v3_engine": getConfig.__use_v3_engine_on_startup}
if os.path.isfile(config_yaml_path):
try:
@ -162,12 +173,15 @@ def getConfig(default_val=APP_CONFIG_DEFAULTS):
return default_val
getConfig.__test_diffusers_on_startup = None
getConfig.__use_v3_engine_on_startup = None
def setConfig(config):
global MODELS_DIR
try: # config.yaml
config_yaml_path = os.path.join(CONFIG_DIR, "..", "config.yaml")
config_yaml_path = os.path.abspath(config_yaml_path)
yaml = YAML()
if not hasattr(config, "_yaml_comment"):
@ -201,6 +215,9 @@ def setConfig(config):
except:
log.error(traceback.format_exc())
if config.get("models_dir"):
MODELS_DIR = config["models_dir"]
def save_to_config(ckpt_model_name, vae_model_name, hypernetwork_model_name, vram_usage_level):
config = getConfig()
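The net effect of the models_dir changes above: config.yaml can now carry a `models_dir` entry, and both `init()` and `setConfig()` repoint the global `MODELS_DIR` to it. A small sketch of observing that inside a running Easy Diffusion process (the override value shown is made up):

# Sketch only: MODELS_DIR follows the "models_dir" value from config.yaml.
from easydiffusion import app

app.init()                       # reads config.yaml and applies the override
config = app.getConfig()
print(config.get("models_dir"))  # e.g. "D:/sd-models" if configured, else None
print(app.MODELS_DIR)            # reflects the override after init()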

View File

@ -0,0 +1,107 @@
from typing import List
from fastapi import Depends, FastAPI, HTTPException, Response, File
from sqlalchemy.orm import Session
from easydiffusion.easydb import crud, models, schemas
from easydiffusion.easydb.database import SessionLocal, engine
from requests.compat import urlparse
import base64, json
MIME_TYPES = {
"jpg": "image/jpeg",
"jpeg": "image/jpeg",
"gif": "image/gif",
"png": "image/png",
"webp": "image/webp",
"js": "text/javascript",
"htm": "text/html",
"html": "text/html",
"css": "text/css",
"json": "application/json",
"mjs": "application/json",
"yaml": "application/yaml",
"svg": "image/svg+xml",
"txt": "text/plain",
}
def init():
from easydiffusion.server import server_api
models.BucketBase.metadata.create_all(bind=engine)
# Dependency
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
@server_api.get("/bucket/{obj_path:path}")
def bucket_get_object(obj_path: str, db: Session = Depends(get_db)):
filename = get_filename_from_url(obj_path)
path = get_path_from_url(obj_path)
    if filename is None:
        bucket = crud.get_bucket_by_path(db, path=path)
        if bucket is None:
raise HTTPException(status_code=404, detail="Bucket not found")
bucketfiles = db.query(models.BucketFile).with_entities(models.BucketFile.filename).filter(models.BucketFile.bucket_id == bucket.id).all()
bucketfiles = [ x.filename for x in bucketfiles ]
return bucketfiles
else:
bucket = crud.get_bucket_by_path(db, path)
        if bucket is None:
            raise HTTPException(status_code=404, detail="Bucket not found")
        bucket_id = bucket.id
        bucketfile = db.query(models.BucketFile).filter(models.BucketFile.bucket_id == bucket_id, models.BucketFile.filename == filename).first()
        if bucketfile is None:
raise HTTPException(status_code=404, detail="File not found")
suffix = get_suffix_from_filename(filename)
return Response(content=bucketfile.data, media_type=MIME_TYPES.get(suffix, "application/octet-stream"))
@server_api.post("/bucket/{obj_path:path}")
def bucket_post_object(obj_path: str, file: bytes = File(), db: Session = Depends(get_db)):
filename = get_filename_from_url(obj_path)
path = get_path_from_url(obj_path)
bucket = crud.get_bucket_by_path(db, path)
    if bucket is None:
bucket = crud.create_bucket(db=db, bucket=schemas.BucketCreate(path=path))
bucket_id = bucket.id
bucketfile = schemas.BucketFileCreate(filename=filename, data=file)
result = crud.create_bucketfile(db=db, bucketfile=bucketfile, bucket_id=bucket_id)
    result.data = base64.encodebytes(result.data)  # encodestring was removed in Python 3.9
return result
@server_api.post("/buckets/{bucket_id}/items/", response_model=schemas.BucketFile)
def create_bucketfile_in_bucket(
bucket_id: int, bucketfile: schemas.BucketFileCreate, db: Session = Depends(get_db)
):
    bucketfile.data = base64.decodebytes(bucketfile.data)  # decodestring was removed in Python 3.9
    result = crud.create_bucketfile(db=db, bucketfile=bucketfile, bucket_id=bucket_id)
    result.data = base64.encodebytes(result.data)
return result
def get_filename_from_url(url):
path = urlparse(url).path
name = path[path.rfind('/')+1:]
return name or None
def get_path_from_url(url):
path = urlparse(url).path
path = path[0:path.rfind('/')]
return path or None
def get_suffix_from_filename(filename):
return filename[filename.rfind('.')+1:]
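A short client-side sketch of the bucket endpoints defined above, assuming the server is running locally on the default port 9000; the bucket path and file name are invented for illustration:

# Exercise the /bucket API with the requests library (illustration only).
import requests

BASE = "http://localhost:9000"

# upload a file; the multipart field name matches the `file: bytes = File()` parameter
with open("thumbnail.png", "rb") as f:
    requests.post(f"{BASE}/bucket/profiles/default/thumbnail.png", files={"file": f}).raise_for_status()

# a trailing slash means "no filename", so this lists the files stored in that bucket
print(requests.get(f"{BASE}/bucket/profiles/default/").json())

# fetching a single object returns the stored bytes, with the MIME type taken from MIME_TYPES
r = requests.get(f"{BASE}/bucket/profiles/default/thumbnail.png")
print(r.headers.get("content-type"))  # image/png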

View File

@ -6,6 +6,15 @@ import traceback
import torch
from easydiffusion.utils import log
from torchruntime.utils import (
get_installed_torch_platform,
get_device,
get_device_count,
get_device_name,
SUPPORTED_BACKENDS,
)
from sdkit.utils import mem_get_info, is_cpu_device, has_half_precision_bug
"""
Set `FORCE_FULL_PRECISION` as an environment variable, or in `config.bat`/`config.sh`, to force full precision (i.e. float32).
Otherwise the models will load at half-precision (i.e. float16).
@ -22,33 +31,15 @@ mem_free_threshold = 0
def get_device_delta(render_devices, active_devices):
"""
render_devices: 'cpu', or 'auto', or 'mps' or ['cuda:N'...]
active_devices: ['cpu', 'mps', 'cuda:N'...]
render_devices: 'auto' or backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`
active_devices: [backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`]
"""
if render_devices in ("cpu", "auto", "mps"):
render_devices = [render_devices]
elif render_devices is not None:
if isinstance(render_devices, str):
render_devices = [render_devices]
if isinstance(render_devices, list) and len(render_devices) > 0:
render_devices = list(filter(lambda x: x.startswith("cuda:") or x == "mps", render_devices))
if len(render_devices) == 0:
raise Exception(
'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "mps"} or {"render_devices": "auto"}'
)
render_devices = render_devices or "auto"
render_devices = [render_devices] if isinstance(render_devices, str) else render_devices
render_devices = list(filter(lambda x: is_device_compatible(x), render_devices))
if len(render_devices) == 0:
raise Exception(
"Sorry, none of the render_devices configured in config.json are compatible with Stable Diffusion"
)
else:
raise Exception(
'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}'
)
else:
render_devices = ["auto"]
# check for backend support
validate_render_devices(render_devices)
if "auto" in render_devices:
render_devices = auto_pick_devices(active_devices)
@ -64,47 +55,39 @@ def get_device_delta(render_devices, active_devices):
return devices_to_start, devices_to_stop
def is_mps_available():
return (
platform.system() == "Darwin"
and hasattr(torch.backends, "mps")
and torch.backends.mps.is_available()
and torch.backends.mps.is_built()
)
def validate_render_devices(render_devices):
supported_backends = ("auto",) + SUPPORTED_BACKENDS
unsupported_render_devices = [d for d in render_devices if not d.lower().startswith(supported_backends)]
def is_cuda_available():
return torch.cuda.is_available()
if unsupported_render_devices:
raise ValueError(
f"Invalid render devices in config: {unsupported_render_devices}. Valid render devices: {supported_backends}"
)
def auto_pick_devices(currently_active_devices):
global mem_free_threshold
if is_mps_available():
return ["mps"]
torch_platform_name = get_installed_torch_platform()[0]
if not is_cuda_available():
return ["cpu"]
device_count = torch.cuda.device_count()
if device_count == 1:
return ["cuda:0"] if is_device_compatible("cuda:0") else ["cpu"]
if is_cpu_device(torch_platform_name):
return [torch_platform_name]
device_count = get_device_count()
log.debug("Autoselecting GPU. Using most free memory.")
devices = []
for device in range(device_count):
device = f"cuda:{device}"
if not is_device_compatible(device):
continue
for device_id in range(device_count):
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
device = get_device(device_id)
mem_free, mem_total = torch.cuda.mem_get_info(device)
mem_free, mem_total = mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
device_name = torch.cuda.get_device_name(device)
device_name = get_device_name(device)
log.debug(
f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
f"{device_id} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
)
devices.append({"device": device, "device_name": device_name, "mem_free": mem_free})
devices.append({"device": device_id, "device_name": device_name, "mem_free": mem_free})
devices.sort(key=lambda x: x["mem_free"], reverse=True)
max_mem_free = devices[0]["mem_free"]
@ -117,69 +100,45 @@ def auto_pick_devices(currently_active_devices):
# always be very low (since their VRAM contains the model).
# These already-running devices probably aren't terrible, since they were picked in the past.
# Worst case, the user can restart the program and that'll get rid of them.
devices = list(
filter(
(lambda x: x["mem_free"] > mem_free_threshold or x["device"] in currently_active_devices),
devices,
)
)
devices = list(map(lambda x: x["device"], devices))
devices = [
x["device"] for x in devices if x["mem_free"] >= mem_free_threshold or x["device"] in currently_active_devices
]
return devices
def device_init(context, device):
"""
This function assumes the 'device' has already been verified to be compatible.
`get_device_delta()` has already filtered out incompatible devices.
"""
def device_init(context, device_id):
context.device = device_id
validate_device_id(device, log_prefix="device_init")
if "cuda" not in device:
context.device = device
if is_cpu_device(context.torch_device):
context.device_name = get_processor_name()
context.half_precision = False
log.debug(f"Render device available as {context.device_name}")
return
else:
context.device_name = get_device_name(context.torch_device)
context.device_name = torch.cuda.get_device_name(device)
context.device = device
# Some graphics cards have bugs in their firmware that prevent image generation at half precision
if needs_to_force_full_precision(context.device_name):
log.warn(f"forcing full precision on this GPU, to avoid corrupted images. GPU: {context.device_name}")
context.half_precision = False
# Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
if needs_to_force_full_precision(context):
log.warn(f"forcing full precision on this GPU, to avoid green images. GPU detected: {context.device_name}")
# Apply force_full_precision now before models are loaded.
context.half_precision = False
log.info(f'Setting {device} as active, with precision: {"half" if context.half_precision else "full"}')
torch.cuda.device(device)
log.info(f'Setting {device_id} as active, with precision: {"half" if context.half_precision else "full"}')
def needs_to_force_full_precision(context):
def needs_to_force_full_precision(device_name):
if "FORCE_FULL_PRECISION" in os.environ:
return True
device_name = context.device_name.lower()
return (
("nvidia" in device_name or "geforce" in device_name or "quadro" in device_name)
and (
" 1660" in device_name
or " 1650" in device_name
or " 1630" in device_name
or " t400" in device_name
or " t550" in device_name
or " t600" in device_name
or " t1000" in device_name
or " t1200" in device_name
or " t2000" in device_name
)
) or ("tesla k40m" in device_name)
return has_half_precision_bug(device_name.lower())
def get_max_vram_usage_level(device):
if "cuda" in device:
_, mem_total = torch.cuda.mem_get_info(device)
else:
"Expects a torch.device as the argument"
if is_cpu_device(device):
return "high"
_, mem_total = mem_get_info(device)
if mem_total < 0.001: # probably a torch platform without a mem_get_info() implementation
return "high"
mem_total /= float(10**9)
@ -191,51 +150,6 @@ def get_max_vram_usage_level(device):
return "high"
def validate_device_id(device, log_prefix=""):
def is_valid():
if not isinstance(device, str):
return False
if device == "cpu" or device == "mps":
return True
if not device.startswith("cuda:") or not device[5:].isnumeric():
return False
return True
if not is_valid():
raise EnvironmentError(
f"{log_prefix}: device id should be 'cpu', 'mps', or 'cuda:N' (where N is an integer index for the GPU). Got: {device}"
)
def is_device_compatible(device):
"""
Returns True/False, and prints any compatibility errors
"""
# static variable "history".
is_device_compatible.history = getattr(is_device_compatible, "history", {})
try:
validate_device_id(device, log_prefix="is_device_compatible")
except:
log.error(str(e))
return False
if device in ("cpu", "mps"):
return True
# Memory check
try:
_, mem_total = torch.cuda.mem_get_info(device)
mem_total /= float(10**9)
if mem_total < 1.9:
if is_device_compatible.history.get(device) == None:
log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion")
is_device_compatible.history[device] = 1
return False
except RuntimeError as e:
log.error(str(e))
return False
return True
def get_processor_name():
try:
import subprocess
@ -243,7 +157,8 @@ def get_processor_name():
if platform.system() == "Windows":
return platform.processor()
elif platform.system() == "Darwin":
os.environ["PATH"] = os.environ["PATH"] + os.pathsep + "/usr/sbin"
if "/usr/sbin" not in os.environ["PATH"].split(os.pathsep):
os.environ["PATH"] = os.environ["PATH"] + os.pathsep + "/usr/sbin"
command = "sysctl -n machdep.cpu.brand_string"
return subprocess.check_output(command, shell=True).decode().strip()
elif platform.system() == "Linux":
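To make the new device validation concrete, here is a rough usage sketch of `validate_render_devices()`. The accepted prefixes come from torchruntime's `SUPPORTED_BACKENDS`, and the "tpu:0" value below is simply an invented invalid entry:

# Sketch: "auto" and anything starting with a supported backend name passes.
from easydiffusion.device_manager import validate_render_devices

validate_render_devices(["auto"])              # always accepted
validate_render_devices(["cuda:0", "cuda:1"])  # accepted when "cuda" is a supported backend
try:
    validate_render_devices(["tpu:0"])         # invented, unsupported value
except ValueError as e:
    print(e)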

View File

@ -0,0 +1,24 @@
from sqlalchemy.orm import Session
from easydiffusion.easydb import models, schemas
def get_bucket_by_path(db: Session, path: str):
return db.query(models.Bucket).filter(models.Bucket.path == path).first()
def create_bucket(db: Session, bucket: schemas.BucketCreate):
db_bucket = models.Bucket(path=bucket.path)
db.add(db_bucket)
db.commit()
db.refresh(db_bucket)
return db_bucket
def create_bucketfile(db: Session, bucketfile: schemas.BucketFileCreate, bucket_id: int):
db_bucketfile = models.BucketFile(**bucketfile.dict(), bucket_id=bucket_id)
db.merge(db_bucketfile)
db.commit()
db_bucketfile = db.query(models.BucketFile).filter(models.BucketFile.bucket_id==bucket_id, models.BucketFile.filename==bucketfile.filename).first()
return db_bucketfile
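A minimal sketch of driving these CRUD helpers directly, assuming the `database` module (below) has already created the engine and `SessionLocal`; the bucket path and file contents are placeholders:

# Create (or reuse) a bucket and attach one file to it.
from easydiffusion.easydb import crud, schemas
from easydiffusion.easydb.database import SessionLocal

db = SessionLocal()
try:
    bucket = crud.get_bucket_by_path(db, "profiles/default") or crud.create_bucket(
        db, schemas.BucketCreate(path="profiles/default")
    )
    crud.create_bucketfile(db, schemas.BucketFileCreate(filename="notes.txt", data=b"hello"), bucket.id)
finally:
    db.close()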

View File

@ -0,0 +1,14 @@
import os
from easydiffusion import app
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
os.makedirs(app.BUCKET_DIR, exist_ok=True)
SQLALCHEMY_DATABASE_URL = "sqlite:///"+os.path.join(app.BUCKET_DIR, "bucket.db")
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
BucketBase = declarative_base()

View File

@ -0,0 +1,25 @@
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String, BLOB
from sqlalchemy.orm import relationship
from easydiffusion.easydb.database import BucketBase
class Bucket(BucketBase):
__tablename__ = "bucket"
id = Column(Integer, primary_key=True, index=True)
path = Column(String, unique=True, index=True)
bucketfiles = relationship("BucketFile", back_populates="bucket")
class BucketFile(BucketBase):
__tablename__ = "bucketfile"
filename = Column(String, index=True, primary_key=True)
bucket_id = Column(Integer, ForeignKey("bucket.id"), primary_key=True)
data = Column(BLOB, index=False)
bucket = relationship("Bucket", back_populates="bucketfiles")

View File

@ -0,0 +1,36 @@
from typing import List, Union
from pydantic import BaseModel
class BucketFileBase(BaseModel):
filename: str
data: bytes
class BucketFileCreate(BucketFileBase):
pass
class BucketFile(BucketFileBase):
bucket_id: int
class Config:
orm_mode = True
class BucketBase(BaseModel):
path: str
class BucketCreate(BucketBase):
pass
class Bucket(BucketBase):
id: int
bucketfiles: List[BucketFile] = []
class Config:
orm_mode = True

View File

@ -5,11 +5,13 @@ import traceback
from typing import Union
from easydiffusion import app
from easydiffusion.types import TaskData
from easydiffusion.types import ModelsData
from easydiffusion.utils import log
from sdkit import Context
from sdkit.models import load_model, scan_model, unload_model, download_model, get_model_info_from_db
from sdkit.models.model_loader.controlnet_filters import filters as cn_filters
from sdkit.utils import hash_file_quick
from sdkit.models.model_loader.embeddings import get_embedding_token
KNOWN_MODEL_TYPES = [
"stable-diffusion",
@ -19,6 +21,8 @@ KNOWN_MODEL_TYPES = [
"realesrgan",
"lora",
"codeformer",
"embeddings",
"controlnet",
]
MODEL_EXTENSIONS = {
"stable-diffusion": [".ckpt", ".safetensors"],
@ -26,9 +30,10 @@ MODEL_EXTENSIONS = {
"hypernetwork": [".pt", ".safetensors"],
"gfpgan": [".pth"],
"realesrgan": [".pth"],
"lora": [".ckpt", ".safetensors"],
"lora": [".ckpt", ".safetensors", ".pt"],
"codeformer": [".pth"],
"embeddings": [".pt", ".bin", ".safetensors"],
"controlnet": [".pth", ".safetensors"],
}
DEFAULT_MODELS = {
"stable-diffusion": [
@ -57,10 +62,9 @@ def init():
def load_default_models(context: Context):
set_vram_optimizations(context)
from easydiffusion import runtime
config = app.getConfig()
context.embeddings_path = os.path.join(app.MODELS_DIR, "embeddings")
runtime.set_vram_optimizations(context)
# init default model paths
for model_type in MODELS_TO_LOAD_ON_START:
@ -72,7 +76,7 @@ def load_default_models(context: Context):
scan_model=context.model_paths[model_type] != None
and not context.model_paths[model_type].endswith(".safetensors"),
)
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
@ -84,6 +88,8 @@ def load_default_models(context: Context):
log.exception(e)
del context.model_paths[model_type]
if not hasattr(context, "model_load_errors"):
context.model_load_errors = {}
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
@ -96,7 +102,16 @@ def unload_all(context: Context):
def resolve_model_to_use(model_name: Union[str, list] = None, model_type: str = None, fail_if_not_found: bool = True):
model_names = model_name if isinstance(model_name, list) else [model_name]
model_paths = [resolve_model_to_use_single(m, model_type, fail_if_not_found) for m in model_names]
model_paths = []
for m in model_names:
if model_type == "embeddings":
try:
resolve_model_to_use_single(m, model_type)
except FileNotFoundError: # try with spaces
m = m.replace("_", " ")
path = resolve_model_to_use_single(m, model_type, fail_if_not_found)
model_paths.append(path)
return model_paths[0] if len(model_paths) == 1 else model_paths
@ -135,77 +150,68 @@ def resolve_model_to_use_single(model_name: str = None, model_type: str = None,
return default_model_path
if model_name and fail_if_not_found:
raise Exception(f"Could not find the desired model {model_name}! Is it present in the {model_dir} folder?")
raise FileNotFoundError(
f"Could not find the desired model {model_name}! Is it present in the {model_dir} folder?"
)
def reload_models_if_necessary(context: Context, task_data: TaskData):
face_fix_lower = task_data.use_face_correction.lower() if task_data.use_face_correction else ""
upscale_lower = task_data.use_upscale.lower() if task_data.use_upscale else ""
model_paths_in_req = {
"stable-diffusion": task_data.use_stable_diffusion_model,
"vae": task_data.use_vae_model,
"hypernetwork": task_data.use_hypernetwork_model,
"codeformer": task_data.use_face_correction if "codeformer" in face_fix_lower else None,
"gfpgan": task_data.use_face_correction if "gfpgan" in face_fix_lower else None,
"realesrgan": task_data.use_upscale if "realesrgan" in upscale_lower else None,
"latent_upscaler": True if "latent_upscaler" in upscale_lower else None,
"nsfw_checker": True if task_data.block_nsfw else None,
"lora": task_data.use_lora_model,
}
def reload_models_if_necessary(context: Context, models_data: ModelsData, models_to_force_reload: list = []):
models_to_reload = {
model_type: path
for model_type, path in model_paths_in_req.items()
if context.model_paths.get(model_type) != path
for model_type, path in models_data.model_paths.items()
if context.model_paths.get(model_type) != path or (path is not None and context.models.get(model_type) is None)
}
if task_data.codeformer_upscale_faces:
if models_data.model_paths.get("codeformer"):
if "realesrgan" not in models_to_reload and "realesrgan" not in context.models:
default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
models_to_reload["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
elif "realesrgan" in models_to_reload and models_to_reload["realesrgan"] is None:
del models_to_reload["realesrgan"] # don't unload realesrgan
if set_vram_optimizations(context) or set_clip_skip(context, task_data): # reload SD
models_to_reload["stable-diffusion"] = model_paths_in_req["stable-diffusion"]
for model_type in models_to_force_reload:
if model_type not in models_data.model_paths:
continue
models_to_reload[model_type] = models_data.model_paths[model_type]
for model_type, model_path_in_req in models_to_reload.items():
context.model_paths[model_type] = model_path_in_req
action_fn = unload_model if context.model_paths[model_type] is None else load_model
extra_params = models_data.model_params.get(model_type, {})
try:
action_fn(context, model_type, scan_model=False) # we've scanned them already
if model_type in context.model_load_errors:
action_fn(context, model_type, scan_model=False, **extra_params) # we've scanned them already
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.exception(e)
if action_fn == load_model:
if not hasattr(context, "model_load_errors"):
context.model_load_errors = {}
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
def resolve_model_paths(task_data: TaskData):
task_data.use_stable_diffusion_model = resolve_model_to_use(
task_data.use_stable_diffusion_model, model_type="stable-diffusion"
)
task_data.use_vae_model = resolve_model_to_use(task_data.use_vae_model, model_type="vae")
task_data.use_hypernetwork_model = resolve_model_to_use(task_data.use_hypernetwork_model, model_type="hypernetwork")
task_data.use_lora_model = resolve_model_to_use(task_data.use_lora_model, model_type="lora")
if task_data.use_face_correction:
if "gfpgan" in task_data.use_face_correction.lower():
model_type = "gfpgan"
elif "codeformer" in task_data.use_face_correction.lower():
model_type = "codeformer"
def resolve_model_paths(models_data: ModelsData):
model_paths = models_data.model_paths
for model_type in model_paths:
skip_models = cn_filters + ["latent_upscaler", "nsfw_checker"]
if model_type in skip_models: # doesn't use model paths
continue
if model_type == "codeformer" and model_paths[model_type]:
download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0")
elif model_type == "controlnet" and model_paths[model_type]:
model_id = model_paths[model_type]
model_info = get_model_info_from_db(model_type=model_type, model_id=model_id)
if model_info:
filename = model_info.get("url", "").split("/")[-1]
download_if_necessary("controlnet", filename, model_id, skip_if_others_exist=False)
task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, model_type)
if task_data.use_upscale and "realesrgan" in task_data.use_upscale.lower():
task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, "realesrgan")
model_paths[model_type] = resolve_model_to_use(model_paths[model_type], model_type=model_type)
def fail_if_models_did_not_load(context: Context):
for model_type in KNOWN_MODEL_TYPES:
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
e = context.model_load_errors[model_type]
raise Exception(f"Could not load the {model_type} model! Reason: " + e)
@ -222,28 +228,17 @@ def download_default_models_if_necessary():
print(model_type, "model(s) found.")
def download_if_necessary(model_type: str, file_name: str, model_id: str):
def download_if_necessary(model_type: str, file_name: str, model_id: str, skip_if_others_exist=True):
model_path = os.path.join(app.MODELS_DIR, model_type, file_name)
expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]
other_models_exist = any_model_exists(model_type)
other_models_exist = any_model_exists(model_type) and skip_if_others_exist
known_model_exists = os.path.exists(model_path)
known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
if known_model_is_corrupt or (not other_models_exist and not known_model_exists):
print("> download", model_type, model_id)
download_model(model_type, model_id, download_base_dir=app.MODELS_DIR)
def set_vram_optimizations(context: Context):
config = app.getConfig()
vram_usage_level = config.get("vram_usage_level", "balanced")
if vram_usage_level != context.vram_usage_level:
context.vram_usage_level = vram_usage_level
return True
return False
download_model(model_type, model_id, download_base_dir=app.MODELS_DIR, download_config_if_available=False)
def migrate_legacy_model_location():
@ -266,21 +261,28 @@ def any_model_exists(model_type: str) -> bool:
return False
def set_clip_skip(context: Context, task_data: TaskData):
clip_skip = task_data.clip_skip
if clip_skip != context.clip_skip:
context.clip_skip = clip_skip
return True
return False
def make_model_folders():
for model_type in KNOWN_MODEL_TYPES:
model_dir_path = os.path.join(app.MODELS_DIR, model_type)
os.makedirs(model_dir_path, exist_ok=True)
try:
os.makedirs(model_dir_path, exist_ok=True)
except Exception as e:
from rich.console import Console
from rich.panel import Panel
Console().print(
Panel(
"\n"
+ f"Error while creating the models directory: '{model_dir_path}'\n"
+ f"Error: {e}\n\n"
+ f"[white]Check the 'models_dir:' line in the file '{os.path.join(app.ROOT_DIR, 'config.yaml')}'.[/white]\n",
title="Fatal Error starting Easy Diffusion",
style="bold yellow on red",
)
)
input("Press Enter to terminate...")
exit(1)
help_file_name = f"Place your {model_type} model files here.txt"
help_file_contents = f'Supported extensions: {" or ".join(MODEL_EXTENSIONS.get(model_type))}'
@ -324,12 +326,27 @@ def is_malicious_model(file_path):
def getModels(scan_for_malicious: bool = True):
models = {
"options": {
"stable-diffusion": ["sd-v1-4"],
"stable-diffusion": [],
"vae": [],
"hypernetwork": [],
"lora": [],
"codeformer": ["codeformer"],
"codeformer": [{"codeformer": "CodeFormer"}],
"embeddings": [],
"controlnet": [
{"control_v11p_sd15_canny": "Canny (*)"},
{"control_v11p_sd15_openpose": "OpenPose (*)"},
{"control_v11p_sd15_normalbae": "Normal BAE (*)"},
{"control_v11f1p_sd15_depth": "Depth (*)"},
{"control_v11p_sd15_scribble": "Scribble"},
{"control_v11p_sd15_softedge": "Soft Edge"},
{"control_v11p_sd15_inpaint": "Inpaint"},
{"control_v11p_sd15_lineart": "Line Art"},
{"control_v11p_sd15s2_lineart_anime": "Line Art Anime"},
{"control_v11p_sd15_mlsd": "Straight Lines"},
{"control_v11p_sd15_seg": "Segment"},
{"control_v11e_sd15_shuffle": "Shuffle"},
{"control_v11f1e_sd15_tile": "Tile"},
],
},
}
@ -338,9 +355,11 @@ def getModels(scan_for_malicious: bool = True):
class MaliciousModelException(Exception):
"Raised when picklescan reports a problem with a model"
def scan_directory(directory, suffixes, directoriesFirst: bool = True):
def scan_directory(directory, suffixes, directoriesFirst: bool = True, default_entries=[], nameFilter=None):
nonlocal models_scanned
tree = []
tree = list(default_entries)
for entry in sorted(
os.scandir(directory),
key=lambda entry: (entry.is_file() == directoriesFirst, entry.name.lower()),
@ -359,15 +378,27 @@ def getModels(scan_for_malicious: bool = True):
raise MaliciousModelException(entry.path)
if scan_for_malicious:
known_models[entry.path] = mtime
tree.append(entry.name[: -len(matching_suffix)])
model_id = entry.name[: -len(matching_suffix)]
if callable(nameFilter):
model_id = nameFilter(model_id)
model_exists = False
for m in tree: # allows default "named" models, like CodeFormer and known ControlNet models
if (isinstance(m, str) and model_id == m) or (isinstance(m, dict) and model_id in m):
model_exists = True
break
if not model_exists:
tree.append(model_id)
elif entry.is_dir():
scan = scan_directory(entry.path, suffixes, directoriesFirst=False)
scan = scan_directory(entry.path, suffixes, directoriesFirst=False, nameFilter=nameFilter)
if len(scan) != 0:
tree.append((entry.name, scan))
return tree
def listModels(model_type):
def listModels(model_type, nameFilter=None):
nonlocal models_scanned
model_extensions = MODEL_EXTENSIONS.get(model_type, [])
@ -376,7 +407,10 @@ def getModels(scan_for_malicious: bool = True):
os.makedirs(models_dir)
try:
models["options"][model_type] = scan_directory(models_dir, model_extensions)
default_tree = models["options"].get(model_type, [])
models["options"][model_type] = scan_directory(
models_dir, model_extensions, default_entries=default_tree, nameFilter=nameFilter
)
except MaliciousModelException as e:
models["scan-error"] = str(e)
@ -388,7 +422,8 @@ def getModels(scan_for_malicious: bool = True):
listModels(model_type="hypernetwork")
listModels(model_type="gfpgan")
listModels(model_type="lora")
listModels(model_type="embeddings")
listModels(model_type="embeddings", nameFilter=get_embedding_token)
listModels(model_type="controlnet")
if scan_for_malicious and models_scanned > 0:
log.info(f"[green]Scanned {models_scanned} models. Nothing infected[/]")

View File

@ -0,0 +1,102 @@
import sys
import os
import platform
from importlib.metadata import version as pkg_version
from sdkit.utils import log
from easydiffusion import app
# future home of scripts/check_modules.py
manifest = {
"tensorrt": {
"install": [
"wheel",
"nvidia-cudnn-cu11==8.9.4.25",
"tensorrt==9.0.0.post11.dev1 --pre --extra-index-url=https://pypi.nvidia.com --trusted-host pypi.nvidia.com",
],
"uninstall": ["tensorrt"],
# TODO also uninstall tensorrt-libs and nvidia-cudnn, but do it upon restarting (avoid 'file in use' error)
}
}
installing = []
# remove this once TRT releases on pypi
if platform.system() == "Windows":
trt_dir = os.path.join(app.ROOT_DIR, "tensorrt")
if os.path.exists(trt_dir) and os.path.isdir(trt_dir) and len(os.listdir(trt_dir)) > 0:
files = os.listdir(trt_dir)
packages = manifest["tensorrt"]["install"]
packages = tuple(p.replace("-", "_") for p in packages)
wheels = []
for p in packages:
p = p.split(" ")[0]
f = next((f for f in files if f.startswith(p) and f.endswith((".whl", ".tar.gz"))), None)
if f:
wheels.append(os.path.join(trt_dir, f))
manifest["tensorrt"]["install"] = wheels
def get_installed_packages() -> dict:
return {module_name: version(module_name) for module_name in manifest if is_installed(module_name)}
def is_installed(module_name) -> bool:
return version(module_name) is not None
def install(module_name):
if is_installed(module_name):
log.info(f"{module_name} has already been installed!")
return
if module_name in installing:
log.info(f"{module_name} is already installing!")
return
if module_name not in manifest:
raise RuntimeError(f"Can't install unknown package: {module_name}!")
commands = manifest[module_name]["install"]
if module_name == "tensorrt":
commands += [
"protobuf==3.20.3 polygraphy==0.47.1 onnx==1.14.0 --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com"
]
commands = [f"python -m pip install --upgrade {cmd}" for cmd in commands]
installing.append(module_name)
try:
for cmd in commands:
print(">", cmd)
if os.system(cmd) != 0:
raise RuntimeError(f"Error while running {cmd}. Please check the logs in the command-line.")
finally:
installing.remove(module_name)
def uninstall(module_name):
if not is_installed(module_name):
log.info(f"{module_name} hasn't been installed!")
return
if module_name not in manifest:
raise RuntimeError(f"Can't uninstall unknown package: {module_name}!")
commands = manifest[module_name]["uninstall"]
commands = [f"python -m pip uninstall -y {cmd}" for cmd in commands]
for cmd in commands:
print(">", cmd)
if os.system(cmd) != 0:
raise RuntimeError(f"Error while running {cmd}. Please check the logs in the command-line.")
def version(module_name: str) -> str:
try:
return pkg_version(module_name)
except:
return None
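Usage of this module is intentionally simple. A sketch, noting that "tensorrt" is the only manifest entry shown here and that `install()` really does shell out to pip:

# Query and (optionally) install an optional package via the manifest above.
from easydiffusion import package_manager

print(package_manager.get_installed_packages())   # e.g. {} on a fresh install
if not package_manager.is_installed("tensorrt"):
    package_manager.install("tensorrt")           # runs the pip commands from the manifest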

View File

@ -1,279 +0,0 @@
import json
import pprint
import queue
import time
from easydiffusion import device_manager
from easydiffusion.types import GenerateImageRequest
from easydiffusion.types import Image as ResponseImage
from easydiffusion.types import Response, TaskData, UserInitiatedStop
from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use
from easydiffusion.utils import get_printable_request, log, save_images_to_disk
from sdkit import Context
from sdkit.filter import apply_filters
from sdkit.generate import generate_images
from sdkit.models import load_model
from sdkit.utils import (
diffusers_latent_samples_to_images,
gc,
img_to_base64_str,
img_to_buffer,
latent_samples_to_images,
get_device_usage,
)
context = Context() # thread-local
"""
runtime data (bound locally to this thread), e.g. the device, references to loaded models, optimization flags, etc.
"""
def init(device):
"""
Initializes the fields that will be bound to this runtime's context, and sets the current torch device
"""
context.stop_processing = False
context.temp_images = {}
context.partial_x_samples = None
context.model_load_errors = {}
context.enable_codeformer = True
from easydiffusion import app
app_config = app.getConfig()
context.test_diffusers = (
app_config.get("test_diffusers", False) and app_config.get("update_branch", "main") != "main"
)
log.info("Device usage during initialization:")
get_device_usage(device, log_info=True, process_usage_only=False)
device_manager.device_init(context, device)
def make_images(
req: GenerateImageRequest,
task_data: TaskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
):
context.stop_processing = False
print_task_info(req, task_data)
images, seeds = make_images_internal(req, task_data, data_queue, task_temp_images, step_callback)
res = Response(
req,
task_data,
images=construct_response(images, seeds, task_data, base_seed=req.seed),
)
res = res.json()
data_queue.put(json.dumps(res))
log.info("Task completed")
return res
def print_task_info(req: GenerateImageRequest, task_data: TaskData):
req_str = pprint.pformat(get_printable_request(req, task_data)).replace("[", "\[")
task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
log.info(f"request: {req_str}")
log.info(f"task data: {task_str}")
def make_images_internal(
req: GenerateImageRequest,
task_data: TaskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
):
images, user_stopped = generate_images_internal(
req,
task_data,
data_queue,
task_temp_images,
step_callback,
task_data.stream_image_progress,
task_data.stream_image_progress_interval,
)
gc(context)
filtered_images = filter_images(req, task_data, images, user_stopped)
if task_data.save_to_disk_path is not None:
save_images_to_disk(images, filtered_images, req, task_data)
seeds = [*range(req.seed, req.seed + len(images))]
if task_data.show_only_filtered_image or filtered_images is images:
return filtered_images, seeds
else:
return images + filtered_images, seeds + seeds
def generate_images_internal(
req: GenerateImageRequest,
task_data: TaskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
stream_image_progress: bool,
stream_image_progress_interval: int,
):
context.temp_images.clear()
callback = make_step_callback(
req,
task_data,
data_queue,
task_temp_images,
step_callback,
stream_image_progress,
stream_image_progress_interval,
)
try:
if req.init_image is not None and not context.test_diffusers:
req.sampler_name = "ddim"
images = generate_images(context, callback=callback, **req.dict())
user_stopped = False
except UserInitiatedStop:
images = []
user_stopped = True
if context.partial_x_samples is not None:
if context.test_diffusers:
images = diffusers_latent_samples_to_images(context, context.partial_x_samples)
else:
images = latent_samples_to_images(context, context.partial_x_samples)
finally:
if hasattr(context, "partial_x_samples") and context.partial_x_samples is not None:
if not context.test_diffusers:
del context.partial_x_samples
context.partial_x_samples = None
return images, user_stopped
def filter_images(req: GenerateImageRequest, task_data: TaskData, images: list, user_stopped):
if user_stopped:
return images
if task_data.block_nsfw:
images = apply_filters(context, "nsfw_checker", images)
if task_data.use_face_correction and "codeformer" in task_data.use_face_correction.lower():
default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
prev_realesrgan_path = None
if task_data.codeformer_upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]:
prev_realesrgan_path = context.model_paths["realesrgan"]
context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
load_model(context, "realesrgan")
try:
images = apply_filters(
context,
"codeformer",
images,
upscale_faces=task_data.codeformer_upscale_faces,
codeformer_fidelity=task_data.codeformer_fidelity,
)
finally:
if prev_realesrgan_path:
context.model_paths["realesrgan"] = prev_realesrgan_path
load_model(context, "realesrgan")
elif task_data.use_face_correction and "gfpgan" in task_data.use_face_correction.lower():
images = apply_filters(context, "gfpgan", images)
if task_data.use_upscale:
if "realesrgan" in task_data.use_upscale.lower():
images = apply_filters(context, "realesrgan", images, scale=task_data.upscale_amount)
elif task_data.use_upscale == "latent_upscaler":
images = apply_filters(
context,
"latent_upscaler",
images,
scale=task_data.upscale_amount,
latent_upscaler_options={
"prompt": req.prompt,
"negative_prompt": req.negative_prompt,
"seed": req.seed,
"num_inference_steps": task_data.latent_upscaler_steps,
"guidance_scale": 0,
},
)
return images
def construct_response(images: list, seeds: list, task_data: TaskData, base_seed: int):
return [
ResponseImage(
data=img_to_base64_str(
img,
task_data.output_format,
task_data.output_quality,
task_data.output_lossless,
),
seed=seed,
)
for img, seed in zip(images, seeds)
]
def make_step_callback(
req: GenerateImageRequest,
task_data: TaskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
stream_image_progress: bool,
stream_image_progress_interval: int,
):
n_steps = req.num_inference_steps if req.init_image is None else int(req.num_inference_steps * req.prompt_strength)
last_callback_time = -1
def update_temp_img(x_samples, task_temp_images: list):
partial_images = []
if context.test_diffusers:
images = diffusers_latent_samples_to_images(context, x_samples)
else:
images = latent_samples_to_images(context, x_samples)
if task_data.block_nsfw:
images = apply_filters(context, "nsfw_checker", images)
for i, img in enumerate(images):
buf = img_to_buffer(img, output_format="JPEG")
context.temp_images[f"{task_data.request_id}/{i}"] = buf
task_temp_images[i] = buf
partial_images.append({"path": f"/image/tmp/{task_data.request_id}/{i}"})
del images
return partial_images
def on_image_step(x_samples, i, *args):
nonlocal last_callback_time
if context.test_diffusers:
context.partial_x_samples = (x_samples, args[0])
else:
context.partial_x_samples = x_samples
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "step_time": step_time, "total_steps": n_steps}
if stream_image_progress and stream_image_progress_interval > 0 and i % stream_image_progress_interval == 0:
progress["output"] = update_temp_img(context.partial_x_samples, task_temp_images)
data_queue.put(json.dumps(progress))
step_callback()
if context.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return on_image_step

View File

@ -0,0 +1,51 @@
"""
A runtime that runs on a specific device (in a thread).
It can run various tasks like image generation, image filtering, model merge etc by using that thread-local context.
This creates an `sdkit.Context` that's bound to the device specified while calling the `init()` function.
"""
from easydiffusion import device_manager
from easydiffusion.utils import log
from sdkit import Context
from sdkit.utils import get_device_usage
context = Context() # thread-local
"""
runtime data (bound locally to this thread), e.g. the device, references to loaded models, optimization flags, etc.
"""
def init(device):
"""
Initializes the fields that will be bound to this runtime's context, and sets the current torch device
"""
context.stop_processing = False
context.temp_images = {}
context.partial_x_samples = None
context.model_load_errors = {}
context.enable_codeformer = True
from easydiffusion import app
app_config = app.getConfig()
context.test_diffusers = app_config.get("use_v3_engine", True)
log.info("Device usage during initialization:")
get_device_usage(device, log_info=True, process_usage_only=False)
device_manager.device_init(context, device)
def set_vram_optimizations(context: Context):
from easydiffusion import app
config = app.getConfig()
vram_usage_level = config.get("vram_usage_level", "balanced")
if vram_usage_level != context.vram_usage_level:
context.vram_usage_level = vram_usage_level
return True
return False
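A brief sketch of how the render threads use this module (see the task_manager changes further below): each thread calls `init()` once with its device id and then works through the module-level `context`. The device id here is only an example, and this assumes a fully configured Easy Diffusion install:

# Per-thread initialization pattern, as used by thread_render() later in this diff.
from easydiffusion import runtime

runtime.init("cuda:0")                                   # binds an sdkit.Context to this device
print(runtime.context.device, runtime.context.device_name)
print(runtime.set_vram_optimizations(runtime.context))   # True if config changed the vram level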

View File

@ -8,8 +8,19 @@ import os
import traceback
from typing import List, Union
from easydiffusion import app, model_manager, task_manager
from easydiffusion.types import GenerateImageRequest, MergeRequest, TaskData
from easydiffusion import app, model_manager, task_manager, package_manager
from easydiffusion.tasks import RenderTask, FilterTask
from easydiffusion.types import (
GenerateImageRequest,
FilterImageRequest,
MergeRequest,
TaskData,
RenderTaskData,
ModelsData,
OutputFormatData,
SaveToDiskData,
convert_legacy_render_req_to_new,
)
from easydiffusion.utils import log
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
@ -27,6 +38,7 @@ NOCACHE_HEADERS = {
"Pragma": "no-cache",
"Expires": "0",
}
PROTECTED_CONFIG_KEYS = ("block_nsfw",) # can't change these via the HTTP API
class NoCacheStaticFiles(StaticFiles):
@ -54,7 +66,8 @@ class SetAppConfigRequest(BaseModel, extra=Extra.allow):
ui_open_browser_on_start: bool = None
listen_to_network: bool = None
listen_port: int = None
test_diffusers: bool = False
use_v3_engine: bool = True
models_dir: str = None
def init():
@ -97,6 +110,10 @@ def init():
def render(req: dict):
return render_internal(req)
@server_api.post("/filter")
def render(req: dict):
return filter_internal(req)
@server_api.post("/model/merge")
def model_merge(req: dict):
print(req)
@ -122,6 +139,14 @@ def init():
def stop_cloudflare_tunnel(req: dict):
return stop_cloudflare_tunnel_internal(req)
@server_api.post("/package/{package_name:str}")
def modify_package(package_name: str, req: dict):
return modify_package_internal(package_name, req)
@server_api.get("/sha256/{obj_path:path}")
def get_sha256(obj_path: str):
return get_sha256_internal(obj_path)
@server_api.get("/")
def read_root():
return FileResponse(os.path.join(app.SD_UI_DIR, "index.html"), headers=NOCACHE_HEADERS)
@ -151,10 +176,11 @@ def set_app_config_internal(req: SetAppConfigRequest):
config["net"] = {}
config["net"]["listen_port"] = int(req.listen_port)
config["test_diffusers"] = req.test_diffusers
config["use_v3_engine"] = req.use_v3_engine
config["models_dir"] = req.models_dir
for property, property_value in req.dict().items():
if property_value is not None and property not in req.__fields__:
if property_value is not None and property not in req.__fields__ and property not in PROTECTED_CONFIG_KEYS:
config[property] = property_value
try:
@ -170,11 +196,13 @@ def set_app_config_internal(req: SetAppConfigRequest):
def update_render_devices_in_config(config, render_devices):
if render_devices not in ("cpu", "auto") and not render_devices.startswith("cuda:"):
raise HTTPException(status_code=400, detail=f"Invalid render device requested: {render_devices}")
from easydiffusion.device_manager import validate_render_devices
if render_devices.startswith("cuda:"):
try:
render_devices = render_devices.split(",")
validate_render_devices(render_devices)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
config["render_devices"] = render_devices
@ -183,7 +211,12 @@ def read_web_data_internal(key: str = None, **kwargs):
if not key: # /get without parameters, stable-diffusion easter egg.
raise HTTPException(status_code=418, detail="StableDiffusion is drawing a teapot!") # HTTP418 I'm a teapot
elif key == "app_config":
return JSONResponse(app.getConfig(), headers=NOCACHE_HEADERS)
config = app.getConfig()
if "models_dir" not in config:
config["models_dir"] = app.MODELS_DIR
return JSONResponse(config, headers=NOCACHE_HEADERS)
elif key == "system_info":
config = app.getConfig()
@ -194,6 +227,7 @@ def read_web_data_internal(key: str = None, **kwargs):
"hosts": app.getIPConfig(),
"default_output_dir": output_dir,
"enforce_output_dir": ("force_save_path" in config),
"enforce_output_metadata": ("force_save_metadata" in config),
}
system_info["devices"]["config"] = config.get("render_devices", "auto")
return JSONResponse(system_info, headers=NOCACHE_HEADERS)
@ -213,55 +247,94 @@ def ping_internal(session_id: str = None):
if task_manager.current_state_error:
raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
raise HTTPException(status_code=500, detail="Render thread is dead.")
if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration):
raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
# Alive
response = {"status": str(task_manager.current_state)}
if session_id:
session = task_manager.get_cached_session(session_id, update_ttl=True)
response["tasks"] = {id(t): t.status for t in session.tasks}
response["devices"] = task_manager.get_devices()
response["packages_installed"] = package_manager.get_installed_packages()
response["packages_installing"] = package_manager.installing
if cloudflare.address != None:
response["cloudflare"] = cloudflare.address
return JSONResponse(response, headers=NOCACHE_HEADERS)
def render_internal(req: dict):
try:
req = convert_legacy_render_req_to_new(req)
# separate out the request data into rendering and task-specific data
render_req: GenerateImageRequest = GenerateImageRequest.parse_obj(req)
task_data: TaskData = TaskData.parse_obj(req)
task_data: RenderTaskData = RenderTaskData.parse_obj(req)
models_data: ModelsData = ModelsData.parse_obj(req)
output_format: OutputFormatData = OutputFormatData.parse_obj(req)
save_data: SaveToDiskData = SaveToDiskData.parse_obj(req)
# Overwrite user specified save path
config = app.getConfig()
if "force_save_path" in config:
task_data.save_to_disk_path = config["force_save_path"]
save_data.save_to_disk_path = config["force_save_path"]
render_req.init_image_mask = req.get("mask") # hack: will rename this in the HTTP API in a future revision
app.save_to_config(
task_data.use_stable_diffusion_model,
task_data.use_vae_model,
task_data.use_hypernetwork_model,
models_data.model_paths.get("stable-diffusion"),
models_data.model_paths.get("vae"),
models_data.model_paths.get("hypernetwork"),
task_data.vram_usage_level,
)
# enqueue the task
new_task = task_manager.render(render_req, task_data)
task = RenderTask(render_req, task_data, models_data, output_format, save_data)
return enqueue_task(task)
except HTTPException as e:
raise e
except Exception as e:
log.error(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def filter_internal(req: dict):
try:
filter_req: FilterImageRequest = FilterImageRequest.parse_obj(req)
task_data: TaskData = TaskData.parse_obj(req)
models_data: ModelsData = ModelsData.parse_obj(req)
output_format: OutputFormatData = OutputFormatData.parse_obj(req)
save_data: SaveToDiskData = SaveToDiskData.parse_obj(req)
# enqueue the task
task = FilterTask(filter_req, task_data, models_data, output_format, save_data)
return enqueue_task(task)
except HTTPException as e:
raise e
except Exception as e:
log.error(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def enqueue_task(task):
try:
task_manager.enqueue_task(task)
response = {
"status": str(task_manager.current_state),
"queue": len(task_manager.tasks_queue),
"stream": f"/image/stream/{id(new_task)}",
"task": id(new_task),
"stream": f"/image/stream/{task.id}",
"task": task.id,
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
raise HTTPException(status_code=500, detail=f"Rendering thread has died.") # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending limit reached, deny queueing too many.
raise HTTPException(status_code=503, detail=str(e)) # HTTP503 Service Unavailable
except Exception as e:
log.error(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def model_merge_internal(req: dict):
@ -381,3 +454,41 @@ def stop_cloudflare_tunnel_internal(req: dict):
log.error(str(e))
log.error(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
def modify_package_internal(package_name: str, req: dict):
try:
cmd = req["command"]
if cmd not in ("install", "uninstall"):
raise RuntimeError(f"Unknown command: {cmd}")
cmd = getattr(package_manager, cmd)
cmd(package_name)
return JSONResponse({"status": "OK"}, headers=NOCACHE_HEADERS)
except Exception as e:
log.error(str(e))
log.error(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
def get_sha256_internal(obj_path):
from easydiffusion.utils import sha256sum
path = obj_path.split("/")
type = path.pop(0)
try:
model_path = model_manager.resolve_model_to_use("/".join(path), type)
except Exception as e:
log.error(str(e))
log.error(traceback.format_exc())
return HTTPException(status_code=404)
try:
digest = sha256sum(model_path)
return {"digest": digest}
except Exception as e:
log.error(str(e))
log.error(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
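To round off the new endpoints, a small client sketch using `requests`, assuming a local server on the default port 9000; the model path passed to /sha256 is illustrative:

# Hit the new package and sha256 endpoints added above.
import requests

BASE = "http://localhost:9000"

# install or uninstall an optional package ("tensorrt" is the only known package here)
requests.post(f"{BASE}/package/tensorrt", json={"command": "install"})

# quick-hash a model file; the path format is "<model type>/<model name>"
print(requests.get(f"{BASE}/sha256/stable-diffusion/sd-v1-4").json())  # {"digest": "..."}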

View File

@ -17,16 +17,20 @@ from typing import Any, Hashable
import torch
from easydiffusion import device_manager
from easydiffusion.types import GenerateImageRequest, TaskData
from easydiffusion.tasks import Task
from easydiffusion.utils import log
from sdkit.utils import gc
from torchruntime.utils import get_device_count, get_device, get_device_name, get_installed_torch_platform
from sdkit.utils import is_cpu_device, mem_get_info
THREAD_NAME_PREFIX = ""
ERR_LOCK_FAILED = " failed to acquire lock within timeout."
LOCK_TIMEOUT = 15 # Maximum locking time in seconds before failing a task.
# It's better to get an exception than a deadlock... ALWAYS use timeout in critical paths.
DEVICE_START_TIMEOUT = 60 # seconds - Maximum time to wait for a render device to init.
MAX_OVERLOAD_ALLOWED_RATIO = 2 # i.e. 2x pending tasks compared to the number of render threads
class SymbolClass(type): # Print nicely formatted Symbol names.
@ -58,46 +62,6 @@ class ServerStates:
pass
class RenderTask: # Task with output queue and completion lock.
def __init__(self, req: GenerateImageRequest, task_data: TaskData):
task_data.request_id = id(self)
self.render_request: GenerateImageRequest = req # Initial Request
self.task_data: TaskData = task_data
        self.response: Any = None  # Copy of the last response
self.render_device = None # Select the task affinity. (Not used to change active devices).
self.temp_images: list = [None] * req.num_outputs * (1 if task_data.show_only_filtered_image else 2)
self.error: Exception = None
self.lock: threading.Lock = threading.Lock() # Locks at task start and unlocks when task is completed
self.buffer_queue: queue.Queue = queue.Queue() # Queue of JSON string segments
async def read_buffer_generator(self):
try:
while not self.buffer_queue.empty():
res = self.buffer_queue.get(block=False)
self.buffer_queue.task_done()
yield res
except queue.Empty as e:
yield
@property
def status(self):
if self.lock.locked():
return "running"
if isinstance(self.error, StopAsyncIteration):
return "stopped"
if self.error:
return "error"
if not self.buffer_queue.empty():
return "buffer"
if self.response:
return "completed"
return "pending"
@property
def is_pending(self):
return bool(not self.response and not self.error)
# Temporary cache to allow to query tasks results for a short time after they are completed.
class DataCache:
def __init__(self):
@ -123,8 +87,8 @@ class DataCache:
# Remove Items
for key in to_delete:
(_, val) = self._base[key]
if isinstance(val, RenderTask):
log.debug(f"RenderTask {key} expired. Data removed.")
if isinstance(val, Task):
log.debug(f"Task {key} expired. Data removed.")
elif isinstance(val, SessionState):
log.debug(f"Session {key} expired. Data removed.")
else:
@ -220,8 +184,8 @@ class SessionState:
tasks.append(task)
return tasks
def put(self, task, ttl=TASK_TTL):
task_id = id(task)
def put(self, task: Task, ttl=TASK_TTL):
task_id = task.id
self._tasks_ids.append(task_id)
if not task_cache.put(task_id, task, ttl):
return False
@ -230,11 +194,16 @@ class SessionState:
return True
def keep_task_alive(task: Task):
task_cache.keep(task.id, TASK_TTL)
session_cache.keep(task.session_id, TASK_TTL)
def thread_get_next_task():
from easydiffusion import renderer
from easydiffusion import runtime
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
log.warn(f"Render thread on device: {renderer.context.device} failed to acquire manager lock.")
log.warn(f"Render thread on device: {runtime.context.device} failed to acquire manager lock.")
return None
if len(tasks_queue) <= 0:
manager_lock.release()
@ -242,7 +211,7 @@ def thread_get_next_task():
task = None
try: # Select a render task.
for queued_task in tasks_queue:
if queued_task.render_device and renderer.context.device != queued_task.render_device:
if queued_task.render_device and runtime.context.device != queued_task.render_device:
# Is asking for a specific render device.
if is_alive(queued_task.render_device) > 0:
continue # requested device alive, skip current one.
@ -251,7 +220,7 @@ def thread_get_next_task():
queued_task.error = Exception(queued_task.render_device + " is not currently active.")
task = queued_task
break
if not queued_task.render_device and renderer.context.device == "cpu" and is_alive() > 1:
if not queued_task.render_device and runtime.context.device == "cpu" and is_alive() > 1:
# not asking for any specific device; the CPU wants to grab the task, but other render devices are alive.
continue # skip this task; don't run on the CPU unless there is nothing else, or the user asked for it.
task = queued_task
@ -266,19 +235,19 @@ def thread_get_next_task():
def thread_render(device):
global current_state, current_state_error
from easydiffusion import model_manager, renderer
from easydiffusion import model_manager, runtime
try:
renderer.init(device)
runtime.init(device)
weak_thread_data[threading.current_thread()] = {
"device": renderer.context.device,
"device_name": renderer.context.device_name,
"device": runtime.context.device,
"device_name": runtime.context.device_name,
"alive": True,
}
current_state = ServerStates.LoadingModel
model_manager.load_default_models(renderer.context)
model_manager.load_default_models(runtime.context)
current_state = ServerStates.Online
except Exception as e:
@ -290,8 +259,8 @@ def thread_render(device):
session_cache.clean()
task_cache.clean()
if not weak_thread_data[threading.current_thread()]["alive"]:
log.info(f"Shutting down thread for device {renderer.context.device}")
model_manager.unload_all(renderer.context)
log.info(f"Shutting down thread for device {runtime.context.device}")
model_manager.unload_all(runtime.context)
return
if isinstance(current_state_error, SystemExit):
current_state = ServerStates.Unavailable
@ -311,62 +280,31 @@ def thread_render(device):
task.response = {"status": "failed", "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
continue
log.info(f"Session {task.task_data.session_id} starting task {id(task)} on {renderer.context.device_name}")
log.info(f"Session {task.session_id} starting task {task.id} on {runtime.context.device_name}")
if not task.lock.acquire(blocking=False):
raise Exception("Got locked task from queue.")
try:
task.run()
def step_callback():
global current_state_error
task_cache.keep(id(task), TASK_TTL)
session_cache.keep(task.task_data.session_id, TASK_TTL)
if (
isinstance(current_state_error, SystemExit)
or isinstance(current_state_error, StopAsyncIteration)
or isinstance(task.error, StopAsyncIteration)
):
renderer.context.stop_processing = True
if isinstance(current_state_error, StopAsyncIteration):
task.error = current_state_error
current_state_error = None
log.info(f"Session {task.task_data.session_id} sent cancel signal for task {id(task)}")
current_state = ServerStates.LoadingModel
model_manager.resolve_model_paths(task.task_data)
model_manager.reload_models_if_necessary(renderer.context, task.task_data)
model_manager.fail_if_models_did_not_load(renderer.context)
current_state = ServerStates.Rendering
task.response = renderer.make_images(
task.render_request,
task.task_data,
task.buffer_queue,
task.temp_images,
step_callback,
)
# Before looping back to the generator, mark cache as still alive.
task_cache.keep(id(task), TASK_TTL)
session_cache.keep(task.task_data.session_id, TASK_TTL)
keep_task_alive(task)
except Exception as e:
task.error = str(e)
task.response = {"status": "failed", "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
log.error(traceback.format_exc())
finally:
gc(renderer.context)
gc(runtime.context)
task.lock.release()
task_cache.keep(id(task), TASK_TTL)
session_cache.keep(task.task_data.session_id, TASK_TTL)
keep_task_alive(task)
if isinstance(task.error, StopAsyncIteration):
log.info(f"Session {task.task_data.session_id} task {id(task)} cancelled!")
log.info(f"Session {task.session_id} task {task.id} cancelled!")
elif task.error is not None:
log.info(f"Session {task.task_data.session_id} task {id(task)} failed!")
log.info(f"Session {task.session_id} task {task.id} failed!")
else:
log.info(
f"Session {task.task_data.session_id} task {id(task)} completed by {renderer.context.device_name}."
)
log.info(f"Session {task.session_id} task {task.id} completed by {runtime.context.device_name}.")
current_state = ServerStates.Online
@ -394,34 +332,33 @@ def get_devices():
"active": {},
}
def get_device_info(device):
if device in ("cpu", "mps"):
def get_device_info(device_id):
if is_cpu_device(device_id):
return {"name": device_manager.get_processor_name()}
mem_free, mem_total = torch.cuda.mem_get_info(device)
device = get_device(device_id)
mem_free, mem_total = mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
return {
"name": torch.cuda.get_device_name(device),
"name": get_device_name(device),
"mem_free": mem_free,
"mem_total": mem_total,
"max_vram_usage_level": device_manager.get_max_vram_usage_level(device),
}
# list the compatible devices
cuda_count = torch.cuda.device_count()
for device in range(cuda_count):
device = f"cuda:{device}"
if not device_manager.is_device_compatible(device):
continue
torch_platform_name = get_installed_torch_platform()[0]
device_count = get_device_count()
for device_id in range(device_count):
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
devices["all"].update({device: get_device_info(device)})
devices["all"].update({device_id: get_device_info(device_id)})
if device_manager.is_mps_available():
devices["all"].update({"mps": get_device_info("mps")})
devices["all"].update({"cpu": get_device_info("cpu")})
if torch_platform_name != "cpu":
devices["all"].update({"cpu": get_device_info("cpu")})
# list the activated devices
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
@ -433,11 +370,17 @@ def get_devices():
weak_data = weak_thread_data.get(rthread)
if not weak_data or not "device" in weak_data or not "device_name" in weak_data:
continue
device = weak_data["device"]
devices["active"].update({device: get_device_info(device)})
device_id = weak_data["device"]
devices["active"].update({device_id: get_device_info(device_id)})
finally:
manager_lock.release()
# temp until TRT releases
import os
from easydiffusion import app
devices["enable_trt"] = os.path.exists(os.path.join(app.ROOT_DIR, "tensorrt"))
return devices
@ -486,12 +429,6 @@ def start_render_thread(device):
def stop_render_thread(device):
try:
device_manager.validate_device_id(device, log_prefix="stop_render_thread")
except:
log.error(traceback.format_exc())
return False
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
raise Exception("stop_render_thread" + ERR_LOCK_FAILED)
log.info(f"Stopping Rendering Thread on device: {device}")
@ -548,28 +485,27 @@ def shutdown_event(): # Signal render thread to close on shutdown
current_state_error = SystemExit("Application shutting down.")
def render(render_req: GenerateImageRequest, task_data: TaskData):
def enqueue_task(task: Task):
current_thread_count = is_alive()
if current_thread_count <= 0: # Render thread is dead
raise ChildProcessError("Rendering thread has died.")
# Alive, check if task in cache
session = get_cached_session(task_data.session_id, update_ttl=True)
session = get_cached_session(task.session_id, update_ttl=True)
pending_tasks = list(filter(lambda t: t.is_pending, session.tasks))
if current_thread_count < len(pending_tasks):
if len(pending_tasks) > current_thread_count * MAX_OVERLOAD_ALLOWED_RATIO:
raise ConnectionRefusedError(
f"Session {task_data.session_id} already has {len(pending_tasks)} pending tasks out of {current_thread_count}."
f"Session {task.session_id} already has {len(pending_tasks)} pending tasks, with {current_thread_count} workers."
)
new_task = RenderTask(render_req, task_data)
if session.put(new_task, TASK_TTL):
if session.put(task, TASK_TTL):
# Use twice the normal timeout for adding user requests.
# Tries to force session.put to fail before tasks_queue.put would.
if manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT * 2):
try:
tasks_queue.append(new_task)
tasks_queue.append(task)
idle_event.set()
return new_task
return task
finally:
manager_lock.release()
raise RuntimeError("Failed to add task to cache.")
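(Illustrative sketch, not part of the diff.) The new flow builds a Task subclass up front and hands it to enqueue_task, instead of passing raw request objects to render(). The snippet assumes the new task package is importable as easydiffusion.tasks (its name isn't visible in this excerpt) and uses hypothetical prompt and model values; the field names only mirror the types shown in this changeset.

from easydiffusion import task_manager
from easydiffusion.types import (
    GenerateImageRequest,
    RenderTaskData,
    ModelsData,
    OutputFormatData,
    SaveToDiskData,
)
from easydiffusion.tasks import RenderTask  # assumed import path

# Hypothetical request values, for illustration only.
req = GenerateImageRequest(prompt="an astronaut riding a horse", width=512, height=512, seed=42)
task = RenderTask(
    req,
    RenderTaskData(session_id="session"),
    ModelsData(model_paths={"stable-diffusion": "sd-v1-4"}),  # hypothetical relative model path
    OutputFormatData(),
    SaveToDiskData(),
)

# Raises ChildProcessError if no render thread is alive, ConnectionRefusedError if the session
# already has more than MAX_OVERLOAD_ALLOWED_RATIO x workers pending tasks, and returns the task otherwise.
task_manager.enqueue_task(task)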

View File

@ -0,0 +1,3 @@
from .task import Task
from .render_images import RenderTask
from .filter_images import FilterTask

View File

@ -0,0 +1,164 @@
import os
import json
import pprint
import time
from numpy import base_repr
from sdkit.filter import apply_filters
from sdkit.models import load_model
from sdkit.utils import img_to_base64_str, get_image, log, save_images
from easydiffusion import model_manager, runtime
from easydiffusion.types import (
FilterImageRequest,
FilterImageResponse,
ModelsData,
OutputFormatData,
SaveToDiskData,
TaskData,
GenerateImageRequest,
)
from easydiffusion.utils.save_utils import format_folder_name
from .task import Task
class FilterTask(Task):
"For applying filters to input images"
def __init__(
self,
req: FilterImageRequest,
task_data: TaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
super().__init__(task_data.session_id)
task_data.request_id = self.id
self.request = req
self.task_data = task_data
self.models_data = models_data
self.output_format = output_format
self.save_data = save_data
# convert to multi-filter format, if necessary
if isinstance(req.filter, str):
req.filter_params = {req.filter: req.filter_params}
req.filter = [req.filter]
if not isinstance(req.image, list):
req.image = [req.image]
def run(self):
"Runs the image filtering task on the assigned thread"
from easydiffusion import app
context = runtime.context
model_manager.resolve_model_paths(self.models_data)
model_manager.reload_models_if_necessary(context, self.models_data)
model_manager.fail_if_models_did_not_load(context)
print_task_info(self.request, self.models_data, self.output_format, self.save_data)
if isinstance(self.request.image, list):
images = [get_image(img) for img in self.request.image]
else:
images = get_image(self.request.image)
images = filter_images(context, images, self.request.filter, self.request.filter_params)
output_format = self.output_format
if self.save_data.save_to_disk_path is not None:
app_config = app.getConfig()
folder_format = app_config.get("folder_format", "$id")
dummy_req = GenerateImageRequest()
img_id = base_repr(int(time.time() * 10000), 36)[-7:] # Base 36 conversion, 0-9, A-Z
save_dir_path = os.path.join(
self.save_data.save_to_disk_path, format_folder_name(folder_format, dummy_req, self.task_data)
)
save_images(
images,
save_dir_path,
file_name=img_id,
output_format=output_format.output_format,
output_quality=output_format.output_quality,
output_lossless=output_format.output_lossless,
)
images = [
img_to_base64_str(
img, output_format.output_format, output_format.output_quality, output_format.output_lossless
)
for img in images
]
res = FilterImageResponse(self.request, self.models_data, images=images)
res = res.json()
self.buffer_queue.put(json.dumps(res))
log.info("Filter task completed")
self.response = res
def filter_images(context, images, filters, filter_params={}):
filters = filters if isinstance(filters, list) else [filters]
for filter_name in filters:
params = filter_params.get(filter_name, {})
previous_state = before_filter(context, filter_name, params)
try:
images = apply_filters(context, filter_name, images, **params)
finally:
after_filter(context, filter_name, params, previous_state)
return images
def before_filter(context, filter_name, filter_params):
if filter_name == "codeformer":
from easydiffusion.model_manager import DEFAULT_MODELS, resolve_model_to_use
default_realesrgan = DEFAULT_MODELS["realesrgan"][0]["file_name"]
prev_realesrgan_path = None
upscale_faces = filter_params.get("upscale_faces", False)
if upscale_faces and default_realesrgan not in context.model_paths["realesrgan"]:
prev_realesrgan_path = context.model_paths.get("realesrgan")
context.model_paths["realesrgan"] = resolve_model_to_use(default_realesrgan, "realesrgan")
load_model(context, "realesrgan")
return prev_realesrgan_path
def after_filter(context, filter_name, filter_params, previous_state):
if filter_name == "codeformer":
prev_realesrgan_path = previous_state
if prev_realesrgan_path:
context.model_paths["realesrgan"] = prev_realesrgan_path
load_model(context, "realesrgan")
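For orientation, a hedged usage sketch of the helpers above: images is assumed to be a list of PIL images and context a ready sdkit Context; the filter names and parameters are illustrative and mirror the ones used by the legacy-request converter elsewhere in this changeset.

# Apply face correction followed by upscaling, with per-filter parameters keyed by filter name.
images = filter_images(
    context,
    images,
    ["codeformer", "realesrgan"],
    {
        "codeformer": {"upscale_faces": True, "codeformer_fidelity": 0.5},
        "realesrgan": {"scale": 4},
    },
)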
def print_task_info(
req: FilterImageRequest, models_data: ModelsData, output_format: OutputFormatData, save_data: SaveToDiskData
):
req_str = pprint.pformat({"filter": req.filter, "filter_params": req.filter_params}).replace("[", "\[")
models_data = pprint.pformat(models_data.dict()).replace("[", "\[")
output_format = pprint.pformat(output_format.dict()).replace("[", "\[")
save_data = pprint.pformat(save_data.dict()).replace("[", "\[")
log.info(f"request: {req_str}")
log.info(f"models data: {models_data}")
log.info(f"output format: {output_format}")
log.info(f"save data: {save_data}")

View File

@ -0,0 +1,378 @@
import json
import pprint
import queue
import time
from easydiffusion import model_manager, runtime
from easydiffusion.types import GenerateImageRequest, ModelsData, OutputFormatData, SaveToDiskData
from easydiffusion.types import Image as ResponseImage
from easydiffusion.types import GenerateImageResponse, RenderTaskData, UserInitiatedStop
from easydiffusion.utils import get_printable_request, log, save_images_to_disk
from sdkit.generate import generate_images
from sdkit.utils import (
diffusers_latent_samples_to_images,
gc,
img_to_base64_str,
img_to_buffer,
latent_samples_to_images,
resize_img,
get_image,
log,
)
from .task import Task
from .filter_images import filter_images
class RenderTask(Task):
"For image generation"
def __init__(
self,
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
super().__init__(task_data.session_id)
task_data.request_id = self.id
self.render_request = req # Initial Request
self.task_data = task_data
self.models_data = models_data
self.output_format = output_format
self.save_data = save_data
self.temp_images: list = [None] * req.num_outputs * (1 if task_data.show_only_filtered_image else 2)
def run(self):
"Runs the image generation task on the assigned thread"
from easydiffusion import task_manager, app
context = runtime.context
config = app.getConfig()
if config.get("block_nsfw", False): # override if set on the server
self.task_data.block_nsfw = True
if "nsfw_checker" not in self.task_data.filters:
self.task_data.filters.append("nsfw_checker")
self.models_data.model_paths["nsfw_checker"] = "nsfw_checker"
def step_callback():
task_manager.keep_task_alive(self)
task_manager.current_state = task_manager.ServerStates.Rendering
if isinstance(task_manager.current_state_error, (SystemExit, StopAsyncIteration)) or isinstance(
self.error, StopAsyncIteration
):
context.stop_processing = True
if isinstance(task_manager.current_state_error, StopAsyncIteration):
self.error = task_manager.current_state_error
task_manager.current_state_error = None
log.info(f"Session {self.session_id} sent cancel signal for task {self.id}")
task_manager.current_state = task_manager.ServerStates.LoadingModel
model_manager.resolve_model_paths(self.models_data)
models_to_force_reload = []
if (
runtime.set_vram_optimizations(context)
or self.has_param_changed(context, "clip_skip")
or self.trt_needs_reload(context)
):
models_to_force_reload.append("stable-diffusion")
model_manager.reload_models_if_necessary(context, self.models_data, models_to_force_reload)
model_manager.fail_if_models_did_not_load(context)
task_manager.current_state = task_manager.ServerStates.Rendering
self.response = make_images(
context,
self.render_request,
self.task_data,
self.models_data,
self.output_format,
self.save_data,
self.buffer_queue,
self.temp_images,
step_callback,
)
def has_param_changed(self, context, param_name):
if not context.test_diffusers:
return False
if "stable-diffusion" not in context.models or "params" not in context.models["stable-diffusion"]:
return True
model = context.models["stable-diffusion"]
new_val = self.models_data.model_params.get("stable-diffusion", {}).get(param_name, False)
return model["params"].get(param_name) != new_val
def trt_needs_reload(self, context):
if not context.test_diffusers:
return False
if "stable-diffusion" not in context.models or "params" not in context.models["stable-diffusion"]:
return True
model = context.models["stable-diffusion"]
# curr_convert_to_trt = model["params"].get("convert_to_tensorrt")
new_convert_to_trt = self.models_data.model_params.get("stable-diffusion", {}).get("convert_to_tensorrt", False)
pipe = model["default"]
is_trt_loaded = hasattr(pipe.unet, "_allocate_trt_buffers") or hasattr(
pipe.unet, "_allocate_trt_buffers_backup"
)
if new_convert_to_trt and not is_trt_loaded:
return True
curr_build_config = model["params"].get("trt_build_config")
new_build_config = self.models_data.model_params.get("stable-diffusion", {}).get("trt_build_config", {})
return new_convert_to_trt and curr_build_config != new_build_config
def make_images(
context,
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
):
context.stop_processing = False
print_task_info(req, task_data, models_data, output_format, save_data)
images, seeds = make_images_internal(
context, req, task_data, models_data, output_format, save_data, data_queue, task_temp_images, step_callback
)
res = GenerateImageResponse(
req, task_data, models_data, output_format, save_data, images=construct_response(images, seeds, output_format)
)
res = res.json()
data_queue.put(json.dumps(res))
log.info("Task completed")
return res
def print_task_info(
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
req_str = pprint.pformat(get_printable_request(req, task_data, models_data, output_format, save_data)).replace("[", "\[")
task_str = pprint.pformat(task_data.dict()).replace("[", "\[")
models_data = pprint.pformat(models_data.dict()).replace("[", "\[")
output_format = pprint.pformat(output_format.dict()).replace("[", "\[")
save_data = pprint.pformat(save_data.dict()).replace("[", "\[")
log.info(f"request: {req_str}")
log.info(f"task data: {task_str}")
# log.info(f"models data: {models_data}")
log.info(f"output format: {output_format}")
log.info(f"save data: {save_data}")
def make_images_internal(
context,
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
):
images, user_stopped = generate_images_internal(
context,
req,
task_data,
models_data,
data_queue,
task_temp_images,
step_callback,
task_data.stream_image_progress,
task_data.stream_image_progress_interval,
)
gc(context)
filters, filter_params = task_data.filters, task_data.filter_params
filtered_images = filter_images(context, images, filters, filter_params) if not user_stopped else images
if save_data.save_to_disk_path is not None:
save_images_to_disk(images, filtered_images, req, task_data, models_data, output_format, save_data)
seeds = [*range(req.seed, req.seed + len(images))]
if task_data.show_only_filtered_image or filtered_images is images:
return filtered_images, seeds
else:
return images + filtered_images, seeds + seeds
def generate_images_internal(
context,
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
stream_image_progress: bool,
stream_image_progress_interval: int,
):
context.temp_images.clear()
callback = make_step_callback(
context,
req,
task_data,
data_queue,
task_temp_images,
step_callback,
stream_image_progress,
stream_image_progress_interval,
)
try:
if req.init_image is not None and not context.test_diffusers:
req.sampler_name = "ddim"
req.width, req.height = map(lambda x: x - x % 8, (req.width, req.height)) # round width/height down to a multiple of 8
if req.control_image and task_data.control_filter_to_apply:
req.control_image = get_image(req.control_image)
req.control_image = resize_img(req.control_image.convert("RGB"), req.width, req.height, clamp_to_8=True)
req.control_image = filter_images(context, req.control_image, task_data.control_filter_to_apply)[0]
if req.init_image is not None and int(req.num_inference_steps * req.prompt_strength) == 0:
req.prompt_strength = 1 / req.num_inference_steps if req.num_inference_steps > 0 else 1
if context.test_diffusers:
pipe = context.models["stable-diffusion"]["default"]
if hasattr(pipe.unet, "_allocate_trt_buffers_backup"):
setattr(pipe.unet, "_allocate_trt_buffers", pipe.unet._allocate_trt_buffers_backup)
delattr(pipe.unet, "_allocate_trt_buffers_backup")
if hasattr(pipe.unet, "_allocate_trt_buffers"):
convert_to_trt = models_data.model_params["stable-diffusion"].get("convert_to_tensorrt", False)
if convert_to_trt:
pipe.unet.forward = pipe.unet._trt_forward
# pipe.vae.decoder.forward = pipe.vae.decoder._trt_forward
log.info(f"Setting unet.forward to TensorRT")
else:
log.info(f"Not using TensorRT for unet.forward")
pipe.unet.forward = pipe.unet._non_trt_forward
# pipe.vae.decoder.forward = pipe.vae.decoder._non_trt_forward
setattr(pipe.unet, "_allocate_trt_buffers_backup", pipe.unet._allocate_trt_buffers)
delattr(pipe.unet, "_allocate_trt_buffers")
if task_data.enable_vae_tiling:
if hasattr(pipe, "enable_vae_tiling"):
pipe.enable_vae_tiling()
else:
if hasattr(pipe, "disable_vae_tiling"):
pipe.disable_vae_tiling()
images = generate_images(context, callback=callback, **req.dict())
user_stopped = False
except UserInitiatedStop:
images = []
user_stopped = True
if context.partial_x_samples is not None:
if context.test_diffusers:
images = diffusers_latent_samples_to_images(context, context.partial_x_samples)
else:
images = latent_samples_to_images(context, context.partial_x_samples)
finally:
if hasattr(context, "partial_x_samples") and context.partial_x_samples is not None:
if not context.test_diffusers:
del context.partial_x_samples
context.partial_x_samples = None
return images, user_stopped
def construct_response(images: list, seeds: list, output_format: OutputFormatData):
return [
ResponseImage(
data=img_to_base64_str(
img,
output_format.output_format,
output_format.output_quality,
output_format.output_lossless,
),
seed=seed,
)
for img, seed in zip(images, seeds)
]
def make_step_callback(
context,
req: GenerateImageRequest,
task_data: RenderTaskData,
data_queue: queue.Queue,
task_temp_images: list,
step_callback,
stream_image_progress: bool,
stream_image_progress_interval: int,
):
n_steps = req.num_inference_steps if req.init_image is None else int(req.num_inference_steps * req.prompt_strength)
last_callback_time = -1
def update_temp_img(x_samples, task_temp_images: list):
partial_images = []
if context.test_diffusers:
images = diffusers_latent_samples_to_images(context, x_samples)
else:
images = latent_samples_to_images(context, x_samples)
if task_data.block_nsfw:
images = filter_images(context, images, "nsfw_checker")
for i, img in enumerate(images):
buf = img_to_buffer(img, output_format="JPEG")
context.temp_images[f"{task_data.request_id}/{i}"] = buf
task_temp_images[i] = buf
partial_images.append({"path": f"/image/tmp/{task_data.request_id}/{i}"})
del images
return partial_images
def on_image_step(x_samples, i, *args):
nonlocal last_callback_time
if context.test_diffusers:
context.partial_x_samples = (x_samples, args[0])
else:
context.partial_x_samples = x_samples
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "step_time": step_time, "total_steps": n_steps}
if stream_image_progress and stream_image_progress_interval > 0 and i % stream_image_progress_interval == 0:
progress["output"] = update_temp_img(context.partial_x_samples, task_temp_images)
data_queue.put(json.dumps(progress))
step_callback()
if context.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return on_image_step

View File

@ -0,0 +1,47 @@
from threading import Lock
from queue import Queue, Empty as EmptyQueueException
from typing import Any
class Task:
"Task with output queue and completion lock"
def __init__(self, session_id):
self.id = id(self)
self.session_id = session_id
self.render_device = None # Select the task affinity. (Not used to change active devices).
self.error: Exception = None
self.lock: Lock = Lock() # Locks at task start and unlocks when task is completed
self.buffer_queue: Queue = Queue() # Queue of JSON string segments
self.response: Any = None # Copy of the last response
async def read_buffer_generator(self):
try:
while not self.buffer_queue.empty():
res = self.buffer_queue.get(block=False)
self.buffer_queue.task_done()
yield res
except EmptyQueueException as e:
yield
@property
def status(self):
if self.lock.locked():
return "running"
if isinstance(self.error, StopAsyncIteration):
return "stopped"
if self.error:
return "error"
if not self.buffer_queue.empty():
return "buffer"
if self.response:
return "completed"
return "pending"
@property
def is_pending(self):
return bool(not self.response and not self.error)
def run(self):
"Override this to implement the task's behavior"
pass

View File

@ -1,4 +1,4 @@
from typing import Any, List, Union
from typing import Any, List, Dict, Union
from pydantic import BaseModel
@ -17,19 +17,58 @@ class GenerateImageRequest(BaseModel):
init_image: Any = None
init_image_mask: Any = None
control_image: Any = None
control_alpha: Union[float, List[float]] = None
prompt_strength: float = 0.8
preserve_init_image_color_profile = False
preserve_init_image_color_profile: bool = False
strict_mask_border: bool = False
sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
hypernetwork_strength: float = 0
lora_alpha: Union[float, List[float]] = 0
tiling: str = "none" # "none", "x", "y", "xy"
tiling: str = None # None, "x", "y", "xy"
class FilterImageRequest(BaseModel):
image: Any = None
filter: Union[str, List[str]] = None
filter_params: dict = {}
class ModelsData(BaseModel):
"""
Contains the information related to the models involved in a request.
- To load a model: set the relative path(s) to the model in `model_paths`. No effect if already loaded.
- To unload a model: set the model to `None` in `model_paths`. No effect if already unloaded.
Models that aren't present in `model_paths` will not be changed.
"""
model_paths: Dict[str, Union[str, None, List[str]]] = None
"model_type to string path, or list of string paths"
model_params: Dict[str, Dict[str, Any]] = {}
"model_type to dict of parameters"
class OutputFormatData(BaseModel):
output_format: str = "jpeg" # or "png" or "webp"
output_quality: int = 75
output_lossless: bool = False
class SaveToDiskData(BaseModel):
save_to_disk_path: str = None
metadata_output_format: str = "txt" # or "json"
class TaskData(BaseModel):
request_id: str = None
session_id: str = "session"
save_to_disk_path: str = None
class RenderTaskData(TaskData):
vram_usage_level: str = "balanced" # or "low" or "medium"
use_face_correction: Union[str, List[str]] = None # or "GFPGANv1.3"
@ -40,13 +79,15 @@ class TaskData(BaseModel):
use_vae_model: Union[str, List[str]] = None
use_hypernetwork_model: Union[str, List[str]] = None
use_lora_model: Union[str, List[str]] = None
use_controlnet_model: Union[str, List[str]] = None
use_embeddings_model: Union[str, List[str]] = None
filters: List[str] = []
filter_params: Dict[str, Dict[str, Any]] = {}
control_filter_to_apply: Union[str, List[str]] = None
enable_vae_tiling: bool = True
show_only_filtered_image: bool = False
block_nsfw: bool = False
output_format: str = "jpeg" # or "png" or "webp"
output_quality: int = 75
output_lossless: bool = False
metadata_output_format: str = "txt" # or "json"
stream_image_progress: bool = False
stream_image_progress_interval: int = 5
clip_skip: bool = False
@ -59,7 +100,7 @@ class MergeRequest(BaseModel):
model1: str = None
ratio: float = None
out_path: str = "mix"
use_fp16 = True
use_fp16: bool = True
class Image:
@ -80,24 +121,42 @@ class Image:
}
class Response:
class GenerateImageResponse:
render_request: GenerateImageRequest
task_data: TaskData
models_data: ModelsData
images: list
def __init__(self, render_request: GenerateImageRequest, task_data: TaskData, images: list):
def __init__(
self,
render_request: GenerateImageRequest,
task_data: TaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
images: list,
):
self.render_request = render_request
self.task_data = task_data
self.models_data = models_data
self.output_format = output_format
self.save_data = save_data
self.images = images
def json(self):
del self.render_request.init_image
del self.render_request.init_image_mask
del self.render_request.control_image
task_data = self.task_data.dict()
task_data.update(self.output_format.dict())
task_data.update(self.save_data.dict())
res = {
"status": "succeeded",
"render_request": self.render_request.dict(),
"task_data": self.task_data.dict(),
"task_data": task_data,
# "models_data": self.models_data.dict(), # haven't migrated the UI to the new format (yet)
"output": [],
}
@ -107,5 +166,112 @@ class Response:
return res
class FilterImageResponse:
request: FilterImageRequest
models_data: ModelsData
images: list
def __init__(self, request: FilterImageRequest, models_data: ModelsData, images: list):
self.request = request
self.models_data = models_data
self.images = images
def json(self):
del self.request.image
res = {
"status": "succeeded",
"request": self.request.dict(),
"models_data": self.models_data.dict(),
"output": [],
}
for image in self.images:
res["output"].append(image)
return res
class UserInitiatedStop(Exception):
pass
def convert_legacy_render_req_to_new(old_req: dict):
new_req = dict(old_req)
# new keys
model_paths = new_req["model_paths"] = {}
model_params = new_req["model_params"] = {}
filters = new_req["filters"] = []
filter_params = new_req["filter_params"] = {}
# move the model info
model_paths["stable-diffusion"] = old_req.get("use_stable_diffusion_model")
model_paths["vae"] = old_req.get("use_vae_model")
model_paths["hypernetwork"] = old_req.get("use_hypernetwork_model")
model_paths["lora"] = old_req.get("use_lora_model")
model_paths["controlnet"] = old_req.get("use_controlnet_model")
model_paths["embeddings"] = old_req.get("use_embeddings_model")
model_paths["gfpgan"] = old_req.get("use_face_correction", "")
model_paths["gfpgan"] = model_paths["gfpgan"] if "gfpgan" in model_paths["gfpgan"].lower() else None
model_paths["codeformer"] = old_req.get("use_face_correction", "")
model_paths["codeformer"] = model_paths["codeformer"] if "codeformer" in model_paths["codeformer"].lower() else None
model_paths["realesrgan"] = old_req.get("use_upscale", "")
model_paths["realesrgan"] = model_paths["realesrgan"] if "realesrgan" in model_paths["realesrgan"].lower() else None
model_paths["latent_upscaler"] = old_req.get("use_upscale", "")
model_paths["latent_upscaler"] = (
model_paths["latent_upscaler"] if "latent_upscaler" in model_paths["latent_upscaler"].lower() else None
)
if "control_filter_to_apply" in old_req:
filter_model = old_req["control_filter_to_apply"]
model_paths[filter_model] = filter_model
if old_req.get("block_nsfw"):
model_paths["nsfw_checker"] = "nsfw_checker"
# move the model params
if model_paths["stable-diffusion"]:
model_params["stable-diffusion"] = {
"clip_skip": bool(old_req.get("clip_skip", False)),
"convert_to_tensorrt": bool(old_req.get("convert_to_tensorrt", False)),
"trt_build_config": old_req.get(
"trt_build_config", {"batch_size_range": (1, 1), "dimensions_range": [(768, 1024)]}
),
}
# move the filter params
if model_paths["realesrgan"]:
filter_params["realesrgan"] = {"scale": int(old_req.get("upscale_amount", 4))}
if model_paths["latent_upscaler"]:
filter_params["latent_upscaler"] = {
"prompt": old_req["prompt"],
"negative_prompt": old_req.get("negative_prompt"),
"seed": int(old_req.get("seed", 42)),
"num_inference_steps": int(old_req.get("latent_upscaler_steps", 10)),
"guidance_scale": 0,
}
if model_paths["codeformer"]:
filter_params["codeformer"] = {
"upscale_faces": bool(old_req.get("codeformer_upscale_faces", True)),
"codeformer_fidelity": float(old_req.get("codeformer_fidelity", 0.5)),
}
# set the filters
if old_req.get("block_nsfw"):
filters.append("nsfw_checker")
if model_paths["codeformer"]:
filters.append("codeformer")
elif model_paths["gfpgan"]:
filters.append("gfpgan")
if model_paths["realesrgan"]:
filters.append("realesrgan")
elif model_paths["latent_upscaler"]:
filters.append("latent_upscaler")
return new_req
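To make the conversion concrete, a hedged example of what this function produces; the legacy field values are illustrative.

old_req = {
    "prompt": "an astronaut riding a horse",
    "use_stable_diffusion_model": "sd-v1-4",
    "use_upscale": "realesrgan",
    "upscale_amount": 4,
    "block_nsfw": True,
}
new_req = convert_legacy_render_req_to_new(old_req)
# In addition to the original keys, new_req now carries:
#   model_paths   -> {"stable-diffusion": "sd-v1-4", "realesrgan": "realesrgan", "nsfw_checker": "nsfw_checker", ...}
#   model_params  -> {"stable-diffusion": {"clip_skip": False, "convert_to_tensorrt": False, ...}}
#   filters       -> ["nsfw_checker", "realesrgan"]
#   filter_params -> {"realesrgan": {"scale": 4}}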

View File

@ -1,4 +1,5 @@
import logging
import hashlib
log = logging.getLogger("easydiffusion")
@ -6,3 +7,15 @@ from .save_utils import (
save_images_to_disk,
get_printable_request,
)
def sha256sum(filename):
sha256 = hashlib.sha256()
with open(filename, "rb") as f:
while True:
data = f.read(8192) # Read in chunks of 8192 bytes
if not data:
break
sha256.update(data)
return sha256.hexdigest()
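A trivial usage note for the helper above; the path is hypothetical.

# Hash a model file in 8 KB chunks without reading it fully into memory.
digest = sha256sum("models/stable-diffusion/sd-v1-4.safetensors")
print(digest)  # 64-character hex digest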

View File

@ -7,9 +7,17 @@ from datetime import datetime
from functools import reduce
from easydiffusion import app
from easydiffusion.types import GenerateImageRequest, TaskData
from easydiffusion.types import (
GenerateImageRequest,
TaskData,
RenderTaskData,
OutputFormatData,
SaveToDiskData,
ModelsData,
)
from numpy import base_repr
from sdkit.utils import save_dicts, save_images
from sdkit.models.model_loader.embeddings import get_embedding_token
filename_regex = re.compile("[^a-zA-Z0-9._-]")
img_number_regex = re.compile("([0-9]{5,})")
@ -21,6 +29,9 @@ TASK_TEXT_MAPPING = {
"seed": "Seed",
"use_stable_diffusion_model": "Stable Diffusion model",
"clip_skip": "Clip Skip",
"use_controlnet_model": "ControlNet model",
"control_filter_to_apply": "ControlNet Filter",
"control_alpha": "ControlNet Strength",
"use_vae_model": "VAE model",
"sampler_name": "Sampler",
"width": "Width",
@ -32,7 +43,7 @@ TASK_TEXT_MAPPING = {
"lora_alpha": "LoRA Strength",
"use_hypernetwork_model": "Hypernetwork model",
"hypernetwork_strength": "Hypernetwork Strength",
"use_embedding_models": "Embedding models",
"use_embeddings_model": "Embedding models",
"tiling": "Seamless Tiling",
"use_face_correction": "Use Face Correction",
"use_upscale": "Use Upscaling",
@ -92,7 +103,7 @@ def format_folder_name(format: str, req: GenerateImageRequest, task_data: TaskDa
def format_file_name(
format: str,
req: GenerateImageRequest,
task_data: TaskData,
task_data: RenderTaskData,
now: float,
batch_file_number: int,
folder_img_number: ImageNumber,
@ -114,12 +125,20 @@ def format_file_name(
return filename_regex.sub("_", format)
def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData):
def save_images_to_disk(
images: list,
filtered_images: list,
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
now = time.time()
app_config = app.getConfig()
folder_format = app_config.get("folder_format", "$id")
save_dir_path = os.path.join(task_data.save_to_disk_path, format_folder_name(folder_format, req, task_data))
metadata_entries = get_metadata_entries_for_request(req, task_data)
save_dir_path = os.path.join(save_data.save_to_disk_path, format_folder_name(folder_format, req, task_data))
metadata_entries = get_metadata_entries_for_request(req, task_data, models_data, output_format, save_data)
file_number = calculate_img_number(save_dir_path, task_data)
make_filename = make_filename_callback(
app_config.get("filename_format", "$p_$tsb64"),
@ -134,19 +153,19 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR
filtered_images,
save_dir_path,
file_name=make_filename,
output_format=task_data.output_format,
output_quality=task_data.output_quality,
output_lossless=task_data.output_lossless,
output_format=output_format.output_format,
output_quality=output_format.output_quality,
output_lossless=output_format.output_lossless,
)
if task_data.metadata_output_format:
for metadata_output_format in task_data.metadata_output_format.split(","):
if save_data.metadata_output_format:
for metadata_output_format in save_data.metadata_output_format.split(","):
if metadata_output_format.lower() in ["json", "txt", "embed"]:
save_dicts(
metadata_entries,
save_dir_path,
file_name=make_filename,
output_format=metadata_output_format,
file_format=task_data.output_format,
file_format=output_format.output_format,
)
else:
make_filter_filename = make_filename_callback(
@ -162,39 +181,46 @@ def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageR
images,
save_dir_path,
file_name=make_filename,
output_format=task_data.output_format,
output_quality=task_data.output_quality,
output_lossless=task_data.output_lossless,
output_format=output_format.output_format,
output_quality=output_format.output_quality,
output_lossless=output_format.output_lossless,
)
save_images(
filtered_images,
save_dir_path,
file_name=make_filter_filename,
output_format=task_data.output_format,
output_quality=task_data.output_quality,
output_lossless=task_data.output_lossless,
output_format=output_format.output_format,
output_quality=output_format.output_quality,
output_lossless=output_format.output_lossless,
)
if task_data.metadata_output_format:
for metadata_output_format in task_data.metadata_output_format.split(","):
if save_data.metadata_output_format:
for metadata_output_format in save_data.metadata_output_format.split(","):
if metadata_output_format.lower() in ["json", "txt", "embed"]:
save_dicts(
metadata_entries,
save_dir_path,
file_name=make_filter_filename,
output_format=task_data.metadata_output_format,
file_format=task_data.output_format,
output_format=metadata_output_format,
file_format=output_format.output_format,
)
def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
metadata = get_printable_request(req, task_data)
def get_metadata_entries_for_request(
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
metadata = get_printable_request(req, task_data, models_data, output_format, save_data)
# if text, format it in the text format expected by the UI
is_txt_format = task_data.metadata_output_format and "txt" in task_data.metadata_output_format.lower().split(",")
is_txt_format = save_data.metadata_output_format and "txt" in save_data.metadata_output_format.lower().split(",")
if is_txt_format:
def format_value(value):
if isinstance(value, list):
return ", ".join([ str(it) for it in value ])
return ", ".join([str(it) for it in value])
return value
metadata = {
@ -208,12 +234,20 @@ def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskD
return entries
def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
def get_printable_request(
req: GenerateImageRequest,
task_data: RenderTaskData,
models_data: ModelsData,
output_format: OutputFormatData,
save_data: SaveToDiskData,
):
req_metadata = req.dict()
task_data_metadata = task_data.dict()
task_data_metadata.update(output_format.dict())
task_data_metadata.update(save_data.dict())
app_config = app.getConfig()
using_diffusers = app_config.get("test_diffusers", False)
using_diffusers = app_config.get("use_v3_engine", True)
# Save the metadata in the order defined in TASK_TEXT_MAPPING
metadata = {}
@ -222,25 +256,13 @@ def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
metadata[key] = req_metadata[key]
elif key in task_data_metadata:
metadata[key] = task_data_metadata[key]
elif key == "use_embedding_models" and using_diffusers:
embeddings_extensions = {".pt", ".bin", ".safetensors"}
def scan_directory(directory_path: str):
used_embeddings = []
for entry in os.scandir(directory_path):
if entry.is_file():
entry_extension = os.path.splitext(entry.name)[1]
if entry_extension not in embeddings_extensions:
continue
embedding_name_regex = regex.compile(r"(^|[\s,])" + regex.escape(os.path.splitext(entry.name)[0]) + r"([+-]*$|[\s,]|[+-]+[\s,])")
if embedding_name_regex.search(req.prompt) or embedding_name_regex.search(req.negative_prompt):
used_embeddings.append(entry.path)
elif entry.is_dir():
used_embeddings.extend(scan_directory(entry.path))
return used_embeddings
used_embeddings = scan_directory(os.path.join(app.MODELS_DIR, "embeddings"))
metadata["use_embedding_models"] = used_embeddings if len(used_embeddings) > 0 else None
if key == "use_embeddings_model" and task_data_metadata[key] and using_diffusers:
embeddings_used = models_data.model_paths["embeddings"]
embeddings_used = embeddings_used if isinstance(embeddings_used, list) else [embeddings_used]
metadata["use_embeddings_model"] = embeddings_used if len(embeddings_used) > 0 else None
# Clean up the metadata
if req.init_image is None and "prompt_strength" in metadata:
del metadata["prompt_strength"]
@ -252,9 +274,26 @@ def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
del metadata["lora_alpha"]
if task_data.use_upscale != "latent_upscaler" and "latent_upscaler_steps" in metadata:
del metadata["latent_upscaler_steps"]
if task_data.use_controlnet_model is None and "control_filter_to_apply" in metadata:
del metadata["control_filter_to_apply"]
if not using_diffusers:
for key in (x for x in ["use_lora_model", "lora_alpha", "clip_skip", "tiling", "latent_upscaler_steps"] if x in metadata):
if using_diffusers:
for key in (x for x in ["use_hypernetwork_model", "hypernetwork_strength"] if x in metadata):
del metadata[key]
else:
for key in (
x
for x in [
"use_lora_model",
"lora_alpha",
"clip_skip",
"tiling",
"latent_upscaler_steps",
"use_controlnet_model",
"control_filter_to_apply",
]
if x in metadata
):
del metadata[key]
return metadata
@ -263,7 +302,7 @@ def get_printable_request(req: GenerateImageRequest, task_data: TaskData):
def make_filename_callback(
filename_format: str,
req: GenerateImageRequest,
task_data: TaskData,
task_data: RenderTaskData,
folder_img_number: int,
suffix=None,
now=None,
@ -280,7 +319,7 @@ def make_filename_callback(
return make_filename
def _calculate_img_number(save_dir_path: str, task_data: TaskData):
def _calculate_img_number(save_dir_path: str, task_data: RenderTaskData):
def get_highest_img_number(accumulator: int, file: os.DirEntry) -> int:
if not file.is_file:
return accumulator
@ -324,5 +363,5 @@ def _calculate_img_number(save_dir_path: str, task_data: TaskData):
_calculate_img_number.session_img_numbers = {}
def calculate_img_number(save_dir_path: str, task_data: TaskData):
def calculate_img_number(save_dir_path: str, task_data: RenderTaskData):
return ImageNumber(lambda: _calculate_img_number(save_dir_path, task_data))

View File

@ -17,12 +17,16 @@
<link rel="stylesheet" href="/media/css/searchable-models.css">
<link rel="stylesheet" href="/media/css/image-modal.css">
<link rel="stylesheet" href="/media/css/plugins.css">
<link rel="stylesheet" href="/media/css/animations.css">
<link rel="stylesheet" href="/media/css/croppr.css" rel="stylesheet"/>
<link rel="manifest" href="/media/manifest.webmanifest">
<script src="/media/js/jquery-3.6.1.min.js"></script>
<script src="/media/js/jquery-confirm.min.js"></script>
<script src="/media/js/jszip.min.js"></script>
<script src="/media/js/FileSaver.min.js"></script>
<script src="/media/js/marked.min.js"></script>
<script src="/media/js/croppr.js"></script>
<script src="/media/js/exif-reader.js"></script>
</head>
<body>
<div id="container">
@ -31,7 +35,7 @@
<h1>
<img id="logo_img" src="/media/images/icon-512x512.png" >
Easy Diffusion
<small><span id="version">v2.5.46</span> <span id="updateBranchLabel"></span></small>
<small><span id="version">v3.0.9c</span> <span id="updateBranchLabel"></span></small>
</h1>
</div>
<div id="server-status">
@ -58,7 +62,14 @@
<div id="editor-inputs-prompt" class="row">
<div id="prompt-toolbar" class="split-toolbar">
<div id="prompt-toolbar-left" class="toolbar-left">
<label for="prompt"><b>Enter Prompt</b></label> <small>or</small> <button id="promptsFromFileBtn" class="tertiaryButton smallButton">Load from a file</button>
<label for="prompt"><b>Enter Prompt</b>
<i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">
You can type your prompts in the textbox below, or load them from a file. You can also
reload tasks from the metadata embedded in PNG, WEBP and JPEG images (enable embedding from the Settings).
</span></i>
</label>
<small>or</small>
<button id="promptsFromFileBtn" class="tertiaryButton smallButton">Load from a file</button>
</div>
<div id="prompt-toolbar-right" class="toolbar-right">
<button id="image-modifier-dropdown" class="tertiaryButton smallButton">+ Image Modifiers</button>
@ -72,7 +83,7 @@
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top">Click to learn more about Negative Prompts</span></i></a>
<small>(optional)</small>
</label>
<button id="negative-embeddings-button" class="tertiaryButton smallButton displayNone">+ Embedding</button>
<button id="negative-embeddings-button" class="tertiaryButton smallButton displayNone">+ Negative Embedding</button>
<div class="collapsible-content">
<textarea id="negative_prompt" name="negative_prompt" placeholder="list the things to remove from the image (e.g. fog, green)"></textarea>
</div>
@ -80,10 +91,15 @@
<div id="editor-inputs-init-image" class="row">
<label for="init_image">Initial Image (img2img) <small>(optional)</small> </label>
<i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top">
Add an img2img source image using the Browse button, by dragging & dropping an external file or browser
image (including a rendered image), or by pasting an image from the clipboard with Ctrl+V.<br /><br />
You may also reload the metadata embedded in a PNG, WEBP or JPEG image (enable embedding from the Settings).
</span></i>
<div id="init_image_preview_container" class="image_preview_container">
<div id="init_image_wrapper">
<img id="init_image_preview" src="" crossorigin="anonymous" />
<div id="init_image_wrapper" class="preview_image_wrapper">
<img id="init_image_preview" class="image_preview" src="" crossorigin="anonymous" />
<span id="init_image_size_box" class="img_bottom_label"></span>
<button class="init_image_clear image_clear_btn"><i class="fa-solid fa-xmark"></i></button>
</div>
@ -108,6 +124,7 @@
</div>
<div id="apply_color_correction_setting" class="pl-5"><input id="apply_color_correction" name="apply_color_correction" type="checkbox"> <label for="apply_color_correction">Preserve color profile <small>(helps during inpainting)</small></label></div>
<div id="strict_mask_border_setting" class="pl-5"><input id="strict_mask_border" name="strict_mask_border" type="checkbox"> <label for="strict_mask_border">Strict Mask Border <small>(won't modify outside the mask, but the mask border might be visible)</small></label></div>
</div>
@ -138,13 +155,25 @@
<div id="editor-settings-entries" class="collapsible-content">
<div><table>
<tr><b class="settings-subheader">Image Settings</b></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="0" onkeypress="preventNonNumericalInput(event)"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td><td><input id="num_outputs_total" name="num_outputs_total" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label><small>(total)</small></label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label for="num_outputs_parallel"><small>(in parallel)</small></label></td></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="0" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td>
<td><input id="num_outputs_total" name="num_outputs_total" value="1" type="number" value="1" min="1" step="1" onkeypres"="preventNonNumericalInput(event)" inputmode="numeric">
<label><small>(total)</small></label>
<input id="num_outputs_parallel" name="num_outputs_parallel" value="1" type="number" value="1" min="1" step="1" onkeypress="preventNonNumericalInput(event)" inputmode="numeric">
<label id="num_outputs_parallel_label" for="num_outputs_parallel"><small>(in parallel)</small></label></td>
</tr>
<tr class="pl-5"><td><label for="stable_diffusion_model">Model:</label></td><td class="model-input">
<input id="stable_diffusion_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<button id="reload-models" class="secondaryButton reloadModels"><i class='fa-solid fa-rotate'></i></button>
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about custom models</span></i></a>
</td></tr>
<tr class="pl-5 displayNone" id="enable_trt_config">
<td><label for="convert_to_tensorrt">Enable TensorRT:</label></td>
<td class="diffusers-restart-needed">
<input id="convert_to_tensorrt" name="convert_to_tensorrt" type="checkbox">
<!-- <label><small>Takes up to 20 mins the first time</small></label> -->
</td>
</tr>
<tr class="pl-5 displayNone" id="clip_skip_config">
<td><label for="clip_skip">Clip Skip:</label></td>
<td class="diffusers-restart-needed">
@ -152,6 +181,65 @@
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Clip-Skip" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Clip Skip</span></i></a>
</td>
</tr>
<tr id="controlnet_model_container" class="pl-5">
<td><label for="controlnet_model">ControlNet Image:</label></td>
<td class="diffusers-restart-needed">
<div id="control_image_wrapper" class="preview_image_wrapper">
<img id="control_image_preview" class="image_preview" src="" crossorigin="anonymous" />
<span id="control_image_size_box" class="img_bottom_label"></span>
<button class="control_image_clear image_clear_btn"><i class="fa-solid fa-xmark"></i></button>
</div>
<input id="control_image" name="control_image" type="file" />
<a href="https://github.com/easydiffusion/easydiffusion/wiki/ControlNet" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about ControlNets</span></i></a>
<div id="controlnet_config" class="displayNone">
<label><small>Filter to apply:</small></label>
<select id="control_image_filter">
<option value="">None</option>
<optgroup label="Pose">
<option value="openpose">OpenPose (*)</option>
<option value="openpose_face">OpenPose face</option>
<option value="openpose_faceonly">OpenPose face-only</option>
<option value="openpose_hand">OpenPose hand</option>
<option value="openpose_full">OpenPose full</option>
</optgroup>
<optgroup label="Outline">
<option value="canny">Canny (*)</option>
<option value="mlsd">Straight lines</option>
<option value="scribble_hed">Scribble hed (*)</option>
<option value="scribble_hedsafe">Scribble hedsafe</option>
<option value="scribble_pidinet">Scribble pidinet</option>
<option value="scribble_pidsafe">Scribble pidsafe</option>
<option value="softedge_hed">Softedge hed</option>
<option value="softedge_hedsafe">Softedge hedsafe</option>
<option value="softedge_pidinet">Softedge pidinet</option>
<option value="softedge_pidsafe">Softedge pidsafe</option>
</optgroup>
<optgroup label="Depth">
<option value="normal_bae">Normal bae (*)</option>
<option value="depth_midas">Depth midas</option>
<option value="depth_zoe">Depth zoe</option>
<option value="depth_leres">Depth leres</option>
<option value="depth_leres++">Depth leres++</option>
</optgroup>
<optgroup label="Line art">
<option value="lineart_coarse">Lineart coarse</option>
<option value="lineart_realistic">Lineart realistic</option>
<option value="lineart_anime">Lineart anime</option>
</optgroup>
<optgroup label="Misc">
<option value="shuffle">Shuffle</option>
<option value="segment">Segment</option>
</optgroup>
</select>
<br/>
<label for="controlnet_model"><small>Model:</small></label> <input id="controlnet_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<br/>
<label><small>Will download the necessary models the first time.</small></label>
<br/>
<label for="controlnet_alpha_slider"><small>Strength:</small></label> <input id="controlnet_alpha_slider" name="controlnet_alpha_slider" class="editor-slider" value="10" type="range" min="0" max="10"> <input id="controlnet_alpha" name="controlnet_alpha" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal">
</div>
</td>
</tr>
<tr class="pl-5"><td><label for="vae_model">Custom VAE:</label></td><td>
<input id="vae_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<a href="https://github.com/easydiffusion/easydiffusion/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about VAEs</span></i></a>
@ -180,17 +268,17 @@
<option value="unipc_tu_2" class="k_diffusion-only">UniPC TU 2</option>
<option value="unipc_tq" class="k_diffusion-only">UniPC TQ</option>
</select>
<a href="https://github.com/easydiffusion/easydiffusion/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
</td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td>
<tr class="pl-5"><td><label>Image Size: </label></td><td id="image-size-options">
<select id="width" name="width" value="512">
<option value="128">128 (*)</option>
<option value="128">128</option>
<option value="192">192</option>
<option value="256">256 (*)</option>
<option value="256">256</option>
<option value="320">320</option>
<option value="384">384</option>
<option value="448">448</option>
<option value="512" selected>512 (*)</option>
<option value="512" selected="">512 (*)</option>
<option value="576">576</option>
<option value="640">640</option>
<option value="704">704</option>
@ -204,15 +292,18 @@
<option value="1792">1792</option>
<option value="2048">2048</option>
</select>
<label for="width"><small>(width)</small></label>
<label id="widthLabel" for="width"><small><span>(width)</span></small></label>
<div class="tooltip-container">
<span id="swap-width-height" class="clickable smallButton" style="margin-left: 2px; margin-right:2px;"><i class="fa-solid fa-right-left"><span class="simple-tooltip top-left"> Swap width and height </span></i></span>
</div>
<select id="height" name="height" value="512">
<option value="128">128 (*)</option>
<option value="128">128</option>
<option value="192">192</option>
<option value="256">256 (*)</option>
<option value="256">256</option>
<option value="320">320</option>
<option value="384">384</option>
<option value="448">448</option>
<option value="512" selected>512 (*)</option>
<option value="512" selected="">512 (*)</option>
<option value="576">576</option>
<option value="640">640</option>
<option value="704">704</option>
@ -226,27 +317,52 @@
<option value="1792">1792</option>
<option value="2048">2048</option>
</select>
<label for="height"><small>(height)</small></label>
<label id="heightLabel" for="height"><small><span>(height)</span></small></label>
<div id="recent-resolutions-container">
<span id="recent-resolutions-button" class="clickable"><i class="fa-solid fa-sliders"><span class="simple-tooltip top-left"> Advanced sizes </span></i></span>
<div id="recent-resolutions-popup" class="displayNone">
<small>Custom size:</small><br>
<input id="custom-width" name="custom-width" type="number" min="128" value="512" onkeypress="preventNonNumericalInput(event)" inputmode="numeric">
&times;
<input id="custom-height" name="custom-height" type="number" min="128" value="512" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"><br>
<small>Resize:</small><br>
<input id="resize-slider" name="resize-slider" class="editor-slider" value="1" type="range" min="0.4" max="2" step="0.005" style="width:100%;"><br>
<div id="enlarge-buttons"><button data-factor="0.5" class="tertiaryButton smallButton">×0.5</button>&nbsp;<button data-factor="1.2" class="tertiaryButton smallButton">×1.2</button>&nbsp;<button data-factor="1.5" class="tertiaryButton smallButton">×1.5</button>&nbsp;<button data-factor="2" class="tertiaryButton smallButton">×2</button>&nbsp;<button data-factor="3" class="tertiaryButton smallButton">×3</button></div>
<div class="two-column">
<div class="left-column">
<small>Recently&nbsp;used:</small><br>
<div id="recent-resolution-list">
</div>
</div>
<div class="right-column">
<small>Common&nbsp;sizes:</small><br>
<div id="common-resolution-list">
</div>
</div>
</div>
</div>
</div>
<div id="small_image_warning" class="displayNone">Small image sizes can cause bad image quality</div>
</td></tr>
<tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" type="number" min="1" step="1" style="width: 42pt" value="25" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="11" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td></tr>
<tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" type="number" min="1" step="1" style="width: 42pt" value="25" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"></td></tr>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="11" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"><br/></td></tr>
<tr id="lora_model_container" class="pl-5">
<td>
<label for="lora_model">LoRA:</label>
</td>
<td class="diffusers-restart-needed">
<div class="model_entries"></div>
<button class="add_model_entry"><i class="fa-solid fa-plus"></i> add another LoRA</button>
<div id="lora_model" data-path=""></div>
</td>
</tr>
<tr class="pl-5"><td><label for="hypernetwork_model">Hypernetwork:</label></td><td>
<tr id="hypernetwork_model_container" class="pl-5"><td><label for="hypernetwork_model">Hypernetwork:</label></td><td>
<input id="hypernetwork_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
</td></tr>
<tr id="hypernetwork_strength_container" class="pl-5">
<td><label for="hypernetwork_strength_slider">Hypernetwork Strength:</label></td>
<td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td>
<td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"><br/></td>
</tr>
<tr id="tiling_container" class="pl-5">
<td><label for="tiling">Seamless Tiling:</label></td>
@ -271,8 +387,15 @@
</span>
</td></tr>
<tr class="pl-5" id="output_quality_row"><td><label for="output_quality">Image Quality:</label></td><td>
<input id="output_quality_slider" name="output_quality" class="editor-slider" value="75" type="range" min="10" max="95"> <input id="output_quality" name="output_quality" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)">
<input id="output_quality_slider" name="output_quality" class="editor-slider" value="75" type="range" min="10" max="95"> <input id="output_quality" name="output_quality" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="numeric">
</td></tr>
<tr class="pl-5">
<td><label for="tiling">Enable VAE Tiling:</label></td>
<td class="diffusers-restart-needed">
<input id="enable_vae_tiling" name="enable_vae_tiling" type="checkbox" checked>
<label><small>Optimizes memory for larger images</small></label>
</td>
</tr>
</table></div>
<div><ul>
@ -281,7 +404,7 @@
<li class="pl-5" id="use_face_correction_container">
<input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes</label> <div style="display:inline-block;"><input id="gfpgan_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" /></div>
<table id="codeformer_settings" class="displayNone sub-settings">
<tr class="pl-5"><td><label for="codeformer_fidelity_slider">Strength:</label></td><td><input id="codeformer_fidelity_slider" name="codeformer_fidelity_slider" class="editor-slider" value="5" type="range" min="0" max="10"> <input id="codeformer_fidelity" name="codeformer_fidelity" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr class="pl-5"><td><label for="codeformer_fidelity_slider">Strength:</label></td><td><input id="codeformer_fidelity_slider" name="codeformer_fidelity_slider" class="editor-slider" value="5" type="range" min="0" max="10"> <input id="codeformer_fidelity" name="codeformer_fidelity" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
<tr class="pl-5"><td><label for="codeformer_upscale_faces">Upscale Faces:</label></td><td><input id="codeformer_upscale_faces" name="codeformer_upscale_faces" type="checkbox" checked> <label><small>(improves the resolution of faces)</small></label></td></tr>
</table>
</li>
@ -298,7 +421,7 @@
<option value="latent_upscaler">Latent Upscaler 2x</option>
</select>
<table id="latent_upscaler_settings" class="displayNone sub-settings">
<tr class="pl-5"><td><label for="latent_upscaler_steps_slider">Upscaling Steps:</label></td><td><input id="latent_upscaler_steps_slider" name="latent_upscaler_steps_slider" class="editor-slider" value="10" type="range" min="1" max="50"> <input id="latent_upscaler_steps" name="latent_upscaler_steps" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr class="pl-5"><td><label for="latent_upscaler_steps_slider">Upscaling Steps:</label></td><td><input id="latent_upscaler_steps_slider" name="latent_upscaler_steps_slider" class="editor-slider" value="10" type="range" min="1" max="50"> <input id="latent_upscaler_steps" name="latent_upscaler_steps" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"></td></tr>
</table>
</li>
<li class="pl-5"><input id="show_only_filtered_image" name="show_only_filtered_image" type="checkbox" checked> <label for="show_only_filtered_image">Show only the corrected/upscaled image</label></li>
@ -306,7 +429,7 @@
</div>
</div>
<label><small><b>Note:</b> The Image Modifiers section has moved to the <code>+ Image Modifiers</code> button at the top, just below the Prompt textbox.</small></label>
<label><small><b>Note:</b> The Image Modifiers section has moved to the <code>+ Image Modifiers</code> button at the top, just above the Prompt textbox.</small></label>
</div>
<div id="preview" class="col-free">
@ -320,7 +443,7 @@
<div id="preview-content">
<div id="preview-tools" class="displayNone">
<button id="clear-all-previews" class="secondaryButton"><i class="fa-solid fa-trash-can icon"></i> Clear All</button>
<button class="tertiaryButton" id="show-download-popup"><i class="fa-solid fa-download"></i> Download images</button>
<button class="tertiaryButton" id="show-download-popup"><i class="fa-solid fa-download"></i><span> Download images</span></button>
<div class="display-settings">
<button id="undo" class="displayNone primaryButton">
Undo <i class="fa-solid fa-rotate-left icon"></i>
@ -343,12 +466,15 @@
<div class="dropdown-content">
<div class="dropdown-item">
<input id="thumbnail_size" name="thumbnail_size" class="editor-slider" type="range" value="70" min="5" max="200" oninput="sliderUpdate(event)">
<input id="thumbnail_size-input" name="thumbnail_size-input" size="3" value="70" pattern="^[0-9.]+$" onkeypress="preventNonNumericalInput(event)" oninput="sliderUpdate(event)">&nbsp;%
<input id="thumbnail_size-input" name="thumbnail_size-input" size="3" value="70" pattern="^[0-9.]+$" onkeypress="preventNonNumericalInput(event)" oninput="sliderUpdate(event)" inputmode="numeric">&nbsp;%
</div>
</div>
</div>
<div class="clearfix" style="clear: both;"></div>
</div>
<div id="supportBanner" class="displayNone">
If you found this project useful and want to help keep it alive, please consider <a href="https://ko-fi.com/easydiffusion" target="_blank">buying me a coffee</a> to help cover the cost of development and maintenance! Thanks for your support!
</div>
</div>
</div>
</div>
@ -359,6 +485,13 @@
<div class="parameters-table" id="system-settings-table"></div>
<br/>
<button id="save-system-settings-btn" class="primaryButton">Save</button>
<div id="install-extras-container" class="displayNone">
<br/>
<div id="install-extras">
<h3><i class="fa fa-cubes-stacked"></i> Optional Packages</h3>
<div class="parameters-table" id="system-settings-install-extras-table"></div>
</div>
</div>
<br/><br/>
<div id="share-easy-diffusion">
<h3><i class="fa fa-user-group"></i> Share Easy Diffusion</h3>
@ -386,28 +519,44 @@
<div class="float-container">
<div class="float-child">
<h1>Help</h1>
<ul id="help-links">
<li><span class="help-section">Using the software</span>
<div id="help-links">
<h4><span class="help-section"><b>Basics</b></span></h4>
<ul>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/How-To-Use" target="_blank"><i class="fa-solid fa-book fa-fw"></i> How to use</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/UI-Overview" target="_blank"><i class="fa-solid fa-list fa-fw"></i> UI Overview</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Writing-Prompts" target="_blank"><i class="fa-solid fa-pen-to-square fa-fw"></i> Writing prompts</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Inpainting" target="_blank"><i class="fa-solid fa-paintbrush fa-fw"></i> Inpainting</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Run-on-Multiple-GPUs" target="_blank"><i class="fa-solid fa-paintbrush fa-fw"></i> Run on Multiple GPUs</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/How-To-Use" target="_blank">How to use</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Writing-Prompts" target="_blank">Writing prompts</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Image-Modifiers" target="_blank">Image Modifiers</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Inpainting" target="_blank">Inpainting</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Samplers" target="_blank">Samplers</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/UI-Overview" target="_blank">Summary of every UI option</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" target="_blank">Common error messages (and solutions)</a></li>
</ul>
<li><span class="help-section">Installation</span>
<h4><span class="help-section"><b>Intermediate</b></span></h4>
<ul>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting" target="_blank"><i class="fa-solid fa-circle-question fa-fw"></i> Troubleshooting</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Custom-Models" target="_blank">Custom Models</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Prompt-Syntax" target="_blank">Prompt Syntax (weights, emphasis etc)</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/UI-Plugins" target="_blank">UI Plugins</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Embeddings" target="_blank">Embeddings</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/LoRA" target="_blank">LoRA</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/SDXL" target="_blank">SDXL</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/ControlNet" target="_blank">ControlNet</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Seamless-Tiling" target="_blank">Seamless Tiling</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/xFormers" target="_blank">xFormers</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/The-beta-channel" target="_blank">The beta channel</a></li>
</ul>
<li><span class="help-section">Downloadable Content</span>
<h4><span class="help-section"><b>Advanced topics</b></span></h4>
<ul>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-images fa-fw"></i> Custom Models</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/UI-Plugins" target="_blank"><i class="fa-solid fa-puzzle-piece fa-fw"></i> UI Plugins</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-hand-sparkles fa-fw"></i> VAE Variational Auto Encoder</a>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Run-on-Multiple-GPUs" target="_blank">Run on Multiple GPUs</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Model-Merging" target="_blank">Model Merging</a></li>
<li> <a href="https://github.com/easydiffusion/easydiffusion/wiki/Custom-Modifiers" target="_blank">Custom Modifiers</a></li>
</ul>
</ul>
<h4><span class="help-section"><b>Misc</b></span></h4>
<ul>
<li> <a href="https://theally.notion.site/The-Definitive-Stable-Diffusion-Glossary-1d1e6d15059c41e6a6b4306b4ecd9df9" target="_blank">Glossary of Stable Diffusion related terms</a></li>
</ul>
</div>
</div>
<div class="float-child">
@ -579,6 +728,8 @@
<span>
</div>
<div id="embeddings-dialog-header-right">
<button id="add-embeddings-thumb" class="tertiaryButton smallButton" style="background-color: var(--background-color4);"><i class="fa-solid fa-folder-plus"></i> Add thumbnail</button>
<input id="add-embeddings-thumb-input" name="add-embeddings-thumb-input" type="file" class="displayNone">
<i id="embeddings-dialog-close-button" class="fa-solid fa-xmark fa-lg"></i>
</div>
</div>
@ -588,7 +739,16 @@
<span class="embeddings-action-text">Expand Categories</span>
</button>
<i class="fa-solid fa-magnifying-glass"></i>
<input id="embeddings-search-box" type="text" spellcheck="false" autocomplete="off" placeholder="Search...">
<input id="embeddings-search-box" type="text" spellcheck="false" autocomplete="off" placeholder="Search..." inputmode="search">
<label for="embedding-card-size-selector"><small>Thumbnail Size:</small></label>
<select id="embedding-card-size-selector" name="embedding-card-size-selector">
<option value="-2">0</option>
<option value="-1" selected>1</option>
<option value="0">2</option>
<option value="1">3</option>
<option value="2">4</option>
<option value="3">5</option>
</select>
<span style="float:right;"><label>Mode:</label>&nbsp;<select id="embeddings-mode"><option value="insert">Insert at cursor position</option><option value="append">Append at the end</option></select>
</div>
<div id="embeddings-list">
@ -596,6 +756,34 @@
</div>
</dialog>
<dialog id="use-as-thumb-dialog">
<div id="use-as-thumb-dialog-header" class="dialog-header">
<div id="use-as-thumb-dialog-header-left" class="dialog-header-left">
<h4>Use as thumbnail</h4>
<span>Use a picture as a thumbnail for embeddings, LoRAs, etc.</span>
</div>
<div id="use-as-thumb-dialog-header-right">
<i id="use-as-thumb-dialog-close-button" class="fa-solid fa-xmark fa-lg"></i>
</div>
</div>
<div>
<div class="use-as-thumb-grid">
<div class="use-as-thumb-preview">
<div id="use-as-thumb-img-container"><img id="use-as-thumb-image" src="/media/images/noimg.png" width="512" height="512"></div>
</div>
<div class="use-as-thumb-select">
<label for="use-as-thumb-select">Use the thumbnail for:</label><br>
<select id="use-as-thumb-select" size="16" multiple>
</select>
</div>
<div class="use-as-thumb-buttons">
<button class="tertiaryButton" id="use-as-thumb-save">Save thumbnail</button>
<button class="tertiaryButton" id="use-as-thumb-cancel">Cancel</button>
</div>
</div>
</div>
</dialog>
<div id="image-editor" class="popup image-editor-popup">
<div>
<i class="close-button fa-solid fa-xmark"></i>
@ -631,13 +819,13 @@
<div id="footer-spacer"></div>
<div id="footer">
<div class="line-separator">&nbsp;</div>
<p>If you found this project useful and want to help keep it alive, please <a href="https://ko-fi.com/cmdr2_stablediffusion_ui" target="_blank"><img src="/media/images/kofi.png" id="coffeeButton"></a> to help cover the cost of development and maintenance! Thank you for your support!</p>
<p>Please feel free to join the <a href="https://discord.com/invite/u9yhsFmEkB" target="_blank">discord community</a> or <a href="https://github.com/easydiffusion/easydiffusion/issues" target="_blank">file an issue</a> if you have any problems or suggestions in using this interface.</p>
<div id="footer-legal">
<p><b>Disclaimer:</b> The authors of this project are not responsible for any content generated using this interface.</p>
<p>The license of this software forbids you from sharing any content that violates any laws, causes harm to a person, disseminates personal information intended to cause harm, <br/>spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read <a href="https://github.com/easydiffusion/easydiffusion/blob/main/LICENSE" target="_blank">the license</a>.</p>
<p>By using this software, you consent to the terms and conditions of the license.</p>
</div>
<input id="test_diffusers" type="checkbox" style="display: none" checked />
</div>
</div>
</body>
@ -649,6 +837,8 @@
<script src="media/js/auto-save.js"></script>
<script src="media/js/searchable-models.js"></script>
<script src="media/js/multi-model-selector.js"></script>
<script src="media/js/task-manager.js"></script>
<script src="media/js/main.js"></script>
<script src="media/js/plugins.js"></script>
<script src="media/js/themes.js"></script>
@ -669,10 +859,10 @@ async function init() {
events: {
statusChange: setServerStatus,
idle: onIdle,
ping: tunnelUpdate
ping: onPing
}
})
splashScreen()
// splashScreen()
// load models again, but scan for malicious this time
await getModels(true)


@ -1,12 +1,14 @@
from easydiffusion import model_manager, app, server
from easydiffusion import model_manager, app, server, bucket_manager
from easydiffusion.server import server_api # required for uvicorn
app.init()
server.init()
# Init the app
model_manager.init()
app.init()
app.init_render_threads()
bucket_manager.init()
# start the browser ui
app.open_browser()


@ -0,0 +1,68 @@
@keyframes ldio-8f673ktaleu-1 {
0% { transform: rotate(0deg) }
50% { transform: rotate(-45deg) }
100% { transform: rotate(0deg) }
}
@keyframes ldio-8f673ktaleu-2 {
0% { transform: rotate(180deg) }
50% { transform: rotate(225deg) }
100% { transform: rotate(180deg) }
}
.ldio-8f673ktaleu > div:nth-child(2) {
transform: translate(-15px,0);
}
.ldio-8f673ktaleu > div:nth-child(2) div {
position: absolute;
top: 20px;
left: 20px;
width: 60px;
height: 30px;
border-radius: 60px 60px 0 0;
background: #f3b72e;
animation: ldio-8f673ktaleu-1 1s linear infinite;
transform-origin: 30px 30px
}
.ldio-8f673ktaleu > div:nth-child(2) div:nth-child(2) {
animation: ldio-8f673ktaleu-2 1s linear infinite
}
.ldio-8f673ktaleu > div:nth-child(2) div:nth-child(3) {
transform: rotate(-90deg);
animation: none;
}@keyframes ldio-8f673ktaleu-3 {
0% { transform: translate(95px,0); opacity: 0 }
20% { opacity: 1 }
100% { transform: translate(35px,0); opacity: 1 }
}
.ldio-8f673ktaleu > div:nth-child(1) {
display: block;
}
.ldio-8f673ktaleu > div:nth-child(1) div {
position: absolute;
top: 46px;
left: -4px;
width: 8px;
height: 8px;
border-radius: 50%;
background: #3869c5;
animation: ldio-8f673ktaleu-3 1s linear infinite
}
.ldio-8f673ktaleu > div:nth-child(1) div:nth-child(1) { animation-delay: -0.67s }
.ldio-8f673ktaleu > div:nth-child(1) div:nth-child(2) { animation-delay: -0.33s }
.ldio-8f673ktaleu > div:nth-child(1) div:nth-child(3) { animation-delay: 0s }
.loadingio-spinner-bean-eater-x0y3u8qky4n {
width: 58px;
height: 58px;
display: inline-block;
overflow: hidden;
background: none;
}
.ldio-8f673ktaleu {
width: 100%;
height: 100%;
position: relative;
transform: translateZ(0) scale(0.58);
backface-visibility: hidden;
transform-origin: 0 0; /* see note above */
}
.ldio-8f673ktaleu div { box-sizing: content-box; }
/* generated by https://loading.io/ */

ui/media/css/croppr.css (new file, 58 lines)

@ -0,0 +1,58 @@
.croppr-container * {
user-select: none;
-moz-user-select: none;
-webkit-user-select: none;
-ms-user-select: none;
box-sizing: border-box;
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
}
.croppr-container img {
vertical-align: middle;
max-width: 100%;
}
.croppr {
position: relative;
display: inline-block;
}
.croppr-overlay {
background: rgba(0,0,0,0.5);
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
z-index: 1;
cursor: crosshair;
}
.croppr-region {
border: 1px dashed rgba(0, 0, 0, 0.5);
position: absolute;
z-index: 3;
cursor: move;
top: 0;
}
.croppr-imageClipped {
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
z-index: 2;
pointer-events: none;
}
.croppr-handle {
border: 1px solid black;
background-color: white;
width: 10px;
height: 10px;
position: absolute;
z-index: 4;
top: 0;
}


@ -229,4 +229,27 @@
}
.inpainter .load_mask {
display: flex;
}
}
.editor-canvas-overlay {
cursor: none;
}
.image-brush-preview {
position: fixed;
background: black;
opacity: 0.3;
border-radius: 50%;
cursor: none;
pointer-events: none;
transform: translate(-50%, -50%);
}
.editor-options-container > * > *:not(.active):not(.button) {
border: 1px dotted slategray;
}
.image_editor_opacity .editor-options-container > * > *:not(.active):not(.button) {
border: 1px dotted slategray;
}


@ -34,6 +34,7 @@ code {
width: 32px;
height: 32px;
transform: translateY(4px);
cursor: pointer;
}
#prompt {
width: 100%;
@ -476,6 +477,7 @@ dialog {
background: var(--background-color2);
color: var(--text-color);
border-radius: 6px;
box-shadow: 0px 0px 30px black;
border: 2px solid rgb(255 255 255 / 10%);
padding: 0px;
}
@ -607,11 +609,18 @@ div.img-preview img {
margin: auto;
padding: 0px;
}
#help-links ul {
list-style-type: disc;
padding-left: 12pt;
}
#help-links li {
padding-bottom: 12pt;
padding-bottom: 6pt;
display: block;
font-size: 10pt;
}
#help-links ul li {
display: list-item;
}
#help-links li .fa-fw {
padding-right: 2pt;
}
@ -794,7 +803,7 @@ div.img-preview img {
margin-bottom: 8px;
}
#init_image_preview_container:not(.has-image) #init_image_wrapper,
#init_image_preview_container:not(.has-image) .preview_image_wrapper,
#init_image_preview_container:not(.has-image) #inpaint_button_container {
display: none;
}
@ -831,14 +840,14 @@ div.img-preview img {
gap: 8px;
}
#init_image_wrapper {
.preview_image_wrapper {
grid-row: span 3;
position: relative;
width: fit-content;
max-height: 150px;
}
#init_image_preview {
.image_preview {
max-height: 150px;
height: 100%;
width: 100%;
@ -1088,7 +1097,7 @@ input::file-selector-button {
.tab-content-inner {
margin: 0px;
}
.tab {
#top-nav .tab {
font-size: 0;
}
.tab .icon {
@ -1114,6 +1123,9 @@ input::file-selector-button {
#preview-tools button .icon {
font-size: 12pt;
}
#show-download-popup .fa-solid {
font-size: 12pt;
}
}
@media screen and (max-width: 500px) {
@ -1202,6 +1214,12 @@ input::file-selector-button {
visibility: visible;
}
}
.tooltip-container {
display: inline-block;
position: relative;
}
.simple-tooltip.right {
right: 0px;
top: 50%;
@ -1418,6 +1436,10 @@ div.task-fs-initimage {
display: none;
position: absolute;
}
div.task-fs-initimage img {
max-height: 70vH;
max-width: 70vW;
}
div.task-initimg:hover div.task-fs-initimage {
display: block;
position: absolute;
@ -1433,9 +1455,13 @@ div.top-right {
right: 8px;
}
.task-fs-initimage .top-right button {
margin-top: 6px;
}
#small_image_warning {
font-size: smaller;
color: var(--status-orange);
font-size: smaller;
color: var(--status-orange);
}
button#save-system-settings-btn {
@ -1460,6 +1486,9 @@ button#save-system-settings-btn {
cursor: pointer;
}
.validation-failed {
border: solid 2px red;
}
/* SCROLLBARS */
:root {
--scrollbar-width: 14px;
@ -1650,6 +1679,35 @@ body.wait-pause {
}
}
.spinner-container {
width: 80px;
height: 100px;
margin: 100px auto;
margin-top: 30vH;
}
.spinner-block {
position: relative;
box-sizing: border-box;
float: left;
margin: 0 10px 10px 0;
width: 12px;
height: 12px;
border-radius: 3px;
background: var(--accent-color);
}
.spinner-block:nth-child(4n+1) { animation: spinner-wave 2s ease .0s infinite; }
.spinner-block:nth-child(4n+2) { animation: spinner-wave 2s ease .2s infinite; }
.spinner-block:nth-child(4n+3) { animation: spinner-wave 2s ease .4s infinite; }
.spinner-block:nth-child(4n+4) { animation: spinner-wave 2s ease .6s infinite; margin-right: 0; }
@keyframes spinner-wave {
0% { top: 0; opacity: 1; }
50% { top: 30px; opacity: .2; }
100% { top: 0; opacity: 1; }
}
#embeddings-dialog {
overflow: clip;
}
@ -1664,6 +1722,12 @@ body.wait-pause {
overflow-y: scroll;
}
@media screen and (max-width: 1400px) {
#embeddings-list {
width: 80vW;
}
}
#embeddings-list button {
margin: 2px;
color: var(--button-color);
@ -1741,6 +1805,32 @@ body.wait-pause {
float: right;
}
.use-as-thumb-grid { display: grid;
grid-template-columns: 1fr auto;
grid-template-rows: 1fr auto;
gap: 8px 8px;
grid-auto-flow: row;
grid-template-areas:
"uat-preview uat-select"
"uat-preview uat-buttons";
}
.use-as-thumb-preview {
justify-self: center;
align-self: center;
grid-area: uat-preview;
}
.use-as-thumb-select {
grid-area: uat-select;
}
.use-as-thumb-buttons {
justify-self: center;
grid-area: uat-buttons;
}
.diffusers-disabled-on-startup .diffusers-restart-needed {
font-size: 0;
}
@ -1753,3 +1843,200 @@ body.wait-pause {
content: "Please restart Easy Diffusion!";
font-size: 10pt;
}
input#custom-width, input#custom-height {
width: 47pt;
}
div#recent-resolutions-container {
position: relative;
display:inline-block;
}
div#recent-resolutions-popup {
position: absolute;
right: 0px;
margin: 3px;
padding: 0.2em 1em 0.4em 1em;
z-index: 1;
background: var(--background-color3);
border-radius: 4px;
box-shadow: 0 20px 28px 0 rgba(0, 0, 0, 0.15), 0 6px 20px 0 rgba(0, 0, 0, 0.15);
}
div#recent-resolutions-popup small {
opacity: 0.7;
}
div#common-resolution-list button {
background: var(--background-color1);
}
td#image-size-options small {
margin-right: 0px !important;
}
td#image-size-options {
white-space: nowrap;
}
div#recent-resolution-list {
text-align: center;
}
div#enlarge-buttons {
text-align: center;
}
.two-column { display: grid;
grid-template-columns: 1fr 1fr;
grid-template-rows: 1fr;
gap: 0px 0.5em;
grid-auto-flow: row;
grid-template-areas:
"left-column right-column";
}
.left-column {
justify-self: center;
align-self: center;
grid-area: left-column;
}
.right-column {
justify-self: center;
align-self: center;
grid-area: right-column;
}
.clickable {
cursor: pointer;
}
.imgContainer .spinner {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
margin: 0;
padding: 0;
background: var(--background-color3);
opacity: 0.95;
border-radius: 5px;
padding: 4pt;
border: 1px solid var(--button-color);
box-shadow: 0px 0px 4px black;
}
.imgContainer .spinnerStatus {
font-size: 10pt;
}
#controlnet_model_container small {
color: var(--text-color)
}
#control_image {
width: 130pt;
}
#controlnet_model {
width: 77%;
}
.drop-area {
width: 45%;
height: 50px;
border: 2px dashed #ccc;
text-align: center;
line-height: 50px;
font-size: small;
color: #ccc;
border-radius: 10px;
display: none;
margin: 12px 10px;
}
#num_outputs_total {
width: 42pt;
}
#num_outputs_parallel {
width: 42pt;
}
.model_entry .model_weight {
width: 50pt;
}
/* hack for fixing Image Modifier Improvements plugin */
#imageTagPopupContainer {
position: absolute;
}
@media screen and (max-width: 400px) {
.editor-slider {
width: 40%;
}
input::-webkit-outer-spin-button,
input::-webkit-inner-spin-button {
-webkit-appearance: none;
margin: 0;
}
input[type=number] {
-moz-appearance: textfield;
/* Firefox */
}
#num_outputs_total {
width: 27pt;
}
#num_outputs_parallel {
width: 27pt;
margin-left: -4pt;
}
.model_entry .model_weight {
width: 30pt;
}
#width {
width: 50pt;
}
#height {
width: 50pt;
}
}
@media screen and (max-width: 460px) {
#widthLabel small span {
display: none;
}
#widthLabel small:after {
content: "(w)";
}
#heightLabel small span {
display: none;
}
#heightLabel small:after {
content: "(h)";
}
#prompt-toolbar-right {
text-align: right;
}
#editor-settings label {
font-size: 9pt;
}
#editor-settings .model-filter {
width: 56%;
}
#vae_model {
width: 65% !important;
}
.model_entry .model_name {
width: 60% !important;
}
}
#supportBanner {
font-size: 9pt;
padding: 5pt;
border: 1px solid var(--background-color2);
margin-bottom: 5pt;
border-radius: 4pt;
padding-top: 6pt;
color: var(--small-label-color);
}

ui/media/images/noimg.png (new binary file, 1.3 KiB; binary content not shown)


@ -15,14 +15,12 @@ const SETTINGS_IDS_LIST = [
"stable_diffusion_model",
"clip_skip",
"vae_model",
"hypernetwork_model",
"sampler_name",
"width",
"height",
"num_inference_steps",
"guidance_scale",
"prompt_strength",
"hypernetwork_strength",
"tiling",
"output_format",
"output_quality",
@ -45,6 +43,7 @@ const SETTINGS_IDS_LIST = [
"sound_toggle",
"vram_usage_level",
"confirm_dangerous_actions",
"profileName",
"metadata_output_format",
"auto_save_settings",
"apply_color_correction",
@ -54,10 +53,20 @@ const SETTINGS_IDS_LIST = [
"zip_toggle",
"tree_toggle",
"json_toggle",
"extract_lora_from_prompt",
"embedding-card-size-selector",
"lora_model",
"enable_vae_tiling",
"controlnet_alpha",
]
const IGNORE_BY_DEFAULT = ["prompt"]
if (!testDiffusers.checked) {
SETTINGS_IDS_LIST.push("hypernetwork_model")
SETTINGS_IDS_LIST.push("hypernetwork_strength")
}
const SETTINGS_SECTIONS = [
// gets the "keys" property filled in with an ordered list of settings in this section via initSettings
{ id: "editor-inputs", name: "Prompt" },
@ -169,23 +178,6 @@ function loadSettings() {
}
})
CURRENTLY_LOADING_SETTINGS = false
} else if (localStorage.length < 2) {
// localStorage is too short for OldSettings
// So this is likely the first time Easy Diffusion is running.
// Initialize vram_usage_level based on the available VRAM
function initGPUProfile(event) {
if (
"detail" in event &&
"active" in event.detail &&
"cuda:0" in event.detail.active &&
event.detail.active["cuda:0"].mem_total < 4.5
) {
vramUsageLevelField.value = "low"
vramUsageLevelField.dispatchEvent(new Event("change"))
}
document.removeEventListener("system_info_update", initGPUProfile)
}
document.addEventListener("system_info_update", initGPUProfile)
} else {
CURRENTLY_LOADING_SETTINGS = true
tryLoadOldSettings()

ui/media/js/croppr.js (new executable file, 1189 lines; diff suppressed because it is too large)


@ -268,7 +268,11 @@ const TASK_MAPPING = {
tiling: {
name: "Tiling",
setUI: (val) => {
tilingField.value = val
if (val === null || val === "None") {
tilingField.value = "none"
} else {
tilingField.value = val
}
},
readUI: () => tilingField.value,
parse: (val) => val,
@ -289,42 +293,67 @@ const TASK_MAPPING = {
readUI: () => vaeModelField.value,
parse: (val) => val,
},
use_controlnet_model: {
name: "ControlNet model",
setUI: (use_controlnet_model) => {
controlnetModelField.value = getModelPath(use_controlnet_model, [".pth", ".safetensors"])
},
readUI: () => controlnetModelField.value,
parse: (val) => val,
},
control_filter_to_apply: {
name: "ControlNet Filter",
setUI: (control_filter_to_apply) => {
controlImageFilterField.value = control_filter_to_apply
},
readUI: () => controlImageFilterField.value,
parse: (val) => val,
},
control_alpha: {
name: "ControlNet Strength",
setUI: (control_alpha) => {
control_alpha = control_alpha || 1.0
controlAlphaField.value = control_alpha
updateControlAlphaSlider()
},
readUI: () => parseFloat(controlAlphaField.value),
parse: (val) => val === null ? 1.0 : parseFloat(val),
},
use_lora_model: {
name: "LoRA model",
setUI: (use_lora_model) => {
// create rows
for (let i = loraModels.length; i < use_lora_model.length; i++) {
createLoraEntry()
}
use_lora_model.forEach((model_name, i) => {
let field = loraModels[i][0]
const oldVal = field.value
if (model_name !== "") {
model_name = getModelPath(model_name, [".ckpt", ".safetensors"])
model_name = model_name !== "" ? model_name : oldVal
let modelPaths = []
use_lora_model = use_lora_model === null ? "" : use_lora_model
use_lora_model = Array.isArray(use_lora_model) ? use_lora_model : [use_lora_model]
use_lora_model.forEach((m) => {
if (m.includes("models\\lora\\")) {
m = m.split("models\\lora\\")[1]
} else if (m.includes("models\\\\lora\\\\")) {
m = m.split("models\\\\lora\\\\")[1]
} else if (m.includes("models/lora/")) {
m = m.split("models/lora/")[1]
}
field.value = model_name
m = m.replaceAll("\\\\", "/")
m = getModelPath(m, [".ckpt", ".safetensors"])
modelPaths.push(m)
})
// clear the remaining entries
let container = document.querySelector("#lora_model_container .model_entries")
for (let i = use_lora_model.length; i < loraModels.length; i++) {
let modelEntry = loraModels[i][2]
container.removeChild(modelEntry)
}
loraModels.splice(use_lora_model.length)
loraModelField.modelNames = modelPaths
},
readUI: () => {
let values = loraModels.map((e) => e[0].value)
values = values.filter((e) => e.trim() !== "")
values = values.length > 0 ? values : "None"
return values
return loraModelField.modelNames
},
parse: (val) => {
val = !val || val === "None" ? "" : val
if (typeof val === "string" && val.includes(",")) {
val = val.split(",")
val = val.map((v) => v.trim())
val = val.map((v) => v.replaceAll("\\", "\\\\"))
val = val.map((v) => v.replaceAll('"', ""))
val = val.map((v) => v.replaceAll("'", ""))
val = val.map((v) => '"' + v + '"')
val = "[" + val + "]"
val = JSON.parse(val)
}
val = Array.isArray(val) ? val : [val]
return val
},
@ -332,31 +361,17 @@ const TASK_MAPPING = {
lora_alpha: {
name: "LoRA Strength",
setUI: (lora_alpha) => {
for (let i = loraModels.length; i < lora_alpha.length; i++) {
createLoraEntry()
}
lora_alpha.forEach((model_strength, i) => {
let field = loraModels[i][1]
field.value = model_strength
})
// clear the remaining entries
let container = document.querySelector("#lora_model_container .model_entries")
for (let i = lora_alpha.length; i < loraModels.length; i++) {
let modelEntry = loraModels[i][2]
container.removeChild(modelEntry)
}
loraModels.splice(lora_alpha.length)
lora_alpha = Array.isArray(lora_alpha) ? lora_alpha : [lora_alpha]
loraModelField.modelWeights = lora_alpha
},
readUI: () => {
let models = loraModels.filter((e) => e[0].value.trim() !== "")
let values = models.map((e) => e[1].value)
values = values.length > 0 ? values : 0
return values
return loraModelField.modelWeights
},
parse: (val) => {
if (typeof val === "string" && val.includes(",")) {
val = "[" + val.replaceAll("'", '"') + "]"
val = JSON.parse(val)
}
val = Array.isArray(val) ? val : [val]
val = val.map((e) => parseFloat(e))
return val
@ -472,11 +487,8 @@ function restoreTaskToUI(task, fieldsToSkip) {
}
if (!("use_lora_model" in task.reqBody)) {
loraModels.forEach((e) => {
e[0].value = ""
e[1].value = 0
e[0].dispatchEvent(new Event("change"))
})
loraModelField.modelNames = []
loraModelField.modelWeights = []
}
// restore the original prompt if provided (e.g. use settings), fallback to prompt as needed (e.g. copy/paste or d&d)
@ -519,10 +531,28 @@ function restoreTaskToUI(task, fieldsToSkip) {
)
initImagePreview.src = task.reqBody.init_image
}
// hide/show controlnet picture as needed
if (IMAGE_REGEX.test(controlImagePreview.src) && task.reqBody.control_image == undefined) {
// hide source image
controlImageClearBtn.dispatchEvent(new Event("click"))
} else if (task.reqBody.control_image !== undefined) {
// listen for inpainter loading event, which happens AFTER the main image loads (which reloads the inpainter)
controlImagePreview.src = task.reqBody.control_image
}
if ("use_controlnet_model" in task.reqBody && task.reqBody.use_controlnet_model && !("control_alpha" in task.reqBody)) {
controlAlphaField.value = 1.0
updateControlAlphaSlider()
}
}
function readUI() {
const reqBody = {}
for (const key in TASK_MAPPING) {
if (testDiffusers.checked && (key === "use_hypernetwork_model" || key === "hypernetwork_strength")) {
continue
}
reqBody[key] = TASK_MAPPING[key].readUI()
}
return {
@ -569,6 +599,12 @@ const TASK_TEXT_MAPPING = {
use_stable_diffusion_model: "Stable Diffusion model",
use_hypernetwork_model: "Hypernetwork model",
hypernetwork_strength: "Hypernetwork Strength",
use_lora_model: "LoRA model",
lora_alpha: "LoRA Strength",
use_controlnet_model: "ControlNet model",
control_filter_to_apply: "ControlNet Filter",
control_alpha: "ControlNet Strength",
tiling: "Seamless Tiling",
}
function parseTaskFromText(str) {
const taskReqBody = {}

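For orientation (not part of the diff): the labels added to TASK_TEXT_MAPPING above are the human-readable keys that parseTaskFromText scans for when reading task settings from text metadata. A plausible snippet of such metadata, with every value invented purely for illustration, might look like:
LoRA model: example-style-lora
LoRA Strength: 0.5
ControlNet model: example-controlnet-model
ControlNet Filter: canny
ControlNet Strength: 1.0
Seamless Tiling: none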

@ -1047,7 +1047,9 @@
}
}
class FilterTask extends Task {
constructor(options = {}) {}
constructor(options = {}) {
super(options)
}
/** Send current task to server.
* @param {*} [timeout=-1] Optional timeout value in ms
* @returns the response from the render request.
@ -1055,9 +1057,27 @@
*/
async post(timeout = -1) {
let jsonResponse = await super.post("/filter", timeout)
//this._setId(jsonResponse.task)
if (typeof jsonResponse?.task !== "number") {
console.warn("Endpoint error response: ", jsonResponse)
const event = Object.assign({ task: this }, jsonResponse)
await eventSource.fireEvent(EVENT_UNEXPECTED_RESPONSE, event)
if ("continueWith" in event) {
jsonResponse = await Promise.resolve(event.continueWith)
}
if (typeof jsonResponse?.task !== "number") {
const err = new Error(jsonResponse?.detail || "Endpoint response does not contain a task ID.")
this.abort(err)
throw err
}
}
this._setId(jsonResponse.task)
if (jsonResponse.stream) {
this.streamUrl = jsonResponse.stream
}
this._setStatus(TaskStatus.waiting)
return jsonResponse
}
checkReqBody() {}
enqueue(progressCallback) {
return Task.enqueueNew(this, FilterTask, progressCallback)
}
@ -1068,6 +1088,65 @@
if (this.isStopped) {
return
}
this._setStatus(TaskStatus.pending)
progressCallback?.call(this, { reqBody: this._reqBody })
Object.freeze(this._reqBody)
// Post task request to backend
let renderRes = undefined
try {
renderRes = yield this.post()
yield progressCallback?.call(this, { renderResponse: renderRes })
} catch (e) {
yield progressCallback?.call(this, { detail: e.message })
throw e
}
try {
// Wait for task to start on server.
yield this.waitUntil({
callback: function() {
return progressCallback?.call(this, {})
},
status: TaskStatus.processing,
})
} catch (e) {
this.abort(e)
throw e
}
// Task started!
// Open the reader.
const reader = this.reader
const task = this
reader.onError = function(response) {
if (progressCallback) {
task.abort(new Error(response.statusText))
return progressCallback.call(task, { response, reader })
}
return Task.prototype.onError.call(task, response)
}
yield progressCallback?.call(this, { reader })
//Start streaming the results.
const streamGenerator = reader.open()
let value = undefined
let done = undefined
yield progressCallback?.call(this, { stream: streamGenerator })
do {
;({ value, done } = yield streamGenerator.next())
if (typeof value !== "object") {
continue
}
if (value.status !== undefined) {
yield progressCallback?.call(this, value)
if (value.status === "succeeded" || value.status === "failed") {
done = true
}
}
} while (!done)
return value
}
static start(task, progressCallback) {
if (typeof task !== "object") {

File diff suppressed because one or more lines are too long


@ -626,6 +626,7 @@ class ImageEditor {
.getImageData(0, 0, this.width, this.height)
.data.some((channel) => channel !== 0)
maskSetting.checked = !is_blank
maskSetting.dispatchEvent(new Event("change"))
}
this.hide()
}

File diff suppressed because it is too large


@ -0,0 +1,256 @@
/**
* A component consisting of multiple model dropdowns, along with a "weight" field per model.
*
* Behaves like a single input element, giving an object in response to the .value field.
*
* Inspired by the design of the ModelDropdown component (searchable-models.js).
*/
class MultiModelSelector {
root
modelType
modelNameFriendly
defaultWeight
weightStep
modelContainer
addNewButton
counter = 0
/* MIMIC A REGULAR INPUT FIELD */
get id() {
return this.root.id
}
get parentElement() {
return this.root.parentElement
}
get parentNode() {
return this.root.parentNode
}
get value() {
return { modelNames: this.modelNames, modelWeights: this.modelWeights }
}
set value(modelData) {
if (typeof modelData !== "object") {
throw new Error("Multi-model selector expects an object containing modelNames and modelWeights as keys!")
}
if (!("modelNames" in modelData) || !("modelWeights" in modelData)) {
throw new Error("modelNames or modelWeights not present in the data passed to the multi-model selector")
}
let newModelNames = modelData["modelNames"]
let newModelWeights = modelData["modelWeights"]
if (newModelNames.length !== newModelWeights.length) {
throw new Error("Need to pass an equal number of modelNames and modelWeights!")
}
// update weight first, name second.
// for some unholy reason this order matters for dispatch chains
// the root of all this unholiness is because searchable-models automatically dispatches an update event
// as soon as the value is updated via JS, which is against the DOM pattern of not dispatching an event automatically
// unless the caller explicitly dispatches the event.
this.modelWeights = newModelWeights
this.modelNames = newModelNames
}
get disabled() {
return false
}
set disabled(state) {
// do nothing
}
getModelElements(ignoreEmpty = false) {
let entries = this.root.querySelectorAll(".model_entry")
entries = [...entries]
let elements = entries.map((e) => {
let modelName = e.querySelector(".model_name").field
let modelWeight = e.querySelector(".model_weight")
if (ignoreEmpty && modelName.value.trim() === "") {
return null
}
return { name: modelName, weight: modelWeight }
})
elements = elements.filter((e) => e !== null)
return elements
}
addEventListener(type, listener, options) {
// do nothing
}
dispatchEvent(event) {
// do nothing
}
appendChild(option) {
// do nothing
}
// remember 'this' - http://blog.niftysnippets.org/2008/04/you-must-remember-this.html
bind(f, obj) {
return function() {
return f.apply(obj, arguments)
}
}
constructor(root, modelType, modelNameFriendly = undefined, defaultWeight = 0.5, weightStep = 0.02) {
this.root = root
this.modelType = modelType
this.modelNameFriendly = modelNameFriendly || modelType
this.defaultWeight = defaultWeight
this.weightStep = weightStep
let self = this
document.addEventListener("refreshModels", function() {
setTimeout(self.bind(self.populateModels, self), 1)
})
this.createStructure()
this.populateModels()
}
createStructure() {
this.modelContainer = document.createElement("div")
this.modelContainer.className = "model_entries"
this.root.appendChild(this.modelContainer)
this.addNewButton = document.createElement("button")
this.addNewButton.className = "add_model_entry"
this.addNewButton.innerHTML = '<i class="fa-solid fa-plus"></i> add another ' + this.modelNameFriendly
this.addNewButton.addEventListener("click", this.bind(this.addModelEntry, this))
this.root.appendChild(this.addNewButton)
}
populateModels() {
if (this.root.dataset.path === "") {
if (this.length === 0) {
this.addModelEntry() // create a single blank entry
}
} else {
this.value = JSON.parse(this.root.dataset.path)
}
}
addModelEntry() {
let idx = this.counter++
let currLength = this.length
const modelElement = document.createElement("div")
modelElement.className = "model_entry"
modelElement.innerHTML = `
<input id="${this.modelType}_${idx}" class="model_name model-filter" type="text" spellcheck="false" autocomplete="off" data-path="" />
<input class="model_weight" type="number" step="${this.weightStep}" value="${this.defaultWeight}" pattern="^-?[0-9]*\.?[0-9]*$" onkeypress="preventNonNumericalInput(event)">
`
this.modelContainer.appendChild(modelElement)
let modelNameEl = modelElement.querySelector(".model_name")
modelNameEl.field = new ModelDropdown(modelNameEl, this.modelType, "None")
let modelWeightEl = modelElement.querySelector(".model_weight")
let self = this
function makeUpdateEvent(type) {
return function(e) {
e.stopPropagation()
let modelData = self.value
self.root.dataset.path = JSON.stringify(modelData)
self.root.dispatchEvent(new Event(type))
}
}
modelNameEl.addEventListener("change", makeUpdateEvent("change"))
modelNameEl.addEventListener("input", makeUpdateEvent("input"))
modelWeightEl.addEventListener("change", makeUpdateEvent("change"))
modelWeightEl.addEventListener("input", makeUpdateEvent("input"))
let removeBtn = document.createElement("button")
removeBtn.className = "remove_model_btn"
removeBtn.setAttribute("title", "Remove model")
removeBtn.innerHTML = '<i class="fa-solid fa-minus"></i>'
if (currLength === 0) {
removeBtn.classList.add("displayNone")
}
removeBtn.addEventListener(
"click",
this.bind(function(e) {
this.modelContainer.removeChild(modelElement)
makeUpdateEvent("change")(e)
}, this)
)
modelElement.appendChild(removeBtn)
}
removeModelEntry() {
if (this.length === 0) {
return
}
let lastEntry = this.modelContainer.lastElementChild
lastEntry.remove()
}
get length() {
return this.getModelElements().length
}
get modelNames() {
return this.getModelElements(true).map((e) => e.name.value)
}
set modelNames(newModelNames) {
this.resizeEntryList(newModelNames.length)
if (newModelNames.length === 0) {
this.getModelElements()[0].name.value = ""
}
// assign to the corresponding elements
let currElements = this.getModelElements()
for (let i = 0; i < newModelNames.length; i++) {
let curr = currElements[i]
curr.name.value = newModelNames[i]
}
}
get modelWeights() {
return this.getModelElements(true).map((e) => e.weight.value)
}
set modelWeights(newModelWeights) {
this.resizeEntryList(newModelWeights.length)
if (newModelWeights.length === 0) {
this.getModelElements()[0].weight.value = this.defaultWeight
}
// assign to the corresponding elements
let currElements = this.getModelElements()
for (let i = 0; i < newModelWeights.length; i++) {
let curr = currElements[i]
curr.weight.value = newModelWeights[i]
}
}
resizeEntryList(newLength) {
if (newLength === 0) {
newLength = 1
}
let currLength = this.length
if (currLength < newLength) {
for (let i = currLength; i < newLength; i++) {
this.addModelEntry()
}
} else {
for (let i = newLength; i < currLength; i++) {
this.removeModelEntry()
}
}
}
}
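For orientation (not part of the diff): a minimal usage sketch of the MultiModelSelector component added above. The element ID, model type and constructor arguments are assumptions for illustration; only the .value shape and the change/input events on the root element come from the class itself.
// Hypothetical wiring; assumes a <div id="lora_model" data-path=""></div> container exists
// and that ModelDropdown (searchable-models.js) has already been loaded.
const loraRoot = document.querySelector("#lora_model")
const loraSelector = new MultiModelSelector(loraRoot, "lora", "LoRA", 0.5, 0.02)
// Behaves like a single input field: read and write an object of names and weights.
loraSelector.value = { modelNames: ["example-style-lora"], modelWeights: [0.7] }
console.log(loraSelector.value) // -> { modelNames: [...], modelWeights: [...] }
// UI edits are mirrored into root.dataset.path (as JSON) and surface as events on the root element.
loraRoot.addEventListener("change", () => console.log(loraRoot.dataset.path))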


@ -16,6 +16,7 @@ var ParameterType = {
*/
let parametersTable = document.querySelector("#system-settings-table")
let networkParametersTable = document.querySelector("#system-settings-network-table")
let installExtrasTable = document.querySelector("#system-settings-install-extras-table")
/**
* JSDoc style
@ -96,6 +97,17 @@ var PARAMETERS = [
},
],
},
{
id: "models_dir",
type: ParameterType.custom,
icon: "fa-folder-tree",
label: "Models Folder",
note: "Path to the 'models' folder. Please save and refresh the page after changing this.",
saveInAppConfig: true,
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="30">`
},
},
{
id: "block_nsfw",
type: ParameterType.checkbox,
@ -120,6 +132,15 @@ var PARAMETERS = [
icon: "fa-arrow-down-short-wide",
default: false,
},
{
id: "extract_lora_from_prompt",
type: ParameterType.checkbox,
label: "Extract LoRA tags from the prompt",
note:
"Automatically extract lora tags like &lt;lora:name:0.4&gt; from the prompt, and apply the correct LoRA (if present)",
icon: "fa-code",
default: true,
},
{
id: "ui_open_browser_on_start",
type: ParameterType.checkbox,
@ -184,6 +205,17 @@ var PARAMETERS = [
icon: "fa-check-double",
default: true,
},
{
id: "profileName",
type: ParameterType.custom,
label: "Profile Name",
note:
"Name of the profile for model manager settings, e.g. thumbnails for embeddings. Use this to have different settings for different users.",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" value="default" size="12">`
},
icon: "fa-user-gear",
},
{
id: "listen_to_network",
type: ParameterType.checkbox,
@ -217,13 +249,13 @@ var PARAMETERS = [
default: false,
},
{
id: "test_diffusers",
id: "use_v3_engine",
type: ParameterType.checkbox,
label: "Test Diffusers",
label: "Use the new v3 engine (diffusers)",
note:
"<b>Experimental! Can have bugs!</b> Use upcoming features (like LoRA) in our new engine. Please press Save, then restart the program after changing this.",
"Use our new v3 engine, with additional features like LoRA, ControlNet, SDXL, Embeddings, Tiling and lots more! Please press Save, then restart the program after changing this.",
icon: "fa-bolt",
default: false,
default: true,
saveInAppConfig: true,
},
{
@ -241,6 +273,29 @@ var PARAMETERS = [
render: () => '<button id="toggle-cloudflare-tunnel" class="primaryButton">Start</button>',
table: networkParametersTable,
},
{
id: "nvidia_tensorrt",
type: ParameterType.custom,
label: "NVIDIA TensorRT",
note: `Faster image generation by converting your Stable Diffusion models to the NVIDIA TensorRT format. You can choose the
models to convert. Download size: approximately 2 GB.<br/><br/>
<b>Early access version:</b> support for LoRA is still under development.
<div id="trt-build-config" class="displayNone">
<h3>Build Config:</h3>
Batch size range:
<label>Min:</label> <input id="trt-build-min-batch" type="number" min="1" value="1" style="width: 40pt" />
<label>Max:</label> <input id="trt-build-max-batch" type="number" min="1" value="1" style="width: 40pt" /><br/><br/>
<b>Build for resolutions</b>:<br/>
<input id="trt-build-res-512" type="checkbox" value="1" /> 512x512 to 768x768<br/>
<input id="trt-build-res-768" type="checkbox" value="1" checked /> 768x768 to 1024x1024<br/>
<input id="trt-build-res-1024" type="checkbox" value="1" /> 1024x1024 to 1280x1280<br/>
<input id="trt-build-res-1280" type="checkbox" value="1" /> 1280x1280 to 1536x1536<br/>
<input id="trt-build-res-1536" type="checkbox" value="1" /> 1536x1536 to 1792x1792<br/>
</div>`,
icon: "fa-angles-up",
render: () => '<button id="toggle-tensorrt-install" class="primaryButton">Install</button>',
table: installExtrasTable,
},
]
function getParameterSettingsEntry(id) {
@ -376,7 +431,9 @@ let listenPortField = document.querySelector("#listen_port")
let useBetaChannelField = document.querySelector("#use_beta_channel")
let uiOpenBrowserOnStartField = document.querySelector("#ui_open_browser_on_start")
let confirmDangerousActionsField = document.querySelector("#confirm_dangerous_actions")
let testDiffusers = document.querySelector("#test_diffusers")
let testDiffusers = document.querySelector("#use_v3_engine")
let profileNameField = document.querySelector("#profileName")
let modelsDirField = document.querySelector("#models_dir")
let saveSettingsBtn = document.querySelector("#save-system-settings-btn")
@ -408,8 +465,6 @@ async function getAppConfig() {
if (config.update_branch === "beta") {
useBetaChannelField.checked = true
document.querySelector("#updateBranchLabel").innerText = "(beta)"
} else {
getParameterSettingsEntry("test_diffusers").classList.add("displayNone")
}
if (config.ui && config.ui.open_browser_on_start === false) {
uiOpenBrowserOnStartField.checked = false
@ -420,12 +475,17 @@ async function getAppConfig() {
if (config.net && config.net.listen_port !== undefined) {
listenPortField.value = config.net.listen_port
}
modelsDirField.value = config.models_dir
const testDiffusersEnabled = config.test_diffusers && config.update_branch !== "main"
let testDiffusersEnabled = true
if (config.use_v3_engine === false) {
testDiffusersEnabled = false
}
testDiffusers.checked = testDiffusersEnabled
document.querySelector("#test_diffusers").checked = testDiffusers.checked // don't break plugins
if (config.config_on_startup) {
if (config.config_on_startup?.test_diffusers && config.update_branch !== "main") {
if (config.config_on_startup?.use_v3_engine) {
document.body.classList.add("diffusers-enabled-on-startup")
document.body.classList.remove("diffusers-disabled-on-startup")
} else {
@ -437,20 +497,36 @@ async function getAppConfig() {
if (!testDiffusersEnabled) {
document.querySelector("#lora_model_container").style.display = "none"
document.querySelector("#tiling_container").style.display = "none"
document.querySelector("#controlnet_model_container").style.display = "none"
document.querySelector("#hypernetwork_model_container").style.display = ""
document.querySelector("#hypernetwork_strength_container").style.display = ""
document.querySelector("#negative-embeddings-button").style.display = "none"
document.querySelectorAll("#sampler_name option.diffusers-only").forEach((option) => {
option.style.display = "none"
})
IMAGE_STEP_SIZE = 64
customWidthField.step = IMAGE_STEP_SIZE
customHeightField.step = IMAGE_STEP_SIZE
} else {
document.querySelector("#lora_model_container").style.display = ""
document.querySelector("#tiling_container").style.display = ""
document.querySelector("#controlnet_model_container").style.display = ""
document.querySelector("#hypernetwork_model_container").style.display = "none"
document.querySelector("#hypernetwork_strength_container").style.display = "none"
document.querySelectorAll("#sampler_name option.k_diffusion-only").forEach((option) => {
option.style.display = "none"
})
document.querySelector("#clip_skip_config").classList.remove("displayNone")
document.querySelector("#embeddings-button").classList.remove("displayNone")
document.querySelector("#negative-embeddings-button").classList.remove("displayNone")
IMAGE_STEP_SIZE = 8
customWidthField.step = IMAGE_STEP_SIZE
customHeightField.step = IMAGE_STEP_SIZE
}
if (config.force_save_metadata) {
metadataOutputFormatField.value = config.force_save_metadata
}
console.log("get config status response", config)
@ -566,7 +642,7 @@ function setDeviceInfo(devices) {
function ID_TO_TEXT(d) {
let info = devices.all[d]
if ("mem_free" in info && "mem_total" in info) {
if ("mem_free" in info && "mem_total" in info && info["mem_total"] > 0) {
return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(
1
)} Gb total)</small>`
@ -582,6 +658,23 @@ function setDeviceInfo(devices) {
systemInfoEl.querySelector("#system-info-cpu").innerText = cpu
systemInfoEl.querySelector("#system-info-gpus-all").innerHTML = allGPUs.join("</br>")
systemInfoEl.querySelector("#system-info-rendering-devices").innerHTML = activeGPUs.join("</br>")
// tensorRT
if (devices.active && testDiffusers.checked && devices.enable_trt === true) {
let nvidiaGPUs = Object.keys(devices.active).filter((d) => {
let gpuName = devices.active[d].name
gpuName = gpuName.toLowerCase()
return (
gpuName.includes("nvidia") ||
gpuName.includes("geforce") ||
gpuName.includes("quadro") ||
gpuName.includes("tesla")
)
})
if (nvidiaGPUs.length > 0) {
document.querySelector("#install-extras-container").classList.remove("displayNone")
}
}
}
function setHostInfo(hosts) {
@ -647,10 +740,13 @@ async function getSystemInfo() {
force = res["enforce_output_dir"]
if (force == true) {
saveToDiskField.checked = true
metadataOutputFormatField.disabled = false
metadataOutputFormatField.disabled = res["enforce_output_metadata"]
diskPathField.disabled = true
}
saveToDiskField.disabled = force
diskPathField.disabled = force
} else {
diskPathField.disabled = !saveToDiskField.checked
metadataOutputFormatField.disabled = !saveToDiskField.checked
}
setDiskPath(res["default_output_dir"], force)
} catch (e) {
@ -743,11 +839,3 @@ navigator.permissions.query({ name: "clipboard-write" }).then(function(result) {
})
document.addEventListener("system_info_update", (e) => setDeviceInfo(e.detail))
useBetaChannelField.addEventListener('change', (e) => {
if (e.target.checked) {
getParameterSettingsEntry("test_diffusers").classList.remove('displayNone')
} else {
getParameterSettingsEntry("test_diffusers").classList.add('displayNone')
}
})

View File

@ -118,13 +118,16 @@ class ModelDropdown {
)
}
saveCurrentSelection(elem, value, path) {
saveCurrentSelection(elem, value, path, dispatchEvent = true) {
this.currentSelection.elem = elem
this.currentSelection.value = value
this.currentSelection.path = path
this.modelFilter.dataset.path = path
this.modelFilter.value = value
this.modelFilter.dispatchEvent(new Event("change"))
if (dispatchEvent) {
this.modelFilter.dispatchEvent(new Event("change"))
}
}
processClick(e) {
@ -348,13 +351,13 @@ class ModelDropdown {
}
}
selectEntry(path) {
selectEntry(path, dispatchEvent = true) {
if (path !== undefined) {
const entries = this.modelElements
for (const elem of entries) {
if (elem.dataset.path == path) {
this.saveCurrentSelection(elem, elem.innerText, elem.dataset.path)
this.saveCurrentSelection(elem, elem.innerText, elem.dataset.path, dispatchEvent)
this.highlightedModelEntry = elem
elem.scrollIntoView({ block: "nearest" })
break
@ -529,7 +532,7 @@ class ModelDropdown {
rootModelList.style.minWidth = modelFilterStyle.width
})
this.selectEntry(this.activeModel)
this.selectEntry(this.activeModel, false)
}
/**
@ -552,17 +555,23 @@ class ModelDropdown {
this.createModelNodeList(`${folderName || ""}/${childFolderName}`, childModels, false)
)
} else {
let modelId = model
let modelName = model
if (typeof model === "object") {
modelId = Object.keys(model)[0]
modelName = model[modelId]
}
const classes = ["model-file"]
if (isRootFolder) {
classes.push("in-root-folder")
}
// Remove the leading slash from the model path
const fullPath = folderName ? `${folderName.substring(1)}/${model}` : model
const fullPath = folderName ? `${folderName.substring(1)}/${modelId}` : modelId
modelsMap.set(
model,
modelId,
createElement("li", { "data-path": fullPath }, classes, [
createElement("i", undefined, ["fa-regular", "fa-file", "icon"]),
model,
modelName,
])
)
}
@ -643,22 +652,6 @@ async function getModels(scanForMalicious = true) {
makeImageBtn.disabled = true
}
/* This code should no longer be needed. Commenting out for now, will cleanup later.
const sd_model_setting_key = "stable_diffusion_model"
const vae_model_setting_key = "vae_model"
const hypernetwork_model_key = "hypernetwork_model"
const stableDiffusionOptions = modelsOptions['stable-diffusion']
const vaeOptions = modelsOptions['vae']
const hypernetworkOptions = modelsOptions['hypernetwork']
// TODO: set default for model here too
SETTINGS[sd_model_setting_key].default = stableDiffusionOptions[0]
if (getSetting(sd_model_setting_key) == '' || SETTINGS[sd_model_setting_key].value == '') {
setSetting(sd_model_setting_key, stableDiffusionOptions[0])
}
*/
// notify ModelDropdown objects to refresh
document.dispatchEvent(new Event("refreshModels"))
} catch (e) {
@ -667,4 +660,7 @@ async function getModels(scanForMalicious = true) {
}
// reload models button
document.querySelector("#reload-models").addEventListener("click", () => getModels())
document.querySelector("#reload-models").addEventListener("click", (e) => {
e.stopPropagation()
getModels()
})

ui/media/js/task-manager.js (new file, 409 lines)
View File

@ -0,0 +1,409 @@
const htmlTaskMap = new WeakMap()
const pauseBtn = document.querySelector("#pause")
const resumeBtn = document.querySelector("#resume")
const processOrder = document.querySelector("#process_order_toggle")
let pauseClient = false
async function onIdle() {
const serverCapacity = SD.serverCapacity
if (pauseClient === true) {
await resumeClient()
}
for (const taskEntry of getUncompletedTaskEntries()) {
if (SD.activeTasks.size >= serverCapacity) {
break
}
const task = htmlTaskMap.get(taskEntry)
if (!task) {
const taskStatusLabel = taskEntry.querySelector(".taskStatusLabel")
taskStatusLabel.style.display = "none"
continue
}
await onTaskStart(task)
}
}
function getUncompletedTaskEntries() {
const taskEntries = Array.from(document.querySelectorAll("#preview .imageTaskContainer .taskStatusLabel"))
.filter((taskLabel) => taskLabel.style.display !== "none")
.map(function(taskLabel) {
let imageTaskContainer = taskLabel.parentNode
while (!imageTaskContainer.classList.contains("imageTaskContainer") && imageTaskContainer.parentNode) {
imageTaskContainer = imageTaskContainer.parentNode
}
return imageTaskContainer
})
if (!processOrder.checked) {
taskEntries.reverse()
}
return taskEntries
}
async function onTaskStart(task) {
if (!task.isProcessing || task.batchesDone >= task.batchCount) {
return
}
if (typeof task.startTime !== "number") {
task.startTime = Date.now()
}
if (!("instances" in task)) {
task["instances"] = []
}
task["stopTask"].innerHTML = '<i class="fa-solid fa-circle-stop"></i> Stop'
task["taskStatusLabel"].innerText = "Starting"
task["taskStatusLabel"].classList.add("waitingTaskLabel")
if (task.previewTaskReq !== undefined) {
let controlImagePreview = task.taskConfig.querySelector(".controlnet-img-preview > img")
try {
let result = await SD.filter(task.previewTaskReq)
controlImagePreview.src = result.output[0]
let controlImageLargePreview = task.taskConfig.querySelector(
".controlnet-img-preview .task-fs-initimage img"
)
controlImageLargePreview.src = controlImagePreview.src
} catch (error) {
console.log("filter error", error)
}
delete task.previewTaskReq
}
let newTaskReqBody = task.reqBody
if (task.batchCount > 1) {
// Each output render batch needs its own task reqBody instance to avoid altering the other runs after they are completed.
newTaskReqBody = Object.assign({}, task.reqBody)
if (task.batchesDone == task.batchCount - 1) {
// Last batch of the task
// If the number of parallel jobs is not a factor of the total number of images, the last batch must create fewer than "parallel jobs count" images
// E.g. with numOutputsTotal = 6 and num_outputs = 5, the last batch shall only generate 1 image.
newTaskReqBody.num_outputs = task.numOutputsTotal - task.reqBody.num_outputs * (task.batchCount - 1)
}
}
const startSeed = task.seed || newTaskReqBody.seed
const genSeeds = Boolean(
typeof newTaskReqBody.seed !== "number" || (newTaskReqBody.seed === task.seed && task.numOutputsTotal > 1)
)
if (genSeeds) {
newTaskReqBody.seed = parseInt(startSeed) + task.batchesDone * task.reqBody.num_outputs
}
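// Illustrative example (hypothetical values): with startSeed = 100 and num_outputs = 4,
// successive batches are submitted with seed 100, 104, 108, ... so every image in the
// task gets a distinct, reproducible seed.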
const outputContainer = document.createElement("div")
outputContainer.className = "img-batch"
task.outputContainer.insertBefore(outputContainer, task.outputContainer.firstChild)
const eventInfo = { reqBody: newTaskReqBody }
const callbacksPromises = PLUGINS["TASK_CREATE"].map((hook) => {
if (typeof hook !== "function") {
console.error("The provided TASK_CREATE hook is not a function. Hook: %o", hook)
return Promise.reject(new Error("hook is not a function."))
}
try {
return Promise.resolve(hook.call(task, eventInfo))
} catch (err) {
console.error(err)
return Promise.reject(err)
}
})
await Promise.allSettled(callbacksPromises)
let instance = eventInfo.instance
if (!instance) {
const factory = PLUGINS.OUTPUTS_FORMATS.get(eventInfo.reqBody?.output_format || newTaskReqBody.output_format)
if (factory) {
instance = await Promise.resolve(factory(eventInfo.reqBody || newTaskReqBody))
}
if (!instance) {
console.error(
`${factory ? "Factory " + String(factory) : "No factory defined"} for output format ${eventInfo.reqBody
?.output_format || newTaskReqBody.output_format}. Instance is ${instance ||
"undefined"}. Using default renderer.`
)
instance = new SD.RenderTask(eventInfo.reqBody || newTaskReqBody)
}
}
task["instances"].push(instance)
task.batchesDone++
document.dispatchEvent(new CustomEvent("before_task_start", { detail: { task: task } }))
instance.enqueue(getTaskUpdater(task, newTaskReqBody, outputContainer)).then(
(renderResult) => {
onRenderTaskCompleted(task, newTaskReqBody, instance, outputContainer, renderResult)
},
(reason) => {
onTaskErrorHandler(task, newTaskReqBody, instance, reason)
}
)
document.dispatchEvent(new CustomEvent("after_task_start", { detail: { task: task } }))
}
function getTaskUpdater(task, reqBody, outputContainer) {
const outputMsg = task["outputMsg"]
const progressBar = task["progressBar"]
const progressBarInner = progressBar.querySelector("div")
const batchCount = task.batchCount
let lastStatus = undefined
return async function(event) {
if (this.status !== lastStatus) {
lastStatus = this.status
switch (this.status) {
case SD.TaskStatus.pending:
task["taskStatusLabel"].innerText = "Pending"
task["taskStatusLabel"].classList.add("waitingTaskLabel")
break
case SD.TaskStatus.waiting:
task["taskStatusLabel"].innerText = "Waiting"
task["taskStatusLabel"].classList.add("waitingTaskLabel")
task["taskStatusLabel"].classList.remove("activeTaskLabel")
break
case SD.TaskStatus.processing:
case SD.TaskStatus.completed:
task["taskStatusLabel"].innerText = "Processing"
task["taskStatusLabel"].classList.add("activeTaskLabel")
task["taskStatusLabel"].classList.remove("waitingTaskLabel")
break
case SD.TaskStatus.stopped:
break
case SD.TaskStatus.failed:
if (!SD.isServerAvailable()) {
logError(
"Stable Diffusion is still starting up, please wait. If this goes on beyond a few minutes, Stable Diffusion has probably crashed. Please check the error message in the command-line window.",
event,
outputMsg
)
} else if (typeof event?.response === "object") {
let msg = "Stable Diffusion had an error reading the response:<br/><pre>"
if (this.exception) {
msg += `Error: ${this.exception.message}<br/>`
}
try {
// 'Response': body stream already read
msg += "Read: " + (await event.response.text())
} catch (e) {
msg += "Unexpected end of stream. "
}
const bufferString = event.reader.bufferedString
if (bufferString) {
msg += "Buffered data: " + bufferString
}
msg += "</pre>"
logError(msg, event, outputMsg)
}
break
}
}
if ("update" in event) {
const stepUpdate = event.update
if (!("step" in stepUpdate)) {
return
}
// task.instances can be a mix of different tasks with uneven number of steps (Render Vs Filter Tasks)
const instancesWithProgressUpdates = task.instances.filter((instance) => instance.step !== undefined)
const overallStepCount =
instancesWithProgressUpdates.reduce(
(sum, instance) =>
sum +
(instance.isPending
? Math.max(0, instance.step || stepUpdate.step) /
(instance.total_steps || stepUpdate.total_steps)
: 1),
0 // Initial value
) * stepUpdate.total_steps // Scale to current number of steps.
const totalSteps = instancesWithProgressUpdates.reduce(
(sum, instance) => sum + (instance.total_steps || stepUpdate.total_steps),
stepUpdate.total_steps * (batchCount - task.batchesDone) // Initial value at (unstarted task count * Nbr of steps)
)
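// Worked example (hypothetical numbers): batchCount = 2, the first batch is rendering at
// step 10 of 20 and the second has not started yet (batchesDone = 1). Then
// overallStepCount = (10 / 20) * 20 = 10 and totalSteps = 20 + 20 * (2 - 1) = 40,
// so the progress bar below shows 25%.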
const percent = Math.min(100, 100 * (overallStepCount / totalSteps)).toFixed(0)
const timeTaken = stepUpdate.step_time // sec
const stepsRemaining = Math.max(0, totalSteps - overallStepCount)
const timeRemaining = timeTaken < 0 ? "" : millisecondsToStr(stepsRemaining * timeTaken * 1000)
outputMsg.innerHTML = `Batch ${task.batchesDone} of ${batchCount}. Generating image(s): ${percent}%. Time remaining (approx): ${timeRemaining}`
outputMsg.style.display = "block"
progressBarInner.style.width = `${percent}%`
if (stepUpdate.output) {
document.dispatchEvent(
new CustomEvent("on_task_step", {
detail: {
task: task,
reqBody: reqBody,
stepUpdate: stepUpdate,
outputContainer: outputContainer,
},
})
)
}
}
}
}
function onRenderTaskCompleted(task, reqBody, instance, outputContainer, stepUpdate) {
if (typeof stepUpdate === "object") {
if (stepUpdate.status === "succeeded") {
document.dispatchEvent(
new CustomEvent("on_render_task_success", {
detail: {
task: task,
reqBody: reqBody,
stepUpdate: stepUpdate,
outputContainer: outputContainer,
},
})
)
} else {
task.isProcessing = false
document.dispatchEvent(
new CustomEvent("on_render_task_fail", {
detail: {
task: task,
reqBody: reqBody,
stepUpdate: stepUpdate,
outputContainer: outputContainer,
},
})
)
}
}
if (task.isProcessing && task.batchesDone < task.batchCount) {
task["taskStatusLabel"].innerText = "Pending"
task["taskStatusLabel"].classList.add("waitingTaskLabel")
task["taskStatusLabel"].classList.remove("activeTaskLabel")
return
}
if ("instances" in task && task.instances.some((ins) => ins != instance && ins.isPending)) {
return
}
task.isProcessing = false
task["stopTask"].innerHTML = '<i class="fa-solid fa-trash-can"></i> Remove'
task["taskStatusLabel"].style.display = "none"
let time = millisecondsToStr(Date.now() - task.startTime)
if (task.batchesDone == task.batchCount) {
if (!task.outputMsg.innerText.toLowerCase().includes("error")) {
task.outputMsg.innerText = `Processed ${task.numOutputsTotal} images in ${time}`
}
task.progressBar.style.height = "0px"
task.progressBar.style.border = "0px solid var(--background-color3)"
task.progressBar.classList.remove("active")
// setStatus("request", "done", "success")
} else {
task.outputMsg.innerText += `. Task ended after ${time}`
}
// if (randomSeedField.checked) { // we already update this before the task starts
// seedField.value = task.seed
// }
if (SD.activeTasks.size > 0) {
return
}
const uncompletedTasks = getUncompletedTaskEntries()
if (uncompletedTasks && uncompletedTasks.length > 0) {
return
}
if (pauseClient) {
resumeBtn.click()
}
document.dispatchEvent(
new CustomEvent("on_all_tasks_complete", {
detail: {},
})
)
}
function resumeClient() {
if (pauseClient) {
document.body.classList.remove("wait-pause")
document.body.classList.add("pause")
}
return new Promise((resolve) => {
let playbuttonclick = function() {
resumeBtn.removeEventListener("click", playbuttonclick)
resolve("resolved")
}
resumeBtn.addEventListener("click", playbuttonclick)
})
}
function abortTask(task) {
if (!task.isProcessing) {
return false
}
task.isProcessing = false
task.progressBar.classList.remove("active")
task["taskStatusLabel"].style.display = "none"
task["stopTask"].innerHTML = '<i class="fa-solid fa-trash-can"></i> Remove'
if (!task.instances?.some((r) => r.isPending)) {
return
}
task.instances.forEach((instance) => {
try {
instance.abort()
} catch (e) {
console.error(e)
}
})
}
async function stopAllTasks() {
getUncompletedTaskEntries().forEach((taskEntry) => {
const taskStatusLabel = taskEntry.querySelector(".taskStatusLabel")
if (taskStatusLabel) {
taskStatusLabel.style.display = "none"
}
const task = htmlTaskMap.get(taskEntry)
if (!task) {
return
}
abortTask(task)
})
}
function onTaskErrorHandler(task, reqBody, instance, reason) {
if (!task.isProcessing) {
return
}
console.log("Render request %o, Instance: %o, Error: %s", reqBody, instance, reason)
abortTask(task)
const outputMsg = task["outputMsg"]
logError(
"Stable Diffusion had an error. Please check the logs in the command-line window. <br/><br/>" +
reason +
"<br/><pre>" +
reason.stack +
"</pre>",
task,
outputMsg
)
// setStatus("request", "error", "error")
}
pauseBtn.addEventListener("click", function() {
pauseClient = true
pauseBtn.style.display = "none"
resumeBtn.style.display = "inline"
document.body.classList.add("wait-pause")
})
resumeBtn.addEventListener("click", function() {
pauseClient = false
resumeBtn.style.display = "none"
pauseBtn.style.display = "inline"
document.body.classList.remove("pause")
document.body.classList.remove("wait-pause")
})

View File

@ -1097,6 +1097,48 @@ async function deleteKeys(keyToDelete) {
}
}
/**
* @param {String} dataUrl Data URL of the image
* @param {Integer} x Top-left X-coordinate of the crop area
* @param {Integer} y Top-left Y-coordinate of the crop area
* @param {Integer} width Width of the crop area
* @param {Integer} height Height of the crop area
* @return {Promise<String>} Data URL (PNG) of the cropped image
*/
function cropImageDataUrl(dataUrl, x, y, width, height) {
return new Promise((resolve, reject) => {
const image = new Image()
image.src = dataUrl
image.onload = () => {
const canvas = document.createElement('canvas')
canvas.width = width
canvas.height = height
const ctx = canvas.getContext('2d')
ctx.drawImage(image, x, y, width, height, 0, 0, width, height)
const croppedDataUrl = canvas.toDataURL('image/png')
resolve(croppedDataUrl)
}
image.onerror = (error) => {
reject(error)
}
})
}
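// Usage sketch (illustrative only; someDataUrl and thumbnailImg are placeholders):
//     cropImageDataUrl(someDataUrl, 0, 0, 512, 512).then((cropped) => {
//         thumbnailImg.src = cropped
//     })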
/**
* @param {String} html HTML representing a single element
* @return {Element}
*/
function htmlToElement(html) {
var template = document.createElement('template');
html = html.trim(); // Never return a text node of whitespace as the result
template.innerHTML = html;
return template.content.firstChild;
}
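// Usage sketch (illustrative only): htmlToElement('<button class="primaryButton">OK</button>')
// returns the <button> element itself, ready to be passed to appendChild().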
function modalDialogCloseOnBackdropClick(dialog) {
dialog.addEventListener('mousedown', function (event) {
// Firefox creates an event with clientX|Y = 0|0 when choosing an <option>.
@ -1156,4 +1198,37 @@ function makeDialogDraggable(element) {
})() )
}
function logMsg(msg, level, outputMsg) {
if (outputMsg.hasChildNodes()) {
outputMsg.appendChild(document.createElement("br"))
}
if (level === "error") {
outputMsg.innerHTML += '<span style="color: red">Error: ' + msg + "</span>"
} else if (level === "warn") {
outputMsg.innerHTML += '<span style="color: orange">Warning: ' + msg + "</span>"
} else {
outputMsg.innerText += msg
}
console.log(level, msg)
}
function logError(msg, res, outputMsg) {
logMsg(msg, "error", outputMsg)
console.log("request error", res)
console.trace()
// setStatus("request", "error", "error")
}
function playSound() {
const audio = new Audio("/media/ding.mp3")
audio.volume = 0.2
var promise = audio.play()
if (promise !== undefined) {
promise
.then((_) => {})
.catch((error) => {
console.warn("browser blocked autoplay")
})
}
}

File diff suppressed because it is too large

View File

@ -0,0 +1,94 @@
/*
LoRA Prompt Parser 1.0
by Patrice
Copying and pasting a prompt with a LoRA tag will automatically select the corresponding option in the Easy Diffusion dropdown and remove the LoRA tag from the prompt. The LoRA must already be available in the corresponding Easy Diffusion dropdown (this is not a LoRA downloader).
*/
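// Example (hypothetical model name): pasting "a castle at night, <lora:pixelArt:0.8>" leaves
// "a castle at night" in the prompt field and selects the "pixelArt" LoRA at weight 0.8,
// provided a LoRA named "pixelArt" exists in the dropdown.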
(function() {
"use strict"
promptField.addEventListener('input', function(e) {
let loraExtractSetting = document.getElementById("extract_lora_from_prompt")
if (!loraExtractSetting.checked) {
return
}
const { LoRA, prompt } = extractLoraTags(e.target.value);
//console.log('e.target: ' + JSON.stringify(LoRA));
if (LoRA !== null && LoRA.length > 0) {
promptField.value = prompt.replace(/,+$/, ''); // remove any trailing ,
if (testDiffusers?.checked === false) {
showToast("LoRA's are only supported with diffusers. Just stripping the LoRA tag from the prompt.")
}
}
if (LoRA !== null && LoRA.length > 0 && testDiffusers?.checked) {
let modelNames = LoRA.map(e => e.lora_model_0)
let modelWeights = LoRA.map(e => e.lora_alpha_0)
loraModelField.value = {modelNames: modelNames, modelWeights: modelWeights}
showToast("Prompt successfully processed")
}
//promptField.dispatchEvent(new Event('change'));
});
// extract LoRA tags from strings
function extractLoraTags(prompt) {
// Define the regular expression for the tags
const regex = /<(?:lora|lyco):([^:>]+)(?::([^:>]*))?(?::([^:>]*))?>/gi
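// Capture groups: 1 = model file name, 2 = optional weight, 3 = optional block weights.
// e.g. "<lora:pixelArt:0.8>" captures ["pixelArt", "0.8", undefined] and "<lyco:styleX>"
// captures ["styleX", undefined, undefined] (model names are illustrative).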
// Initialize an array to hold the matches
let matches = []
// Iterate over the string, finding matches
for (const match of prompt.matchAll(regex)) {
const modelFileName = match[1].trim()
const loraPathes = getAllModelPathes("lora", modelFileName)
if (loraPathes.length > 0) {
const loraPath = loraPathes[0]
// Initialize an object to hold a match
let loraTag = {
lora_model_0: loraPath,
}
//console.log("Model:" + modelFileName);
// If weight is provided, add it to the loraTag object
if (match[2] !== undefined && match[2] !== '') {
loraTag.lora_alpha_0 = parseFloat(match[2].trim())
} else {
loraTag.lora_alpha_0 = 0.5
}
// If blockweights are provided, add them to the loraTag object
if (match[3] !== undefined && match[3] !== '') {
loraTag.blockweights = match[3].trim()
}
// Add the loraTag object to the array of matches
matches.push(loraTag);
//console.log(JSON.stringify(matches));
} else {
showToast("LoRA not found: " + match[1].trim(), 5000, true)
}
}
// Clean up the prompt string, e.g. from "apple, banana, <lora:...>, orange, <lora:...> , pear <lora:...>, <lora:...>" to "apple, banana, orange, pear"
let cleanedPrompt = prompt.replace(regex, '').replace(/(\s*,\s*(?=\s*,|$))|(^\s*,\s*)|\s+/g, ' ').trim();
//console.log('Matches: ' + JSON.stringify(matches));
// Return the array of matches and cleaned prompt string
return {
LoRA: matches,
prompt: cleanedPrompt
}
}
})()

View File

@ -1,454 +0,0 @@
;(function() {
"use strict"
///////////////////// Function section
function smoothstep(x) {
return x * x * (3 - 2 * x)
}
function smootherstep(x) {
return x * x * x * (x * (x * 6 - 15) + 10)
}
function smootheststep(x) {
let y = -20 * Math.pow(x, 7)
y += 70 * Math.pow(x, 6)
y -= 84 * Math.pow(x, 5)
y += 35 * Math.pow(x, 4)
return y
}
function getCurrentTime() {
const now = new Date()
let hours = now.getHours()
let minutes = now.getMinutes()
let seconds = now.getSeconds()
hours = hours < 10 ? `0${hours}` : hours
minutes = minutes < 10 ? `0${minutes}` : minutes
seconds = seconds < 10 ? `0${seconds}` : seconds
return `${hours}:${minutes}:${seconds}`
}
function addLogMessage(message) {
const logContainer = document.getElementById("merge-log")
logContainer.innerHTML += `<i>${getCurrentTime()}</i> ${message}<br>`
// Scroll to the bottom of the log
logContainer.scrollTop = logContainer.scrollHeight
document.querySelector("#merge-log-container").style.display = "block"
}
function addLogSeparator() {
const logContainer = document.getElementById("merge-log")
logContainer.innerHTML += "<hr>"
logContainer.scrollTop = logContainer.scrollHeight
}
function drawDiagram(fn) {
const SIZE = 300
const canvas = document.getElementById("merge-canvas")
canvas.height = canvas.width = SIZE
const ctx = canvas.getContext("2d")
// Draw coordinate system
ctx.scale(1, -1)
ctx.translate(0, -canvas.height)
ctx.lineWidth = 1
ctx.beginPath()
ctx.strokeStyle = "white"
ctx.moveTo(0, 0)
ctx.lineTo(0, SIZE)
ctx.lineTo(SIZE, SIZE)
ctx.lineTo(SIZE, 0)
ctx.lineTo(0, 0)
ctx.lineTo(SIZE, SIZE)
ctx.stroke()
ctx.beginPath()
ctx.setLineDash([1, 2])
const n = SIZE / 10
for (let i = n; i < SIZE; i += n) {
ctx.moveTo(0, i)
ctx.lineTo(SIZE, i)
ctx.moveTo(i, 0)
ctx.lineTo(i, SIZE)
}
ctx.stroke()
ctx.beginPath()
ctx.setLineDash([])
ctx.beginPath()
ctx.strokeStyle = "black"
ctx.lineWidth = 3
// Plot function
const numSamples = 20
for (let i = 0; i <= numSamples; i++) {
const x = i / numSamples
const y = fn(x)
const canvasX = x * SIZE
const canvasY = y * SIZE
if (i === 0) {
ctx.moveTo(canvasX, canvasY)
} else {
ctx.lineTo(canvasX, canvasY)
}
}
ctx.stroke()
// Plot alpha values (yellow boxes)
let start = parseFloat(document.querySelector("#merge-start").value)
let step = parseFloat(document.querySelector("#merge-step").value)
let iterations = document.querySelector("#merge-count").value >> 0
ctx.beginPath()
ctx.fillStyle = "yellow"
for (let i = 0; i < iterations; i++) {
const alpha = (start + i * step) / 100
const x = alpha * SIZE
const y = fn(alpha) * SIZE
if (x <= SIZE) {
ctx.rect(x - 3, y - 3, 6, 6)
ctx.fill()
} else {
ctx.strokeStyle = "red"
ctx.moveTo(0, 0)
ctx.lineTo(0, SIZE)
ctx.lineTo(SIZE, SIZE)
ctx.lineTo(SIZE, 0)
ctx.lineTo(0, 0)
ctx.lineTo(SIZE, SIZE)
ctx.stroke()
addLogMessage("<i>Warning: maximum ratio is &#8805; 100%</i>")
}
}
}
function updateChart() {
let fn = (x) => x
switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep":
fn = smoothstep
break
case "SmootherStep":
fn = smootherstep
break
case "SmoothestStep":
fn = smootheststep
break
}
drawDiagram(fn)
}
createTab({
id: "merge",
icon: "fa-code-merge",
label: "Merge models",
css: `
#tab-content-merge .tab-content-inner {
max-width: 100%;
padding: 10pt;
}
.merge-container {
margin-left: 15%;
margin-right: 15%;
text-align: left;
display: inline-grid;
grid-template-columns: 1fr 1fr;
grid-template-rows: auto auto auto;
gap: 0px 0px;
grid-auto-flow: row;
grid-template-areas:
"merge-input merge-config"
"merge-buttons merge-buttons";
}
.merge-container p {
margin-top: 3pt;
margin-bottom: 3pt;
}
.merge-config .tab-content {
background: var(--background-color1);
border-radius: 3pt;
}
.merge-config .tab-content-inner {
text-align: left;
}
.merge-input {
grid-area: merge-input;
padding-left:1em;
}
.merge-config {
grid-area: merge-config;
padding:1em;
}
.merge-config input {
margin-bottom: 3px;
}
.merge-config select {
margin-bottom: 3px;
}
.merge-buttons {
grid-area: merge-buttons;
padding:1em;
text-align: center;
}
#merge-button {
padding: 8px;
width:20em;
}
div#merge-log {
height:150px;
overflow-x:hidden;
overflow-y:scroll;
background:var(--background-color1);
border-radius: 3pt;
}
div#merge-log i {
color: hsl(var(--accent-hue), 100%, calc(2*var(--accent-lightness)));
font-family: monospace;
}
.disabled {
background: var(--background-color4);
color: var(--text-color);
}
#merge-type-tabs {
border-bottom: 1px solid black;
}
#merge-log-container {
display: none;
}
.merge-container #merge-warning {
color: rgb(153, 153, 153);
}`,
content: `
<div class="merge-container panel-box">
<div class="merge-input">
<p><label for="#mergeModelA">Select Model A:</label></p>
<input id="mergeModelA" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<p><label for="#mergeModelB">Select Model B:</label></p>
<input id="mergeModelB" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<br/><br/>
<p id="merge-warning"><small><b>Important:</b> Please merge models of similar type.<br/>For e.g. <code>SD 1.4</code> models with only <code>SD 1.4/1.5</code> models,<br/><code>SD 2.0</code> with <code>SD 2.0</code>-type, and <code>SD 2.1</code> with <code>SD 2.1</code>-type models.</small></p>
<br/>
<table>
<tr>
<td><label for="#merge-filename">Output file name:</label></td>
<td><input id="merge-filename" size=24> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Base name of the output file.<br>Mix ratio and file suffix will be appended to this.</span></i></td>
</tr>
<tr>
<td><label for="#merge-fp">Output precision:</label></td>
<td><select id="merge-fp">
<option value="fp16">fp16 (smaller file size)</option>
<option value="fp32">fp32 (larger file size)</option>
</select>
<i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Image generation uses fp16, so it's a good choice.<br>Use fp32 if you want to use the result models for more mixes</span></i>
</td>
</tr>
<tr>
<td><label for="#merge-format">Output file format:</label></td>
<td><select id="merge-format">
<option value="safetensors">Safetensors (recommended)</option>
<option value="ckpt">CKPT/Pickle (legacy format)</option>
</select>
</td>
</tr>
</table>
<br/>
<div id="merge-log-container">
<p><label for="#merge-log">Log messages:</label></p>
<div id="merge-log"></div>
</div>
</div>
<div class="merge-config">
<div class="tab-container">
<span id="tab-merge-opts-single" class="tab active">
<span>Make a single file</small></span>
</span>
<span id="tab-merge-opts-batch" class="tab">
<span>Make multiple variations</small></span>
</span>
</div>
<div>
<div id="tab-content-merge-opts-single" class="tab-content active">
<div class="tab-content-inner">
<small>Saves a single merged model file, at the specified merge ratio.</small><br/><br/>
<label for="#single-merge-ratio-slider">Merge ratio:</label>
<input id="single-merge-ratio-slider" name="single-merge-ratio-slider" class="editor-slider" value="50" type="range" min="1" max="1000">
<input id="single-merge-ratio" size=2 value="5">%
<i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Model A's contribution to the mix. The rest will be from Model B.</span></i>
</div>
</div>
<div id="tab-content-merge-opts-batch" class="tab-content">
<div class="tab-content-inner">
<small>Saves multiple variations of the model, at different merge ratios.<br/>Each variation will be saved as a separate file.</small><br/><br/>
<table>
<tr><td><label for="#merge-count">Number of variations:</label></td>
<td> <input id="merge-count" size=2 value="5"></td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Number of models to create</span></i></td></tr>
<tr><td><label for="#merge-start">Starting merge ratio:</label></td>
<td> <input id="merge-start" size=2 value="5">%</td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Smallest share of model A in the mix</span></i></td></tr>
<tr><td><label for="#merge-step">Increment each step:</label></td>
<td> <input id="merge-step" size=2 value="10">%</td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Share of model A added into the mix per step</span></i></td></tr>
<tr><td><label for="#merge-interpolation">Interpolation model:</label></td>
<td> <select id="merge-interpolation">
<option>Exact</option>
<option>SmoothStep</option>
<option>SmootherStep</option>
<option>SmoothestStep</option>
</select></td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Sigmoid function to be applied to the model share before mixing</span></i></td></tr>
</table>
<br/>
<small>Preview of variation ratios:</small><br/>
<canvas id="merge-canvas" width="400" height="400"></canvas>
</div>
</div>
</div>
</div>
<div class="merge-buttons">
<button id="merge-button" class="primaryButton">Merge models</button>
</div>
</div>`,
onOpen: ({ firstOpen }) => {
if (!firstOpen) {
return
}
const tabSettingsSingle = document.querySelector("#tab-merge-opts-single")
const tabSettingsBatch = document.querySelector("#tab-merge-opts-batch")
linkTabContents(tabSettingsSingle)
linkTabContents(tabSettingsBatch)
console.log("Activate")
let mergeModelAField = new ModelDropdown(document.querySelector("#mergeModelA"), "stable-diffusion")
let mergeModelBField = new ModelDropdown(document.querySelector("#mergeModelB"), "stable-diffusion")
updateChart()
// slider
const singleMergeRatioField = document.querySelector("#single-merge-ratio")
const singleMergeRatioSlider = document.querySelector("#single-merge-ratio-slider")
function updateSingleMergeRatio() {
singleMergeRatioField.value = singleMergeRatioSlider.value / 10
singleMergeRatioField.dispatchEvent(new Event("change"))
}
function updateSingleMergeRatioSlider() {
if (singleMergeRatioField.value < 0) {
singleMergeRatioField.value = 0
} else if (singleMergeRatioField.value > 100) {
singleMergeRatioField.value = 100
}
singleMergeRatioSlider.value = singleMergeRatioField.value * 10
singleMergeRatioSlider.dispatchEvent(new Event("change"))
}
singleMergeRatioSlider.addEventListener("input", updateSingleMergeRatio)
singleMergeRatioField.addEventListener("input", updateSingleMergeRatioSlider)
updateSingleMergeRatio()
document.querySelector(".merge-config").addEventListener("change", updateChart)
document.querySelector("#merge-button").addEventListener("click", async function(e) {
// Build request template
let model0 = mergeModelAField.value
let model1 = mergeModelBField.value
let request = { model0: model0, model1: model1 }
request["use_fp16"] = document.querySelector("#merge-fp").value == "fp16"
let iterations = document.querySelector("#merge-count").value >> 0
let start = parseFloat(document.querySelector("#merge-start").value)
let step = parseFloat(document.querySelector("#merge-step").value)
if (isTabActive(tabSettingsSingle)) {
start = parseFloat(singleMergeRatioField.value)
step = 0
iterations = 1
addLogMessage(`merge ratio = ${start}%`)
} else {
addLogMessage(`start = ${start}%`)
addLogMessage(`step = ${step}%`)
}
if (start + (iterations - 1) * step >= 100) {
addLogMessage("<i>Aborting: maximum ratio is &#8805; 100%</i>")
addLogMessage("Reduce the number of variations or the step size")
addLogSeparator()
document.querySelector("#merge-count").focus()
return
}
if (document.querySelector("#merge-filename").value == "") {
addLogMessage("<i>Aborting: No output file name specified</i>")
addLogSeparator()
document.querySelector("#merge-filename").focus()
return
}
// Disable merge button
e.target.disabled = true
e.target.classList.add("disabled")
let cursor = $("body").css("cursor")
let label = document.querySelector("#merge-button").innerHTML
$("body").css("cursor", "progress")
document.querySelector("#merge-button").innerHTML = "Merging models ..."
addLogMessage("Merging models")
addLogMessage("Model A: " + model0)
addLogMessage("Model B: " + model1)
// Batch main loop
for (let i = 0; i < iterations; i++) {
let alpha = (start + i * step) / 100
if (isTabActive(tabSettingsBatch)) {
switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep":
alpha = smoothstep(alpha)
break
case "SmootherStep":
alpha = smootherstep(alpha)
break
case "SmoothestStep":
alpha = smootheststep(alpha)
break
}
}
addLogMessage(`merging batch job ${i + 1}/${iterations}, alpha = ${alpha.toFixed(5)}...`)
request["out_path"] = document.querySelector("#merge-filename").value
request["out_path"] += "-" + alpha.toFixed(5) + "." + document.querySelector("#merge-format").value
addLogMessage(`&nbsp;&nbsp;filename: ${request["out_path"]}`)
// sdkit documentation: "ratio - the ratio of the second model. 1 means only the second model will be used."
request["ratio"] = 1-alpha
let res = await fetch("/model/merge", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(request),
})
const data = await res.json()
addLogMessage(JSON.stringify(data))
}
addLogMessage(
"<b>Done.</b> The models have been saved to your <tt>models/stable-diffusion</tt> folder."
)
addLogSeparator()
// Re-enable merge button
$("body").css("cursor", cursor)
document.querySelector("#merge-button").innerHTML = label
e.target.disabled = false
e.target.classList.remove("disabled")
// Update model list
stableDiffusionModelField.innerHTML = ""
vaeModelField.innerHTML = ""
hypernetworkModelField.innerHTML = ""
await getModels()
})
},
})
})()

View File

@ -0,0 +1,770 @@
;(function() {
"use strict"
let mergeCSS = `
/*********** Main tab ***********/
.tab-centered {
justify-content: center;
}
#model-tool-tab-content {
background-color: var(--background-color3);
}
#model-tool-tab-content .tab-content-inner {
text-align: initial;
}
#model-tool-tab-bar .tab {
margin-bottom: 0px;
border-top-left-radius: var(--input-border-radius);
background-color: var(--background-color3);
padding: 6px 6px 0.8em 6px;
}
#tab-content-merge .tab-content-inner {
max-width: 100%;
padding: 10pt;
}
/*********** Merge UI ***********/
.merge-model-container {
margin-left: 15%;
margin-right: 15%;
text-align: left;
display: inline-grid;
grid-template-columns: 1fr 1fr;
grid-template-rows: auto auto auto;
gap: 0px 0px;
grid-auto-flow: row;
grid-template-areas:
"merge-input merge-config"
"merge-buttons merge-buttons";
}
.merge-model-container p {
margin-top: 3pt;
margin-bottom: 3pt;
}
.merge-config .tab-content {
background: var(--background-color1);
border-radius: 3pt;
}
.merge-config .tab-content-inner {
text-align: left;
}
.merge-input {
grid-area: merge-input;
padding-left:1em;
}
.merge-config {
grid-area: merge-config;
padding:1em;
}
.merge-config input {
margin-bottom: 3px;
}
.merge-config select {
margin-bottom: 3px;
}
.merge-buttons {
grid-area: merge-buttons;
padding:1em;
text-align: center;
}
#merge-button {
padding: 8px;
width:20em;
}
div#merge-log {
height:150px;
overflow-x:hidden;
overflow-y:scroll;
background:var(--background-color1);
border-radius: 3pt;
}
div#merge-log i {
color: hsl(var(--accent-hue), 100%, calc(2*var(--accent-lightness)));
font-family: monospace;
}
.disabled {
background: var(--background-color4);
color: var(--text-color);
}
#merge-type-tabs {
border-bottom: 1px solid black;
}
#merge-log-container {
display: none;
}
.merge-model-container #merge-warning {
color: var(--small-label-color);
}
/*********** LORA UI ***********/
.lora-manager-grid {
display: grid;
gap: 0px 8px;
grid-auto-flow: row;
}
@media screen and (min-width: 1501px) {
.lora-manager-grid textarea {
height:350px;
}
.lora-manager-grid {
grid-template-columns: auto 1fr 1fr;
grid-template-rows: auto 1fr;
grid-template-areas:
"selector selector selector"
"thumbnail keywords notes";
}
}
@media screen and (min-width: 1001px) and (max-width: 1500px) {
.lora-manager-grid textarea {
height:250px;
}
.lora-manager-grid {
grid-template-columns: auto auto;
grid-template-rows: auto auto auto;
grid-template-areas:
"selector selector"
"thumbnail keywords"
"thumbnail notes";
}
}
@media screen and (max-width: 1000px) {
.lora-manager-grid textarea {
height:200px;
}
.lora-manager-grid {
grid-template-columns: auto;
grid-template-rows: auto auto auto auto;
grid-template-areas:
"selector"
"keywords"
"thumbnail"
"notes";
}
}
.lora-manager-grid-selector {
grid-area: selector;
justify-self: start;
}
.lora-manager-grid-thumbnail {
grid-area: thumbnail;
justify-self: center;
}
.lora-manager-grid-keywords {
grid-area: keywords;
}
.lora-manager-grid-notes {
grid-area: notes;
}
.lora-manager-grid p {
margin-bottom: 2px;
}
`
let mergeUI = `
<div class="merge-model-container panel-box">
<div class="merge-input">
<p><label for="#mergeModelA">Select Model A:</label></p>
<input id="mergeModelA" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<p><label for="#mergeModelB">Select Model B:</label></p>
<input id="mergeModelB" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<br/><br/>
<p id="merge-warning"><small><b>Important:</b> Please merge models of similar type.<br/>For e.g. <code>SD 1.4</code> models with only <code>SD 1.4/1.5</code> models,<br/><code>SD 2.0</code> with <code>SD 2.0</code>-type, and <code>SD 2.1</code> with <code>SD 2.1</code>-type models.</small></p>
<br/>
<table>
<tr>
<td><label for="#merge-filename">Output file name:</label></td>
<td><input id="merge-filename" size=24> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Base name of the output file.<br>Mix ratio and file suffix will be appended to this.</span></i></td>
</tr>
<tr>
<td><label for="#merge-fp">Output precision:</label></td>
<td><select id="merge-fp">
<option value="fp16">fp16 (smaller file size)</option>
<option value="fp32">fp32 (larger file size)</option>
</select>
<i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Image generation uses fp16, so it's a good choice.<br>Use fp32 if you want to use the result models for more mixes</span></i>
</td>
</tr>
<tr>
<td><label for="#merge-format">Output file format:</label></td>
<td><select id="merge-format">
<option value="safetensors">Safetensors (recommended)</option>
<option value="ckpt">CKPT/Pickle (legacy format)</option>
</select>
</td>
</tr>
</table>
<br/>
<div id="merge-log-container">
<p><label for="#merge-log">Log messages:</label></p>
<div id="merge-log"></div>
</div>
</div>
<div class="merge-config">
<div class="tab-container">
<span id="tab-merge-opts-single" class="tab active">
<span>Make a single file</span>
</span>
<span id="tab-merge-opts-batch" class="tab">
<span>Make multiple variations</span>
</span>
</div>
<div>
<div id="tab-content-merge-opts-single" class="tab-content active">
<div class="tab-content-inner">
<small>Saves a single merged model file, at the specified merge ratio.</small><br/><br/>
<label for="#single-merge-ratio-slider">Merge ratio:</label>
<input id="single-merge-ratio-slider" name="single-merge-ratio-slider" class="editor-slider" value="50" type="range" min="1" max="1000">
<input id="single-merge-ratio" size=2 value="5">%
<i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Model A's contribution to the mix. The rest will be from Model B.</span></i>
</div>
</div>
<div id="tab-content-merge-opts-batch" class="tab-content">
<div class="tab-content-inner">
<small>Saves multiple variations of the model, at different merge ratios.<br/>Each variation will be saved as a separate file.</small><br/><br/>
<table>
<tr><td><label for="#merge-count">Number of variations:</label></td>
<td> <input id="merge-count" size=2 value="5"></td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Number of models to create</span></i></td></tr>
<tr><td><label for="#merge-start">Starting merge ratio:</label></td>
<td> <input id="merge-start" size=2 value="5">%</td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Smallest share of model A in the mix</span></i></td></tr>
<tr><td><label for="#merge-step">Increment each step:</label></td>
<td> <input id="merge-step" size=2 value="10">%</td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Share of model A added into the mix per step</span></i></td></tr>
<tr><td><label for="#merge-interpolation">Interpolation model:</label></td>
<td> <select id="merge-interpolation">
<option>Exact</option>
<option>SmoothStep</option>
<option>SmootherStep</option>
<option>SmoothestStep</option>
</select></td>
<td> <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Sigmoid function to be applied to the model share before mixing</span></i></td></tr>
</table>
<br/>
<small>Preview of variation ratios:</small><br/>
<canvas id="merge-canvas" width="400" height="400"></canvas>
</div>
</div>
</div>
</div>
<div class="merge-buttons">
<button id="merge-button" class="primaryButton">Merge models</button>
</div>
</div>`
let loraUI=`
<div class="panel-box lora-manager-grid">
<div class="lora-manager-grid-selector">
<label for="#loraModel">Select Lora:</label>
<input id="loraModel" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
</div>
<div class="lora-manager-grid-thumbnail">
<p style="height:2em;">Thumbnail:</p>
<div style="position:relative; height:256px; width:256px;background-color:#222;border-radius:1em;margin-bottom:1em;">
<i id="lora-manager-image-placeholder" class="fa-regular fa-image" style="font-size:500%;color:#555;position:absolute; top: 50%; left: 50%; transform: translate(-50%,-50%);"></i>
<img id="lora-manager-image" class="displayNone" style="border-radius:6px;max-height:256px;max-width:256px;"/>
</div>
<div style="text-align:center;">
<button class="tertiaryButton" id="lora-manager-upload-button"><i class="fa-solid fa-upload"></i> Upload new thumbnail</button>
<input id="lora-manager-upload-input" name="lora-manager-upload-input" type="file" class="displayNone">
<!-- button class="tertiaryButton"><i class="fa-solid fa-trash-can"></i> Remove</button -->
</div>
</div>
<div class="lora-manager-grid-keywords">
<p style="height:2em;">Keywords:
<span style="float:right;margin-bottom:4px;"><button id="lora-keyword-from-civitai" class="tertiaryButton smallButton">Import from Civitai</button></span></p>
<textarea style="width:100%;resize:vertical;" id="lora-manager-keywords" placeholder="Put LORA specific keywords here..."></textarea>
<p style="color:var(--small-label-color);">
<b>LORA model keywords</b> can be used via the <code>+&nbsp;Embeddings</code> button. They get added to the embedding
keyword menu when the LORA has been selected in the image settings.
</p>
</div>
<div class="lora-manager-grid-notes">
<p style="height:2em;">Notes:</p>
<textarea style="width:100%;resize:vertical;" id="lora-manager-notes" placeholder="Place for things you want to remember..."></textarea>
<p id="civitai-section" class="displayNone">
<b>Civitai model page:</b>
<a id="civitai-model-page" target="_blank"></a>
</p>
</div>
</div>`
let tabHTML=`
<div id="model-tool-tab-bar" class="tab-container tab-centered">
<span id="tab-model-loraUI" class="tab active">
<span><i class="fa-solid fa-key"></i> Lora Keywords</small></span>
</span>
<span id="tab-model-mergeUI" class="tab">
<span><i class="fa-solid fa-code-merge"></i> Merge Models</small></span>
</span>
</div>
<div id="model-tool-tab-content" class="panel-box">
<div id="tab-content-model-loraUI" class="tab-content active">
<div class="tab-content-inner">
${loraUI}
</div>
</div>
<div id="tab-content-model-mergeUI" class="tab-content">
<div class="tab-content-inner">
${mergeUI}
</div>
</div>
</div>`
///////////////////// Function section
function smoothstep(x) {
return x * x * (3 - 2 * x)
}
function smootherstep(x) {
return x * x * x * (x * (x * 6 - 15) + 10)
}
function smootheststep(x) {
let y = -20 * Math.pow(x, 7)
y += 70 * Math.pow(x, 6)
y -= 84 * Math.pow(x, 5)
y += 35 * Math.pow(x, 4)
return y
}
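// All three curves map [0, 1] onto [0, 1] with fixed points at 0, 0.5 and 1, flattening
// progressively harder near the endpoints: smoothstep(0.25) = 0.15625, while
// smootherstep(0.25) is about 0.1035. They are used below to bias the batch merge ratios.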
function getCurrentTime() {
const now = new Date()
let hours = now.getHours()
let minutes = now.getMinutes()
let seconds = now.getSeconds()
hours = hours < 10 ? `0${hours}` : hours
minutes = minutes < 10 ? `0${minutes}` : minutes
seconds = seconds < 10 ? `0${seconds}` : seconds
return `${hours}:${minutes}:${seconds}`
}
function addLogMessage(message) {
const logContainer = document.getElementById("merge-log")
logContainer.innerHTML += `<i>${getCurrentTime()}</i> ${message}<br>`
// Scroll to the bottom of the log
logContainer.scrollTop = logContainer.scrollHeight
document.querySelector("#merge-log-container").style.display = "block"
}
function addLogSeparator() {
const logContainer = document.getElementById("merge-log")
logContainer.innerHTML += "<hr>"
logContainer.scrollTop = logContainer.scrollHeight
}
function drawDiagram(fn) {
const SIZE = 300
const canvas = document.getElementById("merge-canvas")
canvas.height = canvas.width = SIZE
const ctx = canvas.getContext("2d")
// Draw coordinate system
ctx.scale(1, -1)
ctx.translate(0, -canvas.height)
ctx.lineWidth = 1
ctx.beginPath()
ctx.strokeStyle = "white"
ctx.moveTo(0, 0)
ctx.lineTo(0, SIZE)
ctx.lineTo(SIZE, SIZE)
ctx.lineTo(SIZE, 0)
ctx.lineTo(0, 0)
ctx.lineTo(SIZE, SIZE)
ctx.stroke()
ctx.beginPath()
ctx.setLineDash([1, 2])
const n = SIZE / 10
for (let i = n; i < SIZE; i += n) {
ctx.moveTo(0, i)
ctx.lineTo(SIZE, i)
ctx.moveTo(i, 0)
ctx.lineTo(i, SIZE)
}
ctx.stroke()
ctx.beginPath()
ctx.setLineDash([])
ctx.beginPath()
ctx.strokeStyle = "black"
ctx.lineWidth = 3
// Plot function
const numSamples = 20
for (let i = 0; i <= numSamples; i++) {
const x = i / numSamples
const y = fn(x)
const canvasX = x * SIZE
const canvasY = y * SIZE
if (i === 0) {
ctx.moveTo(canvasX, canvasY)
} else {
ctx.lineTo(canvasX, canvasY)
}
}
ctx.stroke()
// Plot alpha values (yellow boxes)
let start = parseFloat(document.querySelector("#merge-start").value)
let step = parseFloat(document.querySelector("#merge-step").value)
let iterations = document.querySelector("#merge-count").value >> 0
ctx.beginPath()
ctx.fillStyle = "yellow"
for (let i = 0; i < iterations; i++) {
const alpha = (start + i * step) / 100
const x = alpha * SIZE
const y = fn(alpha) * SIZE
if (x <= SIZE) {
ctx.rect(x - 3, y - 3, 6, 6)
ctx.fill()
} else {
ctx.strokeStyle = "red"
ctx.moveTo(0, 0)
ctx.lineTo(0, SIZE)
ctx.lineTo(SIZE, SIZE)
ctx.lineTo(SIZE, 0)
ctx.lineTo(0, 0)
ctx.lineTo(SIZE, SIZE)
ctx.stroke()
addLogMessage("<i>Warning: maximum ratio is &#8805; 100%</i>")
}
}
}
function updateChart() {
let fn = (x) => x
switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep":
fn = smoothstep
break
case "SmootherStep":
fn = smootherstep
break
case "SmoothestStep":
fn = smootheststep
break
}
drawDiagram(fn)
}
function initMergeUI() {
const tabSettingsSingle = document.querySelector("#tab-merge-opts-single")
const tabSettingsBatch = document.querySelector("#tab-merge-opts-batch")
linkTabContents(tabSettingsSingle)
linkTabContents(tabSettingsBatch)
let mergeModelAField = new ModelDropdown(document.querySelector("#mergeModelA"), "stable-diffusion")
let mergeModelBField = new ModelDropdown(document.querySelector("#mergeModelB"), "stable-diffusion")
updateChart()
// slider
const singleMergeRatioField = document.querySelector("#single-merge-ratio")
const singleMergeRatioSlider = document.querySelector("#single-merge-ratio-slider")
function updateSingleMergeRatio() {
singleMergeRatioField.value = singleMergeRatioSlider.value / 10
singleMergeRatioField.dispatchEvent(new Event("change"))
}
function updateSingleMergeRatioSlider() {
if (singleMergeRatioField.value < 0) {
singleMergeRatioField.value = 0
} else if (singleMergeRatioField.value > 100) {
singleMergeRatioField.value = 100
}
singleMergeRatioSlider.value = singleMergeRatioField.value * 10
singleMergeRatioSlider.dispatchEvent(new Event("change"))
}
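// The slider spans 1..1000 while the text field holds a percentage, so these two handlers
// keep the controls in sync at a 10:1 scale, e.g. a slider value of 500 shows as 50 (%).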
singleMergeRatioSlider.addEventListener("input", updateSingleMergeRatio)
singleMergeRatioField.addEventListener("input", updateSingleMergeRatioSlider)
updateSingleMergeRatio()
document.querySelector(".merge-config").addEventListener("change", updateChart)
document.querySelector("#merge-button").addEventListener("click", async function(e) {
// Build request template
let model0 = mergeModelAField.value
let model1 = mergeModelBField.value
let request = { model0: model0, model1: model1 }
request["use_fp16"] = document.querySelector("#merge-fp").value == "fp16"
let iterations = document.querySelector("#merge-count").value >> 0
let start = parseFloat(document.querySelector("#merge-start").value)
let step = parseFloat(document.querySelector("#merge-step").value)
if (isTabActive(tabSettingsSingle)) {
start = parseFloat(singleMergeRatioField.value)
step = 0
iterations = 1
addLogMessage(`merge ratio = ${start}%`)
} else {
addLogMessage(`start = ${start}%`)
addLogMessage(`step = ${step}%`)
}
if (start + (iterations - 1) * step >= 100) {
addLogMessage("<i>Aborting: maximum ratio is &#8805; 100%</i>")
addLogMessage("Reduce the number of variations or the step size")
addLogSeparator()
document.querySelector("#merge-count").focus()
return
}
if (document.querySelector("#merge-filename").value == "") {
addLogMessage("<i>Aborting: No output file name specified</i>")
addLogSeparator()
document.querySelector("#merge-filename").focus()
return
}
// Disable merge button
e.target.disabled = true
e.target.classList.add("disabled")
let cursor = $("body").css("cursor")
let label = document.querySelector("#merge-button").innerHTML
$("body").css("cursor", "progress")
document.querySelector("#merge-button").innerHTML = "Merging models ..."
addLogMessage("Merging models")
addLogMessage("Model A: " + model0)
addLogMessage("Model B: " + model1)
// Batch main loop
for (let i = 0; i < iterations; i++) {
let alpha = (start + i * step) / 100
if (isTabActive(tabSettingsBatch)) {
switch (document.querySelector("#merge-interpolation").value) {
case "SmoothStep":
alpha = smoothstep(alpha)
break
case "SmootherStep":
alpha = smootherstep(alpha)
break
case "SmoothestStep":
alpha = smootheststep(alpha)
break
}
}
addLogMessage(`merging batch job ${i + 1}/${iterations}, alpha = ${alpha.toFixed(5)}...`)
request["out_path"] = document.querySelector("#merge-filename").value
request["out_path"] += "-" + alpha.toFixed(5) + "." + document.querySelector("#merge-format").value
addLogMessage(`&nbsp;&nbsp;filename: ${request["out_path"]}`)
// sdkit documentation: "ratio - the ratio of the second model. 1 means only the second model will be used."
request["ratio"] = 1-alpha
let res = await fetch("/model/merge", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(request),
})
const data = await res.json()
addLogMessage(JSON.stringify(data))
}
addLogMessage(
"<b>Done.</b> The models have been saved to your <tt>models/stable-diffusion</tt> folder."
)
addLogSeparator()
// Re-enable merge button
$("body").css("cursor", cursor)
document.querySelector("#merge-button").innerHTML = label
e.target.disabled = false
e.target.classList.remove("disabled")
// Update model list
stableDiffusionModelField.innerHTML = ""
vaeModelField.innerHTML = ""
hypernetworkModelField.innerHTML = ""
await getModels()
})
}
const LoraUI = {
modelField: undefined,
keywordsField: undefined,
notesField: undefined,
civitaiImportBtn: undefined,
civitaiSection: undefined,
civitaiAnchor: undefined,
image: undefined,
imagePlaceholder: undefined,
init() {
LoraUI.modelField = new ModelDropdown(document.querySelector("#loraModel"), "lora", "None")
LoraUI.keywordsField = document.querySelector("#lora-manager-keywords")
LoraUI.notesField = document.querySelector("#lora-manager-notes")
LoraUI.civitaiImportBtn = document.querySelector("#lora-keyword-from-civitai")
LoraUI.civitaiSection = document.querySelector("#civitai-section")
LoraUI.civitaiAnchor = document.querySelector("#civitai-model-page")
LoraUI.image = document.querySelector("#lora-manager-image")
LoraUI.imagePlaceholder = document.querySelector("#lora-manager-image-placeholder")
LoraUI.uploadBtn = document.querySelector("#lora-manager-upload-button")
LoraUI.uploadInput = document.querySelector("#lora-manager-upload-input")
LoraUI.modelField.addEventListener("change", LoraUI.updateFields)
LoraUI.keywordsField.addEventListener("focusout", LoraUI.saveInfos)
LoraUI.notesField.addEventListener("focusout", LoraUI.saveInfos)
LoraUI.civitaiImportBtn.addEventListener("click", LoraUI.importFromCivitai)
LoraUI.uploadBtn.addEventListener("click", (e) => LoraUI.uploadInput.click())
LoraUI.uploadInput.addEventListener("change", LoraUI.uploadLoraThumb)
document.addEventListener("saveThumb", LoraUI.updateFields)
LoraUI.updateFields()
},
uploadLoraThumb(e) {
console.log(e)
if (LoraUI.uploadInput.files.length === 0) {
return
}
let reader = new FileReader()
let file = LoraUI.uploadInput.files[0]
reader.addEventListener("load", (event) => {
let img = document.createElement("img")
img.src = reader.result
onUseAsThumbnailClick(
{
use_lora_model: LoraUI.modelField.value,
},
img
)
})
if (file) {
reader.readAsDataURL(file)
}
},
updateFields() {
document.getElementById("civitai-section").classList.add("displayNone")
Bucket.retrieve(`modelinfo/lora/${LoraUI.modelField.value}`)
.then((info) => {
if (info == null) {
LoraUI.keywordsField.value = ""
LoraUI.notesField.value = ""
LoraUI.hideCivitaiLink()
} else {
LoraUI.keywordsField.value = info.keywords.join("\n")
LoraUI.notesField.value = info.notes
if ("civitai" in info && info["civitai"] != null) {
LoraUI.showCivitaiLink(info.civitai)
} else {
LoraUI.hideCivitaiLink()
}
}
})
Bucket.getImageAsDataURL(`${profileNameField.value}/lora/${LoraUI.modelField.value}.png`)
.then((data) => {
LoraUI.image.src=data
LoraUI.image.classList.remove("displayNone")
LoraUI.imagePlaceholder.classList.add("displayNone")
})
.catch((error) => {
LoraUI.image.classList.add("displayNone")
LoraUI.imagePlaceholder.classList.remove("displayNone")
})
},
saveInfos() {
let info = {
keywords: LoraUI.keywordsField.value
.split("\n")
.filter((x) => (x != "")),
notes: LoraUI.notesField.value,
civitai: LoraUI.civitaiSection.checkVisibility() ? LoraUI.civitaiAnchor.href : null,
}
Bucket.store(`modelinfo/lora/${LoraUI.modelField.value}`, info)
},
importFromCivitai() {
document.body.style["cursor"] = "progress"
fetch("/sha256/lora/"+LoraUI.modelField.value)
.then((result) => result.json())
.then((json) => fetch("https://civitai.com/api/v1/model-versions/by-hash/" + json.digest))
.then((result) => result.json())
.then((json) => {
document.body.style["cursor"] = "default"
if (json == null) {
return
}
if ("trainedWords" in json) {
LoraUI.keywordsField.value = json["trainedWords"].join("\n")
} else {
showToast("No keyword info found.")
}
if ("modelId" in json) {
LoraUI.showCivitaiLink("https://civitai.com/models/" + json.modelId)
} else {
LoraUI.hideCivitaiLink()
}
LoraUI.saveInfos()
})
},
showCivitaiLink(href) {
LoraUI.civitaiSection.classList.remove("displayNone")
LoraUI.civitaiAnchor.href = href
LoraUI.civitaiAnchor.innerHTML = LoraUI.civitaiAnchor.href
},
hideCivitaiLink() {
LoraUI.civitaiSection.classList.add("displayNone")
}
}
createTab({
id: "merge",
icon: "fa-toolbox",
label: "Model tools",
css: mergeCSS,
content: tabHTML,
onOpen: ({ firstOpen }) => {
if (!firstOpen) {
return
}
initMergeUI()
LoraUI.init()
const tabMergeUI = document.querySelector("#tab-model-mergeUI")
const tabLoraUI = document.querySelector("#tab-model-loraUI")
linkTabContents(tabMergeUI)
linkTabContents(tabLoraUI)
},
})
})()
async function getLoraKeywords(model) {
return Bucket.retrieve(`modelinfo/lora/${model}`)
.then((info) => info ? info.keywords : [])
}
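// Usage sketch (illustrative model name): getLoraKeywords("pixelArt").then((kw) => console.log(kw))
// The returned promise resolves to the stored keyword list, or [] when no info has been saved.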

View File

@ -0,0 +1,80 @@
// christmas hack, courtesy: https://pajasevi.github.io/CSSnowflakes/
;(function(){
"use strict";
function makeItSnow() {
const styleSheet = document.createElement("style")
styleSheet.textContent = `
/* customizable snowflake styling */
.snowflake {
color: #fff;
font-size: 1em;
font-family: Arial, sans-serif;
text-shadow: 0 0 5px #000;
}
.snowflake,.snowflake .inner{animation-iteration-count:infinite;animation-play-state:running}@keyframes snowflakes-fall{0%{transform:translateY(0)}100%{transform:translateY(110vh)}}@keyframes snowflakes-shake{0%,100%{transform:translateX(0)}50%{transform:translateX(80px)}}.snowflake{position:fixed;top:-10%;z-index:9999;-webkit-user-select:none;user-select:none;cursor:default;animation-name:snowflakes-shake;animation-duration:3s;animation-timing-function:ease-in-out}.snowflake .inner{animation-duration:10s;animation-name:snowflakes-fall;animation-timing-function:linear}.snowflake:nth-of-type(0){left:1%;animation-delay:0s}.snowflake:nth-of-type(0) .inner{animation-delay:0s}.snowflake:first-of-type{left:10%;animation-delay:1s}.snowflake:first-of-type .inner,.snowflake:nth-of-type(8) .inner{animation-delay:1s}.snowflake:nth-of-type(2){left:20%;animation-delay:.5s}.snowflake:nth-of-type(2) .inner,.snowflake:nth-of-type(6) .inner{animation-delay:6s}.snowflake:nth-of-type(3){left:30%;animation-delay:2s}.snowflake:nth-of-type(11) .inner,.snowflake:nth-of-type(3) .inner{animation-delay:4s}.snowflake:nth-of-type(4){left:40%;animation-delay:2s}.snowflake:nth-of-type(10) .inner,.snowflake:nth-of-type(4) .inner{animation-delay:2s}.snowflake:nth-of-type(5){left:50%;animation-delay:3s}.snowflake:nth-of-type(5) .inner{animation-delay:8s}.snowflake:nth-of-type(6){left:60%;animation-delay:2s}.snowflake:nth-of-type(7){left:70%;animation-delay:1s}.snowflake:nth-of-type(7) .inner{animation-delay:2.5s}.snowflake:nth-of-type(8){left:80%;animation-delay:0s}.snowflake:nth-of-type(9){left:90%;animation-delay:1.5s}.snowflake:nth-of-type(9) .inner{animation-delay:3s}.snowflake:nth-of-type(10){left:25%;animation-delay:0s}.snowflake:nth-of-type(11){left:65%;animation-delay:2.5s}
`
document.head.appendChild(styleSheet)
const snowflakes = document.createElement("div")
snowflakes.id = "snowflakes-container"
snowflakes.innerHTML = `
<div class="snowflakes" aria-hidden="true">
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
</div>`
document.body.appendChild(snowflakes)
const script = document.createElement("script")
script.innerHTML = `
$(document).ready(function() {
setTimeout(function() {
$("#snowflakes-container").fadeOut("slow", function() {$(this).remove()})
}, 10 * 1000)
})
`
document.body.appendChild(script)
}
let date = new Date()
if (date.getMonth() === 11 && date.getDate() >= 12) {
makeItSnow()
}
})()