Compare commits


361 Commits
v1.2 ... v2.16

Author SHA1 Message Date
858a1c7ae0 Include C:\Windows\System32 in the path anyway, to avoid the errors some users have 2022-09-24 13:23:04 +05:30
906d90c304 v2.16 2022-09-24 09:36:09 +05:30
02c0bac71f Merge branch 'beta' 2022-09-23 21:22:58 +05:30
9bb091d31e Fix a bug where DDIM wouldn't get the correct steps from the UI after the first run 2022-09-23 21:22:44 +05:30
a0e201a9ef Merge branch 'beta' 2022-09-23 20:37:25 +05:30
119d5ba7ff Disable sampler selection if an image is used 2022-09-23 20:37:11 +05:30
a35454b1b3 Merge pull request #229 from cmdr2/beta
Live Preview, In-Painting Editor, New Sampler Options, VRAM (Memory) Usage Improvements, Progress Bar, Re-organized UI with more space for images, Update to the latest fork version
2022-09-23 20:31:21 +05:30
8cb340be9d Merge branch 'main' into beta 2022-09-23 20:19:12 +05:30
ae108bb603 Increase the max guidance scale 2022-09-23 17:19:11 +05:30
ca4229c732 Rearrange the UI, to use the bigger screen better, move the advanced settings to a different setting, links to help & community resources; cleaner image settings 2022-09-23 17:07:54 +05:30
5c827703a1 Report the steps for img2img correctly 2022-09-23 13:38:33 +05:30
a3de0820b3 Fix the 'Expected all tensors to be on the same device' error 2022-09-23 11:44:50 +05:30
83cb473a45 Fix the ddim_timesteps attribute missing error for txt2img with the ddim sampler 2022-09-23 11:14:06 +05:30
e7f9db5e56 Fix blurry img2img 2022-09-23 11:05:35 +05:30
c675caf3f9 Fix a missing ddim_steps error for the DDIM sampler 2022-09-23 00:54:35 +05:30
956b3d89db New samplers for txt2img: "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms" 2022-09-23 00:19:05 +05:30
b0c15bc430 Latest commit on basujindal's fork for the Linux version 2022-09-22 23:04:29 +05:30
b934a6b6e9 Compatibility mode for API calls made without the streaming flag 2022-09-22 22:55:57 +05:30
df73be495e Live preview warning 2022-09-22 22:23:42 +05:30
efca13c8c0 Live preview warning 2022-09-22 22:22:07 +05:30
e49b0b9381 Warning for live preview 2022-09-22 22:21:01 +05:30
a69cd85ed7 Bump version 2022-09-22 22:15:09 +05:30
7b520942dc Update to the latest commit on basujindal's SD fork; More VRAM garbage-collection; Speed up live preview by displaying only every 5th step 2022-09-22 22:14:25 +05:30
f98225cdb6 Better errors when the server fails 2022-09-22 18:04:11 +05:30
310abcd8b9 Improve the error reporting; Ensure that the stop button has been reset if there's an error 2022-09-22 15:47:38 +05:30
f42eaaea86 Fix incorrect patch for env 2022-09-22 12:48:11 +05:30
bfdb74979f Restrict to pandas 1.4.4, the new 1.5 version breaks the numpy dependency 2022-09-22 12:25:57 +05:30
28e002e248 Bump version 2022-09-21 23:00:19 +05:30
7d12dbd4b2 Free up VRAM when possible 2022-09-21 21:53:25 +05:30
6f60e71ea4 'almost done' is misleading sometimes 2022-09-21 20:39:14 +05:30
16c842366a Don't apply filters if the user stops a task 2022-09-21 19:29:27 +05:30
97ba151e09 Favicon 2022-09-21 18:28:43 +05:30
18f452d968 Fix another bug with placeholder images; Move the logic for whether show_only_filtered is applied to the server 2022-09-21 17:41:42 +05:30
bb6db783d8 Fix a bug where the stop button wouldn't go away after the task finished. This happened when the upscaling/face correction was on, and insufficient image placeholders were created 2022-09-21 16:26:13 +05:30
49ce302bd4 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-09-21 15:14:34 +05:30
95f01007a3 Fix a bug where negative steps remaining would mess up the countdown 2022-09-21 15:14:22 +05:30
108e516b80 Don't use colorama 2022-09-16 21:32:08 +05:30
1c5097b81b Hotfix for pywavelet version, attempt 1 2022-09-16 00:36:31 +05:30
ef1bbda49c Inpainting editor 2022-09-15 23:29:55 +05:30
5fed14cb78 Bump version 2022-09-15 17:54:35 +05:30
7e7c110851 Image mask (inpainting) 2022-09-15 17:54:03 +05:30
444834a891 Merge pull request #194 from cmdr2/main
Merge from main
2022-09-15 15:11:17 +05:30
2587727087 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-15 13:21:14 +05:30
a6456b068d Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-15 13:20:44 +05:30
4ccf26c23f Patch the openAI code only if it exists 2022-09-15 13:19:42 +05:30
3927dfa71d Update Troubleshooting.md 2022-09-15 13:06:54 +05:30
31c324bcc3 Killed due to low RAM 2022-09-15 13:06:37 +05:30
476d6fe85d Force-install antlr4, since pip (incorrectly) skips installing it occasionally 2022-09-15 12:10:21 +05:30
ee4d468bce Force-install antlr4, since pip (incorrectly) skips installing it occasionally 2022-09-15 12:06:37 +05:30
47fca55b0c Disable the start button after starting an image - regression after the stop button was added 2022-09-15 10:59:48 +05:30
219f310a25 Disable live preview by default 2022-09-14 22:48:36 +05:30
27071cfa29 Live preview of images 2022-09-14 22:29:42 +05:30
1d88a5b42e Show the approximate time remaining 2022-09-14 17:48:26 +05:30
74a9c46f08 Show the final output, if it was sent in a single streaming packet 2022-09-14 17:21:00 +05:30
7a540f2a88 Plural in images 2022-09-14 17:15:05 +05:30
9f48d5e5ff Show the progress percentage while generating images 2022-09-14 16:52:03 +05:30
64cc2567bd Merge pull request #185 from cmdr2/beta
Stop button, finer control for guidance scale and prompt weight and a few bug fixes
2022-09-14 11:47:47 +05:30
3b47eb3b07 Bug fix - tasks with an initial image were not resizing the initial image to the desired dimension 2022-09-14 11:36:55 +05:30
d13e08e53b Make Prompt Strength editable via a text box; Restrict the max prompt strength to 0.99, since 1 caused CUDA errors for a few users; Color tweak for the stop button 2022-09-14 11:01:12 +05:30
4685461282 Allow editing the guidance scale using a textbox for finer control; Fix a bug where the image name would contain the wrong guidance scale 2022-09-14 10:46:07 +05:30
885759abc5 Return the image metadata and disk path in the response 2022-09-14 10:15:35 +05:30
0c0c8e503e Reset the stop button once a task finishes normally 2022-09-13 22:57:32 +05:30
5605cfe213 Don't run the remaining batches of a stopped task 2022-09-13 22:55:45 +05:30
3e3fc54da4 Stop button in the UI 2022-09-13 22:25:28 +05:30
e59c66ae26 Backend changes to support stopping a task mid-way. Uses a custom patch for the stable-diffusion codebase, to make it call a callback for DDIM 2022-09-13 19:59:41 +05:30
d74eef8088 Merge pull request #178 from mrbusysky/main
Read me updates
2022-09-13 16:39:43 +05:30
88a240b0f6 Update README.md 2022-09-13 04:02:01 -07:00
ee21f41b25 Create config-v6.png 2022-09-13 04:01:03 -07:00
9ec2010ac2 Log python version in the linux build 2022-09-13 11:41:38 +05:30
e928fee26f Log python version 2022-09-13 11:38:28 +05:30
79f6723678 Merge pull request #169 from cmdr2/beta
PYTHONPATH during installer
2022-09-13 11:10:32 +05:30
db1fbad0db Force install basicsr, investigating a support query 2022-09-13 11:00:55 +05:30
812a0a14fc Export PYTHONPATH on linux 2022-09-13 10:36:50 +05:30
852875b440 Try force installing basicsr 1.4.2 2022-09-12 23:42:48 +05:30
d1dd1b8a9b Merge pull request #164 from mrbusysky/main
Updating issues to allow multiple cats
2022-09-12 23:21:24 +05:30
30974482c5 The seed displayed in the UI was wrong 2022-09-12 23:15:41 +05:30
982696fb3b Update to ignore . idea folders 2022-09-12 10:40:30 -07:00
cf43dc7b5c Updating issues to allow multiple cats 2022-09-12 10:36:54 -07:00
4444525c01 Set the PYTHONPATH explicitly for the conda env 2022-09-12 23:06:51 +05:30
760cc89449 Merge pull request #161 from cmdr2/beta
Fix linux file size check
2022-09-12 21:59:43 +05:30
ba26f22f53 Compare file sizes 2022-09-12 21:44:16 +05:30
e1f37a2f3c Fix the comparison check for linux 2022-09-12 21:34:53 +05:30
e59287d736 Tweak the linux file size check code 2022-09-12 21:12:44 +05:30
5cda0c7684 Merge pull request #160 from cmdr2/beta
Waifu weights
2022-09-12 20:46:32 +05:30
717a1d8f57 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-09-12 20:45:07 +05:30
a955730086 Allow using waifu 7 GB weights 2022-09-12 20:44:54 +05:30
d5ccee7bbb Merge pull request #156 from cmdr2/main
Merge main
2022-09-12 19:32:25 +05:30
97e2a17ce1 Merge pull request #155 from cmdr2/beta
Allow using the 7 GB model as well
2022-09-12 17:55:42 +05:30
a32a58bd0f Allow using the 7 GB model as well 2022-09-12 17:36:50 +05:30
093201ef65 Merge pull request #153 from cmdr2/beta
Tell conda to skip any pre-installed packages in the users' home/.loc…
2022-09-12 17:00:32 +05:30
ef46603f4e Tell conda to skip any pre-installed packages in the users' home/.local folder, since that can cause conflicts 2022-09-12 16:39:23 +05:30
3483f63b72 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-12 13:04:59 +05:30
8bee8060f9 Higher resolution options 2022-09-12 13:04:49 +05:30
db58c9aca9 Merge pull request #145 from MrManny/main
Minor README.md updates
2022-09-12 12:29:25 +05:30
75f5ec8575 Merge pull request #152 from cmdr2/beta
Auto-switch to CPU for GPUs with less 3 GB of VRAM
2022-09-12 11:17:04 +05:30
246ceebe0e Merge branch 'main' into beta 2022-09-12 11:12:41 +05:30
a294b128c7 Update README.md
Some minor text adjustments
2022-09-11 13:04:45 +02:00
7879bf19eb Update README.md
Add note about Stable Diffusion model version
2022-09-11 13:00:13 +02:00
80082d9c26 Merge pull request #141 from youmslinky/main
Update CONTRIBUTING.md to instruct commenting out multiple lines
2022-09-11 11:30:28 +05:30
9fe1709bf7 Catch 16xx cards without the NVIDIA name in them 2022-09-11 11:16:05 +05:30
fec21f1208 Catch 16xx cards without the NVIDIA name in them 2022-09-11 11:10:54 +05:30
1d4b34c0dd Print GPU name 2022-09-11 10:59:45 +05:30
d88e0f16ac Use CPU mode for graphics cards with less than 3 GB of RAM 2022-09-11 10:34:04 +05:30
8d31c474df Update CONTRIBUTING.md to instruct commenting out multiple lines
Need to comment out most of the lines so the `ui` folder symlink doesn't get removed and the `.sh` and `.bat` file changes don't get rolled back on next startup.
2022-09-10 15:39:38 -05:00
a1e5a2cb67 Update modifiers.json 2022-09-10 22:58:16 +05:30
3668c87e0d Update modifiers.json 2022-09-10 22:58:13 +05:30
28c8dc4bcc Update CONTRIBUTING.md 2022-09-10 22:45:12 +05:30
a02915dadb Update CONTRIBUTING.md 2022-09-10 22:44:08 +05:30
c916f46ac3 Update README.md 2022-09-10 15:55:48 +05:30
5230fbaf6e Update README.md 2022-09-10 15:55:26 +05:30
813e65e586 Face Correction (GFPGAN) and Upscaling (RealESRGAN)
GFPGAN and RealESRGAN
2022-09-10 15:12:23 +05:30
387605f443 Show a label for the update channel, next to the version number 2022-09-10 15:08:07 +05:30
ff335ecadd Merge branch 'main' into beta 2022-09-10 14:55:57 +05:30
74ccee2aa4 Update Troubleshooting.md 2022-09-10 10:44:33 +05:30
4976f35979 Update Troubleshooting.md 2022-09-10 01:25:19 +05:30
9c09a4d393 Update Troubleshooting.md 2022-09-10 01:24:45 +05:30
905bcd8d1b Fix on linux 2022-09-10 00:37:04 +05:30
6883618825 Fix on linux 2022-09-10 00:36:54 +05:30
8beaf2107e Fix 2022-09-10 00:34:57 +05:30
cd1db214b0 Fix 2022-09-10 00:34:51 +05:30
a6b4d59d94 Fix unnecessary warning for config.json 2022-09-10 00:29:46 +05:30
ff590d3090 Fix unnecessary warning for config.json 2022-09-10 00:29:31 +05:30
c16e425980 Fix 2022-09-10 00:23:45 +05:30
622322c878 Fix 2022-09-10 00:23:19 +05:30
7c580e276a Merge branch 'main' into beta 2022-09-10 00:14:51 +05:30
927013cd57 Hotfix for broken openAI dependency - bad json file 2022-09-10 00:13:16 +05:30
666bf0ebb4 Merge branch 'main' into beta 2022-09-09 21:54:27 +05:30
c283d3181f Don't cache the index page 2022-09-09 21:54:13 +05:30
28dfe2140c Merge branch 'main' into beta 2022-09-09 21:31:48 +05:30
75ac3450e6 Don't allow setting the channel while the server is still starting up 2022-09-09 21:31:34 +05:30
f727dd26ed Set the set to 0 if deselecting from random; Allow 10 million random images by increasing the seed size 2022-09-09 21:11:05 +05:30
d21c0bfc18 Invert the logic - show only the filtered image by default 2022-09-09 21:06:51 +05:30
65b2c056c6 Revert "Revert "Revert "Revert "Merge pull request #112 from cmdr2/develop""""
This reverts commit 0dd38870e0.
2022-09-09 21:05:24 +05:30
53533e71e9 Revert "Revert "Revert "Revert "Merge main""""
This reverts commit 9d92174b1d.
2022-09-09 21:02:33 +05:30
90a28732af Revert "Revert "Revert "Revert "Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop""""
This reverts commit eb6d19a4dc.
2022-09-09 21:01:15 +05:30
98b1e50c86 Merge pull request #127 from cmdr2/beta
UI for setting the beta/main channel
2022-09-09 20:39:49 +05:30
512ffa9030 UI for setting the beta/main channel 2022-09-09 20:34:32 +05:30
dbd37a0961 Remove unnecessary quotes in the update_branch name 2022-09-09 19:08:50 +05:30
4eaba01de0 I don't think the ui was getting updated on Linux 2022-09-09 19:08:29 +05:30
09cdbe6b90 Download updates from the configured channel 2022-09-09 18:26:00 +05:30
0d33964a03 Fix a transient bug in the installer code (windows) where a script overwriting itself would cause problems 2022-09-09 17:24:30 +05:30
b14523ecfa Re-enable updates for linux 2022-09-09 14:06:37 +05:30
38fa083503 Update README.md 2022-09-09 11:11:08 +05:30
fff050ef14 Uninstall info 2022-09-09 11:10:55 +05:30
1a10c60e4f Update Troubleshooting.md 2022-09-09 10:13:51 +05:30
eb6d19a4dc Revert "Revert "Revert "Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop"""
This reverts commit c9fe2e8a66.
2022-09-08 23:46:02 +05:30
9d92174b1d Revert "Revert "Revert "Merge main"""
This reverts commit a715022049.
2022-09-08 23:45:44 +05:30
0dd38870e0 Revert "Revert "Revert "Merge pull request #112 from cmdr2/develop"""
This reverts commit 788dcbf471.
2022-09-08 23:45:21 +05:30
964d752e11 Fix 2022-09-08 23:28:22 +05:30
a4305540f0 use a certain working version of SD 2022-09-08 23:23:02 +05:30
788dcbf471 Revert "Revert "Merge pull request #112 from cmdr2/develop""
This reverts commit 9051bf6e68.
2022-09-08 23:19:35 +05:30
a715022049 Revert "Revert "Merge main""
This reverts commit d92fb1ec95.
2022-09-08 23:19:20 +05:30
c9fe2e8a66 Revert "Revert "Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop""
This reverts commit 85b6540c9f.
2022-09-08 23:19:02 +05:30
5170f508f7 Revert "Emergency fix"
This reverts commit 72900eaf93.
2022-09-08 23:18:50 +05:30
72900eaf93 Emergency fix 2022-09-08 23:12:32 +05:30
85b6540c9f Revert "Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop"
This reverts commit 10ed23e144, reversing
changes made to 253e75c747.
2022-09-08 22:56:13 +05:30
d92fb1ec95 Revert "Merge main"
This reverts commit ff515f9bb0, reversing
changes made to 10ed23e144.
2022-09-08 22:55:12 +05:30
9051bf6e68 Revert "Merge pull request #112 from cmdr2/develop"
This reverts commit 598de3697d, reversing
changes made to 0eae17075f.
2022-09-08 22:54:54 +05:30
598de3697d Merge pull request #112 from cmdr2/develop
Face Correction (GFPGAN) and Upscaling (RealESRGAN)
2022-09-08 21:31:01 +05:30
ff515f9bb0 Merge main 2022-09-08 21:22:47 +05:30
10ed23e144 Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop 2022-09-08 21:20:39 +05:30
253e75c747 v2.1 - Face correction (GFPGAN) and Upscaling (RealESRGAN) 2022-09-08 21:20:27 +05:30
6efbe62dca Don't log /ping healthcheck requests 2022-09-08 18:43:22 +05:30
0eae17075f Clobber Errors 2022-09-08 18:12:47 +05:30
c7c47635f7 Merge pull request #109 from elwynelwyn/add-shebangs
Add shebangs to all sh files
2022-09-08 15:21:29 +05:30
8e1445d27a Update CONTRIBUTING.md 2022-09-08 15:18:33 +05:30
0fc92942cf Update CONTRIBUTING.md 2022-09-08 15:17:24 +05:30
2628a061f7 Update CONTRIBUTING.md 2022-09-08 15:10:44 +05:30
78971ac504 Update CONTRIBUTING.md 2022-09-08 14:55:36 +05:30
107a0e1b7d Update CONTRIBUTING.md 2022-09-08 14:55:19 +05:30
fc6954a541 Update CONTRIBUTING.md 2022-09-08 14:48:07 +05:30
2f60afb039 Update CONTRIBUTING.md 2022-09-08 14:47:50 +05:30
1a2da16e12 Merge pull request #110 from elwynelwyn/add-contributing-docs
Add contributing doc with instructions for dev env
2022-09-08 14:47:18 +05:30
678e0912ae Add contributing doc with instructions for dev env 2022-09-08 21:06:26 +12:00
8d596c07df Add shebangs to all sh files 2022-09-08 19:53:06 +12:00
dc9f6013e8 Merge pull request #105 from frode/develop
Update modifiers.json
2022-09-08 12:47:43 +05:30
e5a21fda32 Update Troubleshooting.md 2022-09-08 11:38:55 +05:30
2a0f920bcd Merge pull request #107 from cmdr2/develop
Develop
2022-09-08 10:41:38 +05:30
dbd2f2003d Merge pull request #95 from cmdr2/main
Merge main
2022-09-08 10:41:06 +05:30
4922877816 Update modifiers.json 2022-09-08 00:37:49 +02:00
e914378dd9 Update modifiers.json 2022-09-08 00:36:27 +02:00
9ec35d2bfe Update modifiers.json 2022-09-08 00:35:01 +02:00
d96cc79814 Update modifiers.json
Added Jack Kirby to the list of artists
2022-09-08 00:32:52 +02:00
7b92703624 Update modifiers.json
Added missing artists to the list of available styles
2022-09-08 00:29:49 +02:00
22089c528a Executable linux script 2022-09-07 20:10:35 +05:30
20ebdc46e9 Merge pull request #101 from cmdr2/develop
can't overwrite self - linux install script
2022-09-07 20:02:11 +05:30
3224cd73ed can't overwrite self - linux install script 2022-09-07 20:01:39 +05:30
024c7f6a15 Merge pull request #97 from cmdr2/develop
Preserve the state of the advanced and modifiers panel across restarts
2022-09-07 16:31:32 +05:30
96d4b52da3 Preserve the state of the advanced and modifiers panel across restarts 2022-09-07 16:29:10 +05:30
d1e29b8a9d Merge pull request #94 from cmdr2/develop
Store the 'save to disk' setting
2022-09-07 16:06:18 +05:30
0e02714114 Store the 'save to disk' setting 2022-09-07 16:05:39 +05:30
04ad3c0386 Merge pull request #83 from ArtificialLegacy/main
generating in parallel not updating seed correctly.
2022-09-07 15:55:07 +05:30
14985bcdcd Merge pull request #93 from cmdr2/develop
Also check for antlr4 during the post-install tests
2022-09-07 15:49:39 +05:30
a1908de302 Also check for antlr4 during the post-install tests 2022-09-07 15:49:14 +05:30
8820f10e01 Merge pull request #92 from cmdr2/develop
Develop
2022-09-07 15:38:25 +05:30
e1116938ec Update troubleshooting for green images. This is now caught and fixed in the code, requiring no user intervention 2022-09-07 15:37:11 +05:30
c30678af98 Fix the link for the troubleshooting page in all the scripts 2022-09-07 15:35:19 +05:30
1d4e06b884 Use full precision automatically for NVIDIA 1650 and 1660 2022-09-07 15:32:34 +05:30
39d6fcac73 Merge pull request #90 from cmdr2/develop
Troubleshooting
2022-09-07 14:46:38 +05:30
f44c1d4536 Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop 2022-09-07 14:44:10 +05:30
c4349951da Create a troubleshooting page; Clearer troubleshooting steps in the installation error messages 2022-09-07 14:44:01 +05:30
5a42704ac4 Fix yes prompt for build.sh 2022-09-07 13:28:38 +05:30
52e94fe650 Show a warning if running build.bat, that it is meant for developers not users who accidentally downloaded the repo 2022-09-07 13:26:09 +05:30
466b9da56c Merge pull request #89 from cmdr2/develop
Check if the windows install dir isn't at the top of the drive, to avoid clobber errors
2022-09-07 13:04:27 +05:30
b280288e83 Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop 2022-09-07 13:03:27 +05:30
7388c13c63 Check if the installation dir isn't at the top of a drive (on windows) and show a warning 2022-09-07 13:02:41 +05:30
b4f4ccec99 Merge pull request #88 from cmdr2/main
Merge main
2022-09-07 12:20:45 +05:30
e5e3f02440 Test update 2022-09-07 12:19:06 +05:30
874bfa0c54 Merge pull request #87 from cmdr2/develop
Develop
2022-09-07 12:17:18 +05:30
92a4f7adb8 Merge pull request #86 from cmdr2/main
Merge main
2022-09-07 12:16:58 +05:30
0772417f33 Test changing the main bootstrap code 2022-09-07 12:16:06 +05:30
14f1b6df4b Merge pull request #85 from cmdr2/develop
Model file size verification
2022-09-07 11:51:38 +05:30
0d9c8a804d Check the exact file size for the model file before allowing the installer to continue 2022-09-07 11:51:12 +05:30
0dfbfafb82 fixes generating in parallel not updating seed correctly 2022-09-06 21:24:08 -04:00
051ef564e7 Merge pull request #79 from iJacqu3s/patch-1
Added computer graphics tags
2022-09-06 22:53:21 +05:30
bafd3612e6 Added computer graphics tags and a few other miscellaneous tags to help with shape, look and materials 2022-09-06 19:05:07 +02:00
05f9f7ce9d Added Computer Graphics tags for better control over shape, look and materials 2022-09-06 18:54:41 +02:00
88acf49305 Merge pull request #78 from cmdr2/main
Merge main
2022-09-06 19:44:34 +05:30
74d9901ec9 Merge pull request #72 from SaulDoesCode/patch-1
More modifiers
2022-09-06 19:43:17 +05:30
64d1b56497 Merge pull request #77 from cmdr2/develop
Linux script bug fixes
2022-09-06 19:35:27 +05:30
0cc540b12a Incorrect file size check in linux 2022-09-06 19:13:03 +05:30
a1712a654d Add post-installation tests for the linux install script 2022-09-06 19:01:21 +05:30
fe21889ab0 Merge pull request #75 from cmdr2/develop
Check whether the dependencies were downloaded correctly, else displa…
2022-09-06 18:33:15 +05:30
e50c84ff28 Check whether the dependencies were downloaded correctly, else display an error 2022-09-06 18:32:36 +05:30
44824acf34 Update modifiers.json 2022-09-06 13:42:23 +02:00
ec3253620e Merge pull request #74 from cmdr2/develop
Check the filesize of the ckpt model file; set variables didn't do wh…
2022-09-06 16:14:17 +05:30
f902466882 Update modifiers.json 2022-09-06 12:41:56 +02:00
b5d2a23c64 Check the filesize of the ckpt model file; set variables didn't do what I thought it did, I'm new to DOS Batch scripting 2022-09-06 16:10:17 +05:30
0fd4804f95 Update modifiers.json 2022-09-06 12:35:28 +02:00
729d0ea417 Update modifiers.json 2022-09-06 12:31:52 +02:00
8f2f644230 Update modifiers.json 2022-09-06 12:28:10 +02:00
183edf9eaf Update modifiers.json 2022-09-06 12:22:42 +02:00
46c37403a6 More modifiers 2022-09-06 11:49:39 +02:00
c57e15bdf1 Merge pull request #71 from cmdr2/develop
Fix the bug where it would throw an error and require a restart, even if successful
2022-09-06 11:54:08 +05:30
bc0e53a59b Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop 2022-09-06 11:43:18 +05:30
2c38b51996 Fix a bug where it would show a failure even after conda created the environment successfully. Was caused by git clone returning an exit status of 1 even when successful 2022-09-06 11:43:12 +05:30
67e36788b0 Merge pull request #66 from cmdr2/main
Merge from main
2022-09-05 20:39:12 +05:30
9b8ed32c74 Merge pull request #52 from Hakorr/patch-2
Fixed artist names and added more entries
2022-09-05 20:32:55 +05:30
833063c916 Update FUNDING.yml 2022-09-05 20:05:33 +05:30
3b2c8e0a97 Update FUNDING.yml 2022-09-05 19:41:21 +05:30
ee90d1f258 Update FUNDING.yml 2022-09-05 19:40:29 +05:30
f90f42c25c Update FUNDING.yml 2022-09-05 19:39:52 +05:30
d6555cb344 Create FUNDING.yml 2022-09-05 19:00:57 +05:30
286f057a14 Moved 16-bit visual style next to 8-bit 2022-09-05 16:23:39 +03:00
00b89c2bc7 Merge pull request #65 from cmdr2/develop
Ko-fi button
2022-09-05 18:48:31 +05:30
b4a3de4cff Sorted alphabetically, added more entries 2022-09-05 16:08:58 +03:00
ec49c96219 Ko-fi button 2022-09-05 18:03:19 +05:30
1eb420bcda Merge pull request #64 from cmdr2/main
Merge main
2022-09-05 17:59:50 +05:30
9ba810ccb6 Merge pull request #63 from cmdr2/develop
Version
2022-09-05 17:29:07 +05:30
7a99241c76 Version 2022-09-05 17:28:40 +05:30
617066c7e1 Merge pull request #62 from cmdr2/develop
Configurable path to save to disk
2022-09-05 17:26:33 +05:30
835dd4da9d Configurable path to save to disk 2022-09-05 17:25:25 +05:30
15da928655 Merge pull request #61 from cmdr2/develop
Shorter filenames for saved images; Don't crash other images if one fails to save
2022-09-05 16:53:35 +05:30
7460e6f73b Merge pull request #60 from cmdr2/main
Merge pull request #59 from cmdr2/develop
2022-09-05 16:52:45 +05:30
4104b4d0a3 Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop 2022-09-05 16:51:53 +05:30
a29259a8b6 Shorter filenames for saved images; Don't crash other images if one fails to save 2022-09-05 16:51:43 +05:30
8925c6caf9 Merge pull request #59 from cmdr2/develop
Download buttons
2022-09-05 16:11:00 +05:30
7b2a85a118 Update README.md 2022-09-05 16:09:55 +05:30
f8980aecf0 Update README.md 2022-09-05 16:09:28 +05:30
a4f44f02ed Update README.md 2022-09-05 16:09:03 +05:30
3fe76a6bd3 Download buttons 2022-09-05 16:08:17 +05:30
8c060b468b Update README.md 2022-09-05 16:03:12 +05:30
6136039682 Github links for v2 installer download 2022-09-05 16:02:59 +05:30
f2954eeb3c Download buttons for Windows and Linux 2022-09-05 16:01:50 +05:30
f186c41f4d Merge branch 'develop' of github.com:cmdr2/stable-diffusion-ui into develop 2022-09-05 15:30:14 +05:30
7d69f4d3ed Fix #58 - While a .. in the path shouldn't cause any problems, just avoiding it entirely 2022-09-05 15:30:04 +05:30
4baec1b185 Merge pull request #55 from cmdr2/main
Readme
2022-09-05 13:41:51 +05:30
90c4361363 Update README.md 2022-09-05 13:39:43 +05:30
10aaa48068 Merge pull request #54 from cmdr2/develop
v2.06
2022-09-05 13:25:30 +05:30
458b0150ef Bump version 2022-09-05 13:23:54 +05:30
44a3f63f99 Show suggestions if an out of memory error is encountered by the user 2022-09-05 13:22:34 +05:30
96a6b11ab4 Show a warning if the initial image is larger than 768x768, can cause out of memory errors 2022-09-05 13:10:19 +05:30
9ef9ce76f8 Fixed artist names and added more entries
- new artists
- new emotions
- removed the "good" emotion
- one new color
- one new visual style
2022-09-04 20:55:29 +03:00
94835c46a0 Update README.md 2022-09-04 23:00:22 +05:30
58b3d31526 Note about CPU 2022-09-04 19:34:40 +05:30
b023f5c0da Fix readme usage 2022-09-04 19:33:51 +05:30
78b87c6ddd Readme for v2 2022-09-04 19:32:52 +05:30
edde4dc2fa v2 moving to the main branch 2022-09-04 19:31:34 +05:30
d9a6e41265 Switch to a standardized model link, for v2 public release 2022-09-04 19:15:42 +05:30
64f0f1aa2c Fix bug with broken tag name in filename generator 2022-09-04 00:28:38 +05:30
b54c057c83 bump version 2022-09-04 00:20:14 +05:30
baa4acaf79 Use - instead of : in filename 2022-09-04 00:19:36 +05:30
3c6bb41939 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-04 00:17:16 +05:30
3f64d3729e Merge pull request #44 from caranicas/informative-filenames
updated filename
2022-09-04 00:16:52 +05:30
21dc2ece1b Store images during a session in the same folder; Store the metadata for each image as a txt file next to it 2022-09-04 00:12:48 +05:30
fcac6c4f8c fix prompt value
fix accidental underscore addition to the text display
2022-09-03 12:38:30 -04:00
e5dc932717 Allow more width and height options 2022-09-03 21:22:56 +05:30
6b65b05e2f updated filename 2022-09-03 11:06:34 -04:00
d52b973b44 Rename the linux v2 start script to start.sh 2022-09-03 20:04:55 +05:30
79d3f4ca9e Update test works for linux v2 script 2022-09-03 18:50:33 +05:30
06dd22d89a Test update of main linux v2 script 2022-09-03 18:49:18 +05:30
e79e425cf5 Typo in linux v2 script for env var 2022-09-03 18:48:11 +05:30
b4b2c351b4 Conda needs to be reactivated in the final script, linux v2 2022-09-03 18:26:14 +05:30
73acaadf70 Linux v2 newlines 2022-09-03 18:13:21 +05:30
18ef36bbc3 Init conda linux v2 2022-09-03 18:08:52 +05:30
021315f0f5 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-03 17:55:48 +05:30
a4ee103ff0 Simplified script for linux v2 2022-09-03 17:55:38 +05:30
618173c5f0 Merge pull request #40 from MrManny/feature/additional-modifiers
Additional modifiers
2022-09-03 14:52:39 +05:30
4cb906571b Add Leonardo Da Vinci as an artist modifier
How I could forget him in the first place is beyond me
2022-09-03 11:06:18 +02:00
6092b6c4cf Add a few assorted styles
mainly courtesy of https://promptomania.com/stable-diffusion-prompt-builder/
2022-09-03 10:59:45 +02:00
9bf17a1c8d v2 linux, try without prefix mode 2022-09-03 14:12:47 +05:30
542379dcf4 Add additional artist modifiers 2022-09-03 10:39:18 +02:00
a565bb5889 Sort modifiers alphabetically within their group 2022-09-03 10:35:37 +02:00
9a96ff2edc v2 Linux script update 2022-09-03 13:04:07 +05:30
9fa2e363cc Updated linux scripts 2022-09-03 12:34:23 +05:30
a9939a31cf img2img tip 2022-09-03 11:45:50 +05:30
64fffbcdec Increment version 2022-09-03 11:44:12 +05:30
a65f8f5d5c Preserve across restarts the settings for 'use cpu', 'use full precision', 'use turbo' 2022-09-03 11:43:05 +05:30
c5475fb028 Linux v2 activate script 2022-09-02 23:31:51 +05:30
f267c46595 Make linux start command executable 2022-09-02 23:11:28 +05:30
b1f67a9a65 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-02 23:09:04 +05:30
044a7524a3 v2 start linux script 2022-09-02 23:08:53 +05:30
495b15e065 Make the new Linux v2 scripts executable 2022-09-02 22:48:43 +05:30
f4e6c399f2 Linux scripts for v2 2022-09-02 21:54:32 +05:30
4519acb77e Increment version test 2022-09-02 18:43:31 +05:30
cf1ba6d459 v2 scripts 2022-09-02 16:55:08 +05:30
c28cb67484 Script change 2022-09-02 16:31:34 +05:30
9017ee9a40 v2: Add version number 2022-09-02 16:15:36 +05:30
52086a2d39 v2 2022-09-02 16:13:03 +05:30
facec59fe8 v2: Use config.bat for host and port 2022-09-02 16:00:53 +05:30
75eb79bd55 v2 script file 2022-09-02 15:45:47 +05:30
307209945c Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-02 15:42:08 +05:30
8db9f40001 v2 scripts, trap more errors 2022-09-02 15:41:53 +05:30
472b8d0e51 Keep v2 files in the repo, for the updater 2022-09-02 13:58:36 +05:30
dbebd32a6e Update README.md 2022-09-02 08:49:39 +05:30
ad8d2a913b Update README.md 2022-09-01 22:50:03 +05:30
38bd247d64 Update README.md 2022-09-01 22:49:37 +05:30
6a4e972de6 Update README.md 2022-09-01 22:48:35 +05:30
cc14ac0bac Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-01 12:44:01 +05:30
2e0d7fdbb8 Discord link for dev 2022-09-01 12:43:56 +05:30
2284eea2d8 Update README.md 2022-09-01 12:13:48 +05:30
867b5b2ee4 Update README.md 2022-09-01 12:01:56 +05:30
85e29fffc9 Update README.md 2022-09-01 11:59:46 +05:30
fa84d812f1 Update config screenshot 2022-08-31 12:25:03 +05:30
4ffa8420dd New UI Screenshot with image modifier tags 2022-08-31 12:18:19 +05:30
7dc1c54578 New UI; A library of image modifier tags like 'realistic', 'artstation', 'pencil sketch' etc 2022-08-31 12:16:25 +05:30
05434d3575 Update README.md 2022-08-31 08:06:02 +05:30
ddea9b9f38 Update README.md 2022-08-30 22:00:52 +05:30
9445ee41cf Update README.md 2022-08-30 22:00:41 +05:30
8325b4e5aa Beta notice for v2 2022-08-30 22:00:01 +05:30
1ff9db3714 Use port 9000 2022-08-29 09:29:08 +05:30
12ff102a21 Merge pull request #24 from ChrisAcrobat/pr-1
Update docker-compose.yml
2022-08-28 21:47:25 +05:30
f660111751 Merge pull request #19 from dsmmcken/dsmmcken-patch-1
Allow other image file types
2022-08-27 22:15:14 +05:30
Don bac08306fb Allow other image file types
underlying library accepts a wide range of filetypes, no need to restrict it to just png. I wanted to use a jpg.
2022-08-27 11:55:00 -04:00
b860dbd9a6 Revert to using docker-compose commands, since ./server wasn't working reliably on some platforms 2022-08-27 14:14:24 +05:30
c7713f559d Update docker-compose.yml 2022-08-27 10:34:29 +02:00
7c60189f29 Reduce opacity of the labels over the image until the mouse moves over it 2022-08-27 11:21:10 +05:30
35dc13ffcf Update screenshot for the new num_outputs config 2022-08-27 10:55:59 +05:30
6b55f385c7 Allow generating an arbitrary number of output images, with parallel batches of 1 or 4 depending on the GPU's capabilities 2022-08-27 10:53:46 +05:30
75a9466232 Avoid running a background task in ./server, seems to be causing issues on some distros 2022-08-27 00:25:49 +05:30
2ffe7cc843 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-08-27 00:20:32 +05:30
d111911c18 Button to download the generated image 2022-08-27 00:20:20 +05:30
d55935f8c0 Update README.md 2022-08-26 22:53:42 +05:30
4def55368c Troubleshooting: use the latest WSL kernel 2022-08-26 22:36:44 +05:30
661eff90fc Use the latest Stable Diffusion model 2022-08-26 22:31:08 +05:30
66d809e9ed Update screenshots with the new port 9000 2022-08-26 21:41:20 +05:30
1bd812a2cf Add a note about using ./server commands instead of docker-compose directly 2022-08-26 21:14:23 +05:30
96e4a4ae03 Switch to port 9000, and post a redirection notice at port 8000. This avoids a port conflicts since port 8000 is often used by other servers, and hopefully avoids a common source of new-user issues in the future 2022-08-26 21:04:55 +05:30
f526b2df94 Update screenshot for img2img and masking 2022-08-26 19:55:04 +05:30
a2a4b32f7b Screenshot showing img2img and masking 2022-08-26 19:53:10 +05:30
cb914ea77f Add new screenshots 2022-08-26 19:51:31 +05:30
55 changed files with 4554 additions and 657 deletions

3
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,3 @@
# These are supported funding model platforms
ko_fi: cmdr2_stablediffusion_ui

38
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS:
- Browser:
- Version:
**Smartphone (please complete the following information):**
- Device:
- OS:
- Browser
- Version
**Additional context**
Add any other context about the problem here.

20
.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

4
.gitignore vendored

@@ -1 +1,5 @@
__pycache__
installer
installer.tar
dist
.idea/*

55
CONTRIBUTING.md Normal file

@@ -0,0 +1,55 @@
Hi there, these instructions are meant for the developers of this project.
If you only want to use the Stable Diffusion UI, you've downloaded the wrong file. In that case, please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation
Thanks
# For developers:
If you would like to contribute to this project, there is a discord for discussion:
[![Discord Server](https://badgen.net/badge/icon/discord?icon=discord&label)](https://discord.com/invite/u9yhsFmEkB)
## Development environment for UI (frontend and server) changes
This is in-flux, but one way to get a development environment running for editing the UI of this project is:
(swap `.sh` or `.bat` in instructions depending on your environment, and be sure to adjust any paths to match where you're working)
1) `git clone` the repository, e.g. to `/projects/stable-diffusion-ui-repo`
2) Download the pre-built end user archive from the link on github, and extract it, e.g. to `/projects/stable-diffusion-ui-archive`
3) `cd /projects/stable-diffusion-ui-archive` and run the script to set up and start the project, e.g. `start.sh`
4) Check you can view and generate images on `localhost:9000`
5) Close the server, and edit `/projects/stable-diffusion-ui-archive/scripts/on_env_start.sh`
6) Comment out the lines near the bottom that copy the `files/ui` folder, e.g.:
for `.sh`
```
# rm -rf ui
# cp -Rf sd-ui-files/ui .
# cp sd-ui-files/scripts/on_sd_start.sh scripts/
# cp sd-ui-files/scripts/start.sh .
```
for `.bat`
```
REM @xcopy sd-ui-files\ui ui /s /i /Y
REM @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
REM @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
```
7) Comment out the line at the top of `/projects/stable-diffusion-ui-archive/scripts/on_sd_start.sh` that copies `on_env_start`. For e.g. `@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y`
8) Delete the current `ui` folder at `/projects/stable-diffusion-ui-archive/ui`
9) Now make a symlink between the repository clone (where you will be making changes) and this archive (where you will be running stable diffusion):
`ln -s /projects/stable-diffusion-ui-repo/ui /projects/stable-diffusion-ui-archive/ui`
or for Windows
`mklink /D \projects\stable-diffusion-ui-archive\ui \projects\stable-diffusion-ui-repo\ui` (link name first, source repo dir second)
10) Run the archive again (`start.sh`) and ensure you can still use the UI.
11) Congrats, now any changes you make in your repo `ui` folder are linked to this running archive of the app and can be previewed in the browser.
Check the `ui/frontend/build/README.md` for instructions on running and building the React code.
## Development environment for Installer changes
Build the Windows installer using Windows, and the Linux installer using Linux. Don't mix the two, and don't use WSL. An Ubuntu VM is fine for building the Linux installer on a Windows host.
1. Install Miniconda 3 or Anaconda.
2. Install conda-pack: `conda install -c conda-forge -y conda-pack`
3. Open the Anaconda Prompt. Do not use WSL if you're building for Windows.
4. Run `build.bat` or `./build.sh` depending on whether you're in Windows or Linux.
5. Compress the `stable-diffusion-ui` folder created inside the `dist` folder. Make a `zip` for Windows, and `tar.xz` for Linux (smaller files, and Linux users already have tar).
6. Make a new GitHub release and upload the Windows and Linux installer builds.
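For quick reference, the UI development setup described in the steps above boils down to the following shell sketch. It assumes the example paths used in that document (`/projects/stable-diffusion-ui-repo` and `/projects/stable-diffusion-ui-archive`) and that the copy lines in the archive's scripts have already been commented out by hand:
```
#!/bin/bash
# Sketch of the Linux dev-environment setup from the CONTRIBUTING.md steps above.
# Paths are the example paths used in that document; adjust them to your setup.
REPO=/projects/stable-diffusion-ui-repo
ARCHIVE=/projects/stable-diffusion-ui-archive

# Clone the repository (where you will edit the UI)
git clone https://github.com/cmdr2/stable-diffusion-ui.git "$REPO"

# Extract the pre-built end-user archive into $ARCHIVE, run start.sh once,
# then comment out the ui-copy lines in scripts/on_env_start.sh and the
# on_env_start copy in scripts/on_sd_start.sh (manual steps, as described above).

# Replace the archive's ui folder with a symlink to the repo's ui folder
rm -rf "$ARCHIVE/ui"
ln -s "$REPO/ui" "$ARCHIVE/ui"

# Start the archive again; changes made in $REPO/ui are now served directly
cd "$ARCHIVE" && ./start.sh
```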

15
Dockerfile

@@ -1,15 +0,0 @@
FROM python:3.9
RUN mkdir /app
WORKDIR /app
RUN apt update
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
ENTRYPOINT ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]

24
How to install and run.txt Normal file

@@ -0,0 +1,24 @@
Congrats on downloading Stable Diffusion UI, version 2!
If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/cmdr2/stable-diffusion-ui#installation
After downloading, to install please follow these instructions:
For Windows:
- Please double-click the "Start Stable Diffusion UI.cmd" file inside the "stable-diffusion-ui" folder.
For Linux:
- Please open a terminal, and go to the "stable-diffusion-ui" directory. Then run ./start.sh
That file will automatically install everything. After that it will start the Stable Diffusion interface in a web browser.
To start the UI in the future, please run the same command mentioned above.
If you have any problems, please:
1. Try the troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting
2. Or, seek help from the community at https://discord.com/invite/u9yhsFmEkB
3. Or, file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues
Thanks
cmdr2 (and contributors to the project)


@@ -0,0 +1,8 @@
Hi there,
What you have downloaded is meant for the developers of this project, not for users.
If you only want to use the Stable Diffusion UI, you've downloaded the wrong file.
Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation
Thanks

README.md

@@ -1,37 +1,57 @@
# Stable Diffusion UI
### A simple way to install and use [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion) on your own computer
# Stable Diffusion UI v2
### A simple 1-click way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. No dependencies or technical knowledge required.
---
<p float="left">
<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-win.png" width="200" /></a>
<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-linux.png" width="200" /></a>
</p>
🎉 **New!** `img2img` and `inpaint` (masking) are now supported! You can provide an image to generate new images based on it (and an optional text prompt). You can also use the generated image as the new input image in 1-click, to refine it further. (Thanks [Andreas](https://github.com/andreasjansson)!)
[![Discord Server](https://badgen.net/badge/icon/discord?icon=discord&label)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](Troubleshooting.md)
# What does this do?
Two things:
1. Automatically downloads and installs Stable Diffusion on your own computer (no need to mess with conda or environments)
2. Gives you a simple browser-based UI to talk to your local Stable Diffusion. Enter text prompts and view the generated image. No API keys required.
️‍🔥🎉 **New!** Live Preview, More Samplers, In-Painting, Face Correction (GFPGAN) and Upscaling (RealESRGAN) have been added!
All the processing happens locally on your computer; your prompts are not transmitted to, or processed on, any remote server.
This distribution currently uses Stable Diffusion 1.4. Once the model for 1.5 becomes publicly available, the model in this distribution will be updated.
<img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/shot-v4.jpg" height="500" alt="Screenshot of tool">
# Features in the new v2 Version:
- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
- **Face Correction (GFPGAN) and Upscaling (RealESRGAN)**
- **In-Painting**
- **Live Preview**: See the image as the AI is drawing it
- **Lots of Samplers**
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **New UI**: with cleaner design
- Supports "*Text to Image*" and "*Image to Image*"
- **NSFW Setting**: A setting in the UI to control *NSFW content*
- **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- **Low Memory Usage**: Creates 512x512 images with less than 4GB of VRAM!
![Screenshot](media/shot-v8.jpg?raw=true)
# System Requirements
1. Computer capable of running Stable Diffusion.
2. Linux or Windows 11 (with [WSL](https://docs.microsoft.com/en-us/windows/wsl/install)) or Windows 10 v2004+ (Build 19041+) with [WSL](https://docs.microsoft.com/en-us/windows/wsl/install).
3. Requires (a) [Docker](https://docs.docker.com/engine/install/), (b) [docker-compose v1.29](https://docs.docker.com/compose/install/), and (c) [nvidia-container-toolkit](https://stackoverflow.com/a/58432877).
1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
2. An NVIDIA graphics card, preferably with 6GB or more of VRAM. But if you don't have a compatible graphics card, you can still use it with a "Use CPU" setting. It'll be very slow, but it should still work.
**Important:** If you're using Windows, please install docker inside your [WSL](https://docs.microsoft.com/en-us/windows/wsl/install)'s Linux. Install docker for the Linux distro in your WSL. **Don't install Docker for Windows.**
You do not need anything else. You do not need WSL, Docker or Conda. The installer will take care of it.
# Installation
1. Clone this repository: `git clone https://github.com/cmdr2/stable-diffusion-ui.git` or [download the zip file](https://github.com/cmdr2/stable-diffusion-ui/archive/refs/heads/main.zip) and unzip.
2. Open your terminal, and in the project directory run: `./server` (warning: this will take some time during the first run, since it'll download Stable Diffusion's [docker image](https://replicate.com/stability-ai/stable-diffusion), nearly 17 GiB)
3. Open http://localhost:8000 in your browser. That's it!
1. **Download** [for Windows](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.05/stable-diffusion-ui-win64.zip) or [for Linux](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.05/stable-diffusion-ui-linux.tar.xz).
If you're getting errors, please check the [Troubleshooting](#troubleshooting) section below.
2. **Extract**:
- For Windows: After unzipping the file, please move the `stable-diffusion-ui` folder to your `C:` (or any drive like D:, at the top root level), e.g. `C:\stable-diffusion-ui`. This will avoid a common problem with Windows (file path length limits).
- For Linux: After extracting the .tar.xz file, please open a terminal, and go to the `stable-diffusion-ui` directory.
3. **Run**:
- For Windows: `Start Stable Diffusion UI.cmd` by double-clicking it.
- For Linux: In the terminal, run `./start.sh` (or `bash start.sh`)
This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.
**To Uninstall:** Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
To stop the server, please run `./server stop`
# Usage
Open http://localhost:8000 in your browser (after running `./server` from step 2 previously).
Open http://localhost:9000 in your browser (after running step 3 previously). It may take a few moments for the back-end to be ready.
## With a text description
1. Enter a text prompt, like `a photograph of an astronaut riding a horse` in the textbox.
@@ -43,50 +63,35 @@ Open http://localhost:8000 in your browser (after running `./server` from step 2
2. An optional text prompt can help you further describe the kind of image you want to generate.
3. Press `Make Image`. See the image generated using your prompt.
You can also set an `Image Mask` for telling Stable Diffusion to draw in only the black areas in your image mask. White areas in your mask will be ignored.
You can use Face Correction or Upscaling to improve the image further.
**Pro tip:** You can also click `Use as Input` on a generated image, to use it as the input image for your next generation. This can be useful for sequentially refining the generated image with a single click.
**Another tip:** Images with the same aspect ratio of your generated image work best. E.g. 1:1 if you're generating images sized 512x512.
## Problems?
Please [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues) if this did not work for you (after trying the common [troubleshooting](#troubleshooting) steps)!
## Problems? Troubleshooting
Please try the common [troubleshooting](Troubleshooting.md) steps. If that doesn't fix it, please ask on the [discord server](https://discord.com/invite/u9yhsFmEkB), or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
# Advanced Settings
You can also set the configuration like `seed`, `width`, `height`, `num_outputs`, `num_inference_steps` and `guidance_scale` using the 'show' button next to 'Advanced settings'.
Use the same `seed` number to get the same image for a certain prompt. This is useful for refining a prompt without losing the basic image design. Enable the `random images` checkbox to get random images.
![Screenshot of advanced settings](media/config-v2.jpg?raw=true)
![Screenshot of advanced settings](media/config-v6.png?raw=true)
# Troubleshooting
## './docker-compose.yml' is invalid:
> ERROR: The Compose file './docker-compose.yml' is invalid because:
> services.stability-ai.deploy.resources.reservations value Additional properties are not allowed ('devices' was unexpected)
# What is this? Why no Docker?
This version is a 1-click installer. You don't need WSL or Docker or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all. Even if you don't have a compatible GPU, you can run it on your CPU (albeit very slowly).
Please ensure you have `docker-compose` version 1.29 or higher. Check `docker-compose --version`, and if required [update it to 1.29](https://docs.docker.com/compose/install/). (Thanks [HVRyan](https://github.com/HVRyan))
It'll download the necessary files from the original [Stable Diffusion](https://github.com/CompVis/stable-diffusion) git repository, and set it up. It'll then start the browser-based interface like before.
## RuntimeError: Found no NVIDIA driver on your system:
If you have an NVIDIA GPU and the latest [NVIDIA driver](http://www.nvidia.com/Download/index.aspx), please ensure that you've installed [nvidia-container-toolkit](https://stackoverflow.com/a/58432877). (Thanks [u/exintrovert420](https://www.reddit.com/user/exintrovert420/))
## Some other process is already running at port 8000 / port 8000 could not be bound
You can override the port used. Please change `docker-compose.yml` inside the project directory, and update the line `8000:8000` to `1337:8000` (or where 1337 is whichever port number you want).
After doing this, please restart your server, by running `./server restart`.
After this, you can access the server at `http://localhost:1337` (where 1337 is the new port you specified earlier).
# Behind the scenes
This project is a quick way to get started with Stable Diffusion. You do not need to have Stable Diffusion already installed, and do not need any API keys. This project will automatically download Stable Diffusion's docker image, the first time it is run.
This project runs Stable Diffusion in a docker container behind the scenes, using Stable Diffusion's [Docker image](https://replicate.com/stability-ai/stable-diffusion) on replicate.com.
The NSFW option is currently off (temporarily), so it'll allow NSFW images, for those people who are unable to run their prompts without hitting the NSFW filter incorrectly.
# Bugs reports and code contributions welcome
If there are any problems or suggestions, please feel free to [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
Also, please feel free to submit a pull request, if you have any code contributions in mind.
Also, please feel free to submit a pull request, if you have any code contributions in mind. Join the [discord server](https://discord.com/invite/u9yhsFmEkB) for development-related discussions, and for helping other users.
# Disclaimer
The authors of this project are not responsible for any content generated using this interface.
This license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read [the license](LICENSE).
The license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation, or target vulnerable groups. For the full list of restrictions please read [the license](LICENSE). You agree to these terms by using this software.
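Since the README above notes that the back-end may take a few moments to be ready after starting, a small readiness-check sketch can be handy. This is an assumption-laden example: it uses the default port 9000 from the README and the `/ping` healthcheck path mentioned in the commit log ("Don't log /ping healthcheck requests"); adjust both if your setup differs.
```
#!/bin/bash
# Poll the local Stable Diffusion UI server until it responds, then report.
# Port 9000 is the README default; /ping is the healthcheck path referenced
# in the commit log above. Both are assumptions, not documented API.
URL="http://localhost:9000/ping"

until curl -sf -o /dev/null "$URL"; do
    echo "Waiting for the Stable Diffusion UI server to start..."
    sleep 2
done
echo "Server is up. Open http://localhost:9000 in your browser."
```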

46
Troubleshooting.md Normal file

@@ -0,0 +1,46 @@
Common issues and their solutions. If these solutions don't work, please feel free to ask at the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
## RuntimeError: CUDA out of memory
This can happen if your PC has less than 6GB of VRAM.
Try disabling the "Turbo mode" setting under "Advanced Settings", since that takes an additional 1 GB of VRAM (to increase the speed).
Additionally, a common reason for this error is that you're using an initial image larger than 768x768 pixels. Try using a smaller initial image.
Also try generating smaller sized images.
## No ldm found, or antlr4 or any other missing module, or ClobberError: This transaction has incompatible packages due to a shared path
On Windows, please ensure that you have placed the `stable-diffusion-ui` folder, after unzipping, at the root of C: or D: (or any drive). For e.g. `C:\stable-diffusion-ui`. **Note:** This has to be done **before** you start the installation process. If you have already installed (and are facing this error), please delete the installed folder, and start fresh by unzipping and placing the folder at the top of your drive.
This error can also be caused if you already have conda/miniconda/anaconda installed, due to package conflicts. Please open your Anaconda Prompt, and run `conda clean --all` to clean up unused packages.
If nothing works, this could be due to a corrupted installation. Please try reinstalling this, by deleting the installed folder, and unzipping from the downloaded zip file.
## Killed uvicorn server:app --app-dir ... --port 9000 --host 0.0.0.0
This happens if your PC ran out of RAM. Stable Diffusion needs a lot of RAM, and requires at least 10 GB of RAM to work well. You can also try closing all other applications before running Stable Diffusion UI.
## Green image generated
This usually happens if you're running an NVIDIA 1650 or 1660 Super. To solve this, please close and re-run the Stable Diffusion command on your computer. If you're using the older Docker-based solution (v1), please upgrade to v2: https://github.com/cmdr2/stable-diffusion-ui/tree/v2#installation
If you're still seeing this error, please try enabling "Full Precision" under "Advanced Settings" in the Stable Diffusion UI.
## './docker-compose.yml' is invalid:
> ERROR: The Compose file './docker-compose.yml' is invalid because:
> services.stability-ai.deploy.resources.reservations value Additional properties are not allowed ('devices' was unexpected)
Please ensure you have `docker-compose` version 1.29 or higher. Check `docker-compose --version`, and if required [update it to 1.29](https://docs.docker.com/compose/install/). (Thanks [HVRyan](https://github.com/HVRyan))
## RuntimeError: Found no NVIDIA driver on your system:
If you have an NVIDIA GPU and the latest [NVIDIA driver](http://www.nvidia.com/Download/index.aspx), please ensure that you've installed [nvidia-container-toolkit](https://stackoverflow.com/a/58432877). (Thanks [u/exintrovert420](https://www.reddit.com/user/exintrovert420/))
## Some other process is already running at port 9000 / port 9000 could not be bound
You can override the port used. Please change `docker-compose.yml` inside the project directory, and update the line `9000:9000` to `1337:9000` (where 1337 is whichever port number you want).
After doing this, please restart your server, by running `./server restart`.
After this, you can access the server at `http://localhost:1337` (where 1337 is the new port you specified earlier).
## RuntimeError: CUDA error: unknown error
Please ensure that you have an NVIDIA GPU and the latest [NVIDIA driver](http://www.nvidia.com/Download/index.aspx), and that you've installed [nvidia-container-toolkit](https://stackoverflow.com/a/58432877).
Also, if you are using WSL (Windows), please ensure you have the latest WSL kernel by running `wsl --shutdown` and then `wsl --update`. (Thanks [AndrWeisR](https://github.com/AndrWeisR))
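For the port-conflict entry above (which applies to the older docker-compose based v1 setup), the suggested edit can also be applied from a shell. This is only a sketch of one way to do it; editing `docker-compose.yml` by hand, as described above, works just as well:
```
# See what is already listening on port 9000 (Linux)
ss -ltnp | grep ':9000'

# Remap the host port from 9000 to 1337 in docker-compose.yml, as suggested above
sed -i 's/9000:9000/1337:9000/' docker-compose.yml

# Restart the server, then browse to http://localhost:1337
./server restart
```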

49
build.bat Normal file

@@ -0,0 +1,49 @@
@echo off
@echo "Hi there, what you are running is meant for the developers of this project, not for users." & echo.
@echo "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file."
@echo "Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation" & echo.
@echo "If you are actually a developer of this project, please type Y and press enter" & echo.
set /p answer=Are you a developer of this project (Y/N)?
if /i "%answer:~,1%" NEQ "Y" exit /b
@mkdir dist\stable-diffusion-ui
@echo "Downloading components for the installer.."
@call conda env create --prefix installer -f environment.yaml
@call conda activate .\installer
@echo "Setting up startup scripts.."
@mkdir installer\etc\conda\activate.d
@copy scripts\post_activate.bat installer\etc\conda\activate.d\
@echo "Creating a distributable package.."
@call conda install -c conda-forge -y conda-pack
@call conda pack --n-threads -1 --prefix installer --format tar
@cd dist\stable-diffusion-ui
@mkdir installer
@call tar -xf ..\..\installer.tar -C installer
@mkdir scripts
@copy ..\..\scripts\on_env_start.bat scripts\
@copy "..\..\scripts\Start Stable Diffusion UI.cmd" .
@copy ..\..\LICENSE .
@copy "..\..\CreativeML Open RAIL-M License" .
@copy "..\..\How to install and run.txt" .
@echo "Build ready. Zip the 'dist\stable-diffusion-ui' folder."
@echo "Cleaning up.."
@cd ..\..
@rmdir /s /q installer
@del installer.tar

build.sh Executable file

@ -0,0 +1,52 @@
#!/bin/bash
printf "Hi there, what you are running is meant for the developers of this project, not for users.\n\n"
printf "If you only want to use the Stable Diffusion UI, you've downloaded the wrong file.\n"
printf "Please download and follow the instructions at https://github.com/cmdr2/stable-diffusion-ui#installation\n\n"
printf "If you are actually a developer of this project, please type Y and press enter\n\n"
read -p "Are you a developer of this project (Y/N) " yn
case $yn in
[Yy]* ) ;;
* ) exit;;
esac
mkdir -p dist/stable-diffusion-ui
echo "Downloading components for the installer.."
source ~/miniconda3/etc/profile.d/conda.sh
conda install -c conda-forge -y conda-pack
conda env create --prefix installer -f environment.yaml
conda activate ./installer
echo "Creating a distributable package.."
conda pack --n-threads -1 --prefix installer --format tar
cd dist/stable-diffusion-ui
mkdir installer
tar -xf ../../installer.tar -C installer
mkdir scripts
cp ../../scripts/on_env_start.sh scripts/
cp ../../scripts/start.sh .
cp ../../LICENSE .
cp "../../CreativeML Open RAIL-M License" .
cp "../../How to install and run.txt" .
chmod u+x start.sh
echo "Build ready. Zip the 'dist/stable-diffusion-ui' folder."
echo "Cleaning up.."
cd ../..
rm -rf installer
rm installer.tar


@ -1,28 +0,0 @@
version: '3.3'
services:
stability-ai:
container_name: sd
ports:
- '5000:5000'
image: 'r8.im/stability-ai/stable-diffusion@sha256:3080f37ef32771c9984d65033cbe71caa96c69680008bae64cf691724a6df04c'
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
stable-diffusion-ui:
container_name: sd-ui
ports:
- '8000:8000'
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/app
depends_on:
- stability-ai
networks:
default:

environment.yaml Normal file

@ -0,0 +1,7 @@
name: stable-diffusion-ui-installer
channels:
- defaults
- conda-forge
dependencies:
- conda
- git


@ -1,471 +0,0 @@
<!DOCTYPE html>
<html>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
font-size: 11pt;
}
a {
color: rgb(0, 102, 204);
}
a:visited {
color: rgb(0, 102, 204);
}
@media (prefers-color-scheme: dark) {
body {
background-color: rgb(32, 33, 36);
color: #eee;
}
}
label {
font-size: 10pt;
}
#prompt {
width: 50vw;
height: 50pt;
}
@media screen and (max-width: 600px) {
#prompt {
width: 95%;
}
}
.image_preview_container {
display: none;
}
.image_clear_btn {
position: absolute;
transform: translateX(-50%);
background: black;
color: white;
border: 2pt solid #ccc;
padding: 0;
cursor: pointer;
outline: inherit;
border-radius: 8pt;
width: 16pt;
height: 16pt;
font-size: 10pt;
}
#configHeader {
margin-top: 5px;
margin-bottom: 5px;
font-size: 10pt;
}
#config {
font-size: 9pt;
margin-bottom: 5px;
padding-left: 10px;
}
#outputMsg {
font-size: small;
}
#footer {
border-top: 1px solid #999;
margin-top: 10px;
padding-top: 10px;
font-size: small;
}
.imgUseBtn {
position: absolute;
transform: translateX(-100%);
margin-top: 5pt;
margin-left: -5pt;
}
.imgItem {
display: inline;
padding-right: 10px;
}
</style>
</html>
<body>
<div id="status">Server status: <span id="serverStatus">checking..</span> | Request status: <span id="reqStatus">n/a</span></div>
<br/>
<b>Prompt:</b><br/>
<textarea id="prompt">a photograph of an astronaut riding a horse</textarea><br/>
<label for="init_image"><b>Initial Image:</b> (optional) </label> <input id="init_image" name="init_image" type="file" /> </button><br/>
<div id="init_image_preview_container" class="image_preview_container">
<img id="init_image_preview" src="" width="100" height="100" />
<button id="init_image_clear" class="image_clear_btn">X</button>
</div><br/>
<div id="mask_setting">
<label for="mask"><b>Image Mask:</b> (optional) </label> <input id="mask" name="mask" type="file" /> </button><br/>
<div id="mask_preview_container" class="image_preview_container">
<img id="mask_preview" src="" width="100" height="100" />
<button id="mask_clear" class="image_clear_btn">X</button>
</div>
</div>
<div id="configHeader"><b>Advanced settings:</b> [<a id="configToggleBtn" href="#">show</a>]</div>
<div id="config">
<label for="seed">Seed:</label> <input id="seed" name="seed" value="30000"> <input id="random_seed" name="random_seed" type="checkbox" checked> <label for="random_seed">Random Image</label> <br/>
<label for="num_outputs">Number of outputs:</label> <select id="num_outputs" name="num_outputs" value="1"><option value="1" selected>1</option><option value="4">4</option></select><br/>
<label for="width">Width:</label> <select id="width" name="width" value="512"><option value="128">128</option><option value="256">256</option><option value="512" selected>512</option><option value="768">768</option><option value="1024">1024</option></select><br/>
<label for="height">Height:</label> <select id="height" name="height" value="512"><option value="128">128</option><option value="256">256</option><option value="512" selected>512</option><option value="768">768</option></select><br/>
<label for="num_inference_steps">Number of inference steps:</label> <input id="num_inference_steps" name="num_inference_steps" value="50"><br/>
<label for="guidance_scale">Guidance Scale:</label> <input id="guidance_scale" name="guidance_scale" value="75" type="range" min="10" max="200"> <span id="guidance_scale_value"></span><br/>
<span id="prompt_strength_container"><label for="prompt_strength">Prompt Strength:</label> <input id="prompt_strength" name="prompt_strength" value="8" type="range" min="0" max="10"> <span id="prompt_strength_value"></span><br/></span><br/>
<input id="sound_toggle" name="sound_toggle" type="checkbox" checked> <label for="sound_toggle">Play sound on task completion</label><br/>
</div>
<button id="makeImage">Make Image</button> <br/><br/>
<div id="outputMsg"></div>
<div id="images"></div>
<div id="footer">
<p>Please feel free to <a href="https://github.com/cmdr2/stable-diffusion-ui/issues" target="_blank">file an issue</a> if you have any problems or suggestions in using this interface.</p>
<p><b>Disclaimer:</b> The authors of this project are not responsible for any content generated using this interface.</p>
<p>This license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, <br/>spread misinformation and target vulnerable groups. For the full list of restrictions please read <a href="https://github.com/cmdr2/stable-diffusion-ui/blob/main/LICENSE" target="_blank">the license</a>.</p>
<p>By using this software, you consent to the terms and conditions of the license.</p>
</div>
</body>
<script>
const SOUND_ENABLED_KEY = "soundEnabled"
const HEALTH_PING_INTERVAL = 5 // seconds
let promptField = document.querySelector('#prompt')
let numOutputsField = document.querySelector('#num_outputs')
let numInferenceStepsField = document.querySelector('#num_inference_steps')
let guidanceScaleField = document.querySelector('#guidance_scale')
let guidanceScaleValueLabel = document.querySelector('#guidance_scale_value')
let randomSeedField = document.querySelector("#random_seed")
let seedField = document.querySelector('#seed')
let widthField = document.querySelector('#width')
let heightField = document.querySelector('#height')
let initImageSelector = document.querySelector("#init_image")
let initImagePreview = document.querySelector("#init_image_preview")
let maskImageSelector = document.querySelector("#mask")
let maskImagePreview = document.querySelector("#mask_preview")
let promptStrengthField = document.querySelector('#prompt_strength')
let promptStrengthValueLabel = document.querySelector('#prompt_strength_value')
let makeImageBtn = document.querySelector('#makeImage')
let imagesContainer = document.querySelector('#images')
let initImagePreviewContainer = document.querySelector('#init_image_preview_container')
let initImageClearBtn = document.querySelector('#init_image_clear')
let promptStrengthContainer = document.querySelector('#prompt_strength_container')
let maskSetting = document.querySelector('#mask_setting')
let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
let maskImageClearBtn = document.querySelector('#mask_clear')
let showConfigToggle = document.querySelector('#configToggleBtn')
let configBox = document.querySelector('#config')
let outputMsg = document.querySelector('#outputMsg')
let soundToggle = document.querySelector('#sound_toggle')
let serverStatus = 'offline'
function isSoundEnabled() {
if (localStorage.getItem(SOUND_ENABLED_KEY) === 'false') {
return false
}
return true
}
function setStatus(statusType, msg, msgType) {
let el = ''
if (statusType === 'server') {
el = '#serverStatus'
serverStatus = msg
} else if (statusType === 'request') {
el = '#reqStatus'
}
if (msgType == 'error') {
msg = '<span style="color: red">' + msg + '<span>'
} else if (msgType == 'success') {
msg = '<span style="color: green">' + msg + '<span>'
}
if (el) {
document.querySelector(el).innerHTML = msg
}
}
function playSound() {
const audio = new Audio('/media/ding.mp3')
audio.volume = 0.2
audio.play()
}
async function healthCheck() {
try {
let res = await fetch('/ping')
res = await res.json()
if (res[0] == 'OK') {
setStatus('server', 'online', 'success')
} else {
setStatus('server', 'offline', 'error')
}
} catch (e) {
setStatus('server', 'offline', 'error')
}
}
async function makeImage() {
setStatus('request', 'fetching..')
makeImageBtn.innerHTML = 'Processing..'
makeImageBtn.disabled = true
outputMsg.innerHTML = 'Fetching..'
function logError(msg, res) {
outputMsg.innerHTML = '<span style="color: red">Error: ' + msg + '</span>'
console.log('request error', res)
setStatus('request', 'error', 'error')
}
let seed = (randomSeedField.checked ? Math.floor(Math.random() * 10000) : seedField.value)
let reqBody = {
prompt: promptField.value,
num_outputs: numOutputsField.value,
num_inference_steps: numInferenceStepsField.value,
guidance_scale: guidanceScaleField.value / 10,
width: widthField.value,
height: heightField.value,
seed: seed,
}
if (initImagePreview.src.indexOf('data:image/png;base64') !== -1) {
reqBody['init_image'] = initImagePreview.src
reqBody['prompt_strength'] = promptStrengthField.value / 10
if (maskImagePreview.src.indexOf('data:image/png;base64') !== -1) {
reqBody['mask'] = maskImagePreview.src
}
}
let res = ''
let time = new Date().getTime()
try {
res = await fetch('/image', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(reqBody)
})
if (res.status != 200) {
if (serverStatus === 'online') {
logError('Stable Diffusion had an error: ' + await res.text() + '. This happens sometimes. Maybe modify the prompt or seed a little bit?', res)
} else {
logError("Stable Diffusion is still starting up, please wait. If this goes on beyond a few minutes, Stable Diffusion has probably crashed.", res)
}
res = undefined
} else {
res = await res.json()
if (res.status !== 'succeeded') {
let msg = ''
if (res.detail !== undefined) {
msg = res.detail[0].msg + " in " + JSON.stringify(res.detail[0].loc)
} else {
msg = res
}
logError(msg, res)
res = undefined
}
}
} catch (e) {
console.log('request error', e)
setStatus('request', 'error', 'error')
}
makeImageBtn.innerHTML = 'Make Image'
makeImageBtn.disabled = false
if (isSoundEnabled()) {
playSound()
}
if (!res) {
return
}
time = new Date().getTime() - time
time /= 1000
outputMsg.innerHTML = 'Processed in ' + time + ' seconds. Seed: ' + seed
imagesContainer.innerHTML = ''
for (let idx in res.output) {
let imgBody = ''
try {
imgBody = res.output[idx]
} catch (e) {
console.log(imgBody)
setStatus('request', 'invalid image', 'error')
return
}
let imgItem = document.createElement('div')
imgItem.className = 'imgItem'
let img = document.createElement('img')
img.width = parseInt(reqBody.width)
img.height = parseInt(reqBody.height)
img.src = imgBody
let imgUseBtn = document.createElement('button')
imgUseBtn.className = 'imgUseBtn'
imgUseBtn.innerHTML = 'Use as Input'
imgItem.appendChild(img)
imgItem.appendChild(imgUseBtn)
imagesContainer.appendChild(imgItem)
imgUseBtn.addEventListener('click', function() {
initImageSelector.value = null
initImagePreview.src = imgBody
initImagePreviewContainer.style.display = 'block'
promptStrengthContainer.style.display = 'block'
maskSetting.style.display = 'block'
randomSeedField.checked = false
seedField.value = seed
seedField.disabled = false
})
}
setStatus('request', 'done', 'success')
if (randomSeedField.checked) {
seedField.value = seed
}
}
function handleAudioEnabledChange(e) {
localStorage.setItem(SOUND_ENABLED_KEY, e.target.checked.toString())
}
soundToggle.addEventListener('click', handleAudioEnabledChange)
soundToggle.checked = isSoundEnabled();
makeImageBtn.addEventListener('click', makeImage)
configBox.style.display = 'none'
showConfigToggle.addEventListener('click', function() {
configBox.style.display = (configBox.style.display === 'none' ? 'block' : 'none')
showConfigToggle.innerHTML = (configBox.style.display === 'none' ? 'show' : 'hide')
return false
})
function updateGuidanceScale() {
guidanceScaleValueLabel.innerHTML = guidanceScaleField.value / 10
}
guidanceScaleField.addEventListener('input', updateGuidanceScale)
updateGuidanceScale()
function updatePromptStrength() {
promptStrengthValueLabel.innerHTML = promptStrengthField.value / 10
}
promptStrengthField.addEventListener('input', updatePromptStrength)
updatePromptStrength()
function checkRandomSeed() {
if (randomSeedField.checked) {
seedField.disabled = true
seedField.value = "random"
} else {
seedField.disabled = false
}
}
randomSeedField.addEventListener('input', checkRandomSeed)
checkRandomSeed()
function showInitImagePreview() {
if (initImageSelector.files.length === 0) {
initImagePreviewContainer.style.display = 'none'
promptStrengthContainer.style.display = 'none'
maskSetting.style.display = 'none'
return
}
let reader = new FileReader()
let file = initImageSelector.files[0]
reader.addEventListener('load', function() {
// console.log(file.name, reader.result)
initImagePreview.src = reader.result
initImagePreviewContainer.style.display = 'block'
promptStrengthContainer.style.display = 'block'
maskSetting.style.display = 'block'
})
if (file) {
reader.readAsDataURL(file)
}
}
initImageSelector.addEventListener('change', showInitImagePreview)
showInitImagePreview()
initImageClearBtn.addEventListener('click', function() {
initImageSelector.value = null
maskImageSelector.value = null
initImagePreview.src = ''
maskImagePreview.src = ''
initImagePreviewContainer.style.display = 'none'
maskImagePreviewContainer.style.display = 'none'
maskSetting.style.display = 'none'
promptStrengthContainer.style.display = 'none'
})
function showMaskImagePreview() {
if (maskImageSelector.files.length === 0) {
maskImagePreviewContainer.style.display = 'none'
return
}
let reader = new FileReader()
let file = maskImageSelector.files[0]
reader.addEventListener('load', function() {
maskImagePreview.src = reader.result
maskImagePreviewContainer.style.display = 'block'
})
if (file) {
reader.readAsDataURL(file)
}
}
maskImageSelector.addEventListener('change', showMaskImagePreview)
showMaskImagePreview()
maskImageClearBtn.addEventListener('click', function() {
maskImageSelector.value = null
maskImagePreview.src = ''
maskImagePreviewContainer.style.display = 'none'
})
setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000)
</script>
</html>

main.py

@ -1,69 +0,0 @@
from fastapi import FastAPI, HTTPException
from starlette.responses import FileResponse
from pydantic import BaseModel
import requests
LOCAL_SERVER_URL = 'http://stability-ai:5000'
PREDICT_URL = LOCAL_SERVER_URL + '/predictions'
app = FastAPI()
# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
prompt: str
init_image: str = None # base64
mask: str = None # base64
num_outputs: str = "1"
num_inference_steps: str = "50"
guidance_scale: str = "7.5"
width: str = "512"
height: str = "512"
seed: str = "30000"
prompt_strength: str = "0.8"
@app.get('/')
def read_root():
return FileResponse('index.html')
@app.get('/ping')
async def ping():
try:
requests.get(LOCAL_SERVER_URL)
return {'OK'}
except:
return {'ERROR'}
@app.post('/image')
async def image(req : ImageRequest):
data = {
"input": {
"prompt": req.prompt,
"num_outputs": req.num_outputs,
"num_inference_steps": req.num_inference_steps,
"width": req.width,
"height": req.height,
"seed": req.seed,
"guidance_scale": req.guidance_scale,
}
}
if req.init_image is not None:
data['input']['init_image'] = req.init_image
data['input']['prompt_strength'] = req.prompt_strength
if req.mask is not None:
data['input']['mask'] = req.mask
if req.seed == "-1":
del data['input']['seed']
res = requests.post(PREDICT_URL, json=data)
if res.status_code != 200:
raise HTTPException(status_code=500, detail=res.text)
return res.json()
@app.get('/media/ding.mp3')
def read_root():
return FileResponse('media/ding.mp3')

Binary media files changed (contents not shown):
(image removed, 18 KiB)
media/config-v3.jpg (added, 22 KiB)
media/config-v4.jpg (added, 29 KiB)
media/config-v5.jpg (added, 55 KiB)
media/config-v6.png (added, 45 KiB)
media/download buttons.xcf (added)
media/download-linux.png (added, 14 KiB)
media/download-win.png (added, 13 KiB)
(image removed, 51 KiB)
media/shot-v3a.jpg (added, 122 KiB)
(image removed, 182 KiB)
media/shot-v6a.jpg (added, 67 KiB)
media/shot-v8.jpg (added, 244 KiB)


@ -1,3 +0,0 @@
requests
fastapi==0.80.0
uvicorn==0.18.2


@ -0,0 +1 @@
installer\Scripts\activate.bat

scripts/on_env_start.bat Normal file

@ -0,0 +1,61 @@
@echo off
@echo. & echo "Stable Diffusion UI - v2" & echo.
set PATH=C:\Windows\System32;%PATH%
@cd ..
if exist "scripts\config.bat" (
@call scripts\config.bat
)
if "%update_branch%"=="" (
set update_branch=main
)
@>nul grep -c "conda_sd_ui_deps_installed" scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
echo. & echo "!!!! WARNING !!!!" & echo.
echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
echo "Not placing this folder at the top of a drive can cause errors on some computers."
echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
pause
)
)
@>nul grep -c "sd_ui_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Stable Diffusion UI's git repository was already installed. Updating from %update_branch%.."
@cd sd-ui-files
@call git reset --hard
@call git checkout "%update_branch%"
@call git pull
@cd ..
) else (
@echo. & echo "Downloading Stable Diffusion UI.." & echo.
@echo "Using the %update_branch% channel" & echo.
@call git clone -b "%update_branch%" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files && (
@echo sd_ui_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
@exit /b
)
)
@xcopy sd-ui-files\ui ui /s /i /Y
@copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
@copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
@call scripts\on_sd_start.bat
@pause

scripts/on_env_start.sh Executable file

@ -0,0 +1,43 @@
#!/bin/bash
printf "\n\nStable Diffusion UI\n\n"
if [ -f "scripts/config.sh" ]; then
source scripts/config.sh
fi
if [ "$update_branch" == "" ]; then
export update_branch="main"
fi
if [ -f "scripts/install_status.txt" ] && [ `grep -c sd_ui_git_cloned scripts/install_status.txt` -gt "0" ]; then
echo "Stable Diffusion UI's git repository was already installed. Updating from $update_branch.."
cd sd-ui-files
git reset --hard
git checkout "$update_branch"
git pull
cd ..
else
printf "\n\nDownloading Stable Diffusion UI..\n\n"
printf "Using the $update_branch channel\n\n"
if git clone -b "$update_branch" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then
echo sd_ui_git_cloned >> scripts/install_status.txt
else
printf "\n\nError downloading Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
fi
rm -rf ui
cp -Rf sd-ui-files/ui .
cp sd-ui-files/scripts/on_sd_start.sh scripts/
cp sd-ui-files/scripts/start.sh .
./scripts/on_sd_start.sh
read -p "Press any key to continue"

scripts/on_sd_start.bat Normal file

@ -0,0 +1,317 @@
@echo off
@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y
@REM Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
@REM Note to self: Please rewrite this in Python. For the sake of your own sanity.
@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
@>nul grep -c "sd_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Stable Diffusion's git repository was already installed. Updating.."
@cd stable-diffusion
@call git reset --hard
@call git pull
@call git checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git apply ..\ui\sd_internal\ddim_callback.patch
@call git apply ..\ui\sd_internal\env_yaml.patch
@cd ..
) else (
@echo. & echo "Downloading Stable Diffusion.." & echo.
@call git clone https://github.com/basujindal/stable-diffusion.git && (
@echo sd_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
@exit /b
)
@cd stable-diffusion
@call git checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git apply ..\ui\sd_internal\ddim_callback.patch
@call git apply ..\ui\sd_internal\env_yaml.patch
@cd ..
)
@cd stable-diffusion
@>nul grep -c "conda_sd_env_created" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for Stable Diffusion were already installed"
@call conda activate .\env
) else (
@echo. & echo "Downloading packages necessary for Stable Diffusion.." & echo. & echo "***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** .." & echo.
@rmdir /s /q .\env
@REM prevent conda from using packages from the user's home directory, to avoid conflicts
@set PYTHONNOUSERSITE=1
@call conda env create --prefix env -f environment.yaml || (
@echo. & echo "Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@call conda activate .\env
@call conda install -c conda-forge -y --prefix env antlr4-python3-runtime=4.8 || (
@echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_env_created >> ..\scripts\install_status.txt
)
set PATH=C:\Windows\System32;%PATH%
@>nul grep -c "conda_sd_gfpgan_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for GFPGAN (Face Correction) were already installed"
) else (
@echo. & echo "Downloading packages necessary for GFPGAN (Face Correction).." & echo.
@set PYTHONNOUSERSITE=1
@call pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN || (
@echo. & echo "Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@call pip install basicsr==1.4.2 || (
@echo. & echo "Error installing the basicsr package necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from gfpgan import GFPGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_gfpgan_deps_installed >> ..\scripts\install_status.txt
)
@>nul grep -c "conda_sd_esrgan_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for ESRGAN (Resolution Upscaling) were already installed"
) else (
@echo. & echo "Downloading packages necessary for ESRGAN (Resolution Upscaling).." & echo.
@set PYTHONNOUSERSITE=1
@call pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan || (
@echo. & echo "Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_esrgan_deps_installed >> ..\scripts\install_status.txt
)
@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
echo "Packages necessary for Stable Diffusion UI were already installed"
) else (
@echo. & echo "Downloading packages necessary for Stable Diffusion UI.." & echo.
@set PYTHONNOUSERSITE=1
@call conda install -c conda-forge -y --prefix env uvicorn fastapi || (
echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
exit /b
)
)
call WHERE uvicorn > .tmp
@>nul grep -c "uvicorn" .tmp
@if "%ERRORLEVEL%" NEQ "0" (
@echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
)
@if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" (
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
) else (
for %%J in ("sd-v1-4.ckpt") do if "%%~zJ" EQU "7703807346" (
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
) else (
for %%K in ("sd-v1-4.ckpt") do if "%%~zK" EQU "7703810927" (
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
) else (
echo. & echo "The model file present at %cd%\sd-v1-4.ckpt is invalid. It is only %%~zK bytes in size. Re-downloading.." & echo.
del "sd-v1-4.ckpt"
)
)
)
)
@if not exist "sd-v1-4.ckpt" (
@echo. & echo "Downloading data files (weights) for Stable Diffusion.." & echo.
@call curl -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
@if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" NEQ "4265380512" (
echo. & echo "Error: The downloaded model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
)
@if exist "GFPGANv1.3.pth" (
for %%I in ("GFPGANv1.3.pth") do if "%%~zI" EQU "348632874" (
echo "Data files (weights) necessary for GFPGAN (Face Correction) were already downloaded"
) else (
echo. & echo "The GFPGAN model file present at %cd%\GFPGANv1.3.pth is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo.
del "GFPGANv1.3.pth"
)
)
@if not exist "GFPGANv1.3.pth" (
@echo. & echo "Downloading data files (weights) for GFPGAN (Face Correction).." & echo.
@call curl -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth
@if exist "GFPGANv1.3.pth" (
for %%I in ("GFPGANv1.3.pth") do if "%%~zI" NEQ "348632874" (
echo. & echo "Error: The downloaded GFPGAN model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
)
@if exist "RealESRGAN_x4plus.pth" (
for %%I in ("RealESRGAN_x4plus.pth") do if "%%~zI" EQU "67040989" (
echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus were already downloaded"
) else (
echo. & echo "The GFPGAN model file present at %cd%\RealESRGAN_x4plus.pth is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo.
del "RealESRGAN_x4plus.pth"
)
)
@if not exist "RealESRGAN_x4plus.pth" (
@echo. & echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus.." & echo.
@call curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth
@if exist "RealESRGAN_x4plus.pth" (
for %%I in ("RealESRGAN_x4plus.pth") do if "%%~zI" NEQ "67040989" (
echo. & echo "Error: The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
)
@if exist "RealESRGAN_x4plus_anime_6B.pth" (
for %%I in ("RealESRGAN_x4plus_anime_6B.pth") do if "%%~zI" EQU "17938799" (
echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus_anime were already downloaded"
) else (
echo. & echo "The GFPGAN model file present at %cd%\RealESRGAN_x4plus_anime_6B.pth is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo.
del "RealESRGAN_x4plus_anime_6B.pth"
)
)
@if not exist "RealESRGAN_x4plus_anime_6B.pth" (
@echo. & echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime.." & echo.
@call curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth
@if exist "RealESRGAN_x4plus_anime_6B.pth" (
for %%I in ("RealESRGAN_x4plus_anime_6B.pth") do if "%%~zI" NEQ "17938799" (
echo. & echo "Error: The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
)
@>nul grep -c "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo sd_weights_downloaded >> ..\scripts\install_status.txt
@echo sd_install_complete >> ..\scripts\install_status.txt
)
@echo. & echo "Stable Diffusion is ready!" & echo.
@set SD_DIR=%cd%
@cd env\lib\site-packages
@set PYTHONPATH=%SD_DIR%;%cd%
@cd ..\..\..
@echo PYTHONPATH=%PYTHONPATH%
@cd ..
@set SD_UI_PATH=%cd%\ui
@cd stable-diffusion
@call python --version
@uvicorn server:app --app-dir "%SD_UI_PATH%" --port 9000 --host 0.0.0.0
@pause

scripts/on_sd_start.sh Executable file

@ -0,0 +1,309 @@
#!/bin/bash
cp sd-ui-files/scripts/on_env_start.sh scripts/
source installer/etc/profile.d/conda.sh
python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
# Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
# Note to self: Please rewrite this in Python. For the sake of your own sanity.
if [ -e "scripts/install_status.txt" ] && [ `grep -c sd_git_cloned scripts/install_status.txt` -gt "0" ]; then
echo "Stable Diffusion's git repository was already installed. Updating.."
cd stable-diffusion
git reset --hard
git pull
git checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply ../ui/sd_internal/ddim_callback.patch
git apply ../ui/sd_internal/env_yaml.patch
cd ..
else
printf "\n\nDownloading Stable Diffusion..\n\n"
if git clone https://github.com/basujindal/stable-diffusion.git ; then
echo sd_git_cloned >> scripts/install_status.txt
else
printf "\n\nError downloading Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
cd stable-diffusion
git checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply ../ui/sd_internal/ddim_callback.patch
git apply ../ui/sd_internal/env_yaml.patch
cd ..
fi
cd stable-diffusion
if [ `grep -c conda_sd_env_created ../scripts/install_status.txt` -gt "0" ]; then
echo "Packages necessary for Stable Diffusion were already installed"
conda activate ./env
else
printf "\n\nDownloading packages necessary for Stable Diffusion..\n"
printf "\n\n***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..\n\n"
# prevent conda from using packages from the user's home directory, to avoid conflicts
export PYTHONNOUSERSITE=1
if conda env create --prefix env --force -f environment.yaml ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
conda activate ./env
if conda install -c conda-forge --prefix ./env -y antlr4-python3-runtime=4.8 ; then
echo "Installed. Testing.."
else
printf "\n\nError installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
out_test=`python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"`
if [ "$out_test" != "42" ]; then
printf "\n\nDependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
echo conda_sd_env_created >> ../scripts/install_status.txt
fi
if [ `grep -c conda_sd_gfpgan_deps_installed ../scripts/install_status.txt` -gt "0" ]; then
echo "Packages necessary for GFPGAN (Face Correction) were already installed"
else
printf "\n\nDownloading packages necessary for GFPGAN (Face Correction)..\n"
export PYTHONNOUSERSITE=1
if pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
out_test=`python -c "from gfpgan import GFPGANer; print(42)"`
if [ "$out_test" != "42" ]; then
printf "\n\nDependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
echo conda_sd_gfpgan_deps_installed >> ../scripts/install_status.txt
fi
if [ `grep -c conda_sd_esrgan_deps_installed ../scripts/install_status.txt` -gt "0" ]; then
echo "Packages necessary for ESRGAN (Resolution Upscaling) were already installed"
else
printf "\n\nDownloading packages necessary for ESRGAN (Resolution Upscaling)..\n"
export PYTHONNOUSERSITE=1
if pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
out_test=`python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"`
if [ "$out_test" != "42" ]; then
printf "\n\nDependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
echo conda_sd_esrgan_deps_installed >> ../scripts/install_status.txt
fi
if [ `grep -c conda_sd_ui_deps_installed ../scripts/install_status.txt` -gt "0" ]; then
echo "Packages necessary for Stable Diffusion UI were already installed"
else
printf "\n\nDownloading packages necessary for Stable Diffusion UI..\n\n"
export PYTHONNOUSERSITE=1
if conda install -c conda-forge --prefix ./env -y uvicorn fastapi ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
if ! command -v uvicorn &> /dev/null; then
printf "\n\nUI packages not found! Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
echo conda_sd_ui_deps_installed >> ../scripts/install_status.txt
fi
if [ -f "sd-v1-4.ckpt" ]; then
model_size=`ls -l sd-v1-4.ckpt | awk '{print $5}'`
if [ "$model_size" -eq "4265380512" ] || [ "$model_size" -eq "7703807346" ] || [ "$model_size" -eq "7703810927" ]; then
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
else
printf "\n\nThe model file present at $PWD/sd-v1-4.ckpt is invalid. It is only $model_size bytes in size. Re-downloading.."
rm sd-v1-4.ckpt
fi
fi
if [ ! -f "sd-v1-4.ckpt" ]; then
echo "Downloading data files (weights) for Stable Diffusion.."
curl -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
if [ -f "sd-v1-4.ckpt" ]; then
model_size=`ls -l sd-v1-4.ckpt | awk '{print $5}'`
if [ ! "$model_size" == "4265380512" ]; then
printf "\n\nError: The downloaded model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
fi
if [ -f "GFPGANv1.3.pth" ]; then
model_size=`ls -l GFPGANv1.3.pth | awk '{print $5}'`
if [ "$model_size" -eq "348632874" ]; then
echo "Data files (weights) necessary for GFPGAN (Face Correction) were already downloaded"
else
printf "\n\nThe model file present at $PWD/GFPGANv1.3.pth is invalid. It is only $model_size bytes in size. Re-downloading.."
rm GFPGANv1.3.pth
fi
fi
if [ ! -f "GFPGANv1.3.pth" ]; then
echo "Downloading data files (weights) for GFPGAN (Face Correction).."
curl -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth
if [ -f "GFPGANv1.3.pth" ]; then
model_size=`ls -l GFPGANv1.3.pth | awk '{print $5}'`
if [ ! "$model_size" -eq "348632874" ]; then
printf "\n\nError: The downloaded GFPGAN model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
fi
if [ -f "RealESRGAN_x4plus.pth" ]; then
model_size=`ls -l RealESRGAN_x4plus.pth | awk '{print $5}'`
if [ "$model_size" -eq "67040989" ]; then
echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus were already downloaded"
else
printf "\n\nThe model file present at $PWD/RealESRGAN_x4plus.pth is invalid. It is only $model_size bytes in size. Re-downloading.."
rm RealESRGAN_x4plus.pth
fi
fi
if [ ! -f "RealESRGAN_x4plus.pth" ]; then
echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus.."
curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth
if [ -f "RealESRGAN_x4plus.pth" ]; then
model_size=`ls -l RealESRGAN_x4plus.pth | awk '{print $5}'`
if [ ! "$model_size" -eq "67040989" ]; then
printf "\n\nError: The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
fi
if [ -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
model_size=`ls -l RealESRGAN_x4plus_anime_6B.pth | awk '{print $5}'`
if [ "$model_size" -eq "17938799" ]; then
echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus_anime were already downloaded"
else
printf "\n\nThe model file present at $PWD/RealESRGAN_x4plus_anime_6B.pth is invalid. It is only $model_size bytes in size. Re-downloading.."
rm RealESRGAN_x4plus_anime_6B.pth
fi
fi
if [ ! -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime.."
curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth
if [ -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
model_size=`ls -l RealESRGAN_x4plus_anime_6B.pth | awk '{print $5}'`
if [ ! "$model_size" -eq "17938799" ]; then
printf "\n\nError: The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
fi
if [ `grep -c sd_install_complete ../scripts/install_status.txt` -eq "0" ]; then
echo sd_weights_downloaded >> ../scripts/install_status.txt
echo sd_install_complete >> ../scripts/install_status.txt
fi
printf "\n\nStable Diffusion is ready!\n\n"
SD_PATH=`pwd`
export PYTHONPATH="$SD_PATH:$SD_PATH/env/lib/python3.8/site-packages"
echo "PYTHONPATH=$PYTHONPATH"
cd ..
export SD_UI_PATH=`pwd`/ui
cd stable-diffusion
python --version
uvicorn server:app --app-dir "$SD_UI_PATH" --port 9000 --host 0.0.0.0
read -p "Press any key to continue"


@@ -0,0 +1,6 @@
@call conda --version
@call git --version
cd %CONDA_PREFIX%\..\scripts
on_env_start.bat

12
scripts/post_activate.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
conda-unpack
source $CONDA_PREFIX/etc/profile.d/conda.sh
conda --version
git --version
cd $CONDA_PREFIX/../scripts
./on_env_start.sh

7
scripts/start.sh Normal file

@@ -0,0 +1,7 @@
#!/bin/bash
source installer/bin/activate
conda-unpack
scripts/on_env_start.sh


@@ -0,0 +1,2 @@
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name LongPathsEnabled -Type DWord -Value 1
pause

25
server

@@ -1,25 +0,0 @@
#!/bin/bash
CMD="$1"
if [ -z "$1" ]; then
CMD="start"
fi
start_server() {
docker-compose up &
}
stop_server() {
docker-compose down
}
if [ "$CMD" == "start" ]; then
start_server
elif [ "$CMD" == "stop" ]; then
stop_server
elif [ "$CMD" == "restart" ]; then
stop_server
start_server
else
echo "Unknown option: $1 (Expected start or stop)"
fi


@@ -0,0 +1,171 @@
{
"_name_or_path": "clip-vit-large-patch14/",
"architectures": [
"CLIPModel"
],
"initializer_factor": 1.0,
"logit_scale_init_value": 2.6592,
"model_type": "clip",
"projection_dim": 768,
"text_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 77,
"min_length": 0,
"model_type": "clip_text_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 1,
"prefix": null,
"problem_type": null,
"projection_dim" : 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.16.0.dev0",
"use_bfloat16": false,
"vocab_size": 49408
},
"text_config_dict": {
"hidden_size": 768,
"intermediate_size": 3072,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"projection_dim": 768
},
"torch_dtype": "float32",
"transformers_version": null,
"vision_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"image_size": 224,
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "clip_vision_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 24,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": null,
"patch_size": 14,
"prefix": null,
"problem_type": null,
"projection_dim" : 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.16.0.dev0",
"use_bfloat16": false
},
"vision_config_dict": {
"hidden_size": 1024,
"intermediate_size": 4096,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"patch_size": 14,
"projection_dim": 768
}
}
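This is the standard CLIP ViT-L/14 configuration that the Stable Diffusion fork loads for its text encoder. As a hedged illustration only (it assumes the transformers package is installed and that this config.json sits in a local clip-vit-large-patch14/ folder), it can be read back like this:

from transformers import CLIPConfig, CLIPModel

config = CLIPConfig.from_pretrained("clip-vit-large-patch14/")  # folder containing the config above
model = CLIPModel(config)  # architecture only; the trained weights ship separately
print(config.text_config.max_position_embeddings)  # 77, the CLIP text context length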

1675
ui/index.html Normal file

File diff suppressed because it is too large

BIN
ui/media/ding.mp3 Normal file

Binary file not shown.

5
ui/media/drawingboard.min.css vendored Normal file

File diff suppressed because one or more lines are too long

4
ui/media/drawingboard.min.js vendored Normal file

File diff suppressed because one or more lines are too long

BIN
ui/media/favicon-16x16.png Normal file

Binary file not shown.

Size: 466 B

BIN
ui/media/favicon-32x32.png Normal file

Binary file not shown.

Size: 973 B

2
ui/media/jquery-3.6.1.min.js vendored Normal file

File diff suppressed because one or more lines are too long

BIN
ui/media/kofi.png Normal file

Binary file not shown.

Size: 11 KiB

258
ui/modifiers.json Normal file

@@ -0,0 +1,258 @@
[
[
"Drawing Style",
[
"Cel Shading",
"Children's Drawing",
"Crosshatch",
"Detailed and Intricate",
"Doodle",
"Dot Art",
"Line Art",
"Sketch"
]
],
[
"Visual Style",
[
"2D",
"8-bit",
"16-bit",
"Anaglyph",
"Anime",
"Art Nouveau",
"Bauhaus",
"Baroque",
"CGI",
"Cartoon",
"Comic Book",
"Concept Art",
"Constructivist",
"Cubist",
"Digital Art",
"Dadaist",
"Expressionist",
"Fantasy",
"Fauvist",
"Figurative",
"Graphic Novel",
"Geometric",
"Hard Edge Painting",
"Hydrodipped",
"Impressionistic",
"Lithography",
"Manga",
"Minimalist",
"Modern Art",
"Mosaic",
"Mural",
"Naive",
"Neoclassical",
"Photo",
"Realistic",
"Rococo",
"Romantic",
"Street Art",
"Symbolist",
"Stuckist",
"Surrealist",
"Visual Novel",
"Watercolor"
]
],
[
"Pen",
[
"Chalk",
"Colored Pencil",
"Graphite",
"Ink",
"Oil Paint",
"Pastel Art"
]
],
[
"Carving and Etching",
[
"Etching",
"Linocut",
"Paper Model",
"Paper-Mache",
"Papercutting",
"Pyrography",
"Wood-Carving"
]
],
[
"Camera",
[
"Aerial View",
"Canon50",
"Cinematic",
"Close-up",
"Color Grading",
"Dramatic",
"Film Grain",
"Fisheye Lens",
"Glamor Shot",
"Golden Hour",
"HD",
"Landscape",
"Lens Flare",
"Macro",
"Polaroid",
"Photoshoot",
"Portrait",
"Studio Lighting",
"Vintage",
"War Photography",
"White Balance",
"Wildlife Photography"
]
],
[
"Color",
[
"Beautiful Lighting",
"Cold Color Palette",
"Colorful",
"Dynamic Lighting",
"Electric Colors",
"Infrared",
"Pastel",
"Neon",
"Synthwave",
"Warm Color Palette"
]
],
[
"Emotions",
[
"Angry",
"Bitter",
"Disgusted",
"Embarrassed",
"Evil",
"Excited",
"Fear",
"Funny",
"Happy",
"Horrifying",
"Lonely",
"Sad",
"Serene",
"Surprised",
"Melancholic"
]
],
[
"Style of an artist or community",
[
"Artstation",
"trending on Artstation",
"by Agnes Lawrence Pelton",
"by Akihito Yoshida",
"by Alex Grey",
"by Alexander Jansson",
"by Alphonse Mucha",
"by Andy Warhol",
"by Artgerm",
"by Asaf Hanuka",
"by Aubrey Beardsley",
"by Banksy",
"by Beeple",
"by Ben Enwonwu",
"by Bob Eggleton",
"by Caravaggio Michelangelo Merisi",
"by Caspar David Friedrich",
"by Chris Foss",
"by Claude Monet",
"by Dan Mumford",
"by David Mann",
"by Diego Velázquez",
"by Disney Animation Studios",
"by Édouard Manet",
"by Esao Andrews",
"by Frida Kahlo",
"by Gediminas Pranckevicius",
"by Georgia O'Keeffe",
"by Greg Rutkowski",
"by Gustave Doré",
"by Gustave Klimt",
"by H.R. Giger",
"by Hayao Miyazaki",
"by Henri Matisse",
"by HP Lovecraft",
"by Ivan Shishkin",
"by Jack Kirby",
"by Jackson Pollock",
"by James Jean",
"by Jim Burns",
"by Johannes Vermeer",
"by John William Waterhouse",
"by Katsushika Hokusai",
"by Kim Tschang Yeul",
"by Ko Young Hoon",
"by Leonardo da Vinci",
"by Lisa Frank",
"by M.C. Escher",
"by Mahmoud Saïd",
"by Makoto Shinkai",
"by Marc Simonetti",
"by Mark Brooks",
"by Michelangelo",
"by Pablo Picasso",
"by Paul Klee",
"by Peter Mohrbacher",
"by Pierre-Auguste Renoir",
"by Pixar Animation Studios",
"by Rembrandt",
"by Richard Dadd",
"by Rossdraws",
"by Salvador Dalí",
"by Sam Does Arts",
"by Sandro Botticelli",
"by Ted Nasmith",
"by Ten Hundred",
"by Thomas Kinkade",
"by Tivadar Csontváry Kosztka",
"by Victo Ngai",
"by Vincent Di Fate",
"by Vincent van Gogh",
"by Wes Anderson",
"by wlop",
"by Yoshitaka Amano"
]
],
[
"CGI Software",
[
"3D Model",
"3D Sculpt",
"3Ds Max Model",
"Blender Model",
"Cinema4d Model",
"Maya Model",
"Unreal Engine",
"Zbrush Sculpt"
]
],
[
"CGI Rendering",
[
"3D Render",
"Corona Render",
"Creature Design",
"Cycles Render",
"Detailed Render",
"Environment Design",
"Intricate Environment",
"LSD Render",
"Octane Render",
"PBR",
"Glass Caustics",
"Global Illumination",
"Subsurface Scattering"
]
]
]
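The UI reads this file to offer clickable prompt modifiers, grouped by category. A rough sketch of how selected modifiers could be appended to a prompt (the comma-joined format is an assumption for illustration, not the UI's exact code):

import json

with open("ui/modifiers.json") as f:
    categories = json.load(f)  # a list of [category_name, [modifier, ...]] pairs

def apply_modifiers(prompt, selected):
    # assumed behaviour: chosen modifiers are appended after the prompt, comma-separated
    return ", ".join([prompt] + selected)

print(apply_modifiers("a castle on a hill", ["Cel Shading", "Golden Hour"]))
# a castle on a hill, Cel Shading, Golden Hour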


@@ -0,0 +1,98 @@
import json
class Request:
session_id: str = "session"
prompt: str = ""
init_image: str = None # base64
mask: str = None # base64
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
width: int = 512
height: int = 512
seed: int = 42
prompt_strength: float = 0.8
sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
# allow_nsfw: bool = False
precision: str = "autocast" # or "full"
save_to_disk_path: str = None
turbo: bool = True
use_cpu: bool = False
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
show_only_filtered_image: bool = False
stream_progress_updates: bool = False
stream_image_progress: bool = False
def json(self):
return {
"session_id": self.session_id,
"prompt": self.prompt,
"num_outputs": self.num_outputs,
"num_inference_steps": self.num_inference_steps,
"guidance_scale": self.guidance_scale,
"width": self.width,
"height": self.height,
"seed": self.seed,
"prompt_strength": self.prompt_strength,
"sampler": self.sampler,
"use_face_correction": self.use_face_correction,
"use_upscale": self.use_upscale,
}
def to_string(self):
return f'''
session_id: {self.session_id}
prompt: {self.prompt}
seed: {self.seed}
num_inference_steps: {self.num_inference_steps}
sampler: {self.sampler}
guidance_scale: {self.guidance_scale}
w: {self.width}
h: {self.height}
precision: {self.precision}
save_to_disk_path: {self.save_to_disk_path}
turbo: {self.turbo}
use_cpu: {self.use_cpu}
use_full_precision: {self.use_full_precision}
use_face_correction: {self.use_face_correction}
use_upscale: {self.use_upscale}
show_only_filtered_image: {self.show_only_filtered_image}
stream_progress_updates: {self.stream_progress_updates}
stream_image_progress: {self.stream_image_progress}'''
class Image:
data: str # base64
seed: int
is_nsfw: bool
path_abs: str = None
def __init__(self, data, seed):
self.data = data
self.seed = seed
def json(self):
return {
"data": self.data,
"seed": self.seed,
"path_abs": self.path_abs,
}
class Response:
request: Request
images: list
def json(self):
res = {
"status": 'succeeded',
"request": self.request.json(),
"output": [],
}
for image in self.images:
res["output"].append(image.json())
return res


@@ -0,0 +1,332 @@
diff --git a/optimizedSD/ddpm.py b/optimizedSD/ddpm.py
index b967b55..35ef520 100644
--- a/optimizedSD/ddpm.py
+++ b/optimizedSD/ddpm.py
@@ -22,7 +22,7 @@ from ldm.util import exists, default, instantiate_from_config
from ldm.modules.diffusionmodules.util import make_beta_schedule
from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like
from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from samplers import CompVisDenoiser, get_ancestral_step, to_d, append_dims,linear_multistep_coeff
+from .samplers import CompVisDenoiser, get_ancestral_step, to_d, append_dims,linear_multistep_coeff
def disabled_train(self):
"""Overwrite model.train with this function to make sure train/eval mode
@@ -506,6 +506,8 @@ class UNet(DDPM):
x_latent = noise if x0 is None else x0
# sampling
+ if sampler in ('ddim', 'dpm2', 'heun', 'dpm2_a', 'lms') and not hasattr(self, 'ddim_timesteps'):
+ self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
if sampler == "plms":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
@@ -528,39 +530,46 @@ class UNet(DDPM):
elif sampler == "ddim":
samples = self.ddim_sampling(x_latent, conditioning, S, unconditional_guidance_scale=unconditional_guidance_scale,
unconditional_conditioning=unconditional_conditioning,
- mask = mask,init_latent=x_T,use_original_steps=False)
+ mask = mask,init_latent=x_T,use_original_steps=False,
+ callback=callback, img_callback=img_callback)
elif sampler == "euler":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
samples = self.euler_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "euler_a":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
samples = self.euler_ancestral_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "dpm2":
samples = self.dpm_2_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "heun":
samples = self.heun_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "dpm2_a":
samples = self.dpm_2_ancestral_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "lms":
samples = self.lms_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
+
+ yield from samples
if(self.turbo):
self.model1.to("cpu")
self.model2.to("cpu")
- return samples
-
@torch.no_grad()
def plms_sampling(self, cond,b, img,
ddim_use_original_steps=False,
@@ -599,10 +608,10 @@ class UNet(DDPM):
old_eps.append(e_t)
if len(old_eps) >= 4:
old_eps.pop(0)
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(pred_x0, i)
- return img
+ yield from img_callback(img, len(iterator)-1)
@torch.no_grad()
def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
@@ -706,7 +715,8 @@ class UNet(DDPM):
@torch.no_grad()
def ddim_sampling(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- mask = None,init_latent=None,use_original_steps=False):
+ mask = None,init_latent=None,use_original_steps=False,
+ callback=None, img_callback=None):
timesteps = self.ddim_timesteps
timesteps = timesteps[:t_start]
@@ -730,10 +740,13 @@ class UNet(DDPM):
unconditional_guidance_scale=unconditional_guidance_scale,
unconditional_conditioning=unconditional_conditioning)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x_dec, i)
+
if mask is not None:
- return x0 * mask + (1. - mask) * x_dec
+ x_dec = x0 * mask + (1. - mask) * x_dec
- return x_dec
+ yield from img_callback(x_dec, len(iterator)-1)
@torch.no_grad()
@@ -779,13 +792,16 @@ class UNet(DDPM):
@torch.no_grad()
- def euler_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None,callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def euler_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None,callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""Implements Algorithm 2 (Euler steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
cvd = CompVisDenoiser(ac)
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Euler Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
@@ -807,13 +823,18 @@ class UNet(DDPM):
d = to_d(x, sigma_hat, denoised)
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
dt = sigmas[i + 1] - sigma_hat
# Euler method
x = x + d * dt
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None):
+ def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None,
+ img_callback=None):
"""Ancestral sampling with Euler method steps."""
extra_args = {} if extra_args is None else extra_args
@@ -822,6 +843,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Euler Ancestral Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -837,17 +860,22 @@ class UNet(DDPM):
sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
d = to_d(x, sigmas[i], denoised)
# Euler method
dt = sigma_down - sigmas[i]
x = x + d * dt
x = x + torch.randn_like(x) * sigma_up
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def heun_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def heun_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""Implements Algorithm 2 (Heun steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
@@ -855,6 +883,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Heun Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -876,6 +906,9 @@ class UNet(DDPM):
d = to_d(x, sigma_hat, denoised)
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
dt = sigmas[i + 1] - sigma_hat
if sigmas[i + 1] == 0:
# Euler method
@@ -895,11 +928,13 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigmas[i + 1], denoised_2)
d_prime = (d + d_2) / 2
x = x + d_prime * dt
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def dpm_2_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def dpm_2_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""A sampler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
@@ -907,6 +942,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running DPM2 Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
@@ -924,7 +961,7 @@ class UNet(DDPM):
e_t_uncond, e_t = (x_in + eps * c_out).chunk(2)
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
+ if img_callback: yield from img_callback(x, i)
d = to_d(x, sigma_hat, denoised)
# Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule
@@ -945,11 +982,13 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigma_mid, denoised_2)
x = x + d_2 * dt_2
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def dpm_2_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None):
+ def dpm_2_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None,
+ img_callback=None):
"""Ancestral sampling with DPM-Solver inspired second-order steps."""
extra_args = {} if extra_args is None else extra_args
@@ -957,6 +996,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running DPM2 Ancestral Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -973,6 +1014,9 @@ class UNet(DDPM):
sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
d = to_d(x, sigmas[i], denoised)
# Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule
sigma_mid = ((sigmas[i] ** (1 / 3) + sigma_down ** (1 / 3)) / 2) ** 3
@@ -993,11 +1037,13 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigma_mid, denoised_2)
x = x + d_2 * dt_2
x = x + torch.randn_like(x) * sigma_up
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def lms_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, order=4):
+ def lms_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, order=4,
+ img_callback=None):
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
@@ -1005,6 +1051,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running LMS Sampling with {len(sigmas) - 1} timesteps")
+
ds = []
for i in trange(len(sigmas) - 1, disable=disable):
@@ -1017,6 +1065,7 @@ class UNet(DDPM):
e_t_uncond, e_t = (x_in + eps * c_out).chunk(2)
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
+ if img_callback: yield from img_callback(x, i)
d = to_d(x, sigmas[i], denoised)
ds.append(d)
@@ -1027,4 +1076,5 @@ class UNet(DDPM):
cur_order = min(i + 1, order)
coeffs = [linear_multistep_coeff(cur_order, sigmas.cpu(), i, j) for j in range(cur_order)]
x = x + sum(coeff * d for coeff, d in zip(coeffs, reversed(ds)))
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
diff --git a/optimizedSD/openaimodelSplit.py b/optimizedSD/openaimodelSplit.py
index abc3098..7a32ffe 100644
--- a/optimizedSD/openaimodelSplit.py
+++ b/optimizedSD/openaimodelSplit.py
@@ -13,7 +13,7 @@ from ldm.modules.diffusionmodules.util import (
normalization,
timestep_embedding,
)
-from splitAttention import SpatialTransformer
+from .splitAttention import SpatialTransformer
class AttentionPool2d(nn.Module):
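The heart of this patch is turning every sampler into a generator: instead of returning the final latent, each sampling loop delegates to an img_callback generator at every step (and once more after the loop), so progress can be streamed while the caller keeps track of the newest sample. A stripped-down sketch of that pattern with simplified names (not the fork's actual code):

latest_sample = None

def img_callback(x, i):
    # remember the newest sample (like partial_x_samples in runtime.py) and emit a progress update
    global latest_sample
    latest_sample = x
    yield {"step": i, "total_steps": TOTAL_STEPS}

def run_sampler(x, steps, img_callback=None):
    for i in range(steps):
        x = x + 1                          # stand-in for one denoising step
        if img_callback:
            yield from img_callback(x, i)  # stream a progress update per step
    yield from img_callback(x, steps - 1)  # the finished sample flows through the same path

TOTAL_STEPS = 5
for update in run_sampler(0, TOTAL_STEPS, img_callback):
    print(update)
print("final sample:", latest_sample)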


@@ -0,0 +1,13 @@
diff --git a/environment.yaml b/environment.yaml
index 7f25da8..306750f 100644
--- a/environment.yaml
+++ b/environment.yaml
@@ -23,6 +23,8 @@ dependencies:
- torch-fidelity==0.3.0
- transformers==4.19.2
- torchmetrics==0.6.0
+ - pywavelets==1.3.0
+ - pandas==1.4.4
- kornia==0.6
- -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
- -e git+https://github.com/openai/CLIP.git@main#egg=clip

658
ui/sd_internal/runtime.py Normal file

@@ -0,0 +1,658 @@
import json
import os, re
import traceback
import torch
import numpy as np
from omegaconf import OmegaConf
from PIL import Image, ImageOps
from tqdm import tqdm, trange
from itertools import islice
from einops import rearrange
import time
from pytorch_lightning import seed_everything
from torch import autocast
from contextlib import nullcontext
from einops import rearrange, repeat
from ldm.util import instantiate_from_config
from optimizedSD.optimUtils import split_weighted_subprompts
from transformers import logging
from gfpgan import GFPGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
import uuid
logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
filename_regex = re.compile('[^a-zA-Z0-9]')
# api stuff
from . import Request, Response, Image as ResponseImage
import base64
from io import BytesIO
#from colorama import Fore
# local
stop_processing = False
temp_images = {}
ckpt_file = None
gfpgan_file = None
real_esrgan_file = None
model = None
modelCS = None
modelFS = None
model_gfpgan = None
model_real_esrgan = None
model_is_half = False
model_fs_is_half = False
device = None
unet_bs = 1
precision = 'autocast'
sampler_plms = None
sampler_ddim = None
has_valid_gpu = False
force_full_precision = False
try:
gpu = torch.cuda.current_device()
gpu_name = torch.cuda.get_device_name(gpu)
print('GPU detected: ', gpu_name)
force_full_precision = ('nvidia' in gpu_name.lower() or 'geforce' in gpu_name.lower()) and (' 1660' in gpu_name or ' 1650' in gpu_name) # otherwise these NVIDIA cards create green images
if force_full_precision:
print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', gpu_name)
mem_free, mem_total = torch.cuda.mem_get_info(gpu)
mem_total /= float(10**9)
if mem_total < 3.0:
print("GPUs with less than 3 GB of VRAM are not compatible with Stable Diffusion")
raise Exception()
has_valid_gpu = True
except:
print('WARNING: No compatible GPU found. Using the CPU, but this will be very slow!')
pass
def load_model_ckpt(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_use=1, precision_to_use='autocast', half_model_fs=False):
global ckpt_file, model, modelCS, modelFS, model_is_half, device, unet_bs, precision, model_fs_is_half
ckpt_file = ckpt_to_use
device = device_to_use if has_valid_gpu else 'cpu'
precision = precision_to_use if not force_full_precision else 'full'
unet_bs = unet_bs_to_use
if device == 'cpu':
precision = 'full'
sd = load_model_from_config(f"{ckpt_file}.ckpt")
li, lo = [], []
for key, value in sd.items():
sp = key.split(".")
if (sp[0]) == "model":
if "input_blocks" in sp:
li.append(key)
elif "middle_block" in sp:
li.append(key)
elif "time_embed" in sp:
li.append(key)
else:
lo.append(key)
for key in li:
sd["model1." + key[6:]] = sd.pop(key)
for key in lo:
sd["model2." + key[6:]] = sd.pop(key)
config = OmegaConf.load(f"{config_yaml}")
model = instantiate_from_config(config.modelUNet)
_, _ = model.load_state_dict(sd, strict=False)
model.eval()
model.cdevice = device
model.unet_bs = unet_bs
model.turbo = turbo
modelCS = instantiate_from_config(config.modelCondStage)
_, _ = modelCS.load_state_dict(sd, strict=False)
modelCS.eval()
modelCS.cond_stage_model.device = device
modelFS = instantiate_from_config(config.modelFirstStage)
_, _ = modelFS.load_state_dict(sd, strict=False)
modelFS.eval()
del sd
if device != "cpu" and precision == "autocast":
model.half()
modelCS.half()
model_is_half = True
else:
model_is_half = False
if half_model_fs:
modelFS.half()
model_fs_is_half = True
else:
model_fs_is_half = False
print('loaded ', ckpt_file, 'to', device, 'precision', precision)
def load_model_gfpgan(gfpgan_to_use):
global gfpgan_file, model_gfpgan
if gfpgan_to_use is None:
return
gfpgan_file = gfpgan_to_use
model_path = gfpgan_to_use + ".pth"
if device == 'cpu':
model_gfpgan = GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=torch.device('cpu'))
else:
model_gfpgan = GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=torch.device('cuda'))
print('loaded ', gfpgan_to_use, 'to', device, 'precision', precision)
def load_model_real_esrgan(real_esrgan_to_use):
global real_esrgan_file, model_real_esrgan
if real_esrgan_to_use is None:
return
real_esrgan_file = real_esrgan_to_use
model_path = real_esrgan_to_use + ".pth"
RealESRGAN_models = {
'RealESRGAN_x4plus': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4),
'RealESRGAN_x4plus_anime_6B': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
}
model_to_use = RealESRGAN_models[real_esrgan_to_use]
if device == 'cpu':
model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=False) # cpu does not support half
model_real_esrgan.device = torch.device('cpu')
model_real_esrgan.model.to('cpu')
else:
model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=model_is_half)
model_real_esrgan.model.name = real_esrgan_to_use
print('loaded ', real_esrgan_to_use, 'to', device, 'precision', precision)
def mk_img(req: Request):
try:
yield from do_mk_img(req)
except Exception as e:
print(traceback.format_exc())
gc()
if device != "cpu":
modelFS.to("cpu")
modelCS.to("cpu")
model.model1.to("cpu")
model.model2.to("cpu")
gc()
yield json.dumps({
"status": 'failed',
"detail": str(e)
})
def do_mk_img(req: Request):
global model, modelCS, modelFS, device
global model_gfpgan, model_real_esrgan
global stop_processing
stop_processing = False
res = Response()
res.request = req
res.images = []
temp_images.clear()
model.turbo = req.turbo
if req.use_cpu:
if device != 'cpu':
device = 'cpu'
if model_is_half:
del model, modelCS, modelFS
load_model_ckpt(ckpt_file, device)
load_model_gfpgan(gfpgan_file)
load_model_real_esrgan(real_esrgan_file)
else:
if has_valid_gpu:
prev_device = device
device = 'cuda'
if (precision == 'autocast' and (req.use_full_precision or not model_is_half)) or \
(precision == 'full' and not req.use_full_precision and not force_full_precision) or \
(req.init_image is None and model_fs_is_half) or \
(req.init_image is not None and not model_fs_is_half and not force_full_precision):
del model, modelCS, modelFS
load_model_ckpt(ckpt_file, device, req.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'), half_model_fs=(req.init_image is not None and not req.use_full_precision))
if prev_device != device:
load_model_gfpgan(gfpgan_file)
load_model_real_esrgan(real_esrgan_file)
if req.use_face_correction != gfpgan_file:
load_model_gfpgan(req.use_face_correction)
if req.use_upscale != real_esrgan_file:
load_model_real_esrgan(req.use_upscale)
model.cdevice = device
modelCS.cond_stage_model.device = device
opt_prompt = req.prompt
opt_seed = req.seed
opt_n_samples = req.num_outputs
opt_n_iter = 1
opt_scale = req.guidance_scale
opt_C = 4
opt_H = req.height
opt_W = req.width
opt_f = 8
opt_ddim_steps = req.num_inference_steps
opt_ddim_eta = 0.0
opt_strength = req.prompt_strength
opt_save_to_disk_path = req.save_to_disk_path
opt_init_img = req.init_image
opt_use_face_correction = req.use_face_correction
opt_use_upscale = req.use_upscale
opt_show_only_filtered = req.show_only_filtered_image
opt_format = 'png'
opt_sampler_name = req.sampler
print(req.to_string(), '\n device', device)
print('\n\n Using precision:', precision)
seed_everything(opt_seed)
batch_size = opt_n_samples
prompt = opt_prompt
assert prompt is not None
data = [batch_size * [prompt]]
if precision == "autocast" and device != "cpu":
precision_scope = autocast
else:
precision_scope = nullcontext
mask = None
if req.init_image is None:
handler = _txt2img
init_latent = None
t_enc = None
else:
handler = _img2img
init_image = load_img(req.init_image, opt_W, opt_H)
init_image = init_image.to(device)
if device != "cpu" and precision == "autocast":
init_image = init_image.half()
modelFS.to(device)
init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image)) # move to latent space
if req.mask is not None:
mask = load_mask(req.mask, opt_W, opt_H, init_latent.shape[2], init_latent.shape[3], True).to(device)
mask = mask[0][0].unsqueeze(0).repeat(4, 1, 1).unsqueeze(0)
mask = repeat(mask, '1 ... -> b ...', b=batch_size)
if device != "cpu" and precision == "autocast":
mask = mask.half()
move_fs_to_cpu()
assert 0. <= opt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
t_enc = int(opt_strength * opt_ddim_steps)
print(f"target t_enc is {t_enc} steps")
if opt_save_to_disk_path is not None:
session_out_path = os.path.join(opt_save_to_disk_path, req.session_id)
os.makedirs(session_out_path, exist_ok=True)
else:
session_out_path = None
seeds = ""
with torch.no_grad():
for n in trange(opt_n_iter, desc="Sampling"):
for prompts in tqdm(data, desc="data"):
with precision_scope("cuda"):
modelCS.to(device)
uc = None
if opt_scale != 1.0:
uc = modelCS.get_learned_conditioning(batch_size * [""])
if isinstance(prompts, tuple):
prompts = list(prompts)
subprompts, weights = split_weighted_subprompts(prompts[0])
if len(subprompts) > 1:
c = torch.zeros_like(uc)
totalWeight = sum(weights)
# normalize each "sub prompt" and add it
for i in range(len(subprompts)):
weight = weights[i]
# if not skip_normalize:
weight = weight / totalWeight
c = torch.add(c, modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = modelCS.get_learned_conditioning(prompts)
modelFS.to(device)
partial_x_samples = None
def img_callback(x_samples, i):
nonlocal partial_x_samples
partial_x_samples = x_samples
if req.stream_progress_updates:
n_steps = opt_ddim_steps if req.init_image is None else t_enc
progress = {"step": i, "total_steps": n_steps}
if req.stream_image_progress and i % 5 == 0:
partial_images = []
for i in range(batch_size):
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
buf = BytesIO()
img.save(buf, format='JPEG')
buf.seek(0)
del img, x_sample, x_samples_ddim
# don't delete x_samples, it is used in the code that called this callback
temp_images[str(req.session_id) + '/' + str(i)] = buf
partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
progress['output'] = partial_images
yield json.dumps(progress)
if stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
# run the handler
try:
if handler == _txt2img:
x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, opt_sampler_name)
else:
x_samples = _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask)
yield from x_samples
x_samples = partial_x_samples
except UserInitiatedStop:
if partial_x_samples is None:
continue
x_samples = partial_x_samples
print("saving images")
for i in range(batch_size):
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
has_filters = (opt_use_face_correction is not None and opt_use_face_correction.startswith('GFPGAN')) or \
(opt_use_upscale is not None and opt_use_upscale.startswith('RealESRGAN'))
return_orig_img = not has_filters or not opt_show_only_filtered
if stop_processing:
return_orig_img = True
if opt_save_to_disk_path is not None:
prompt_flattened = filename_regex.sub('_', prompts[0])
prompt_flattened = prompt_flattened[:50]
img_id = str(uuid.uuid4())[-8:]
file_path = f"{prompt_flattened}_{img_id}"
img_out_path = os.path.join(session_out_path, f"{file_path}.{opt_format}")
meta_out_path = os.path.join(session_out_path, f"{file_path}.txt")
if return_orig_img:
save_image(img, img_out_path)
save_metadata(meta_out_path, prompts, opt_seed, opt_W, opt_H, opt_ddim_steps, opt_scale, opt_strength, opt_use_face_correction, opt_use_upscale, opt_sampler_name)
if return_orig_img:
img_data = img_to_base64_str(img)
res_image_orig = ResponseImage(data=img_data, seed=opt_seed)
res.images.append(res_image_orig)
if opt_save_to_disk_path is not None:
res_image_orig.path_abs = img_out_path
del img
if has_filters and not stop_processing:
print('Applying filters..')
gc()
filters_applied = []
if opt_use_face_correction:
_, _, output = model_gfpgan.enhance(x_sample[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
x_sample = output[:,:,::-1]
filters_applied.append(opt_use_face_correction)
if opt_use_upscale:
output, _ = model_real_esrgan.enhance(x_sample[:,:,::-1])
x_sample = output[:,:,::-1]
filters_applied.append(opt_use_upscale)
filtered_image = Image.fromarray(x_sample)
filtered_img_data = img_to_base64_str(filtered_image)
res_image_filtered = ResponseImage(data=filtered_img_data, seed=opt_seed)
res.images.append(res_image_filtered)
filters_applied = "_".join(filters_applied)
if opt_save_to_disk_path is not None:
filtered_img_out_path = os.path.join(session_out_path, f"{file_path}_{filters_applied}.{opt_format}")
save_image(filtered_image, filtered_img_out_path)
res_image_filtered.path_abs = filtered_img_out_path
del filtered_image
seeds += str(opt_seed) + ","
opt_seed += 1
move_fs_to_cpu()
gc()
del x_samples, x_samples_ddim, x_sample
print("memory_final = ", torch.cuda.memory_allocated() / 1e6)
print('Task completed')
yield json.dumps(res.json())
def save_image(img, img_out_path):
try:
img.save(img_out_path)
except:
print('could not save the file', traceback.format_exc())
def save_metadata(meta_out_path, prompts, opt_seed, opt_W, opt_H, opt_ddim_steps, opt_scale, opt_prompt_strength, opt_correct_face, opt_upscale, sampler_name):
metadata = f"{prompts[0]}\nWidth: {opt_W}\nHeight: {opt_H}\nSeed: {opt_seed}\nSteps: {opt_ddim_steps}\nGuidance Scale: {opt_scale}\nPrompt Strength: {opt_prompt_strength}\nUse Face Correction: {opt_correct_face}\nUse Upscaling: {opt_upscale}\nSampler: {sampler_name}"
try:
with open(meta_out_path, 'w') as f:
f.write(metadata)
except:
print('could not save the file', traceback.format_exc())
def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, sampler_name):
shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelCS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
if sampler_name == 'ddim':
model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
samples_ddim = model.sample(
S=opt_ddim_steps,
conditioning=c,
seed=opt_seed,
shape=shape,
verbose=False,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
eta=opt_ddim_eta,
x_T=start_code,
img_callback=img_callback,
mask=mask,
sampler = sampler_name,
)
yield from samples_ddim
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask):
# encode (scaled latent)
z_enc = model.stochastic_encode(
init_latent,
torch.tensor([t_enc] * batch_size).to(device),
opt_seed,
opt_ddim_eta,
opt_ddim_steps,
)
x_T = None if mask is None else init_latent
# decode it
samples_ddim = model.sample(
t_enc,
c,
z_enc,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
img_callback=img_callback,
mask=mask,
x_T=x_T,
sampler = 'ddim'
)
yield from samples_ddim
def move_fs_to_cpu():
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelFS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
def gc():
if device == 'cpu':
return
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
# internal
def chunk(it, size):
it = iter(it)
return iter(lambda: tuple(islice(it, size)), ())
def load_model_from_config(ckpt, verbose=False):
print(f"Loading model from {ckpt}")
pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
return sd
# utils
class UserInitiatedStop(Exception):
pass
def load_img(img_str, w0, h0):
image = base64_str_to_img(img_str).convert("RGB")
w, h = image.size
print(f"loaded input image of size ({w}, {h}) from base64")
if h0 is not None and w0 is not None:
h, w = h0, w0
w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
image = image.resize((w, h), resample=Image.Resampling.LANCZOS)
image = np.array(image).astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
return 2.*image - 1.
def load_mask(mask_str, h0, w0, newH, newW, invert=False):
image = base64_str_to_img(mask_str).convert("RGB")
w, h = image.size
print(f"loaded input mask of size ({w}, {h})")
if invert:
print("inverted")
image = ImageOps.invert(image)
# where_0, where_1 = np.where(image == 0), np.where(image == 255)
# image[where_0], image[where_1] = 255, 0
if h0 is not None and w0 is not None:
h, w = h0, w0
w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
print(f"New mask size ({w}, {h})")
image = image.resize((newW, newH), resample=Image.Resampling.LANCZOS)
image = np.array(image)
image = image.astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
return image
# https://stackoverflow.com/a/61114178
def img_to_base64_str(img):
buffered = BytesIO()
img.save(buffered, format="PNG")
buffered.seek(0)
img_byte = buffered.getvalue()
img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode()
return img_str
def base64_str_to_img(img_str):
img_str = img_str[len("data:image/png;base64,"):]
data = base64.b64decode(img_str)
buffered = BytesIO(data)
img = Image.open(buffered)
return img
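The two base64 helpers at the end are inverses of each other. A quick illustrative round-trip (it assumes img_to_base64_str and base64_str_to_img are available in scope; importing runtime.py directly also triggers the GPU checks above, so treat this as a sketch rather than a test):

from PIL import Image

img = Image.new("RGB", (64, 64), color=(200, 30, 30))
data_url = img_to_base64_str(img)      # "data:image/png;base64,...."
restored = base64_str_to_img(data_url)
assert restored.size == (64, 64)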

223
ui/server.py Normal file

@@ -0,0 +1,223 @@
import json
import traceback
import sys
import os
SCRIPT_DIR = os.getcwd()
print('started in ', SCRIPT_DIR)
SD_UI_DIR = os.getenv('SD_UI_PATH', None)
sys.path.append(os.path.dirname(SD_UI_DIR))
CONFIG_DIR = os.path.join(SD_UI_DIR, '..', 'scripts')
OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
from starlette.responses import FileResponse, StreamingResponse
from pydantic import BaseModel
import logging
from sd_internal import Request, Response
app = FastAPI()
model_loaded = False
model_is_loading = False
modifiers_cache = None
outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)
# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
session_id: str = "session"
prompt: str = ""
init_image: str = None # base64
mask: str = None # base64
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
width: int = 512
height: int = 512
seed: int = 42
prompt_strength: float = 0.8
sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
# allow_nsfw: bool = False
save_to_disk_path: str = None
turbo: bool = True
use_cpu: bool = False
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
show_only_filtered_image: bool = False
stream_progress_updates: bool = False
stream_image_progress: bool = False
class SetAppConfigRequest(BaseModel):
update_branch: str = "main"
app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media/')), name="media")
@app.get('/')
def read_root():
headers = {"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=headers)
@app.get('/ping')
async def ping():
global model_loaded, model_is_loading
try:
if model_loaded:
return {'OK'}
if model_is_loading:
return {'ERROR'}
model_is_loading = True
from sd_internal import runtime
runtime.load_model_ckpt(ckpt_to_use="sd-v1-4")
model_loaded = True
model_is_loading = False
return {'OK'}
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.post('/image')
def image(req : ImageRequest):
from sd_internal import runtime
r = Request()
r.session_id = req.session_id
r.prompt = req.prompt
r.init_image = req.init_image
r.mask = req.mask
r.num_outputs = req.num_outputs
r.num_inference_steps = req.num_inference_steps
r.guidance_scale = req.guidance_scale
r.width = req.width
r.height = req.height
r.seed = req.seed
r.prompt_strength = req.prompt_strength
r.sampler = req.sampler
# r.allow_nsfw = req.allow_nsfw
r.turbo = req.turbo
r.use_cpu = req.use_cpu
r.use_full_precision = req.use_full_precision
r.save_to_disk_path = req.save_to_disk_path
r.use_upscale: str = req.use_upscale
r.use_face_correction = req.use_face_correction
r.show_only_filtered_image = req.show_only_filtered_image
r.stream_progress_updates = True # the underlying implementation only supports streaming
r.stream_image_progress = req.stream_image_progress
try:
if not req.stream_progress_updates:
r.stream_image_progress = False
res = runtime.mk_img(r)
if req.stream_progress_updates:
return StreamingResponse(res, media_type='application/json')
else: # compatibility mode: buffer the streaming responses, and return the last one
last_result = None
for result in res:
last_result = result
return json.loads(last_result)
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.get('/image/stop')
def stop():
try:
if model_is_loading:
return {'ERROR'}
from sd_internal import runtime
runtime.stop_processing = True
return {'OK'}
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.get('/image/tmp/{session_id}/{img_id}')
def get_image(session_id, img_id):
from sd_internal import runtime
buf = runtime.temp_images[session_id + '/' + img_id]
buf.seek(0)
return StreamingResponse(buf, media_type='image/jpeg')
@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
try:
config = {
'update_branch': req.update_branch
}
config_json_str = json.dumps(config)
config_bat_str = f'@set update_branch={req.update_branch}'
config_sh_str = f'export update_branch={req.update_branch}'
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
with open(config_json_path, 'w') as f:
f.write(config_json_str)
with open(config_bat_path, 'w') as f:
f.write(config_bat_str)
with open(config_sh_path, 'w') as f:
f.write(config_sh_str)
return {'OK'}
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.get('/app_config')
def getAppConfig():
try:
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
if not os.path.exists(config_json_path):
return HTTPException(status_code=500, detail="No config file")
with open(config_json_path, 'r') as f:
config_json_str = f.read()
config = json.loads(config_json_str)
return config
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.get('/modifiers.json')
def read_modifiers():
return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'))
@app.get('/output_dir')
def read_home_dir():
return {outpath}
# don't log /ping requests
class HealthCheckLogFilter(logging.Filter):
def filter(self, record: logging.LogRecord) -> bool:
return record.getMessage().find('/ping') == -1
logging.getLogger('uvicorn.access').addFilter(HealthCheckLogFilter())
# start the browser ui
import webbrowser; webbrowser.open('http://localhost:9000')
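Putting the pieces together, a client can stream progress from the /image endpoint. A hedged example using the requests library (it assumes the server is running on localhost:9000 and that each streamed chunk arrives as one whole JSON object, which holds in practice for this local server; the payload fields mirror ImageRequest above):

import json
import requests

payload = {
    "prompt": "a watercolor painting of a lighthouse",
    "num_inference_steps": 25,
    "sampler": "euler_a",
    "stream_progress_updates": True,
    "stream_image_progress": True,
}

with requests.post("http://localhost:9000/image", json=payload, stream=True) as resp:
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        msg = json.loads(chunk)  # either a progress update or the final result
        if "step" in msg:
            print("step", msg["step"], "of", msg["total_steps"])
        else:
            print("status:", msg.get("status"), "- images:", len(msg.get("output", [])))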