Compare commits

367 Commits

SHA1 Message Date
13721f160e changelog grammar 2022-12-24 23:22:47 +05:30
102e5623f7 Merge branch 'beta' into refactor 2022-12-24 23:14:02 +05:30
9a975321db v2.5 changelog 2022-12-24 23:11:13 +05:30
6743ec14f1 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-24 22:17:31 +05:30
daec5e5426 Changes to allow rolling back from the upcoming sdkit-based system 2022-12-24 22:17:16 +05:30
a2b55c0df7 Report precision 2022-12-24 21:44:42 +05:30
01320ac735 Rename project to Easy Diffusion 2022-12-24 21:36:47 +05:30
84bddee2ce Treat none as a boolean false in drag-and-drop 2022-12-24 19:41:36 +05:30
5f6b798e35 Stop printing annoying ok messages 2022-12-24 19:13:17 +05:30
9137f3793e Merge pull request #693 from madrang/mobile-fixes
Add a debounce delay to allow mobile to double tap.
2022-12-24 15:53:31 +05:30
73e92a688f color logging 2 2022-12-24 15:43:06 +05:30
7a9f219037 color logging 2022-12-24 15:41:19 +05:30
a4728190c0 Refactor server.py 2022-12-24 15:29:49 +05:30
04d67a24b6 Don't allow the results to be collapsed when clicking draghandle 2022-12-24 04:55:28 -05:00
55049ba9d2 Add a debounce delay to allow mobile to double tap. 2022-12-24 04:42:43 -05:00
e0b33a4feb Install rich 2022-12-24 15:10:46 +05:30
fb5c0a3db7 Install python 3.8.5 during installation. Torch isn't available for 3.11 2022-12-24 14:57:57 +05:30
8154a5709b disable the legacy src and ldm folder (otherwise this prevents installing gfpgan and realesrgan) 2022-12-24 14:01:33 +05:30
3a6780bd50 Copy check_modules.py the first time an existing user runs the new version 2022-12-24 13:56:05 +05:30
b7a76d4212 Merge branch 'beta' into refactor 2022-12-24 13:45:53 +05:30
ba7cae683a Bump to 2.5 2022-12-24 13:39:28 +05:30
243556656e Temporarily disable the model config dropdown in the UI 2022-12-24 13:38:55 +05:30
6662dc66d5 Updated scripts to install sdkit into existing installations, while still working with new installations 2022-12-24 13:37:50 +05:30
107112d1c4 Integration bugs 2022-12-24 12:37:20 +05:30
c5d343750c Merge pull request #691 from JeLuF/patch-4
Avoid guidance scale "1.0"
2022-12-23 17:55:41 +05:30
09b76dcd93 Avoid guidance scale "1.0"
Using a guidance scale of 1.0 will cause an exception in the renderer and return a very confusing error message.
https://discord.com/channels/1014774730907209781/1028195513377509376
2022-12-23 13:18:08 +01:00
fb95d76e34 Update CHANGES.md 2022-12-23 11:26:14 +05:30
cf2408013e Measure the click-to-render-request latency, only if the click button was used 2022-12-23 10:54:40 +05:30
d8543d1358 Use the sdkit model scan; Disable scan-per-load since we scan them before allowing them to be invoked 2022-12-22 16:47:59 +05:30
d8b79d8b5c Don't crash if IP listing fails. Thanks @JeLuf 2022-12-22 15:43:52 +05:30
c2bcf89f9a Merge branch 'beta' into refactor 2022-12-22 15:42:04 +05:30
5cb24f992c Bump version 2022-12-22 15:23:07 +05:30
21394b7d45 Reduce the delay between clicking 'Make Image' and making the render call to the server. Was nearly 4-5 seconds, now it's about 250-300ms. This is a hacky workaround until a better solution is found 2022-12-22 15:22:25 +05:30
768fb2583a Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-22 13:43:09 +05:30
6e07b2354f Fix an unnecessary error when a task header is clicked 2022-12-22 13:42:47 +05:30
0cd0d6aadf Update CHANGES.md 2022-12-22 13:25:57 +05:30
d6c535c45c Merge branch 'main' into beta 2022-12-22 13:23:07 +05:30
babdb5b718 Prompt Matrix is in main 2022-12-22 12:26:32 +05:30
0ea8d038be Merge pull request #679 from SpecificKnot/main
Changes to Front Docs
2022-12-22 12:25:48 +05:30
c804a9971e Work-in-progress code for adding a model config dropdown in the UI. Doesn't work yet 2022-12-22 11:54:00 +05:30
4d7f6e4236 Change version number in beta 2022-12-22 10:32:40 +05:30
6036ccdc1c Style Adjustments
Made a few adjustments to fit the needs of the project for new users.
2022-12-20 12:44:48 +00:00
5eeef41d8c Update to use the latest sdkit API 2022-12-20 15:16:47 +05:30
bacf266f0d Merge pull request #651 from madrang/release-notes
Update 'release-notes' to use loadScript
2022-12-20 10:21:07 +05:30
ba5c54043b Merge pull request #680 from AssassinJN/beta
Drag and Drop Styles
2022-12-20 10:19:30 +05:30
e33c858829 Merge pull request #1 from JeLuF/AJNdrag
Only activate the dragOver event listener when dragging tasks
2022-12-19 14:39:27 -05:00
e47e54de3f Only activate the dragOver event listener when dragging tasks 2022-12-19 20:34:06 +01:00
54f9e9bfe9 adding drag and drop styles
Add functions required for adding styles to imageTaskContainer to show where images will be dropped.
2022-12-19 13:45:42 -05:00
e1875c872c classes for drag and drop
Added classes for drag and drop.
2022-12-19 13:44:15 -05:00
27b8e173e8 Changes to Front Docs 2022-12-19 14:28:05 +00:00
47e3884994 Rename the python package name to easydiffusion (from sd_internal) 2022-12-19 19:39:15 +05:30
e483071894 Rename diffusionkit to sdkit; Delete runtime.py (historic moment) 2022-12-19 19:27:28 +05:30
af090cb289 Update README.md 2022-12-19 12:24:19 +05:30
9bbb25f16c Update README.md 2022-12-19 12:23:32 +05:30
3007f00c9b Update README.md 2022-12-19 12:22:27 +05:30
352dcfbe30 Update README.md 2022-12-19 12:20:46 +05:30
60b181a545 Update README.md 2022-12-19 12:11:01 +05:30
600482e2d7 Update README.md 2022-12-19 12:10:15 +05:30
39ccbbd72e Update README.md 2022-12-19 12:09:13 +05:30
6e69cbcdaf Merge pull request #674 from SpecificKnot/main
Simplified README
2022-12-19 11:55:04 +05:30
bf6c222a3b Merge pull request #641 from JeLuF/pause
Pause button
2022-12-19 11:52:55 +05:30
6afcf7570a Merge pull request #671 from patriceac/allow-empty-prompts
Allow empty prompts (image modifiers only)
2022-12-19 11:50:18 +05:30
c3126f7b4d Merge pull request #673 from jsuelwald/patch-1
Change time display on job
2022-12-19 11:48:38 +05:30
cb3b542363 Merge pull request #675 from JeLuF/drag
Add drag handle
2022-12-19 09:36:44 +05:30
1a5e15608c Merge pull request #676 from JeLuF/ipfix
Return empty list if hostname lookup fails
2022-12-19 09:14:40 +05:30
64a751ad79 Merge branch 'beta' into pause 2022-12-19 00:55:56 +01:00
57efe31959 Return empty list if hostname lookup fails 2022-12-19 00:42:48 +01:00
39350d554b Remove old code 2022-12-19 00:32:13 +01:00
8f4e03550c Add drag handle 2022-12-19 00:14:57 +01:00
d03823fb20 Last minute changes 2022-12-18 21:16:32 +00:00
00ec2b9d6f README Updates
Updates to README to make it easier to follow along.
2022-12-18 21:13:12 +00:00
70e4bc4582 Update README.md 2022-12-18 20:52:38 +00:00
5e56a437ef Update README.md 2022-12-18 20:48:19 +00:00
22ffd25619 Change time display on job
Change "Processed 1 image in 150.65 seconds" to "Processed 1 Image in 2 minutes 30 seconds" to be consistent with the approx. time remaining while rendering
2022-12-18 07:20:42 +01:00
127949c56b Allow empty prompts (image modifiers only)
Allows empty prompts as long as there are image modifiers. This allows the user to craft prompts just by using image modifiers if they so wish.
2022-12-17 17:06:07 -08:00
cdfef16a0e Merge pull request #670 from patriceac/collapsible-toggle-event
Fire an event when a collapsible is toggled
2022-12-17 16:49:51 +05:30
1595f1ed05 Add 6 new samplers; Fix a bug where new tasks wouldn't started if a previous task was stopped 2022-12-17 16:45:43 +05:30
1cae39b105 Fire an event when a collapsible is toggled
Need an event to know that a collapsible got toggled to be able to resize the panels accordingly. Thanks!
2022-12-17 03:05:43 -08:00
8189b38e6e Typo in decoding live preview images 2022-12-17 15:59:09 +05:30
c240d6932a Update CHANGES.md 2022-12-17 10:13:23 +05:30
c4548d9396 Merge pull request #669 from JeLuF/hover
CSS only initimg hover, 'use as input' button
2022-12-17 09:50:46 +05:30
aea70e3dd4 Merge pull request #668 from JeLuF/imgedit
Fix img resize issues, add redo/undo buttons
2022-12-17 09:50:07 +05:30
3b01e65e11 CSS only initimg hover, 'use as input' button 2022-12-17 01:30:30 +01:00
341c810bbb Fix img resize issues, add redo/undo buttons 2022-12-17 00:29:54 +01:00
85fd2dfaaa Merge pull request #664 from patriceac/tab-change-trigger
Fire an event upon tab change
2022-12-16 18:24:10 +05:30
bf4bc38c6c Merge pull request #662 from JeLuF/patch-7
Linux uses .zip, not .tar.xz (Fixes #657)
2022-12-16 18:23:44 +05:30
aa8b50280b Remove the test_sd2 flag, the code now works with SD 2.0 2022-12-16 15:31:55 +05:30
62553dc0fa Fire an event upon tab change
Fire an event upon tab change.
2022-12-16 01:45:58 -08:00
25639cc3f8 Tweak Memory Usage setting text; Fix a bug with the memory usage setting comparison 2022-12-16 14:11:55 +05:30
7982a9ae25 Change the performance field to GPU Memory Usage instead, and use the 'balanced' profile by default, since it's just 5% slower than 'high', and uses nearly 50% less VRAM 2022-12-16 11:34:49 +05:30
aa01fd058e Set performance level (low, medium, high) instead of a Turbo field. The previous Turbo field is equivalent to 'Medium' performance now 2022-12-15 23:30:06 +05:30
ef7e1575bd Linux uses .zip, not .tar.xz (Fixes #657) 2022-12-15 16:44:43 +01:00
fb075a0013 Fix whitespace 2022-12-14 16:53:50 +05:30
d1738baf44 Merge branch 'beta' into refactor 2022-12-14 16:53:23 +05:30
7eb29fa91b Fix: errors were overwritten by the time taken in the UI 2022-12-14 16:52:46 +05:30
34c00fb77f Fix: errors were overwritten by the time taken in the UI 2022-12-14 16:51:30 +05:30
7965318d9f Update task_manager.py 2022-12-14 16:49:59 +05:30
e73a514e29 Revert a recent change to task error reporting, seems unstable 2022-12-14 16:37:45 +05:30
35ff4f439e Refactor save_to_disk 2022-12-14 16:30:19 +05:30
12e0194c7f Allow None as the value type in dnd parsing 2022-12-14 16:30:08 +05:30
d1ac90e16d [metadata parsing] Support loading the flat JSON format saved by the next backend; Set the dropdown to None if the value is undefined or null in the metadata 2022-12-14 15:43:24 +05:30
7dc7f70582 Allow parsing .safetensors stable diffusion model path in the metadata parser 2022-12-14 10:34:36 +05:30
84d606408a Prompt is now a keyword in the new metadata format generated from diffusionkit 2022-12-14 10:31:19 +05:30
d103693811 Bug in the metadata generation - made an array of None 2022-12-14 10:22:24 +05:30
0dbce101ac sampler -> sampler_name 2022-12-14 10:21:44 +05:30
cb81e2aacd Fix a bug where the metadata output format wouldn't get sent to the backend 2022-12-14 10:18:01 +05:30
6cd0b530c5 Simplify the code for VAE loading, and make it faster to load VAEs (because we don't reload the entire SD model each time a VAE changes); Record the error and end the thread if the SD model fails to load during startup 2022-12-13 15:46:04 +05:30
35571eb14d Don't hang the task if something other than the renderer fails (e.g. model loading) 2022-12-13 12:03:34 +05:30
8e6102ad9a removeTask() 2022-12-13 12:03:30 +05:30
80bc80dc2c removeTask() 2022-12-13 12:02:43 +05:30
a483bd0800 No need to catch and report exceptions separately in the renderer now 2022-12-13 11:46:13 +05:30
47a39569bc Merge branch 'beta' into refactor 2022-12-13 11:45:43 +05:30
f00e1a92d8 Don't hang the task if something other than the renderer fails (e.g. model loading) 2022-12-13 11:44:20 +05:30
a289945e8e Merge pull request #654 from jsuelwald/beta
The exception should also mention dpm2
2022-12-12 21:05:03 +05:30
b750c0d7c3 The exception should also mention dpm2 2022-12-12 16:24:03 +01:00
a244a6873a Use the new 'diffusionkit' package name 2022-12-12 20:46:11 +05:30
ceff4f06c1 Merge branch 'beta' into refactor 2022-12-12 20:43:29 +05:30
0307114c8e Merge pull request #653 from cmdr2/beta
Don't collapse the task entry if 'Stop Task' is pressed
2022-12-12 19:56:49 +05:30
92030a3917 Don't collapse the task entry if 'Stop Task' is pressed 2022-12-12 19:56:27 +05:30
73ace121a4 Merge pull request #652 from cmdr2/beta
Beta
2022-12-12 19:49:21 +05:30
44d5809e46 Changelog 2022-12-12 19:46:13 +05:30
5c4e6f7e96 Tweak editor width 2022-12-12 19:42:43 +05:30
8c032579b8 Hide the hypernetwork strength slider if no hypernetwork model is selected; Support drag-n-drop for hypernetwork models 2022-12-12 19:31:59 +05:30
b53935bfd4 Revert "Scrolling panes (#632)"
This reverts commit e3184622e8.
2022-12-12 19:03:16 +05:30
d4db027cfa Move the hypernetwork options below the sampler settings; Whitespace fixes 2022-12-12 19:02:34 +05:30
27963decc9 Use the multi-filters API 2022-12-12 18:12:55 +05:30
25f488c6e1 Merge branch 'beta' into refactor 2022-12-12 15:47:13 +05:30
07bd580050 Typos 2022-12-12 15:44:22 +05:30
fb32a38d96 Rename sampler to sampler_name in the API 2022-12-12 15:21:02 +05:30
ac0961d7d4 Typos from the refactor 2022-12-12 15:18:56 +05:30
6b943f88d1 Set uvicorn log level to 'error' 2022-12-12 15:18:30 +05:30
4bbf683d15 Minor refactor 2022-12-12 14:41:36 +05:30
d0e50584ea Expose the metadata format option in the UI 2022-12-12 14:06:20 +05:30
b57649828d Refactor the save-to-disk code, moving parts of it to diffusionkit 2022-12-12 14:01:47 +05:30
1f44a283b3 Update 'release-notes' to use loadScript 2022-12-12 02:47:42 -05:00
9947c3bcfb Start timer to IDLE_COOLDOWN before idleEventPromise completes. (#649) 2022-12-12 11:12:11 +05:30
8faf6b9f52 Don't allow to make zero images, make at least one. (#647) 2022-12-12 11:11:33 +05:30
e45cbbf1ca Use the turbo setting if requested 2022-12-11 20:42:31 +05:30
1a5b6ef260 Rename runtime2.py to renderer.py; Will remove the old runtime soon 2022-12-11 20:21:25 +05:30
096556d8c9 Move away the remaining model-related code to the model_manager 2022-12-11 20:13:44 +05:30
97919c7e87 Simplify the runtime code 2022-12-11 19:58:12 +05:30
0aa7968503 Move color correction to diffusionkit; Rename color correction to 'Preserve color profile' 2022-12-11 19:34:07 +05:30
bd1bc78953 Use onIdle(), move pause button, quick resume without using the promise 2022-12-11 14:57:01 +01:00
6ce6dc3ff6 Get rid of the ugly copying around (and maintaining) of multiple request-related fields. Split into two objects: task-related fields, and render-related fields. Also remove the ability for request-defined full-precision. Full-precision can now be forced by using a USE_FULL_PRECISION environment variable 2022-12-11 18:16:29 +05:30
e6346775e7 Merge branch 'beta' into pause 2022-12-11 11:19:48 +01:00
d03eed3859 Simplify the logic for reloading gfpgan and realesrgan models (based on the request), using the code path used for the other model types 2022-12-11 14:14:59 +05:30
afb88616d8 Load the models after the device init, to let the UI load before the models finish loading 2022-12-11 13:30:16 +05:30
543f13f9a3 Tweak logging to increase the space available by 3 characters 2022-12-11 13:19:22 +05:30
af5c68051a Fix for the tooltips being cutoff (#636) 2022-12-11 12:59:23 +05:30
5b7cd11de8 Added support for Async events (#643)
* Added support for async events callbacks

* Don't fire IDLE event if the first callback hasn't completed execution.
2022-12-11 11:22:52 +05:30
d3c3496e55 Merge pull request #639 from madrang/newEngine
Check if window is defined. Not all JS execution environments have it.
2022-12-11 11:19:11 +05:30
c08c8b2789 Merge pull request #638 from JeLuF/initimg
show initimg in task list
2022-12-11 11:18:10 +05:30
069315e434 Merge pull request #642 from patriceac/patch-5
Fixing a typo
2022-12-11 11:16:24 +05:30
7e4ad83a1c Merge pull request #637 from madrang/mainjs_fixes
Fix (typeof stepUpdate !== 'object') not completing the task on stop.
2022-12-11 11:15:31 +05:30
400f9fd680 Merge pull request #635 from patriceac/patch-4
Store the auto-scroll checkbox setting in localStorage instead of using the auto-save framework
2022-12-11 11:06:19 +05:30
38951f5581 Pause button - check whether function is defined before calling it 2022-12-11 02:49:49 +01:00
b5329ee93d Fixing a typo
Yeah, I know... What can I say? I have my OCD too. 👀
2022-12-10 17:45:14 -08:00
c568bca69e Pause button 2022-12-11 02:31:23 +01:00
7b2be12587 Check if window is defined. Not all JS execution environments have it. 2022-12-10 18:26:48 -05:00
099fde2652 show initimg in task list 2022-12-10 17:17:37 +01:00
83e5410945 Fix (typeof stepUpdate !== 'object') not completing the task on stop. 2022-12-10 00:52:27 -05:00
b330c34b29 Fix auto-scroll setting management
After thinking about it, the auto-save toggle is meant for the *Editor* fields listed behind the Configure button. The auto-scroll toggle is not part of the Editor, and is more akin to a system setting, although it's placed in the main UI for convenience reasons related to its nature. As such, and especially considering it's a plugin, I lean towards decoupling auto-scroll from the auto-save settings, and just storing it independently.
2022-12-09 19:34:41 -08:00
e3184622e8 Scrolling panes (#632)
Decouple the editor and the preview panes. Scrollbars color updated as well as requested.
2022-12-09 23:11:39 +05:30
28f822afe0 Fix tags not being properly applied to prompt matrix (#610)
There is an issue on the beta where if you use pipe ( | ) in the prompt to make a prompt matrix, the optional prompts are only applied when the last prompt in the matrix is used.
2022-12-09 23:04:25 +05:30
a2af811ad2 Disable uvicorn access logging in favor of cleaner server-side logging, we already get all that info; Print the request metadata 2022-12-09 22:47:34 +05:30
cde8c2d3bd Use a logger 2022-12-09 21:30:18 +05:30
79cc84b611 Option to apply color correction (balances the histogram) during inpainting; Refactor the runtime to use a general-purpose dict 2022-12-09 19:39:56 +05:30
f1de0be679 Fix integration issues after the refactor 2022-12-09 17:50:33 +05:30
854e3d3576 Fix reading value from undefined. (#631) 2022-12-09 16:34:59 +05:30
dbac2655f5 Typo 2022-12-09 16:14:04 +05:30
0f656dbf2f Typo 2022-12-09 16:11:08 +05:30
3fbb3f6773 Use const 2022-12-09 16:09:10 +05:30
8820814002 Simplify the API for resolving model paths; Code cleanup 2022-12-09 15:45:36 +05:30
b40fb3a422 Model readme file write flag 2022-12-09 15:27:40 +05:30
aa59575df3 Remove unused patch files 2022-12-09 15:24:55 +05:30
accfec9007 Space 2022-12-09 15:22:56 +05:30
16410d90b8 Use the simplified model loading API in diffusion-kit; Catch and report exceptions while generating images 2022-12-09 15:21:49 +05:30
27c6113287 Support hypernetworks; moves the hypernetwork module to diffusion-kit 2022-12-09 13:29:06 +05:30
f4a6910ab4 Work-in-progress: refactored the end-to-end codebase. Missing: hypernetworks, turbo config, and SD 2. Not tested yet 2022-12-08 21:39:09 +05:30
bad89160cc Work-in-progress model loading 2022-12-08 13:50:46 +05:30
5782966d63 Merge branch 'beta' into refactor 2022-12-08 11:58:09 +05:30
ba2c966329 First draft of multi-task in a single session. (#622) 2022-12-08 11:12:46 +05:30
f8dee7e25f Add test sample to one of the plugin. (#626)
* Added test example from a plugin.

* Only load style if #news was created.
2022-12-08 10:57:50 +05:30
a8151176d7 SD 2.1 2022-12-08 10:04:33 +05:30
9ee0b7fe2e SD 2.1 2022-12-08 10:04:14 +05:30
fb6a7e04f5 Work-in-progress refactor of the backend, to move most of the logic to diffusion-kit and keeping this as a UI around that engine. Does not work yet. 2022-12-07 22:15:35 +05:30
bfdf487d52 SD2 models no longer need to be prefixed with 'sd2_' . The model loader now checks for a key that only SD2 models seem to have, to deduce which config file to use 2022-12-07 16:19:46 +05:30
b7aac1501d Don't show prompt strength when the app starts 2022-12-07 13:12:35 +05:30
273525e6f9 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-07 13:12:02 +05:30
064a4938c1 Don't show prompt strength when the app starts 2022-12-07 13:11:49 +05:30
182236e742 Hypernets mergefixes (#625)
* Add hypernetwork args definition in the engine.

* Add the values to reqBody

* Don't load hypernetwork.py with SD2 until it's compatible.
2022-12-07 12:35:36 +05:30
75cb052cca Paint editor - translucent mask, more brush size options 2022-12-07 12:28:28 +05:30
d4a378827f Paint editor - translucent mask, more brush size options 2022-12-07 12:27:40 +05:30
592d5e8c40 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-07 11:41:52 +05:30
733150111d Changelog 2022-12-07 11:41:36 +05:30
cbe91251ac Hypernetwork support (#619)
* Update README.md

* Update README.md

* Make on_sd_start.sh executable

* Merge pull request #542 from patriceac/patch-1

Fix restoration of model and VAE

* Merge pull request #541 from patriceac/patch-2

Fix restoration of parallel output setting

* Hypernetwork support

Adds support for hypernetworks. Hypernetworks are stored in /models/hypernetworks

* forgot to remove unused code

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
2022-12-07 11:24:16 +05:30
1283c6483d Use the reqBody exposed to events to allow plugins to change the request. (#620) 2022-12-07 09:34:04 +05:30
f24d3d69af Fix download pictures (#616)
Old link was broken. Apparently the "develop" branch was deleted.
2022-12-07 09:33:11 +05:30
7984327d81 Fixed tasks buttons by replacing the error with a warning when setting properties to undefined. (#618) 2022-12-06 21:49:05 +05:30
ef90832aea engine.js (#615)
* New engine.js first draft.

* Small fixes...

* Bump version for cache...

* Improved cancellation code.

* Cleaning

* Wrong argument used in Task.waitUntil

* session_id needs to always match SD.sessionId

* Removed passing explicit Session ID from UI.
Use SD.sessionID to replace.

* Cleaning... Removed a disabled line and a hardcoded value.

* Fix return if tasks are still waiting.

* Added checkbox to reverse processing order.

* Fixed progress not displaying properly.

* Renamed reverse label.

* Only hide progress bar inside onCompleted.

* Thanks to rbertus2000 for helping testing and debugging!

* Resolve async promises when used optionally.

* when removed var should have used let, not const.

* Renamed getTaskErrorHandler to onTaskErrorHandler to better reflect actual implementation.

* Switched to the unsafer and less git friendly end of lines comma as requested in review.

* Raised SERVER_STATE_VALIDITY_DURATION to 90 seconds to match the changes to Beta.

* Added logging.

* Added one more hook before those inside the SD engine.

* Added selftest.plugin.js as part of core.

* Removed a test that wasn't yet implemented...

* Grouped task stopping and abort into a single function.

* Added optional test for plugins.

* Allow prompt text to be selected.

* Added comment.

* Improved isServerAvailable for better mobile usage and added comments for easier debugging.

* Comments...

* Normalized EVENT_STATUS_CHANGED to follow the same pattern as the other events.

* Disable plugins if editorModifierTagsList is not defined.

* Adds a new ServiceContainer to register IOC handlers.

* Added expect test for a missing dependency in a ServiceContainer

* Moved all event code into its own subclass for easier reuse.

* Removed forgotten unused var...

* Allow getPrompts to be reused by plugins.

* Renamed EventSource to GenericEventSource to avoid redefining an existing class name.

* Added missing time argument to debounce

* Added output_quality to engine.js

* output_quality need to be an int.

* Fixed typo.

* Replaced the default euler_a by dpm2 to work with both SD1.# and SD2

* Remove generic completed tasks from plugins on generator complete.

* dpm2 starts at step 2, replaced with plms to start at step 1.

* Merge error

* Merge error

* changelog

Co-authored-by: Marc-Andre Ferland <madrang@gmail.com>
2022-12-06 17:04:08 +05:30
9571b8addc Merge pull request #614 from cmdr2/beta
Beta
2022-12-06 16:18:24 +05:30
9601f304a5 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-06 16:17:55 +05:30
ff43dac2a7 Open the color box when the custom label is clicked 2022-12-06 16:17:45 +05:30
0a43305455 Merge pull request #613 from cmdr2/beta
Beta
2022-12-06 16:11:38 +05:30
54d8224de2 Update CHANGES.md 2022-12-06 16:09:58 +05:30
c9e34457cd Tweak text in editor 2022-12-06 15:39:27 +05:30
47c8eb304f Revert the button styling 2022-12-06 15:36:52 +05:30
2dd39fa218 Disable auto-save for the auto-scroll toggle, until a better way to save it is figured out. It currently breaks a few UI fields, since it calls initSettings() a second time 2022-12-06 15:20:31 +05:30
cb618efb98 Image Editor Updates (#612)
* fixed tools for image editor to be more modular and made cursor an actual cursor change

* fixed eraser cursor positioning

* updated opacity to not have a 100 option

* separated clear into an actions section

* added history support for image editor. ctrl-z and ctrl-y both work now

* removed extra console log debugging stuff

* updated buttons style

* updated the button ui on the main page as requested

* updated with a bunch of bugfixes
2022-12-06 13:56:51 +05:30
e7ca8090fd Make JPEG Output quality user controllable (#607)
Add a slider to the image options for the JPEG quality
For PNG images, the slider is hidden.
2022-12-05 11:02:33 +05:30
7861c57317 Safetensor support (Fixes #599) (#608)
* safetensors support
Add support for checkpoints in safetensors format: https://github.com/huggingface/safetensors

This format shall be safer than pickle files

* pip install safetensors
2022-12-05 10:59:48 +05:30
f701b8dc29 Simplify onUpscaleClick (#602)
* Simplified onUpscaleClick code.

* Updated fix with comment as to what it's fixing.

* Move the fix to enqueueImageVariationTask
2022-12-05 10:46:10 +05:30
bd10a850fa Fix upscaling when a source image is set (#593)
* Fix upscaling when a source image is set

If you have an image selected (img2img) then clicking Upscale on another unrelated image, the image for img2img is used and you get something very unexpected.

* Fix for img2img and mask gens
2022-12-03 22:25:14 +05:30
0f96688a54 Highlight artist modifiers when clicked (#596)
Artist modifiers, with the exception of Artstation (the first one), don't have the outline when selected. All the other modifiers, above or below, seem to work as intended

https://discord.com/channels/1014774730907209781/1014774732018683927/1048343258775949322
2022-12-03 22:18:57 +05:30
8eeca90d55 Fix weird scrolling when using a pen (#588)
With a pen, typing on a browser page, waiting a short moment, and then moving the pen scrolls the page.
Call event.preventDefault() to disable this default behaviour for events in the canvas area.
2022-12-02 14:40:21 +05:30
367e7f7065 Add dpm2 (#592)
* Move cond_stage_model to the right device

* Removed unused vars.

* Added 'dpm2'
2022-12-02 12:58:00 +05:30
ee19eaae62 Fix for RuntimeError, missing lines. (#591)
* Move cond_stage_model to the right device

* Removed unused vars.
2022-12-02 12:57:26 +05:30
8eb3a3536b Update on_sd_start.bat 2022-12-02 12:06:41 +05:30
cfd50231e1 Update on_sd_start.sh 2022-12-02 12:06:39 +05:30
1c8ab9e1b4 Temporarily set the display: flex style only on the image editor buttons 2022-12-01 16:59:12 +05:30
6094cd8578 Fix the 'load from file' button that had moved to the next line 2022-12-01 16:10:20 +05:30
353c49a40b Bump version 2022-12-01 16:05:35 +05:30
277140f218 Image Editor (#574)
* started implementing hamunii's image editor, and added a hamunii theme

* fixed so active tab is main tab

* added some testing stuff for image editor

* re-implemented canvas drawing myself. just need to add layer stuff now

* moved everything to an image editor class and implement it so it actually works nicely now

* fixed a couple weird bugs and cleaned up the background image and sharpness stuff

* cleaned up a lot of stuff about the editor, added tools, buttons, made it mostly work in the current ui

* added inpainting support

* updated with more nice changes/updates to the inpainting and drawing editor

* made some more fixes and touchups to the image editor

* removed a bunch of semicolons

* remove old image inpainting system

* updated to work properly on mobile

* made a minor bugfix

* fixed img_size_box alignment

* Update index.html

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
2022-12-01 16:01:09 +05:30
ca9413ccf4 Toggle image modifiers plugin (#558)
* Toggle image modifiers plugin

Right-click on image modifiers to temporarily turn them off without removing them. To quickly iterate and experiment with various combinations.

Please note this plugin required a minor tweak in getPrompts() to add support for image modifier inactive state.

* Fix tag matching

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
2022-12-01 15:10:36 +05:30
c9a0d090cb Merge pull request #569 from patriceac/Fix-seed-behavior
Tweak the seed behavior
2022-12-01 15:03:21 +05:30
1cd783d3a3 Merge pull request #534 from patriceac/Custom-modifiers-as-a-plugin
Custom modifiers as a plugin
2022-12-01 14:59:29 +05:30
1ead764a02 Merge branch 'beta' into Custom-modifiers-as-a-plugin 2022-12-01 14:57:39 +05:30
45f7b35954 Merge pull request #581 from patriceac/Hotfix-for-repeat-image-modifiers-handling
Hotfix for repeat image modifiers
2022-12-01 14:47:15 +05:30
6a41540749 Remove unused scripts from the previous installer 2022-12-01 14:44:20 +05:30
5b47da67f6 Merge pull request #582 from cmdr2/beta
Beta
2022-12-01 13:59:13 +05:30
292f68ff97 Typo in css path 2022-12-01 13:57:38 +05:30
3b554d881a Styling changes for the confirm dialog 2022-12-01 13:54:49 +05:30
40ebf468d3 Hotfix for repeat image modifiers
As per Discord conversation, this PR fixed the image modifiers behavior when a modifier appears more than once, and also fixes a regression introduced by ((weighted modifiers)).
2022-11-30 22:13:13 -08:00
4bc6e51862 Merge pull request #580 from JeLuF/patch-5
Add Quadro T2000 to force_full_precision list.
2022-12-01 10:35:32 +05:30
427861cf13 Add Quadro T2000 to force_full_precision list. 2022-12-01 00:59:12 +01:00
da3e7a2eb8 Fix the broken image close button 2022-11-30 21:14:18 +05:30
2979f04c82 Use socket.gethostname() instead of socket.getfqdn() 2022-11-30 20:17:18 +05:30
1949d8a50c Tweak modifiers help msg 2022-11-30 16:32:43 +05:30
ee66c799e0 Merge pull request #563 from patriceac/Mouse-wheel-behavior-fixes
Improved Mouse Wheel UX with Image Modifiers
2022-11-30 16:24:05 +05:30
7c50b8bf94 Merge branch 'beta' into Mouse-wheel-behavior-fixes 2022-11-30 16:22:45 +05:30
141ff74ece Merge pull request #557 from madrang/webmanifest
Added web manifest to allow installing the Url as a web app.
2022-11-30 16:19:04 +05:30
321e5f1ed6 Merge pull request #564 from patriceac/Fix-UI-display-when-removing-the-last-task
Fix UI display when removing the last task
2022-11-30 16:14:59 +05:30
6d131d9d8e Merge branch 'beta' into Fix-UI-display-when-removing-the-last-task 2022-11-30 16:14:28 +05:30
7e69b8eb31 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-30 16:11:22 +05:30
4e0b33e6a4 Merge pull request #566 from patriceac/Visual-feedback-on-buttons
Visual feedback on button click
2022-11-30 16:11:08 +05:30
54f7e6fcb8 SD2 fix - register buffer on the correct device 2022-11-30 16:05:06 +05:30
529169c4da Merge pull request #541 from patriceac/patch-2
Fix restoration of parallel output setting
2022-11-30 15:54:04 +05:30
a2c8c99215 Merge pull request #541 from patriceac/patch-2
Fix restoration of parallel output setting
2022-11-30 15:53:30 +05:30
e8bf3fd009 Merge pull request #542 from patriceac/patch-1
Fix restoration of model and VAE
2022-11-30 15:52:26 +05:30
465676e9ea Merge pull request #542 from patriceac/patch-1
Fix restoration of model and VAE
2022-11-30 15:51:31 +05:30
af53b57047 Changelog 2022-11-30 15:49:47 +05:30
54b5f75905 Rename auto-scroll to reflect its purpose 2022-11-30 15:47:24 +05:30
4348333497 Don't register listeners for an autosave setting, if they've already been registered 2022-11-30 15:45:30 +05:30
cc31110bcf Merge pull request #537 from patriceac/Generate-screen-layout
Auto-scroll plugin
2022-11-30 15:44:31 +05:30
f7c04bf7a6 bump version 2022-11-30 14:34:42 +05:30
029509ebad Unify IP info with devices, into a system_info table 2022-11-30 14:34:24 +05:30
65102bb64d Merge pull request #536 from JeLuF/serverip
Show network addresses in system settings
2022-11-30 14:00:18 +05:30
b96b55c5ce Merge branch 'beta' into serverip 2022-11-30 14:00:12 +05:30
1f5aba010e Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into webmanifest
# Conflicts:
#	ui/index.html
2022-11-30 03:29:46 -05:00
f0b3bea4e3 Also confirm before the 'Stop All' button acts; Tweak wording of confirm dialog 2022-11-30 13:54:42 +05:30
84fae2d9e0 Merge pull request #531 from JeLuF/confirm
Confirm 'Clear All' and 'Stop Task'
2022-11-30 13:48:14 +05:30
0b96fa112d Merge branch 'beta' into confirm 2022-11-30 13:47:08 +05:30
c64bcd23d3 Picklescanner is mandatory 2022-11-30 13:38:22 +05:30
efd9a22bb5 Merge pull request #530 from madrang/list-models
Scan model once as start, then only if changed.
2022-11-30 13:37:27 +05:30
159c3edfe3 Simplify the logic for toggling modifier cards, no need to loop through the cards, since we already have the card object in hand 2022-11-30 13:33:20 +05:30
f74fa8657b Merge pull request #518 from patriceac/patch-6
Fix duplicate custom modifiers activation states
2022-11-30 13:27:14 +05:30
648b142a4b Merge pull request #571 from madrang/tabs-css
Add a new css rule for screens smaller than 500px.
2022-11-30 13:24:38 +05:30
426f92595e Merge pull request #520 from madrang/fix-gfpgan
Fix the gfpgan fix for multi-gpu
2022-11-30 13:08:10 +05:30
82a8d9b644 Merge pull request #577 from cmdr2/beta
Beta
2022-11-30 12:19:44 +05:30
ff9430b8a2 Tabs to 4 spaces 2022-11-30 12:18:34 +05:30
2e69ffcb5e Merge pull request #576 from cmdr2/beta
v2.4.16 - Remove the use of git-apply
2022-11-30 12:12:17 +05:30
0ea38db7ef Show the SD 2.0 setting only to beta users 2022-11-30 12:05:46 +05:30
a69d4c279e Make seed field behavior deterministic
Copying the image settings while 'Random' is enabled would cause the seed to be randomized. This was misleading as what I see wasn't what I would get.
2022-11-29 19:04:42 -08:00
2706149399 Tweak left padding of editor panel 2022-11-29 15:27:13 +05:30
3d0cdc1cb6 Bump version 2022-11-29 13:32:29 +05:30
ac605e9352 Typos and minor fixes for sd 2 2022-11-29 13:30:08 +05:30
5432297691 Default to sd-v1-4 when trying to use a SD2 model with SD 1.4, and warn the user. This will eventually be unnecessary 2022-11-29 13:14:58 +05:30
e37be0f954 Remove the need to use yield in the core loop for streaming results. This removes the need to patch the Stable Diffusion code, which can be fragile 2022-11-29 13:03:57 +05:30
a99209b674 Add a new css rule for screens smaller than 500px. 2022-11-28 20:23:17 -05:00
cb02b5ba18 Merge pull request #567 from madrang/tabs-css
Improved tabs flow on small screens.
2022-11-28 18:27:29 +05:30
69f14edd80 Tweak the seed behavior
Update the seed *before* starting the processing, so interrupting the processing retains the seed being used for the batch being currently processed.

The idea behind that is that if I like the gen I'm currently seeing and want to build on top of it, I can create a new task with the same seed without having to wait for the current task to complete.
2022-11-28 01:19:31 -08:00
14714b950d Slight improvement of detection logic 2022-11-28 00:14:12 -08:00
13654cb8c0 Make on_sd_start.sh executable 2022-11-28 13:00:02 +05:30
00276228cf Make on_sd_start.sh executable 2022-11-28 12:59:33 +05:30
8583bb8d7b Improved tabs flow on small screens. 2022-11-27 20:37:20 -05:00
d48951fe00 Visual feedback on button click
When there are too many tasks and the top of the list is not visible, there is no visual feedback that a task has been successfully added to the queue.

Adding a subtle visual feedback on buttons upon click to reflect that the mouse event was taken into account.
2022-11-27 16:26:01 -08:00
99bdcfa0a5 Set theme-color from the current selected theme. 2022-11-27 15:49:54 -05:00
e64e1a92e6 Fix UI display when removing the last task
Clear All button properly shows the "welcome message", but Remove the last task would just result in a blank Preview pane.
2022-11-27 12:42:51 -08:00
e278e639a3 Fix removal of image modifiers with non-zero weights
Properly handles removal of image modifiers that had (((modifiers))) or [[[modifiers]]] updated at runtime.
2022-11-27 03:00:19 -08:00
c4bad5c454 Conciseness
Shortening the sentence.
2022-11-27 01:42:39 -08:00
da41a74efc Require Ctrl+Mouse Wheel for modifier weight adjustment
The current behavior is just too annoying, and scrolling the page is a much more frequent activity than tweaking the weights.
2022-11-27 01:35:47 -08:00
0dc970562a Reverting this unnecessary change 2022-11-26 18:27:04 -08:00
2d8401473d Revert "Update custom-modifiers.plugin.js"
This reverts commit e5c11ea214.
2022-11-26 16:57:54 -08:00
9c91f57b19 Added web manifest to allow installing the Url as a web app. 2022-11-26 15:51:26 -05:00
f14afcd129 Update README.md 2022-11-26 12:44:19 +05:30
5c1a3d82d7 Update README.md 2022-11-26 12:23:03 +05:30
e02a917569 Improved logic for auto-scroll toggle insertion
Updating the insertion logic to prepare for future UI improvements.
2022-11-25 22:51:49 -08:00
347fa0fda1 Update on_sd_start.bat 2022-11-26 01:50:30 +05:30
6510d4cb02 Merge pull request #553 from cmdr2/revert-552-beta
Revert "Patching patch again"
2022-11-26 01:45:57 +05:30
91e4ccf6f8 Update on_sd_start.bat 2022-11-26 01:43:41 +05:30
36249874bc Revert "Patching patch again" 2022-11-26 01:42:16 +05:30
d2b5d6cce9 Merge pull request #552 from jsuelwald/beta
Patching patch again
2022-11-26 01:40:02 +05:30
b2922741c9 Patching patch again 2022-11-25 21:06:19 +01:00
300f3e27db Merge pull request #551 from cmdr2/revert-549-patch-2
Revert "Update ddim_callback_sd2.patch"
2022-11-26 01:25:17 +05:30
d7330b80a9 Revert "Update ddim_callback_sd2.patch" 2022-11-26 01:22:35 +05:30
acdd7667b7 Merge pull request #549 from jsuelwald/patch-2
Update ddim_callback_sd2.patch
2022-11-26 01:19:08 +05:30
8114fa3f5d Update ddim_callback_sd2.patch 2022-11-25 20:46:24 +01:00
4bc5508f38 Rollback 2022-11-26 01:07:55 +05:30
e503c6092e Ddim decode for img2img 2022-11-26 00:55:39 +05:30
6a8985d8dd Update ddim_callback_sd2.patch 2022-11-26 00:49:15 +05:30
bee67fd883 Shape 2022-11-25 23:54:08 +05:30
a1d75d40aa Update runtime.py 2022-11-25 23:36:43 +05:30
29484867ca Typo 2022-11-25 23:32:56 +05:30
7fa983b971 Img2img sd2 attempt 2 2022-11-25 23:28:31 +05:30
617a8b2814 Fix for make_schedule error in sd2 2022-11-25 23:15:22 +05:30
b924d323d4 img2img attempt for sd2 2022-11-25 22:36:02 +05:30
a2efda41d3 Cleaning up the code 2022-11-25 03:50:47 -08:00
642c114501 Working txt2img 2022-11-25 14:29:24 +05:30
02dd3e457d Tweaks to load sd1 models in sd2 code, typos 2022-11-25 13:57:15 +05:30
ea7b28c9d5 Placeholder changes for SD 2.0 support, haven't tested yet 2022-11-25 12:17:44 +05:30
472ab4a9ce Fix restoration of parallel output setting 2022-11-24 14:15:27 -08:00
fca84e3edf Fix restoration of model and VAE
😅
2022-11-24 13:47:35 -08:00
b70235ff92 Set the PYTHONPATH in the developer console, before the prompt shows up 2022-11-24 11:48:27 +05:30
6eff591df7 System settings to disable the 'Are you sure?'-dialogs 2022-11-23 23:05:30 +01:00
d0b2bf736e Auto-scroll off by default 2022-11-23 03:23:51 -08:00
e5c11ea214 Update custom-modifiers.plugin.js
Removing the redundant initialization of the array.
2022-11-23 03:00:19 -08:00
6b6443406d Create Autoscroll.plugin.js 2022-11-23 02:57:07 -08:00
3452d7852a Merge branch 'beta' into serverip 2022-11-23 11:28:05 +01:00
f1fa10badd Show network addresses in system settings
Users sometimes struggle to get the IP address of their PC. This PR adds a button to the system settings pane that will list the server's IP
addresses.
2022-11-23 11:25:36 +01:00
1267621424 Merge pull request #535 from cmdr2/beta
Switch to new custom backend
2022-11-23 15:09:09 +05:30
8a0ec95fe1 Merge branch 'main' into beta 2022-11-23 15:08:34 +05:30
ba30a63407 Update custom-modifiers.plugin.js
Add a carriage return at the end
2022-11-22 23:07:44 -08:00
c56a2adbcb Custom modifiers as a plugin 2022-11-22 19:04:20 -08:00
2de96d4dc9 Scan model once as start, then only if changed. 2022-11-22 20:41:08 -05:00
a486f20892 Merge branch 'beta' into confirm 2022-11-22 21:33:18 +01:00
49535deb2e Confirm 'Clear All' and 'Stop Task'
Ask for a confirmation before clearing the results pane or stopping a render task. The dialog can be skipped by holding down the shift key while clicking on the button.
2022-11-22 21:27:36 +01:00
7cbf62cf12 Revert whitespace fix 2022-11-22 23:30:03 +05:30
3b0ace3410 Revert whitespace fix 2022-11-22 23:27:46 +05:30
5a9c8e1d87 Warn but don't fix whitespaces in a patch 2022-11-22 23:21:11 +05:30
daaa65dc0a Warn but don't fix whitespaces in a patch 2022-11-22 23:20:24 +05:30
ab4e371524 Fix whitespace during git apply 2022-11-22 22:25:36 +05:30
927fd304b0 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-22 22:22:07 +05:30
5af84b8e90 Fix whitespace during git apply 2022-11-22 22:21:54 +05:30
d425dac499 Merge pull request #529 from madrang/dragNdrop
Fixing file drag and drop.
2022-11-22 21:57:34 +05:30
d056459e76 Merge pull request #529 from madrang/dragNdrop
Fixing file drag and drop.
2022-11-22 21:56:07 +05:30
3169485f33 Fixing file drag and drop. 2022-11-22 11:11:06 -05:00
d9b9f80a93 diffusion-kit upgrade 2022-11-22 17:39:51 +05:30
d429505b71 Update version of diffusion-kit 2022-11-22 17:14:20 +05:30
72ee708917 Remove the need to install realesrgan, gfpgan and certain specific package versions, since the new backend should install them directly 2022-11-22 16:50:10 +05:30
93bbfac29a Change the backend to a custom fork of SD, since basujindal's fork is no longer under development. This fork is intended to include the common models/tools used like RealESRGAN, GFPGAN, Codeformer etc, and is meant to be a community-developed project 2022-11-22 16:38:39 +05:30
040d7a6563 Merge pull request #528 from patriceac/patch-1
Add support for custom modifiers to d&d and clipboard
2022-11-22 16:06:26 +05:30
e8dd930a50 Add support for custom modifiers to d&d and clipboard
Add support for custom modifiers to d&d and clipboard and remove now-redundant code in restoreTaskToUI.
2022-11-22 00:06:43 -08:00
31c049ebfe Version css 2022-11-22 11:09:01 +05:30
d343a37fb2 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-22 11:08:07 +05:30
7097175c6f CSS tweak for logo and version 2022-11-22 11:07:50 +05:30
8e57c49043 Merge pull request #527 from cmdr2/beta
Beta
2022-11-22 11:00:25 +05:30
9f036ceefd Merge branch 'main' into beta 2022-11-22 10:59:51 +05:30
ff3ca8b36b link to new downloads 2022-11-22 10:48:43 +05:30
87a7b70a27 Shell error code check 2022-11-22 10:40:20 +05:30
9c71c966ca Shell error code check 2022-11-22 10:39:47 +05:30
6dc99e676e Reduce the width of the editor sidebar, regression 2022-11-21 18:45:37 +05:30
3bf5e11f94 Nowarn for fresh installation (git apply whitespace) 2022-11-21 17:19:55 +05:30
eef9af2266 Typo 2022-11-21 17:14:54 +05:30
8316a002da Don't warn about whitespace in the git patch application 2022-11-21 17:11:38 +05:30
c3bf767024 Merge pull request #525 from cmdr2/beta
Beta
2022-11-21 14:08:47 +05:30
0a21a69a9f Updated facexlib fix for usage on multi-gpu. 2022-11-20 13:04:22 -05:00
cbc48e31e1 Fix duplicate custom modifiers activation states
Fixing activation state for custom modifier cards sharing the same tag where only one of the cards gets (de)activated.
2022-11-19 19:25:28 -08:00
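
Several of the commits above describe how the server's network addresses are gathered for the system settings pane: JeLuF's "Show network addresses in system settings", "Return empty list if hostname lookup fails", and the switch from `socket.getfqdn()` to `socket.gethostname()`. A minimal sketch of that approach in Python, using only the standard library; the helper name is hypothetical and the project's actual implementation may differ:

```python
import socket

def list_server_ips():
    # Hypothetical helper illustrating the approach described in the commits:
    # resolve the local hostname (gethostname, not getfqdn) to its IP addresses,
    # and return an empty list instead of crashing if the lookup fails.
    try:
        hostname = socket.gethostname()
        return socket.gethostbyname_ex(hostname)[2]  # third element = list of IPv4 addresses
    except socket.gaierror:
        return []

print(list_server_ips())  # e.g. ['192.168.1.23'], or [] if the lookup fails
```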
70 changed files with 18208 additions and 3457 deletions

3rd-PARTY-LICENSES (new file)

@@ -0,0 +1,27 @@
jquery-confirm
==============
https://craftpip.github.io/jquery-confirm/
jquery-confirm is licensed under the MIT license:
The MIT License (MIT)
Copyright (c) 2019 Boniface Pereira
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

CHANGES.md

@@ -1,8 +1,28 @@
 # What's new?
+## v2.5
+### Major Changes
+- **Nearly twice as fast** - significantly faster speed of image generation. We're now pretty close to automatic1111's speed. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast
+- **Full support for Stable Diffusion 2.1** - supports loading v1.4 or v2.0 or v2.1 models seamlessly. No need to enable "Test SD2", and no need to add `sd2_` to your SD 2.0 model file names.
+- **Memory optimized Stable Diffusion 2.1** - you can now use 768x768 models for SD 2.1, with the same low VRAM optimizations that we've always had for SD 1.4.
+- **6 new samplers!** - explore the new samplers, some of which can generate great images in less than 10 inference steps!
+- **Model Merging** - You can now merge two models (`.ckpt` or `.safetensors`) and output `.ckpt` or `.safetensors` models, optionally in `fp16` precision. Details: https://github.com/cmdr2/stable-diffusion-ui/wiki/Model-Merging
+- **Fast loading/unloading of VAEs** - No longer needs to reload the entire Stable Diffusion model, each time you change the VAE
+- **Database of known models** - automatically picks the right configuration for known models. E.g. we automatically detect and apply "v" parameterization (required for some SD 2.0 models), and "fp32" attention precision (required for some SD 2.1 models).
+- **Color correction for img2img** - an option to preserve the color profile (histogram) of the initial image. This is especially useful if you're getting red-tinted images after inpainting/masking.
+- **Three GPU Memory Usage Settings** - `High` (fastest, maximum VRAM usage), `Balanced` (default - almost as fast, significantly lower VRAM usage), `Low` (slowest, very low VRAM usage). The `Low` setting is applied automatically for GPUs with less than 4 GB of VRAM.
+- **Save metadata as JSON** - You can now save the metadata files as either text or json files (choose in the Settings tab).
+- **Major rewrite of the code** - Most of the codebase has been reorganized and rewritten, to make it more manageable and easier for new developers to contribute features. We've separated our core engine into a new project called `sdkit`, which allows anyone to easily integrate Stable Diffusion (and related modules like GFPGAN etc) into their programming projects (via a simple `pip install sdkit`): https://github.com/easydiffusion/sdkit/
+- **Name change** - Last, and probably the least, the UI is now called "Easy Diffusion". It indicates the focus of this project - an easy way for people to play with Stable Diffusion.
+Our focus continues to remain on an easy installation experience, and an easy user-interface. While still remaining pretty powerful, in terms of features and speed.
 ## v2.4
 ### Major Changes
-- **Automatic scanning for malicious model files** - using `picklescan`. Thanks @JeLuf
+- **Allow reordering the task queue** (by dragging and dropping tasks). Thanks @madrang
+- **Automatic scanning for malicious model files** - using `picklescan`, and support for `safetensor` model format. Thanks @JeLuf
+- **Image Editor** - for drawing simple images for guiding the AI. Thanks @mdiller
+- **Use pre-trained hypernetworks** - for improving the quality of images. Thanks @C0bra5
 - **Support for custom VAE models**. You can place your VAE files in the `models/vae` folder, and refresh the browser page to use them. More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder
 - **Experimental support for multiple GPUs!** It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. For e.g. open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, otherwise let it automatically pick the best GPUs. Thanks @madrang . More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Run-on-Multiple-GPUs
 - **Cleaner UI design** - Show settings and help in new tabs, instead of dropdown popups (which were buggy). Thanks @mdiller
@@ -19,8 +39,32 @@
 - Configuration to prevent the browser from opening on startup
 - Lots of minor bug fixes
 - A `What's New?` tab in the UI
+- Ask for a confirmation before clearing the results pane or stopping a render task. The dialog can be skipped by holding down the shift key while clicking on the button.
+- Show the network addresses of the server in the systems setting dialog
+- Support loading models in the safetensor format, for improved safety
 ### Detailed changelog
+* 2.4.21 - 23 Dec 2022 - Speed up image creation, by removing a delay (regression) of 4-5 seconds between clicking the `Make Image` button and calling the server.
+* 2.4.20 - 22 Dec 2022 - `Pause All` button to pause all the pending tasks. Thanks @JeLuf
+* 2.4.20 - 22 Dec 2022 - `Undo`/`Redo` buttons in the image editor. Thanks @JeLuf
+* 2.4.20 - 22 Dec 2022 - Drag handle to reorder the tasks. This fixed a bug where the metadata was no longer selectable (for copying). Thanks @JeLuf
+* 2.4.19 - 17 Dec 2022 - Add Undo/Redo buttons in the Image Editor. Thanks @JeLuf
+* 2.4.19 - 10 Dec 2022 - Show init img in task list
+* 2.4.19 - 7 Dec 2022 - Use pre-trained hypernetworks while generating images. Thanks @C0bra5
+* 2.4.19 - 6 Dec 2022 - Allow processing new tasks first. Thanks @madrang
+* 2.4.19 - 6 Dec 2022 - Allow reordering the task queue (by dragging tasks). Thanks @madrang
+* 2.4.19 - 6 Dec 2022 - Re-organize the code, to make it easier to write user plugins. Thanks @madrang
+* 2.4.18 - 5 Dec 2022 - Make JPEG Output quality user controllable. Thanks @JeLuf
+* 2.4.18 - 5 Dec 2022 - Support loading models in the safetensor format, for improved safety. Thanks @JeLuf
+* 2.4.18 - 1 Dec 2022 - Image Editor, for drawing simple images for guiding the AI. Thanks @mdiller
+* 2.4.18 - 1 Dec 2022 - Disable an image modifier temporarily by right-clicking it. Thanks @patriceac
+* 2.4.17 - 30 Nov 2022 - Scroll to generated image. Thanks @patriceac
+* 2.4.17 - 30 Nov 2022 - Show the network addresses of the server in the systems setting dialog. Thanks @JeLuf
+* 2.4.17 - 30 Nov 2022 - Fix a bug where GFPGAN wouldn't work properly when multiple GPUs tried to run it at the same time. Thanks @madrang
+* 2.4.17 - 30 Nov 2022 - Confirm before stopping or clearing all the tasks. Thanks @JeLuf
+* 2.4.16 - 29 Nov 2022 - Bug fixes for SD 2.0 - remove the need for patching, default to SD 1.4 model if trying to load an SD2 model in SD1.4.
+* 2.4.15 - 25 Nov 2022 - Experimental support for SD 2.0. Uses lots of memory, not optimized, probably GPU-only.
+* 2.4.14 - 22 Nov 2022 - Change the backend to a custom fork of Stable Diffusion
 * 2.4.13 - 21 Nov 2022 - Change the modifier weight via mouse wheel, drag to reorder selected modifiers, and some more modifier-related fixes. Thanks @patriceac
 * 2.4.12 - 21 Nov 2022 - Another fix for improving how long images take to generate. Reduces the time taken for an enqueued task to start processing.
 * 2.4.11 - 21 Nov 2022 - Installer improvements: avoid crashing if the username contains a space or special characters, allow moving/renaming the folder after installation on Windows, whitespace fix on git apply
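
The v2.5 notes above advertise `pip install sdkit` as the way to embed the same engine in other Python projects. A rough sketch of what that integration can look like, following the pattern in sdkit's published example; the exact function names (`load_model`, `generate_images`) and the model path are assumptions to verify against the sdkit documentation:

```python
import sdkit
from sdkit.models import load_model
from sdkit.generate import generate_images

context = sdkit.Context()

# Placeholder path - point this at a real .ckpt or .safetensors checkpoint.
context.model_paths["stable-diffusion"] = "models/stable-diffusion/sd-v1-4.ckpt"
load_model(context, "stable-diffusion")

# generate_images() returns a list of PIL images.
images = generate_images(
    context,
    prompt="a photograph of an astronaut riding a horse",
    seed=42,
    width=512,
    height=512,
)
images[0].save("astronaut.png")
```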

View File

@@ -6,7 +6,7 @@ Thanks
 # For developers:
-If you would like to contribute to this project, there is a discord for dicussion:
+If you would like to contribute to this project, there is a discord for discussion:
 [![Discord Server](https://badgen.net/badge/icon/discord?icon=discord&label)](https://discord.com/invite/u9yhsFmEkB)
 ## Development environment for UI (frontend and server) changes

README.md

@@ -1,66 +1,107 @@
 # Stable Diffusion UI
-### Easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.
+### The easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community.
-[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](Troubleshooting.md)
+[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting)
+### New:
+Experimental support for Stable Diffusion 2.0 is available in beta!
 ----
-## Step 1: Download the installer
+# Step 1: Download and prepare the installer
+Click the download button for your operating system:
 <p float="left">
-<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-win.png" width="200" /></a>
+<a href="https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.4.13/stable-diffusion-ui-windows.zip"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-win.png" width="200" /></a>
-<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-linux.png" width="200" /></a>
+<a href="https://github.com/cmdr2/stable-diffusion-ui#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-linux.png" width="200" /></a>
 </p>
-## Step 2: Run the program
-- On Windows: Double-click `Start Stable Diffusion UI.cmd`
-- On Linux: Run `./start.sh` in a terminal
-## Step 3: There is no step 3!
-It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.
-The installer will take care of whatever is needed. A friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) will help you if you face any problems.
+## On Windows:
+1. Unzip/extract the folder `stable-diffusion-ui` which should be in your downloads folder, unless you changed your default downloads destination.
+2. Move the `stable-diffusion-ui` folder to your `C:` drive (or any other drive like `D:`, at the top root level). `C:\stable-diffusion-ui` or `D:\stable-diffusion-ui` as examples. This will avoid a common problem with Windows (file path length limits).
+## On Linux:
+1. Unzip/extract the folder `stable-diffusion-ui` which should be in your downloads folder, unless you changed your default downloads destination.
+2. Open a terminal window, and navigate to the `stable-diffusion-ui` directory.
+# Step 2: Run the program
+## On Windows:
+Double-click `Start Stable Diffusion UI.cmd`.
+If Windows SmartScreen prevents you from running the program click `More info` and then `Run anyway`.
+## On Linux:
+Run `./start.sh` (or `bash start.sh`) in a terminal.
+The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance.
+# Step 3: There is no Step 3. It's that simple!
+**To Uninstall:** Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
 ----
 # Easy for new users, powerful features for advanced users
-### Features:
-- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
-- **Clutter-free UI**: a friendly and simple UI, while providing a lot of powerful features
-- Supports "*Text to Image*" and "*Image to Image*"
-- **Custom Models**: Use your own `.ckpt` file, by placing it inside the `models/stable-diffusion` folder!
-- **Live Preview**: See the image as the AI is drawing it
-- **Task Queue**: Queue up all your ideas, without waiting for the current task to finish
-- **In-Painting**: Specify areas of your image to paint into
-- **Face Correction (GFPGAN) and Upscaling (RealESRGAN)**
-- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
-- **Loopback**: Use the output image as the input image for the next img2img task
+## Features:
+### User experience
+- **Hassle-free installation**: Does not require technical knowledge, does not require pre-installed software. Just download and run!
+- **Clutter-free UI**: A friendly and simple UI, while providing a lot of powerful features.
+### Image generation
+- **Supports**: "*Text to Image*" and "*Image to Image*".
+- **In-Painting**: Specify areas of your image to paint into.
+- **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program.
+- **Face Correction (GFPGAN)**
+- **Upscaling (RealESRGAN)**
+- **Loopback**: Use the output image as the input image for the next img2img task.
 - **Negative Prompt**: Specify aspects of the image to *remove*.
-- **Attention/Emphasis:** () in the prompt increases the model's attention to enclosed words, and [] decreases it
-- **Weighted Prompts:** Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`
-- **Prompt Matrix:** (in beta) Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`
-- **Lots of Samplers:** ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
-- **Multiple Prompts File:** Queue multiple prompts by entering one prompt per line, or by running a text file
+- **Attention/Emphasis**: () in the prompt increases the model's attention to enclosed words, and [] decreases it.
+- **Weighted Prompts**: Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`.
+- **Prompt Matrix**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`.
+- **Lots of Samplers**: ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms.
+- **1-click Upscale/Face Correction**: Upscale or correct an image after it has been generated.
- **NSFW Setting**: A setting in the UI to control *NSFW content* - **Make Similar Images**: Click to generate multiple variations of a generated image.
- **JPEG/PNG output** - **NSFW Setting**: A setting in the UI to control *NSFW content*.
- **Save generated images to disk** - **JPEG/PNG output**: Multiple file formats.
### Advanced features
- **Custom Models**: Use your own `.ckpt` or `.safetensors` file, by placing it inside the `models/stable-diffusion` folder!
- **Stable Diffusion 2.0 support (experimental)**: available in the beta channel.
- **Use custom VAE models**
- **Use pre-trained Hypernetworks**
- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Plugins), or write your own plugin to add features to the project!
### Performance and security
- **Low Memory Usage**: Creates 512x512 images with less than 4GB of GPU RAM!
- **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU. - **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
- **Multi-GPU support**: Automatically spreads your tasks across multiple GPUs (if available), for faster performance!
- **Auto scan for malicious models**: Uses picklescan to prevent malicious models.
- **Safetensors support**: Support loading models in the safetensor format, for improved safety.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project. - **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- **Low Memory Usage**: Creates 512x512 images with less than 4GB of VRAM!
- **Developer Console**: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment. - **Developer Console**: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
### Easy for new users: ### Usability:
- **Live Preview**: See the image as the AI is drawing it.
- **Task Queue**: Queue up all your ideas, without waiting for the current task to finish.
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **Multiple Prompts File**: Queue multiple prompts by entering one prompt per line, or by running a text file.
- **Save generated images to disk**: Save your images to your PC!
- **UI Themes**: Customize the program to your liking.
**(and a lot more)**
----
## Easy for new users:
![Screenshot of the initial UI](media/shot-v10-simple.jpg?raw=true) ![Screenshot of the initial UI](media/shot-v10-simple.jpg?raw=true)
### Powerful features for advanced users: ## Powerful features for advanced users:
![Screenshot of advanced settings](media/shot-v10.jpg?raw=true) ![Screenshot of advanced settings](media/shot-v10.jpg?raw=true)
### Live Preview ## Live Preview
Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering. Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.
![live-512](https://user-images.githubusercontent.com/844287/192097249-729a0a1e-a677-485e-9ccc-16a9e848fabe.gif) ![live-512](https://user-images.githubusercontent.com/844287/192097249-729a0a1e-a677-485e-9ccc-16a9e848fabe.gif)
### Task Queue ## Task Queue
![Screenshot of task queue](media/task-queue-v1.jpg?raw=true) ![Screenshot of task queue](media/task-queue-v1.jpg?raw=true)
# System Requirements # System Requirements
@ -70,23 +111,10 @@ Useful for judging (and stopping) an image quickly, without waiting for it to fi
You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed. You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.
# Installation ----
1. **Download** [for Windows](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.3.5/stable-diffusion-ui-windows.zip) or [for Linux](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.3.5/stable-diffusion-ui-linux.zip).
2. **Extract**:
- For Windows: After unzipping the file, please move the `stable-diffusion-ui` folder to your `C:` (or any drive like D:, at the top root level), e.g. `C:\stable-diffusion-ui`. This will avoid a common problem with Windows (file path length limits).
- For Linux: After extracting the .tar.xz file, please open a terminal, and go to the `stable-diffusion-ui` directory.
3. **Run**:
- For Windows: `Start Stable Diffusion UI.cmd` by double-clicking it.
- For Linux: In the terminal, run `./start.sh` (or `bash start.sh`)
This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.
**To Uninstall:** Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
# How to use? # How to use?
Please use our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI. Please refer to our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI.
# Bug reports and code contributions welcome # Bug reports and code contributions welcome
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues). If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
@ -102,4 +130,11 @@ If you have any code contributions in mind, please feel free to say Hi to us on
# Disclaimer # Disclaimer
The authors of this project are not responsible for any content generated using this interface. The authors of this project are not responsible for any content generated using this interface.
The license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation, or target vulnerable groups. For the full list of restrictions please read [the license](LICENSE). You agree to these terms by using this software. The license of this software forbids you from sharing any content that:
- Violates any laws.
- Produces any harm to a person or persons.
- Disseminates (spreads) any personal information that would be meant for harm.
- Spreads misinformation.
- Targets vulnerable groups.
For the full list of restrictions, please read [the License](LICENSE). You agree to these terms by using this software.

View File

@ -1 +0,0 @@
Moved to https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting

View File

@ -23,12 +23,21 @@ call conda --version
echo. echo.
@rem activate the environment @rem activate the legacy environment (if present) and set PYTHONPATH
call conda activate .\stable-diffusion\env if exist "installer_files\env" (
set PYTHONPATH=%cd%\installer_files\env\lib\site-packages
)
if exist "stable-diffusion\env" (
call conda activate .\stable-diffusion\env
set PYTHONPATH=%cd%\stable-diffusion\env\lib\site-packages
)
call where python call where python
call python --version call python --version
echo PYTHONPATH=%PYTHONPATH%
@rem done
echo. echo.
cmd /k cmd /k

View File

@ -24,7 +24,7 @@ if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Librar
set PACKAGES_TO_INSTALL= set PACKAGES_TO_INSTALL=
if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" ( if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" (
if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda python=3.8.5
) )
call git --version >.tmp1 2>.tmp2 call git --version >.tmp1 2>.tmp2
@ -42,11 +42,11 @@ if "%PACKAGES_TO_INSTALL%" NEQ "" (
mkdir "%MAMBA_ROOT_PREFIX%" mkdir "%MAMBA_ROOT_PREFIX%"
call curl -Lk "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe" call curl -Lk "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe"
@REM if "%ERRORLEVEL%" NEQ "0" ( if "%ERRORLEVEL%" NEQ "0" (
@REM echo "There was a problem downloading micromamba. Cannot continue." echo "There was a problem downloading micromamba. Cannot continue."
@REM pause pause
@REM exit /b exit /b
@REM ) )
mkdir "%APPDATA%" mkdir "%APPDATA%"
mkdir "%USERPROFILE%" mkdir "%USERPROFILE%"

View File

@ -39,7 +39,7 @@ if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
PACKAGES_TO_INSTALL="" PACKAGES_TO_INSTALL=""
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda"; fi if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.8.5"; fi
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi

13
scripts/check_modules.py Normal file
View File

@ -0,0 +1,13 @@
'''
This script checks if the given modules exist
'''
import sys
import pkgutil

modules = sys.argv[1:]
missing_modules = []

for m in modules:
    if pkgutil.find_loader(m) is None:
        print('module', m, 'not found')
        missing_modules.append(m)

if len(missing_modules) > 0:
    exit(1)
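For orientation, the installer scripts in this change call this checker with a list of module names and branch on its exit code (non-zero means something needs to be installed). A minimal sketch of an equivalent call, assuming it is run from the project root; the path and module list here are illustrative:

```python
# Minimal sketch: call check_modules.py the same way the installer scripts do,
# and branch on the exit code. The path and module names are illustrative.
import subprocess
import sys

result = subprocess.run([sys.executable, "scripts/check_modules.py", "torch", "torchvision"])
if result.returncode != 0:
    print("torch/torchvision not found - this is where the installer would run pip install")
```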

View File

@ -26,15 +26,26 @@ if [ "$0" == "bash" ]; then
echo "" echo ""
# activate the environment # activate the legacy environment (if present) and set PYTHONPATH
CONDA_BASEPATH=$(conda info --base) if [ -e "installer_files/env" ]; then
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script) export PYTHONPATH="$(pwd)/installer_files/env/lib/python3.8/site-packages"
fi
if [ -e "stable-diffusion/env" ]; then
CONDA_BASEPATH=$(conda info --base)
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script)
conda activate ./stable-diffusion/env conda activate ./stable-diffusion/env
export PYTHONPATH="$(pwd)/stable-diffusion/env/lib/python3.8/site-packages"
fi
which python which python
python --version python --version
echo "PYTHONPATH=$PYTHONPATH"
# done
echo "" echo ""
else else
file_name=$(basename "${BASH_SOURCE[0]}") file_name=$(basename "${BASH_SOURCE[0]}")

View File

@ -53,6 +53,7 @@ if "%update_branch%"=="" (
@xcopy sd-ui-files\ui ui /s /i /Y /q @xcopy sd-ui-files\ui ui /s /i /Y /q
@copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
@copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y @copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y
@copy sd-ui-files\scripts\check_modules.py scripts\ /Y
@copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
@copy "sd-ui-files\scripts\Developer Console.cmd" . /Y @copy "sd-ui-files\scripts\Developer Console.cmd" . /Y

View File

@ -37,6 +37,7 @@ rm -rf ui
cp -Rf sd-ui-files/ui . cp -Rf sd-ui-files/ui .
cp sd-ui-files/scripts/on_sd_start.sh scripts/ cp sd-ui-files/scripts/on_sd_start.sh scripts/
cp sd-ui-files/scripts/bootstrap.sh scripts/ cp sd-ui-files/scripts/bootstrap.sh scripts/
cp sd-ui-files/scripts/check_modules.py scripts/
cp sd-ui-files/scripts/start.sh . cp sd-ui-files/scripts/start.sh .
cp sd-ui-files/scripts/developer_console.sh . cp sd-ui-files/scripts/developer_console.sh .

View File

@ -5,179 +5,123 @@
@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y @copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y
@copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y @copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y
@copy sd-ui-files\scripts\check_modules.py scripts\ /Y
if exist "%cd%\profile" ( if exist "%cd%\profile" (
set USERPROFILE=%cd%\profile set USERPROFILE=%cd%\profile
) )
@rem set the correct installer path (current vs legacy)
if exist "%cd%\installer_files\env" (
set INSTALL_ENV_DIR=%cd%\installer_files\env
)
if exist "%cd%\stable-diffusion\env" (
set INSTALL_ENV_DIR=%cd%\stable-diffusion\env
)
@mkdir tmp @mkdir tmp
@set TMP=%cd%\tmp @set TMP=%cd%\tmp
@set TEMP=%cd%\tmp @set TEMP=%cd%\tmp
@rem activate the installer env @rem activate the installer env
call conda activate call conda activate
@rem @if "%ERRORLEVEL%" NEQ "0" ( @if "%ERRORLEVEL%" NEQ "0" (
@rem @echo. & echo "Error activating conda for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. @echo. & echo "Error activating conda for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@rem pause pause
@rem exit /b exit /b
@rem ) )
@REM remove the old version of the dev console script, if it's still present @REM remove the old version of the dev console script, if it's still present
if exist "Open Developer Console.cmd" del "Open Developer Console.cmd" if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');" @call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
@>nul findstr /m "sd_git_cloned" scripts\install_status.txt @rem create the stable-diffusion folder, to work with legacy installations
@if "%ERRORLEVEL%" EQU "0" ( if not exist "stable-diffusion" mkdir stable-diffusion
@echo "Stable Diffusion's git repository was already installed. Updating.." cd stable-diffusion
@cd stable-diffusion @rem activate the old stable-diffusion env, if it exists
if exist "env" (
@call git reset --hard call conda activate .\env
@call git pull
@call git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git apply --whitespace=nowarn ..\ui\sd_internal\ddim_callback.patch
@call git apply --whitespace=nowarn ..\ui\sd_internal\env_yaml.patch
@cd ..
) else (
@echo. & echo "Downloading Stable Diffusion.." & echo.
@call git clone https://github.com/basujindal/stable-diffusion.git && (
@echo sd_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
@exit /b
)
@cd stable-diffusion
@call git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git apply --whitespace=nowarn ..\ui\sd_internal\ddim_callback.patch
@call git apply --whitespace=nowarn ..\ui\sd_internal\env_yaml.patch
@cd ..
) )
@cd stable-diffusion @rem disable the legacy src and ldm folder (otherwise this prevents installing gfpgan and realesrgan)
if exist src rename src src-old
if exist ldm rename ldm ldm-old
@>nul findstr /m "conda_sd_env_created" ..\scripts\install_status.txt @rem install torch and torchvision
@if "%ERRORLEVEL%" EQU "0" ( call python ..\scripts\check_modules.py torch torchvision
@echo "Packages necessary for Stable Diffusion were already installed" if "%ERRORLEVEL%" EQU "0" (
echo "torch and torchvision have already been installed."
@call conda activate .\env
) else ( ) else (
@echo. & echo "Downloading packages necessary for Stable Diffusion.." & echo. & echo "***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** .." & echo. echo "Installing torch and torchvision.."
@rmdir /s /q .\env @REM prevent from using packages from the user's home directory, to avoid conflicts
set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
@REM prevent conda from using packages from the user's home directory, to avoid conflicts call pip install --upgrade torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116 || (
@set PYTHONNOUSERSITE=1 echo "Error installing torch. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
set USERPROFILE=%cd%\profile
set PYTHONPATH=%cd%;%cd%\env\lib\site-packages
@call conda env create --prefix env -f environment.yaml || (
@echo. & echo "Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause pause
exit /b exit /b
) )
)
@call conda activate .\env @rem install/upgrade sdkit
call python ..\scripts\check_modules.py sdkit sdkit.models ldm transformers numpy antlr4 gfpgan realesrgan
if "%ERRORLEVEL%" EQU "0" (
echo "sdkit is already installed."
@call conda install -c conda-forge -y --prefix env antlr4-python3-runtime=4.8 || ( @REM prevent from using packages from the user's home directory, to avoid conflicts
@echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
call >nul pip install --upgrade sdkit || (
echo "Error updating sdkit"
)
) else (
echo "Installing sdkit: https://pypi.org/project/sdkit/"
@REM prevent from using packages from the user's home directory, to avoid conflicts
set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
call pip install sdkit || (
echo "Error installing sdkit. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause pause
exit /b exit /b
) )
)
for /f "tokens=*" %%a in ('python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"') do if "%%a" NEQ "42" ( @rem install rich
@echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. call python ..\scripts\check_modules.py rich
if "%ERRORLEVEL%" EQU "0" (
echo "rich has already been installed."
) else (
echo "Installing rich.."
set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
call pip install rich || (
echo "Error installing rich. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause pause
exit /b exit /b
) )
@echo conda_sd_env_created >> ..\scripts\install_status.txt
) )
set PATH=C:\Windows\System32;%PATH% set PATH=C:\Windows\System32;%PATH%
@>nul findstr /m "conda_sd_gfpgan_deps_installed" ..\scripts\install_status.txt call python ..\scripts\check_modules.py uvicorn fastapi
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for GFPGAN (Face Correction) were already installed"
) else (
@echo. & echo "Downloading packages necessary for GFPGAN (Face Correction).." & echo.
@set PYTHONNOUSERSITE=1
set USERPROFILE=%cd%\profile
set PYTHONPATH=%cd%;%cd%\env\lib\site-packages
@call pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN || (
@echo. & echo "Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@call pip install basicsr==1.4.2 || (
@echo. & echo "Error installing the basicsr package necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from gfpgan import GFPGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_gfpgan_deps_installed >> ..\scripts\install_status.txt
)
@>nul findstr /m "conda_sd_esrgan_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for ESRGAN (Resolution Upscaling) were already installed"
) else (
@echo. & echo "Downloading packages necessary for ESRGAN (Resolution Upscaling).." & echo.
@set PYTHONNOUSERSITE=1
set USERPROFILE=%cd%\profile
set PYTHONPATH=%cd%;%cd%\env\lib\site-packages
@call pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan || (
@echo. & echo "Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_esrgan_deps_installed >> ..\scripts\install_status.txt
)
@>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" ( @if "%ERRORLEVEL%" EQU "0" (
echo "Packages necessary for Stable Diffusion UI were already installed" echo "Packages necessary for Stable Diffusion UI were already installed"
) else ( ) else (
@echo. & echo "Downloading packages necessary for Stable Diffusion UI.." & echo. @echo. & echo "Downloading packages necessary for Stable Diffusion UI.." & echo.
@set PYTHONNOUSERSITE=1 set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
set USERPROFILE=%cd%\profile @call conda install -c conda-forge -y uvicorn fastapi || (
set PYTHONPATH=%cd%;%cd%\env\lib\site-packages
@call conda install -c conda-forge -y --prefix env uvicorn fastapi || (
echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause pause
exit /b exit /b
@ -192,16 +136,6 @@ call WHERE uvicorn > .tmp
exit /b exit /b
) )
@>nul 2>nul call python -m picklescan --help
@if "%ERRORLEVEL%" NEQ "0" (
@echo. & echo Picklescan not found. Installing
@call pip install picklescan || (
echo "Error installing the picklescan package necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
exit /b
)
)
@>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt @>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" ( @if "%ERRORLEVEL%" NEQ "0" (
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt @echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
@ -209,10 +143,7 @@ call WHERE uvicorn > .tmp
if not exist "..\models\stable-diffusion" mkdir "..\models\stable-diffusion"
if not exist "..\models\vae" mkdir "..\models\vae" if not exist "..\models\vae" mkdir "..\models\vae"
echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
echo. > "..\models\vae\Put your VAE files here.txt"
@if exist "sd-v1-4.ckpt" ( @if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" ( for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" (
@ -370,8 +301,6 @@ echo. > "..\models\vae\Put your VAE files here.txt"
) )
) )
@>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt @>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" ( @if "%ERRORLEVEL%" NEQ "0" (
@echo sd_weights_downloaded >> ..\scripts\install_status.txt @echo sd_weights_downloaded >> ..\scripts\install_status.txt
@ -382,10 +311,8 @@ echo. > "..\models\vae\Put your VAE files here.txt"
@set SD_DIR=%cd% @set SD_DIR=%cd%
@cd env\lib\site-packages set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
@set PYTHONPATH=%SD_DIR%;%cd% echo PYTHONPATH=%PYTHONPATH%
@cd ..\..\..
@echo PYTHONPATH=%PYTHONPATH%
call where python call where python
call python --version call python --version
@ -394,17 +321,9 @@ call python --version
@set SD_UI_PATH=%cd%\ui @set SD_UI_PATH=%cd%\ui
@cd stable-diffusion @cd stable-diffusion
@rem
@rem Rewrite easy-install.pth. This fixes the installation if the user has relocated the SDUI installation
@rem
>env\Lib\site-packages\easy-install.pth echo %cd%\src\taming-transformers
>>env\Lib\site-packages\easy-install.pth echo %cd%\src\clip
>>env\Lib\site-packages\easy-install.pth echo %cd%\src\gfpgan
>>env\Lib\site-packages\easy-install.pth echo %cd%\src\realesrgan
@if NOT DEFINED SD_UI_BIND_PORT set SD_UI_BIND_PORT=9000 @if NOT DEFINED SD_UI_BIND_PORT set SD_UI_BIND_PORT=9000
@if NOT DEFINED SD_UI_BIND_IP set SD_UI_BIND_IP=0.0.0.0 @if NOT DEFINED SD_UI_BIND_IP set SD_UI_BIND_IP=0.0.0.0
@uvicorn server:app --app-dir "%SD_UI_PATH%" --port %SD_UI_BIND_PORT% --host %SD_UI_BIND_IP% @uvicorn main:server_api --app-dir "%SD_UI_PATH%" --port %SD_UI_BIND_PORT% --host %SD_UI_BIND_IP% --log-level error
@pause @pause

View File

@ -4,6 +4,7 @@ source ./scripts/functions.sh
cp sd-ui-files/scripts/on_env_start.sh scripts/ cp sd-ui-files/scripts/on_env_start.sh scripts/
cp sd-ui-files/scripts/bootstrap.sh scripts/ cp sd-ui-files/scripts/bootstrap.sh scripts/
cp sd-ui-files/scripts/check_modules.py scripts/
# activate the installer env # activate the installer env
CONDA_BASEPATH=$(conda info --base) CONDA_BASEPATH=$(conda info --base)
@ -21,129 +22,89 @@ python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d
# Caution, this file will make your eyes and brain bleed. It's such an unholy mess. # Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
# Note to self: Please rewrite this in Python. For the sake of your own sanity. # Note to self: Please rewrite this in Python. For the sake of your own sanity.
if [ -e "scripts/install_status.txt" ] && [ `grep -c sd_git_cloned scripts/install_status.txt` -gt "0" ]; then # set the correct installer path (current vs legacy)
echo "Stable Diffusion's git repository was already installed. Updating.." if [ -e "installer_files/env" ]; then
export INSTALL_ENV_DIR="$(pwd)/installer_files/env"
cd stable-diffusion fi
if [ -e "stable-diffusion/env" ]; then
git reset --hard export INSTALL_ENV_DIR="$(pwd)/stable-diffusion/env"
git pull
git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply --whitespace=nowarn ../ui/sd_internal/ddim_callback.patch || fail "ddim patch failed"
git apply --whitespace=nowarn ../ui/sd_internal/env_yaml.patch || fail "yaml patch failed"
cd ..
else
printf "\n\nDownloading Stable Diffusion..\n\n"
if git clone https://github.com/basujindal/stable-diffusion.git ; then
echo sd_git_cloned >> scripts/install_status.txt
else
fail "git clone of basujindal/stable-diffusion.git failed"
fi
cd stable-diffusion
git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply --whitespace=nowarn ../ui/sd_internal/ddim_callback.patch || fail "ddim patch failed"
git apply --whitespace=nowarn ../ui/sd_internal/env_yaml.patch || fail "yaml patch failed"
cd ..
fi fi
# create the stable-diffusion folder, to work with legacy installations
if [ ! -e "stable-diffusion" ]; then mkdir stable-diffusion; fi
cd stable-diffusion cd stable-diffusion
if [ `grep -c conda_sd_env_created ../scripts/install_status.txt` -gt "0" ]; then # activate the old stable-diffusion env, if it exists
echo "Packages necessary for Stable Diffusion were already installed" if [ -e "env" ]; then
conda activate ./env || fail "conda activate failed" conda activate ./env || fail "conda activate failed"
else
printf "\n\nDownloading packages necessary for Stable Diffusion..\n"
printf "\n\n***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..\n\n"
# prevent conda from using packages from the user's home directory, to avoid conflicts
export PYTHONNOUSERSITE=1
export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"
if conda env create --prefix env --force -f environment.yaml ; then
echo "Installed. Testing.."
else
fail "'conda env create' failed"
fi
conda activate ./env || fail "conda activate failed"
if conda install -c conda-forge --prefix ./env -y antlr4-python3-runtime=4.8 ; then
echo "Installed. Testing.."
else
fail "Error installing antlr4-python3-runtime"
fi
out_test=`python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"`
if [ "$out_test" != "42" ]; then
fail "Dependency test failed"
fi
echo conda_sd_env_created >> ../scripts/install_status.txt
fi fi
if [ `grep -c conda_sd_gfpgan_deps_installed ../scripts/install_status.txt` -gt "0" ]; then # disable the legacy src and ldm folder (otherwise this prevents installing gfpgan and realesrgan)
echo "Packages necessary for GFPGAN (Face Correction) were already installed" if [ -e "src" ]; then mv src src-old; fi
if [ -e "ldm" ]; then mv ldm ldm-old; fi
# install torch and torchvision
if python ../scripts/check_modules.py torch torchvision; then
echo "torch and torchvision have already been installed."
else else
printf "\n\nDownloading packages necessary for GFPGAN (Face Correction)..\n" echo "Installing torch and torchvision.."
export PYTHONNOUSERSITE=1 export PYTHONNOUSERSITE=1
export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages" export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
if pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN ; then if pip install --upgrade torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116 ; then
echo "Installed. Testing.." echo "Installed."
else else
fail "Error installing the packages necessary for GFPGAN (Face Correction)." fail "torch install failed"
fi fi
out_test=`python -c "from gfpgan import GFPGANer; print(42)"`
if [ "$out_test" != "42" ]; then
echo "EE The dependency check has failed. This usually means that some system libraries are missing."
echo "EE On Debian/Ubuntu systems, this are often these packages: libsm6 libxext6 libxrender-dev"
echo "EE Other Linux distributions might have different package names for these libraries."
fail "GFPGAN dependency test failed"
fi
echo conda_sd_gfpgan_deps_installed >> ../scripts/install_status.txt
fi fi
if [ `grep -c conda_sd_esrgan_deps_installed ../scripts/install_status.txt` -gt "0" ]; then # install/upgrade sdkit
echo "Packages necessary for ESRGAN (Resolution Upscaling) were already installed" if python ../scripts/check_modules.py sdkit sdkit.models ldm transformers numpy antlr4 gfpgan realesrgan ; then
else echo "sdkit is already installed."
printf "\n\nDownloading packages necessary for ESRGAN (Resolution Upscaling)..\n"
export PYTHONNOUSERSITE=1 export PYTHONNOUSERSITE=1
export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages" export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
if pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan ; then pip install --upgrade sdkit > /dev/null
echo "Installed. Testing.." else
echo "Installing sdkit: https://pypi.org/project/sdkit/"
export PYTHONNOUSERSITE=1
export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
if pip install sdkit ; then
echo "Installed."
else else
fail "Error installing the packages necessary for ESRGAN" fail "sdkit install failed"
fi fi
out_test=`python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"`
if [ "$out_test" != "42" ]; then
fail "ESRGAN dependency test failed"
fi
echo conda_sd_esrgan_deps_installed >> ../scripts/install_status.txt
fi fi
if [ `grep -c conda_sd_ui_deps_installed ../scripts/install_status.txt` -gt "0" ]; then # install rich
if python ../scripts/check_modules.py rich; then
echo "rich has already been installed."
else
echo "Installing rich.."
export PYTHONNOUSERSITE=1
export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
if pip install rich ; then
echo "Installed."
else
fail "Install failed for rich"
fi
fi
if python ../scripts/check_modules.py uvicorn fastapi ; then
echo "Packages necessary for Stable Diffusion UI were already installed" echo "Packages necessary for Stable Diffusion UI were already installed"
else else
printf "\n\nDownloading packages necessary for Stable Diffusion UI..\n\n" printf "\n\nDownloading packages necessary for Stable Diffusion UI..\n\n"
export PYTHONNOUSERSITE=1 export PYTHONNOUSERSITE=1
export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages" export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
if conda install -c conda-forge --prefix ./env -y uvicorn fastapi ; then if conda install -c conda-forge -y uvicorn fastapi ; then
echo "Installed. Testing.." echo "Installed. Testing.."
else else
fail "'conda install uvicorn' failed" fail "'conda install uvicorn' failed"
@ -152,23 +113,9 @@ else
if ! command -v uvicorn &> /dev/null; then if ! command -v uvicorn &> /dev/null; then
fail "UI packages not found!" fail "UI packages not found!"
fi fi
echo conda_sd_ui_deps_installed >> ../scripts/install_status.txt
fi fi
if python -m picklescan --help >/dev/null 2>&1; then
echo "Picklescan is already installed."
else
echo "Picklescan not found, installing."
pip install picklescan || fail "Picklescan installation failed."
fi
mkdir -p "../models/stable-diffusion"
mkdir -p "../models/vae" mkdir -p "../models/vae"
echo "" > "../models/stable-diffusion/Put your custom ckpt files here.txt"
echo "" > "../models/vae/Put your VAE files here.txt"
if [ -f "sd-v1-4.ckpt" ]; then if [ -f "sd-v1-4.ckpt" ]; then
model_size=`find "sd-v1-4.ckpt" -printf "%s"` model_size=`find "sd-v1-4.ckpt" -printf "%s"`
@ -309,7 +256,6 @@ if [ ! -f "../models/vae/vae-ft-mse-840000-ema-pruned.ckpt" ]; then
fi fi
fi fi
if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then
echo sd_weights_downloaded >> ../scripts/install_status.txt echo sd_weights_downloaded >> ../scripts/install_status.txt
echo sd_install_complete >> ../scripts/install_status.txt echo sd_install_complete >> ../scripts/install_status.txt
@ -318,7 +264,8 @@ fi
printf "\n\nStable Diffusion is ready!\n\n" printf "\n\nStable Diffusion is ready!\n\n"
SD_PATH=`pwd` SD_PATH=`pwd`
export PYTHONPATH="$SD_PATH:$SD_PATH/env/lib/python3.8/site-packages"
export PYTHONPATH="$INSTALL_ENV_DIR/lib/python3.8/site-packages"
echo "PYTHONPATH=$PYTHONPATH" echo "PYTHONPATH=$PYTHONPATH"
which python which python
@ -328,6 +275,6 @@ cd ..
export SD_UI_PATH=`pwd`/ui export SD_UI_PATH=`pwd`/ui
cd stable-diffusion cd stable-diffusion
uvicorn server:app --app-dir "$SD_UI_PATH" --port ${SD_UI_BIND_PORT:-9000} --host ${SD_UI_BIND_IP:-0.0.0.0} uvicorn main:server_api --app-dir "$SD_UI_PATH" --port ${SD_UI_BIND_PORT:-9000} --host ${SD_UI_BIND_IP:-0.0.0.0} --log-level error
read -p "Press any key to continue" read -p "Press any key to continue"

View File

@ -1,6 +0,0 @@
@call conda --version
@call git --version
cd %CONDA_PREFIX%\..\scripts
on_env_start.bat

View File

@ -1,12 +0,0 @@
#!/bin/bash
conda-unpack
source $CONDA_PREFIX/etc/profile.d/conda.sh
conda --version
git --version
cd $CONDA_PREFIX/../scripts
./on_env_start.sh

View File

165
ui/easydiffusion/app.py Normal file
View File

@ -0,0 +1,165 @@
import os
import socket
import sys
import json
import traceback
import logging
from rich.logging import RichHandler
from sdkit.utils import log as sdkit_log # hack, so we can overwrite the log config
from easydiffusion import task_manager
from easydiffusion.utils import log
# Remove all handlers associated with the root logger object.
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

LOG_FORMAT = '%(asctime)s.%(msecs)03d %(levelname)s %(threadName)s %(message)s'
logging.basicConfig(
    level=logging.INFO,
    format=LOG_FORMAT,
    datefmt="%X",
    handlers=[RichHandler(markup=True, rich_tracebacks=True, show_time=False, show_level=False)]
)

SD_DIR = os.getcwd()

SD_UI_DIR = os.getenv('SD_UI_PATH', None)
sys.path.append(os.path.dirname(SD_UI_DIR))

CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, '..', 'scripts'))
MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'models'))

USER_UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'plugins', 'ui'))
CORE_UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_UI_DIR, 'plugins', 'ui'))
UI_PLUGINS_SOURCES = ((CORE_UI_PLUGINS_DIR, 'core'), (USER_UI_PLUGINS_DIR, 'user'))

OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
TASK_TTL = 15 * 60 # Discard last session's task timeout

APP_CONFIG_DEFAULTS = {
    # auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
    'render_devices': 'auto', # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
    'update_branch': 'main',
    'ui': {
        'open_browser_on_start': True,
    },
}

def init():
    os.makedirs(USER_UI_PLUGINS_DIR, exist_ok=True)

    update_render_threads()

def getConfig(default_val=APP_CONFIG_DEFAULTS):
    try:
        config_json_path = os.path.join(CONFIG_DIR, 'config.json')
        if not os.path.exists(config_json_path):
            return default_val
        with open(config_json_path, 'r', encoding='utf-8') as f:
            config = json.load(f)
            if 'net' not in config:
                config['net'] = {}
            if os.getenv('SD_UI_BIND_PORT') is not None:
                config['net']['listen_port'] = int(os.getenv('SD_UI_BIND_PORT'))
            if os.getenv('SD_UI_BIND_IP') is not None:
                config['net']['listen_to_network'] = (os.getenv('SD_UI_BIND_IP') == '0.0.0.0')
            return config
    except Exception as e:
        log.warn(traceback.format_exc())
        return default_val

def setConfig(config):
    try: # config.json
        config_json_path = os.path.join(CONFIG_DIR, 'config.json')
        with open(config_json_path, 'w', encoding='utf-8') as f:
            json.dump(config, f)
    except:
        log.error(traceback.format_exc())

    try: # config.bat
        config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
        config_bat = []

        if 'update_branch' in config:
            config_bat.append(f"@set update_branch={config['update_branch']}")

        config_bat.append(f"@set SD_UI_BIND_PORT={config['net']['listen_port']}")
        bind_ip = '0.0.0.0' if config['net']['listen_to_network'] else '127.0.0.1'
        config_bat.append(f"@set SD_UI_BIND_IP={bind_ip}")

        if len(config_bat) > 0:
            with open(config_bat_path, 'w', encoding='utf-8') as f:
                f.write('\r\n'.join(config_bat))
    except:
        log.error(traceback.format_exc())

    try: # config.sh
        config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
        config_sh = ['#!/bin/bash']

        if 'update_branch' in config:
            config_sh.append(f"export update_branch={config['update_branch']}")

        config_sh.append(f"export SD_UI_BIND_PORT={config['net']['listen_port']}")
        bind_ip = '0.0.0.0' if config['net']['listen_to_network'] else '127.0.0.1'
        config_sh.append(f"export SD_UI_BIND_IP={bind_ip}")

        if len(config_sh) > 1:
            with open(config_sh_path, 'w', encoding='utf-8') as f:
                f.write('\n'.join(config_sh))
    except:
        log.error(traceback.format_exc())

def save_to_config(ckpt_model_name, vae_model_name, hypernetwork_model_name, vram_usage_level):
    config = getConfig()
    if 'model' not in config:
        config['model'] = {}

    config['model']['stable-diffusion'] = ckpt_model_name
    config['model']['vae'] = vae_model_name
    config['model']['hypernetwork'] = hypernetwork_model_name

    if vae_model_name is None or vae_model_name == "":
        del config['model']['vae']
    if hypernetwork_model_name is None or hypernetwork_model_name == "":
        del config['model']['hypernetwork']

    config['vram_usage_level'] = vram_usage_level

    setConfig(config)

def update_render_threads():
    config = getConfig()
    render_devices = config.get('render_devices', 'auto')
    active_devices = task_manager.get_devices()['active'].keys()

    log.debug(f'requesting for render_devices: {render_devices}')
    task_manager.update_render_threads(render_devices, active_devices)

def getUIPlugins():
    plugins = []

    for plugins_dir, dir_prefix in UI_PLUGINS_SOURCES:
        for file in os.listdir(plugins_dir):
            if file.endswith('.plugin.js'):
                plugins.append(f'/plugins/{dir_prefix}/{file}')

    return plugins

def getIPConfig():
    try:
        ips = socket.gethostbyname_ex(socket.gethostname())
        ips[2].append(ips[0])
        return ips[2]
    except Exception as e:
        log.exception(e)
        return []

def open_browser():
    config = getConfig()
    ui = config.get('ui', {})
    net = config.get('net', {'listen_port':9000})
    port = net.get('listen_port', 9000)
    if ui.get('open_browser_on_start', True):
        import webbrowser; webbrowser.open(f"http://localhost:{port}")
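For orientation, a hypothetical config dictionary consistent with the keys read and written by `getConfig()`, `setConfig()` and `save_to_config()` above; all values are illustrative examples, not project defaults:

```python
# Illustrative only - the keys mirror those handled by getConfig()/setConfig() above.
example_config = {
    'update_branch': 'main',
    'render_devices': 'auto',               # 'auto', 'cpu' or 'cuda:N'
    'vram_usage_level': 'balanced',         # e.g. as returned by get_max_vram_usage_level()
    'ui': {'open_browser_on_start': True},
    'net': {'listen_port': 9000, 'listen_to_network': False},
    'model': {'stable-diffusion': 'sd-v1-4', 'vae': '', 'hypernetwork': ''},
}
# Passing this to setConfig() would write scripts/config.json and regenerate
# scripts/config.bat and scripts/config.sh with the matching port/IP settings.
```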

View File

@ -3,6 +3,15 @@ import torch
import traceback import traceback
import re import re
from easydiffusion.utils import log
'''
Set `FORCE_FULL_PRECISION` as an environment variable, or in `config.bat`/`config.sh`, to force full precision (i.e. float32).
Otherwise the models will load at half-precision (i.e. float16).
Half-precision is fine most of the time. Full precision is only needed for working around GPU bugs (like NVIDIA 16xx GPUs).
'''
COMPARABLE_GPU_PERCENTILE = 0.65 # if a GPU's free_mem is within this % of the GPU with the most free_mem, it will be picked
mem_free_threshold = 0
@@ -34,7 +43,7 @@ def get_device_delta(render_devices, active_devices):
if 'auto' in render_devices:
render_devices = auto_pick_devices(active_devices)
if 'cpu' in render_devices:
- print('WARNING: Could not find a compatible GPU. Using the CPU, but this will be very slow!')
+ log.warn('WARNING: Could not find a compatible GPU. Using the CPU, but this will be very slow!')
active_devices = set(active_devices)
render_devices = set(render_devices)
@@ -53,7 +62,7 @@ def auto_pick_devices(currently_active_devices):
if device_count == 1:
return ['cuda:0'] if is_device_compatible('cuda:0') else ['cpu']
- print('Autoselecting GPU. Using most free memory.')
+ log.debug('Autoselecting GPU. Using most free memory.')
devices = []
for device in range(device_count):
device = f'cuda:{device}'
@@ -64,7 +73,7 @@ def auto_pick_devices(currently_active_devices):
mem_free /= float(10**9)
mem_total /= float(10**9)
device_name = torch.cuda.get_device_name(device)
- print(f'{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb')
+ log.debug(f'{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb')
devices.append({'device': device, 'device_name': device_name, 'mem_free': mem_free})
devices.sort(key=lambda x:x['mem_free'], reverse=True)
@@ -82,7 +91,7 @@ def auto_pick_devices(currently_active_devices):
devices = list(map(lambda x: x['device'], devices))
return devices
- def device_init(thread_data, device):
+ def device_init(context, device):
'''
This function assumes the 'device' has already been verified to be compatible.
`get_device_delta()` has already filtered out incompatible devices.
@@ -91,27 +100,45 @@ def device_init(thread_data, device):
validate_device_id(device, log_prefix='device_init')
if device == 'cpu':
- thread_data.device = 'cpu'
- thread_data.device_name = get_processor_name()
- print('Render device CPU available as', thread_data.device_name)
+ context.device = 'cpu'
+ context.device_name = get_processor_name()
+ context.half_precision = False
+ log.debug(f'Render device CPU available as {context.device_name}')
return
- thread_data.device_name = torch.cuda.get_device_name(device)
- thread_data.device = device
+ context.device_name = torch.cuda.get_device_name(device)
+ context.device = device
# Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
- device_name = thread_data.device_name.lower()
- thread_data.force_full_precision = ('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)
- if thread_data.force_full_precision:
- print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', thread_data.device_name)
+ if needs_to_force_full_precision(context):
+ log.warn(f'forcing full precision on this GPU, to avoid green images. GPU detected: {context.device_name}')
# Apply force_full_precision now before models are loaded.
- thread_data.precision = 'full'
+ context.half_precision = False
- print(f'Setting {device} as active')
+ log.info(f'Setting {device} as active, with precision: {"half" if context.half_precision else "full"}')
torch.cuda.device(device)
return
+ def needs_to_force_full_precision(context):
+ if 'FORCE_FULL_PRECISION' in os.environ:
+ return True
+ device_name = context.device_name.lower()
+ return (('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)) or ('quadro t2000' in device_name) # lowercase match, since device_name is lowercased above
+ def get_max_vram_usage_level(device):
+ if device != 'cpu':
+ _, mem_total = torch.cuda.mem_get_info(device)
+ mem_total /= float(10**9)
+ if mem_total < 4.5:
+ return 'low'
+ elif mem_total < 6.5:
+ return 'balanced'
+ return 'high'
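For reference, the thresholds above reduce to a simple mapping from total VRAM to a usage level. A self-contained sketch (pick_vram_usage_level is a hypothetical name, not part of the diff):

# Illustrative sketch of the thresholds used by get_max_vram_usage_level().
# pick_vram_usage_level() is a hypothetical helper, not part of the codebase.
def pick_vram_usage_level(mem_total_gb: float) -> str:
    if mem_total_gb < 4.5:
        return 'low'       # e.g. 4 GB cards
    elif mem_total_gb < 6.5:
        return 'balanced'  # e.g. 6 GB cards
    return 'high'          # roughly 8 GB and above

assert pick_vram_usage_level(3.9) == 'low'
assert pick_vram_usage_level(6.0) == 'balanced'
assert pick_vram_usage_level(11.9) == 'high'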
def validate_device_id(device, log_prefix=''):
def is_valid():
if not isinstance(device, str):
@@ -132,7 +159,7 @@ def is_device_compatible(device):
try:
validate_device_id(device, log_prefix='is_device_compatible')
except:
- print(str(e))
+ log.error(str(e))
return False
if device == 'cpu': return True
@@ -141,10 +168,10 @@ def is_device_compatible(device):
_, mem_total = torch.cuda.mem_get_info(device)
mem_total /= float(10**9)
if mem_total < 3.0:
- print(f'GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion')
+ log.warn(f'GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion')
return False
except RuntimeError as e:
- print(str(e))
+ log.error(str(e))
return False
return True
@@ -164,5 +191,5 @@ def get_processor_name():
if "model name" in line:
return re.sub(".*model name.*:", "", line, 1).strip()
except:
- print(traceback.format_exc())
+ log.error(traceback.format_exc())
return "cpu"


@@ -0,0 +1,223 @@
import os
from easydiffusion import app, device_manager
from easydiffusion.types import TaskData
from easydiffusion.utils import log
from sdkit import Context
from sdkit.models import load_model, unload_model, get_model_info_from_db, scan_model
from sdkit.utils import hash_file_quick
KNOWN_MODEL_TYPES = ['stable-diffusion', 'vae', 'hypernetwork', 'gfpgan', 'realesrgan']
MODEL_EXTENSIONS = {
'stable-diffusion': ['.ckpt', '.safetensors'],
'vae': ['.vae.pt', '.ckpt', '.safetensors'],
'hypernetwork': ['.pt', '.safetensors'],
'gfpgan': ['.pth'],
'realesrgan': ['.pth'],
}
DEFAULT_MODELS = {
'stable-diffusion': [ # needed to support the legacy installations
'custom-model', # only one custom model file was supported initially, creatively named 'custom-model'
'sd-v1-4', # Default fallback.
],
'gfpgan': ['GFPGANv1.3'],
'realesrgan': ['RealESRGAN_x4plus'],
}
VRAM_USAGE_LEVEL_TO_OPTIMIZATIONS = {
'balanced': {'KEEP_FS_AND_CS_IN_CPU', 'SET_ATTENTION_STEP_TO_4'},
'low': {'KEEP_ENTIRE_MODEL_IN_CPU'},
'high': {},
}
MODELS_TO_LOAD_ON_START = ['stable-diffusion', 'vae', 'hypernetwork']
known_models = {}
def init():
make_model_folders()
getModels() # run this once, to cache the picklescan results
def load_default_models(context: Context):
set_vram_optimizations(context)
# init default model paths
for model_type in MODELS_TO_LOAD_ON_START:
context.model_paths[model_type] = resolve_model_to_use(model_type=model_type)
load_model(context, model_type)
def unload_all(context: Context):
for model_type in KNOWN_MODEL_TYPES:
unload_model(context, model_type)
def resolve_model_to_use(model_name:str=None, model_type:str=None):
model_extensions = MODEL_EXTENSIONS.get(model_type, [])
default_models = DEFAULT_MODELS.get(model_type, [])
config = app.getConfig()
model_dirs = [os.path.join(app.MODELS_DIR, model_type), app.SD_DIR]
if not model_name: # When None try user configured model.
# config = getConfig()
if 'model' in config and model_type in config['model']:
model_name = config['model'][model_type]
if model_name:
# Check models directory
models_dir_path = os.path.join(app.MODELS_DIR, model_type, model_name)
for model_extension in model_extensions:
if os.path.exists(models_dir_path + model_extension):
return models_dir_path + model_extension
if os.path.exists(model_name + model_extension):
return os.path.abspath(model_name + model_extension)
# Default locations
if model_name in default_models:
default_model_path = os.path.join(app.SD_DIR, model_name)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
return default_model_path + model_extension
# Can't find requested model, check the default paths.
for default_model in default_models:
for model_dir in model_dirs:
default_model_path = os.path.join(model_dir, default_model)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
if model_name is not None:
log.warn(f'Could not find the configured custom model {model_name}{model_extension}. Using the default one: {default_model_path}{model_extension}')
return default_model_path + model_extension
return None
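Assuming this module is imported as easydiffusion.model_manager (as the imports in server.py below suggest), a hedged usage sketch of the lookup order; the model name and resulting paths are hypothetical and this only runs inside an Easy Diffusion installation:

# Hedged usage sketch; 'dreamlike-photoreal' is a hypothetical model name.
from easydiffusion import model_manager

# Explicit name: checked against models/<type>/<name> plus each supported
# extension, then against the name treated as a filesystem path.
path = model_manager.resolve_model_to_use('dreamlike-photoreal', model_type='stable-diffusion')

# No name: falls back to the model set in config.json (if any), then to the
# DEFAULT_MODELS list ('custom-model', then 'sd-v1-4'), logging a warning when
# a requested model could not be found.
fallback = model_manager.resolve_model_to_use(model_type='stable-diffusion')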
def reload_models_if_necessary(context: Context, task_data: TaskData):
model_paths_in_req = {
'stable-diffusion': task_data.use_stable_diffusion_model,
'vae': task_data.use_vae_model,
'hypernetwork': task_data.use_hypernetwork_model,
'gfpgan': task_data.use_face_correction,
'realesrgan': task_data.use_upscale,
}
models_to_reload = {model_type: path for model_type, path in model_paths_in_req.items() if context.model_paths.get(model_type) != path}
if set_vram_optimizations(context): # reload SD
models_to_reload['stable-diffusion'] = model_paths_in_req['stable-diffusion']
if 'stable-diffusion' in models_to_reload:
quick_hash = hash_file_quick(models_to_reload['stable-diffusion'])
known_model_info = get_model_info_from_db(quick_hash=quick_hash)
for model_type, model_path_in_req in models_to_reload.items():
context.model_paths[model_type] = model_path_in_req
action_fn = unload_model if context.model_paths[model_type] is None else load_model
action_fn(context, model_type, scan_model=False) # we've scanned them already
def resolve_model_paths(task_data: TaskData):
task_data.use_stable_diffusion_model = resolve_model_to_use(task_data.use_stable_diffusion_model, model_type='stable-diffusion')
task_data.use_vae_model = resolve_model_to_use(task_data.use_vae_model, model_type='vae')
task_data.use_hypernetwork_model = resolve_model_to_use(task_data.use_hypernetwork_model, model_type='hypernetwork')
if task_data.use_face_correction: task_data.use_face_correction = resolve_model_to_use(task_data.use_face_correction, 'gfpgan')
if task_data.use_upscale: task_data.use_upscale = resolve_model_to_use(task_data.use_upscale, 'realesrgan')
def set_vram_optimizations(context: Context):
config = app.getConfig()
max_usage_level = device_manager.get_max_vram_usage_level(context.device)
vram_usage_level = config.get('vram_usage_level', 'balanced')
v = {'low': 0, 'balanced': 1, 'high': 2}
if v[vram_usage_level] > v[max_usage_level]:
log.error(f'Requested GPU Memory Usage level ({vram_usage_level}) is higher than what is ' + \
f'possible ({max_usage_level}) on this device ({context.device}). Using "{max_usage_level}" instead')
vram_usage_level = max_usage_level
vram_optimizations = VRAM_USAGE_LEVEL_TO_OPTIMIZATIONS[vram_usage_level]
if vram_optimizations != context.vram_optimizations:
context.vram_optimizations = vram_optimizations
return True
return False
def make_model_folders():
for model_type in KNOWN_MODEL_TYPES:
model_dir_path = os.path.join(app.MODELS_DIR, model_type)
os.makedirs(model_dir_path, exist_ok=True)
help_file_name = f'Place your {model_type} model files here.txt'
help_file_contents = f'Supported extensions: {" or ".join(MODEL_EXTENSIONS.get(model_type))}'
with open(os.path.join(model_dir_path, help_file_name), 'w', encoding='utf-8') as f:
f.write(help_file_contents)
def is_malicious_model(file_path):
try:
scan_result = scan_model(file_path)
if scan_result.issues_count > 0 or scan_result.infected_files > 0:
log.warn(":warning: [bold red]Scan %s: %d scanned, %d issue, %d infected.[/bold red]" % (file_path, scan_result.scanned_files, scan_result.issues_count, scan_result.infected_files))
return True
else:
log.debug("Scan %s: [green]%d scanned, %d issue, %d infected.[/green]" % (file_path, scan_result.scanned_files, scan_result.issues_count, scan_result.infected_files))
return False
except Exception as e:
log.error(f'error while scanning: {file_path}, error: {e}')
return False
def getModels():
models = {
'active': {
'stable-diffusion': 'sd-v1-4',
'vae': '',
'hypernetwork': '',
},
'options': {
'stable-diffusion': ['sd-v1-4'],
'vae': [],
'hypernetwork': [],
},
}
models_scanned = 0
def listModels(model_type):
nonlocal models_scanned
model_extensions = MODEL_EXTENSIONS.get(model_type, [])
models_dir = os.path.join(app.MODELS_DIR, model_type)
if not os.path.exists(models_dir):
os.makedirs(models_dir)
for file in os.listdir(models_dir):
for model_extension in model_extensions:
if not file.endswith(model_extension):
continue
model_path = os.path.join(models_dir, file)
mtime = os.path.getmtime(model_path)
mod_time = known_models[model_path] if model_path in known_models else -1
if mod_time != mtime:
models_scanned += 1
if is_malicious_model(model_path):
models['scan-error'] = file
return
known_models[model_path] = mtime
model_name = file[:-len(model_extension)]
models['options'][model_type].append(model_name)
models['options'][model_type] = [*set(models['options'][model_type])] # remove duplicates
models['options'][model_type].sort()
# custom models
listModels(model_type='stable-diffusion')
listModels(model_type='vae')
listModels(model_type='hypernetwork')
if models_scanned > 0: log.info(f'[green]Scanned {models_scanned} models. Nothing infected[/]')
# legacy
custom_weight_path = os.path.join(app.SD_DIR, 'custom-model.ckpt')
if os.path.exists(custom_weight_path):
models['options']['stable-diffusion'].append('custom-model')
return models
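For reference, an illustrative sketch of the dictionary getModels() returns ('my-vae' is a hypothetical file name):

# Illustrative shape of the getModels() return value; 'my-vae' is made up.
example = {
    'active': {'stable-diffusion': 'sd-v1-4', 'vae': '', 'hypernetwork': ''},
    'options': {
        'stable-diffusion': ['sd-v1-4'],  # plus any .ckpt/.safetensors found in models/stable-diffusion
        'vae': ['my-vae'],                # .vae.pt/.ckpt/.safetensors files in models/vae
        'hypernetwork': [],
    },
    # a 'scan-error' key with the offending file name is added if a model fails the scan
}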


@@ -0,0 +1,124 @@
import queue
import time
import json
from easydiffusion import device_manager
from easydiffusion.types import TaskData, Response, Image as ResponseImage, UserInitiatedStop, GenerateImageRequest
from easydiffusion.utils import get_printable_request, save_images_to_disk, log
from sdkit import Context
from sdkit.generate import generate_images
from sdkit.filter import apply_filters
from sdkit.utils import img_to_buffer, img_to_base64_str, latent_samples_to_images, gc
context = Context() # thread-local
'''
runtime data (bound locally to this thread), for e.g. device, references to loaded models, optimization flags etc
'''
def init(device):
'''
Initializes the fields that will be bound to this runtime's context, and sets the current torch device
'''
context.stop_processing = False
context.temp_images = {}
context.partial_x_samples = None
device_manager.device_init(context, device)
def make_images(req: GenerateImageRequest, task_data: TaskData, data_queue: queue.Queue, task_temp_images: list, step_callback):
context.stop_processing = False
log.info(f'request: {get_printable_request(req)}')
log.info(f'task data: {task_data.dict()}')
images = make_images_internal(req, task_data, data_queue, task_temp_images, step_callback)
res = Response(req, task_data, images=construct_response(images, task_data, base_seed=req.seed))
res = res.json()
data_queue.put(json.dumps(res))
log.info('Task completed')
return res
def make_images_internal(req: GenerateImageRequest, task_data: TaskData, data_queue: queue.Queue, task_temp_images: list, step_callback):
images, user_stopped = generate_images_internal(req, task_data, data_queue, task_temp_images, step_callback, task_data.stream_image_progress)
filtered_images = filter_images(task_data, images, user_stopped)
if task_data.save_to_disk_path is not None:
save_images_to_disk(images, filtered_images, req, task_data)
return filtered_images if task_data.show_only_filtered_image else images + filtered_images
def generate_images_internal(req: GenerateImageRequest, task_data: TaskData, data_queue: queue.Queue, task_temp_images: list, step_callback, stream_image_progress: bool):
context.temp_images.clear()
callback = make_step_callback(req, task_data, data_queue, task_temp_images, step_callback, stream_image_progress)
try:
images = generate_images(context, callback=callback, **req.dict())
user_stopped = False
except UserInitiatedStop:
images = []
user_stopped = True
if context.partial_x_samples is not None:
images = latent_samples_to_images(context, context.partial_x_samples)
context.partial_x_samples = None
finally:
gc(context)
return images, user_stopped
def filter_images(task_data: TaskData, images: list, user_stopped):
if user_stopped or (task_data.use_face_correction is None and task_data.use_upscale is None):
return images
filters_to_apply = []
if task_data.use_face_correction and 'gfpgan' in task_data.use_face_correction.lower(): filters_to_apply.append('gfpgan')
if task_data.use_upscale and 'realesrgan' in task_data.use_upscale.lower(): filters_to_apply.append('realesrgan')
return apply_filters(context, filters_to_apply, images)
def construct_response(images: list, task_data: TaskData, base_seed: int):
return [
ResponseImage(
data=img_to_base64_str(img, task_data.output_format, task_data.output_quality),
seed=base_seed + i
) for i, img in enumerate(images)
]
def make_step_callback(req: GenerateImageRequest, task_data: TaskData, data_queue: queue.Queue, task_temp_images: list, step_callback, stream_image_progress: bool):
n_steps = req.num_inference_steps if req.init_image is None else int(req.num_inference_steps * req.prompt_strength)
last_callback_time = -1
def update_temp_img(x_samples, task_temp_images: list):
partial_images = []
images = latent_samples_to_images(context, x_samples)
for i, img in enumerate(images):
buf = img_to_buffer(img, output_format='JPEG')
context.temp_images[f"{task_data.request_id}/{i}"] = buf
task_temp_images[i] = buf
partial_images.append({'path': f"/image/tmp/{task_data.request_id}/{i}"})
del images
return partial_images
def on_image_step(x_samples, i):
nonlocal last_callback_time
context.partial_x_samples = x_samples
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "step_time": step_time, "total_steps": n_steps}
if stream_image_progress and i % 5 == 0:
progress['output'] = update_temp_img(x_samples, task_temp_images)
data_queue.put(json.dumps(progress))
step_callback()
if context.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return on_image_step
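For reference, a sketch of the JSON progress messages that on_image_step() streams into data_queue; the values and request id are illustrative. Every 5th step, when stream_image_progress is enabled, the message also carries temporary image paths served by the /image/tmp/... endpoint:

# Illustrative progress messages, as pushed to data_queue by on_image_step().
import json

example_step = json.dumps({"step": 10, "step_time": 0.42, "total_steps": 50})
example_step_with_preview = json.dumps({
    "step": 15, "step_time": 0.40, "total_steps": 50,
    "output": [{"path": "/image/tmp/140234133/0"}],  # request_id/image index (made up)
})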

ui/easydiffusion/server.py (new file, 219 lines)

@@ -0,0 +1,219 @@
"""server.py: FastAPI SD-UI Web Host.
Notes:
async endpoints always run on the main thread. Without async, they run on the thread pool.
"""
import os
import traceback
import datetime
from typing import List, Union
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
from starlette.responses import FileResponse, JSONResponse, StreamingResponse
from pydantic import BaseModel
from easydiffusion import app, model_manager, task_manager
from easydiffusion.types import TaskData, GenerateImageRequest
from easydiffusion.utils import log
log.info(f'started in {app.SD_DIR}')
log.info(f'started at {datetime.datetime.now():%x %X}')
server_api = FastAPI()
NOCACHE_HEADERS={"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
class NoCacheStaticFiles(StaticFiles):
def is_not_modified(self, response_headers, request_headers) -> bool:
if 'content-type' in response_headers and ('javascript' in response_headers['content-type'] or 'css' in response_headers['content-type']):
response_headers.update(NOCACHE_HEADERS)
return False
return super().is_not_modified(response_headers, request_headers)
class SetAppConfigRequest(BaseModel):
update_branch: str = None
render_devices: Union[List[str], List[int], str, int] = None
model_vae: str = None
ui_open_browser_on_start: bool = None
listen_to_network: bool = None
listen_port: int = None
def init():
server_api.mount('/media', NoCacheStaticFiles(directory=os.path.join(app.SD_UI_DIR, 'media')), name="media")
for plugins_dir, dir_prefix in app.UI_PLUGINS_SOURCES:
server_api.mount(f'/plugins/{dir_prefix}', NoCacheStaticFiles(directory=plugins_dir), name=f"plugins-{dir_prefix}")
@server_api.post('/app_config')
async def set_app_config(req : SetAppConfigRequest):
return set_app_config_internal(req)
@server_api.get('/get/{key:path}')
def read_web_data(key:str=None):
return read_web_data_internal(key)
@server_api.get('/ping') # Get server and optionally session status.
def ping(session_id:str=None):
return ping_internal(session_id)
@server_api.post('/render')
def render(req: dict):
return render_internal(req)
@server_api.get('/image/stream/{task_id:int}')
def stream(task_id:int):
return stream_internal(task_id)
@server_api.get('/image/stop')
def stop(task: int):
return stop_internal(task)
@server_api.get('/image/tmp/{task_id:int}/{img_id:int}')
def get_image(task_id: int, img_id: int):
return get_image_internal(task_id, img_id)
@server_api.get('/')
def read_root():
return FileResponse(os.path.join(app.SD_UI_DIR, 'index.html'), headers=NOCACHE_HEADERS)
@server_api.on_event("shutdown")
def shutdown_event(): # Signal render thread to close on shutdown
task_manager.current_state_error = SystemExit('Application shutting down.')
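A hedged client-side sketch of the request flow these endpoints define, assuming the server listens on http://localhost:9000 (adjust to the configured listen_port) and the requests package is available:

# Hedged client sketch of POST /render followed by reading the stream URL.
import requests

payload = {
    "session_id": "demo-session",
    "prompt": "a watercolor painting of a lighthouse",
    "width": 512, "height": 512, "seed": 42,
    "num_inference_steps": 25, "guidance_scale": 7.5,
    "use_stable_diffusion_model": "sd-v1-4",
    "stream_image_progress": False,
    "output_format": "jpeg",
}

resp = requests.post("http://localhost:9000/render", json=payload).json()
stream_url = resp["stream"]  # e.g. /image/stream/<task id>, per render_internal() below

# The stream endpoint emits JSON segments while rendering, then the final
# response (with base64 images under "output") once the task completes.
result = requests.get(f"http://localhost:9000{stream_url}").text
print(result[:200])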
# API implementations
def set_app_config_internal(req : SetAppConfigRequest):
config = app.getConfig()
if req.update_branch is not None:
config['update_branch'] = req.update_branch
if req.render_devices is not None:
update_render_devices_in_config(config, req.render_devices)
if req.ui_open_browser_on_start is not None:
if 'ui' not in config:
config['ui'] = {}
config['ui']['open_browser_on_start'] = req.ui_open_browser_on_start
if req.listen_to_network is not None:
if 'net' not in config:
config['net'] = {}
config['net']['listen_to_network'] = bool(req.listen_to_network)
if req.listen_port is not None:
if 'net' not in config:
config['net'] = {}
config['net']['listen_port'] = int(req.listen_port)
try:
app.setConfig(config)
if req.render_devices:
app.update_render_threads()
return JSONResponse({'status': 'OK'}, headers=NOCACHE_HEADERS)
except Exception as e:
log.error(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def update_render_devices_in_config(config, render_devices):
if render_devices not in ('cpu', 'auto') and not render_devices.startswith('cuda:'):
raise HTTPException(status_code=400, detail=f'Invalid render device requested: {render_devices}')
if render_devices.startswith('cuda:'):
render_devices = render_devices.split(',')
config['render_devices'] = render_devices
def read_web_data_internal(key:str=None):
if not key: # /get without parameters, stable-diffusion easter egg.
raise HTTPException(status_code=418, detail="StableDiffusion is drawing a teapot!") # HTTP418 I'm a teapot
elif key == 'app_config':
return JSONResponse(app.getConfig(), headers=NOCACHE_HEADERS)
elif key == 'system_info':
config = app.getConfig()
system_info = {
'devices': task_manager.get_devices(),
'hosts': app.getIPConfig(),
'default_output_dir': os.path.join(os.path.expanduser("~"), app.OUTPUT_DIRNAME),
}
system_info['devices']['config'] = config.get('render_devices', "auto")
return JSONResponse(system_info, headers=NOCACHE_HEADERS)
elif key == 'models':
return JSONResponse(model_manager.getModels(), headers=NOCACHE_HEADERS)
elif key == 'modifiers': return FileResponse(os.path.join(app.SD_UI_DIR, 'modifiers.json'), headers=NOCACHE_HEADERS)
elif key == 'ui_plugins': return JSONResponse(app.getUIPlugins(), headers=NOCACHE_HEADERS)
else:
raise HTTPException(status_code=404, detail=f'Request for unknown {key}') # HTTP404 Not Found
def ping_internal(session_id:str=None):
if task_manager.is_alive() <= 0: # Check that render threads are alive.
if task_manager.current_state_error: raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
raise HTTPException(status_code=500, detail='Render thread is dead.')
if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration): raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
# Alive
response = {'status': str(task_manager.current_state)}
if session_id:
session = task_manager.get_cached_session(session_id, update_ttl=True)
response['tasks'] = {id(t): t.status for t in session.tasks}
response['devices'] = task_manager.get_devices()
return JSONResponse(response, headers=NOCACHE_HEADERS)
def render_internal(req: dict):
try:
# separate out the request data into rendering and task-specific data
render_req: GenerateImageRequest = GenerateImageRequest.parse_obj(req)
task_data: TaskData = TaskData.parse_obj(req)
render_req.init_image_mask = req.get('mask') # hack: will rename this in the HTTP API in a future revision
app.save_to_config(task_data.use_stable_diffusion_model, task_data.use_vae_model, task_data.use_hypernetwork_model, task_data.vram_usage_level)
# enqueue the task
new_task = task_manager.render(render_req, task_data)
response = {
'status': str(task_manager.current_state),
'queue': len(task_manager.tasks_queue),
'stream': f'/image/stream/{id(new_task)}',
'task': id(new_task)
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
raise HTTPException(status_code=500, detail=f'Rendering thread has died.') # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending limit reached, deny queueing too many.
raise HTTPException(status_code=503, detail=str(e)) # HTTP503 Service Unavailable
except Exception as e:
log.error(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def stream_internal(task_id:int):
#TODO Move to WebSockets ??
task = task_manager.get_cached_task(task_id, update_ttl=True)
if not task: raise HTTPException(status_code=404, detail=f'Request {task_id} not found.') # HTTP404 NotFound
#if (id(task) != task_id): raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected:{id(task)}, Received:{task_id}') # HTTP409 Conflict
if task.buffer_queue.empty() and not task.lock.locked():
if task.response:
#log.info(f'Session {session_id} sending cached response')
return JSONResponse(task.response, headers=NOCACHE_HEADERS)
raise HTTPException(status_code=425, detail='Too Early, task not started yet.') # HTTP425 Too Early
#log.info(f'Session {session_id} opened live render stream {id(task.buffer_queue)}')
return StreamingResponse(task.read_buffer_generator(), media_type='application/json')
def stop_internal(task: int):
if not task:
if task_manager.current_state == task_manager.ServerStates.Online or task_manager.current_state == task_manager.ServerStates.Unavailable:
raise HTTPException(status_code=409, detail='Not currently running any tasks.') # HTTP409 Conflict
task_manager.current_state_error = StopAsyncIteration('')
return {'OK'}
task_id = task
task = task_manager.get_cached_task(task_id, update_ttl=False)
if not task: raise HTTPException(status_code=404, detail=f'Task {task_id} was not found.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Task {task_id} is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration(f'Task {task_id} stop requested.')
return {'OK'}
def get_image_internal(task_id: int, img_id: int):
task = task_manager.get_cached_task(task_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail=f'Task {task_id} could not be found.') # HTTP410 Gone
if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
try:
img_data = task.temp_images[img_id]
img_data.seek(0)
return StreamingResponse(img_data, media_type='image/jpeg')
except KeyError as e:
raise HTTPException(status_code=500, detail=str(e))


@@ -11,12 +11,13 @@ TASK_TTL = 15 * 60 # seconds, Discard last session's task timeout
import torch
import queue, threading, time, weakref
- from typing import Any, Generator, Hashable, Optional, Union
- from pydantic import BaseModel
- from sd_internal import Request, Response, runtime, device_manager
+ from typing import Any, Hashable
+ from easydiffusion import device_manager
+ from easydiffusion.types import TaskData, GenerateImageRequest
+ from easydiffusion.utils import log
- THREAD_NAME_PREFIX = 'Runtime-Render/'
+ THREAD_NAME_PREFIX = ''
ERR_LOCK_FAILED = ' failed to acquire lock within timeout.'
LOCK_TIMEOUT = 15 # Maximum locking time in seconds before failing a task.
# It's better to get an exception than a deadlock... ALWAYS use timeout in critical paths.
@@ -36,11 +37,13 @@ class ServerStates:
class Unavailable(Symbol): pass
class RenderTask(): # Task with output queue and completion lock.
- def __init__(self, req: Request):
- self.request: Request = req # Initial Request
+ def __init__(self, req: GenerateImageRequest, task_data: TaskData):
+ task_data.request_id = id(self)
+ self.render_request: GenerateImageRequest = req # Initial Request
+ self.task_data: TaskData = task_data
self.response: Any = None # Copy of the last reponse
self.render_device = None # Select the task affinity. (Not used to change active devices).
- self.temp_images:list = [None] * req.num_outputs * (1 if req.show_only_filtered_image else 2)
+ self.temp_images:list = [None] * req.num_outputs * (1 if task_data.show_only_filtered_image else 2)
self.error: Exception = None
self.lock: threading.Lock = threading.Lock() # Locks at task start and unlocks when task is completed
self.buffer_queue: queue.Queue = queue.Queue() # Queue of JSON string segments
@@ -51,53 +54,25 @@ class RenderTask(): # Task with output queue and completion lock.
self.buffer_queue.task_done()
yield res
except queue.Empty as e: yield
+ @property
+ def status(self):
+ if self.lock.locked():
+ return 'running'
+ if isinstance(self.error, StopAsyncIteration):
+ return 'stopped'
+ if self.error:
+ return 'error'
+ if not self.buffer_queue.empty():
+ return 'buffer'
+ if self.response:
+ return 'completed'
+ return 'pending'
+ @property
+ def is_pending(self):
+ return bool(not self.response and not self.error)
- # defaults from https://huggingface.co/blog/stable_diffusion
- class ImageRequest(BaseModel):
- session_id: str = "session"
- prompt: str = ""
- negative_prompt: str = ""
- init_image: str = None # base64
- mask: str = None # base64
- num_outputs: int = 1
- num_inference_steps: int = 50
- guidance_scale: float = 7.5
- width: int = 512
- height: int = 512
- seed: int = 42
- prompt_strength: float = 0.8
- sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
- # allow_nsfw: bool = False
- save_to_disk_path: str = None
- turbo: bool = True
- use_cpu: bool = False ##TODO Remove after UI and plugins transition.
- render_device: str = None # Select the task affinity. (Not used to change active devices).
- use_full_precision: bool = False
- use_face_correction: str = None # or "GFPGANv1.3"
- use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
- use_stable_diffusion_model: str = "sd-v1-4"
- use_vae_model: str = None
- show_only_filtered_image: bool = False
- output_format: str = "jpeg" # or "png"
- stream_progress_updates: bool = False
- stream_image_progress: bool = False
- class FilterRequest(BaseModel):
- session_id: str = "session"
- model: str = None
- name: str = ""
- init_image: str = None # base64
- width: int = 512
- height: int = 512
- save_to_disk_path: str = None
- turbo: bool = True
- render_device: str = None
- use_full_precision: bool = False
- output_format: str = "jpeg" # or "png"
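The removed ImageRequest/FilterRequest models are superseded by GenerateImageRequest and TaskData in ui/easydiffusion/types.py (shown later in this diff). A minimal sketch of how a RenderTask is now built from the pair; the prompt and session id are illustrative, and this assumes the easydiffusion package is importable:

# Minimal sketch: the single ImageRequest is replaced by a rendering-parameters
# model plus a task-bookkeeping model (defined in ui/easydiffusion/types.py).
from easydiffusion.types import GenerateImageRequest, TaskData
from easydiffusion.task_manager import RenderTask

req = GenerateImageRequest(prompt="an astronaut riding a horse", seed=42, num_outputs=1)
task_data = TaskData(session_id="demo-session", output_format="jpeg")

task = RenderTask(req, task_data)    # as defined above
print(task.status, task.is_pending)  # 'pending' True, until a render thread picks it up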
# Temporary cache to allow to query tasks results for a short time after they are completed.
- class TaskCache():
+ class DataCache():
def __init__(self):
self._base = dict()
self._lock: threading.Lock = threading.Lock()
@@ -106,7 +81,7 @@ class TaskCache():
def _is_expired(self, timestamp: int) -> bool:
return int(time.time()) >= timestamp
def clean(self) -> None:
- if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.clean' + ERR_LOCK_FAILED)
+ if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.clean' + ERR_LOCK_FAILED)
try:
# Create a list of expired keys to delete
to_delete = []
@@ -116,16 +91,22 @@ class TaskCache():
to_delete.append(key)
# Remove Items
for key in to_delete:
+ (_, val) = self._base[key]
+ if isinstance(val, RenderTask):
+ log.debug(f'RenderTask {key} expired. Data removed.')
+ elif isinstance(val, SessionState):
+ log.debug(f'Session {key} expired. Data removed.')
+ else:
+ log.debug(f'Key {key} expired. Data removed.')
del self._base[key]
- print(f'Session {key} expired. Data removed.')
finally:
self._lock.release()
def clear(self) -> None:
- if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.clear' + ERR_LOCK_FAILED)
+ if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.clear' + ERR_LOCK_FAILED)
try: self._base.clear()
finally: self._lock.release()
def delete(self, key: Hashable) -> bool:
- if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.delete' + ERR_LOCK_FAILED)
+ if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.delete' + ERR_LOCK_FAILED)
try:
if key not in self._base:
return False
@@ -134,7 +115,7 @@ class TaskCache():
finally:
self._lock.release()
def keep(self, key: Hashable, ttl: int) -> bool:
- if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.keep' + ERR_LOCK_FAILED)
+ if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.keep' + ERR_LOCK_FAILED)
try:
if key in self._base:
_, value = self._base.get(key)
@@ -144,25 +125,24 @@ class TaskCache():
finally:
self._lock.release()
def put(self, key: Hashable, value: Any, ttl: int) -> bool:
- if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.put' + ERR_LOCK_FAILED)
+ if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.put' + ERR_LOCK_FAILED)
try:
self._base[key] = (
self._get_ttl_time(ttl), value
)
except Exception as e:
- print(str(e))
- print(traceback.format_exc())
+ log.error(traceback.format_exc())
return False
else:
return True
finally:
self._lock.release()
def tryGet(self, key: Hashable) -> Any:
- if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.tryGet' + ERR_LOCK_FAILED)
+ if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.tryGet' + ERR_LOCK_FAILED)
try:
ttl, value = self._base.get(key, (None, None))
if ttl is not None and self._is_expired(ttl):
- print(f'Session {key} expired. Discarding data.')
+ log.debug(f'Session {key} expired. Discarding data.')
del self._base[key]
return None
return value
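A minimal usage sketch of the renamed DataCache; the key and TTL are illustrative, and the import assumes the easydiffusion package is available:

# Minimal sketch of how DataCache is used below: put with a TTL, refresh the
# TTL with keep(), and read with tryGet() until it expires.
from easydiffusion.task_manager import DataCache

cache = DataCache()
cache.put('some-key', {'answer': 42}, ttl=15 * 60)  # store for 15 minutes
cache.keep('some-key', ttl=15 * 60)                 # push the expiry forward
value = cache.tryGet('some-key')                    # {'answer': 42}, or None once expired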
@@ -173,43 +153,40 @@ manager_lock = threading.RLock()
render_threads = []
current_state = ServerStates.Init
current_state_error:Exception = None
- current_model_path = None
- current_vae_path = None
tasks_queue = []
- task_cache = TaskCache()
- default_model_to_load = None
- default_vae_to_load = None
+ session_cache = DataCache()
+ task_cache = DataCache()
weak_thread_data = weakref.WeakKeyDictionary()
+ idle_event: threading.Event = threading.Event()
- def preload_model(ckpt_file_path=None, vae_file_path=None):
- global current_state, current_state_error, current_model_path, current_vae_path
- if ckpt_file_path == None:
- ckpt_file_path = default_model_to_load
- if vae_file_path == None:
- vae_file_path = default_vae_to_load
- if ckpt_file_path == current_model_path and vae_file_path == current_vae_path:
- return
- current_state = ServerStates.LoadingModel
- try:
- from . import runtime
- runtime.thread_data.ckpt_file = ckpt_file_path
- runtime.thread_data.vae_file = vae_file_path
- runtime.load_model_ckpt()
- current_model_path = ckpt_file_path
- current_vae_path = vae_file_path
- current_state_error = None
- current_state = ServerStates.Online
- except Exception as e:
- current_model_path = None
- current_vae_path = None
- current_state_error = e
- current_state = ServerStates.Unavailable
- print(traceback.format_exc())
+ class SessionState():
+ def __init__(self, id: str):
+ self._id = id
+ self._tasks_ids = []
+ @property
+ def id(self):
+ return self._id
+ @property
+ def tasks(self):
+ tasks = []
+ for task_id in self._tasks_ids:
+ task = task_cache.tryGet(task_id)
+ if task:
+ tasks.append(task)
+ return tasks
+ def put(self, task, ttl=TASK_TTL):
+ task_id = id(task)
+ self._tasks_ids.append(task_id)
+ if not task_cache.put(task_id, task, ttl):
+ return False
+ while len(self._tasks_ids) > len(render_threads) * 2:
+ self._tasks_ids.pop(0)
+ return True
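A standalone illustration of the SessionState policy added above: a session only remembers task ids, the tasks themselves live in task_cache with a TTL, and the per-session history is capped at twice the render thread count. MiniSession is a hypothetical stand-in, not the project's class:

# Standalone sketch of the history cap used by SessionState.put().
class MiniSession:
    def __init__(self, max_threads):
        self.task_ids = []
        self.max_threads = max_threads
    def put(self, task_id):
        self.task_ids.append(task_id)
        while len(self.task_ids) > self.max_threads * 2:
            self.task_ids.pop(0)  # forget the oldest tasks first

s = MiniSession(max_threads=1)
for task_id in range(5):
    s.put(task_id)
print(s.task_ids)  # [3, 4] -> only the 2 most recent ids are kept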
def thread_get_next_task():
- from . import runtime
+ from easydiffusion import renderer
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
- print('Render thread on device', runtime.thread_data.device, 'failed to acquire manager lock.')
+ log.warn(f'Render thread on device: {renderer.context.device} failed to acquire manager lock.')
return None
if len(tasks_queue) <= 0:
manager_lock.release()
@@ -217,7 +194,7 @@ def thread_get_next_task():
task = None
try: # Select a render task.
for queued_task in tasks_queue:
- if queued_task.render_device and runtime.thread_data.device != queued_task.render_device:
+ if queued_task.render_device and renderer.context.device != queued_task.render_device:
# Is asking for a specific render device.
if is_alive(queued_task.render_device) > 0:
continue # requested device alive, skip current one.
@@ -226,7 +203,7 @@ def thread_get_next_task():
queued_task.error = Exception(queued_task.render_device + ' is not currently active.')
task = queued_task
break
- if not queued_task.render_device and runtime.thread_data.device == 'cpu' and is_alive() > 1:
+ if not queued_task.render_device and renderer.context.device == 'cpu' and is_alive() > 1:
# not asking for any specific devices, cpu want to grab task but other render devices are alive.
continue # Skip Tasks, don't run on CPU unless there is nothing else or user asked for it.
task = queued_task
@@ -238,40 +215,47 @@ def thread_get_next_task():
manager_lock.release()
def thread_render(device):
- global current_state, current_state_error, current_model_path, current_vae_path
- from . import runtime
+ global current_state, current_state_error
+ from easydiffusion import renderer, model_manager
try:
- runtime.thread_init(device)
- except Exception as e:
- print(traceback.format_exc())
- weak_thread_data[threading.current_thread()] = {
- 'error': e
- }
- return
- weak_thread_data[threading.current_thread()] = {
- 'device': runtime.thread_data.device,
- 'device_name': runtime.thread_data.device_name,
- 'alive': True
- }
- if runtime.thread_data.device != 'cpu' or is_alive() == 1:
- preload_model()
- current_state = ServerStates.Online
+ renderer.init(device)
+ weak_thread_data[threading.current_thread()] = {
+ 'device': renderer.context.device,
+ 'device_name': renderer.context.device_name,
+ 'alive': True
+ }
+ current_state = ServerStates.LoadingModel
+ model_manager.load_default_models(renderer.context)
+ current_state = ServerStates.Online
+ except Exception as e:
+ log.error(traceback.format_exc())
+ weak_thread_data[threading.current_thread()] = {
+ 'error': e,
+ 'alive': False
+ }
+ return
while True:
+ session_cache.clean()
task_cache.clean()
if not weak_thread_data[threading.current_thread()]['alive']:
- print(f'Shutting down thread for device {runtime.thread_data.device}')
- runtime.unload_models()
- runtime.unload_filters()
+ log.info(f'Shutting down thread for device {renderer.context.device}')
+ model_manager.unload_all(renderer.context)
return
if isinstance(current_state_error, SystemExit):
current_state = ServerStates.Unavailable
return
task = thread_get_next_task()
if task is None:
- time.sleep(0.05)
+ idle_event.clear()
+ idle_event.wait(timeout=1)
continue
if task.error is not None:
- print(task.error)
+ log.error(task.error)
task.response = {"status": 'failed', "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
continue
@@ -280,70 +264,62 @@ def thread_render(device):
task.response = {"status": 'failed', "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
continue
- print(f'Session {task.request.session_id} starting task {id(task)} on {runtime.thread_data.device_name}')
+ log.info(f'Session {task.task_data.session_id} starting task {id(task)} on {renderer.context.device_name}')
if not task.lock.acquire(blocking=False): raise Exception('Got locked task from queue.')
try:
- if runtime.thread_data.device == 'cpu' and is_alive() > 1:
- # CPU is not the only device. Keep track of active time to unload resources later.
- runtime.thread_data.lastActive = time.time()
- # Open data generator.
- res = runtime.mk_img(task.request)
- if current_model_path == task.request.use_stable_diffusion_model:
- current_state = ServerStates.Rendering
- else:
- current_state = ServerStates.LoadingModel
- # Start reading from generator.
- dataQueue = None
- if task.request.stream_progress_updates:
- dataQueue = task.buffer_queue
- for result in res:
- if current_state == ServerStates.LoadingModel:
- current_state = ServerStates.Rendering
- current_model_path = task.request.use_stable_diffusion_model
- current_vae_path = task.request.use_vae_model
- if isinstance(current_state_error, SystemExit) or isinstance(current_state_error, StopAsyncIteration) or isinstance(task.error, StopAsyncIteration):
- runtime.thread_data.stop_processing = True
- if isinstance(current_state_error, StopAsyncIteration):
- task.error = current_state_error
- current_state_error = None
- print(f'Session {task.request.session_id} sent cancel signal for task {id(task)}')
- if dataQueue:
- dataQueue.put(result)
- if isinstance(result, str):
- result = json.loads(result)
- task.response = result
- if 'output' in result:
- for out_obj in result['output']:
- if 'path' in out_obj:
- img_id = out_obj['path'][out_obj['path'].rindex('/') + 1:]
- task.temp_images[int(img_id)] = runtime.thread_data.temp_images[out_obj['path'][11:]]
- elif 'data' in out_obj:
- buf = runtime.base64_str_to_buffer(out_obj['data'])
- task.temp_images[result['output'].index(out_obj)] = buf
- # Before looping back to the generator, mark cache as still alive.
- task_cache.keep(task.request.session_id, TASK_TTL)
+ def step_callback():
+ global current_state_error
+ if isinstance(current_state_error, SystemExit) or isinstance(current_state_error, StopAsyncIteration) or isinstance(task.error, StopAsyncIteration):
+ renderer.context.stop_processing = True
+ if isinstance(current_state_error, StopAsyncIteration):
+ task.error = current_state_error
+ current_state_error = None
+ log.info(f'Session {task.task_data.session_id} sent cancel signal for task {id(task)}')
+ current_state = ServerStates.LoadingModel
+ model_manager.resolve_model_paths(task.task_data)
+ model_manager.reload_models_if_necessary(renderer.context, task.task_data)
+ current_state = ServerStates.Rendering
+ task.response = renderer.make_images(task.render_request, task.task_data, task.buffer_queue, task.temp_images, step_callback)
+ # Before looping back to the generator, mark cache as still alive.
+ task_cache.keep(id(task), TASK_TTL)
+ session_cache.keep(task.task_data.session_id, TASK_TTL)
except Exception as e:
task.error = e
- print(traceback.format_exc())
+ task.response = {"status": 'failed', "detail": str(task.error)}
+ task.buffer_queue.put(json.dumps(task.response))
+ log.error(traceback.format_exc())
continue
finally:
# Task completed
task.lock.release()
- task_cache.keep(task.request.session_id, TASK_TTL)
+ task_cache.keep(id(task), TASK_TTL)
+ session_cache.keep(task.task_data.session_id, TASK_TTL)
if isinstance(task.error, StopAsyncIteration):
- print(f'Session {task.request.session_id} task {id(task)} cancelled!')
+ log.info(f'Session {task.task_data.session_id} task {id(task)} cancelled!')
elif task.error is not None:
- print(f'Session {task.request.session_id} task {id(task)} failed!')
+ log.info(f'Session {task.task_data.session_id} task {id(task)} failed!')
else:
- print(f'Session {task.request.session_id} task {id(task)} completed by {runtime.thread_data.device_name}.')
+ log.info(f'Session {task.task_data.session_id} task {id(task)} completed by {renderer.context.device_name}.')
current_state = ServerStates.Online
- def get_cached_task(session_id:str, update_ttl:bool=False):
+ def get_cached_task(task_id:str, update_ttl:bool=False):
# By calling keep before tryGet, wont discard if was expired.
- if update_ttl and not task_cache.keep(session_id, TASK_TTL):
+ if update_ttl and not task_cache.keep(task_id, TASK_TTL):
# Failed to keep task, already gone.
return None
- return task_cache.tryGet(session_id)
+ return task_cache.tryGet(task_id)
+ def get_cached_session(session_id:str, update_ttl:bool=False):
+ if update_ttl:
+ session_cache.keep(session_id, TASK_TTL)
+ session = session_cache.tryGet(session_id)
+ if not session:
+ session = SessionState(session_id)
+ session_cache.put(session_id, session, TASK_TTL)
+ return session
def get_devices():
devices = {
@@ -363,6 +339,7 @@ def get_devices():
'name': torch.cuda.get_device_name(device),
'mem_free': mem_free,
'mem_total': mem_total,
+ 'max_vram_usage_level': device_manager.get_max_vram_usage_level(device),
}
# list the compatible devices
@@ -412,7 +389,7 @@ def is_alive(device=None):
def start_render_thread(device):
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('start_render_thread' + ERR_LOCK_FAILED)
- print('Start new Rendering Thread on device', device)
+ log.info(f'Start new Rendering Thread on device: {device}')
try:
rthread = threading.Thread(target=thread_render, kwargs={'device': device})
rthread.daemon = True
@@ -424,7 +401,7 @@ def start_render_thread(device):
timeout = DEVICE_START_TIMEOUT
while not rthread.is_alive() or not rthread in weak_thread_data or not 'device' in weak_thread_data[rthread]:
if rthread in weak_thread_data and 'error' in weak_thread_data[rthread]:
- print(rthread, device, 'error:', weak_thread_data[rthread]['error'])
+ log.error(f"{rthread}, {device}, error: {weak_thread_data[rthread]['error']}")
return False
if timeout <= 0:
return False
@@ -436,11 +413,11 @@ def stop_render_thread(device):
try:
device_manager.validate_device_id(device, log_prefix='stop_render_thread')
except:
- print(traceback.format_exc())
+ log.error(traceback.format_exc())
return False
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('stop_render_thread' + ERR_LOCK_FAILED)
- print('Stopping Rendering Thread on device', device)
+ log.info(f'Stopping Rendering Thread on device: {device}')
try:
thread_to_remove = None
@@ -463,81 +440,51 @@ def stop_render_thread(device):
def update_render_threads(render_devices, active_devices):
devices_to_start, devices_to_stop = device_manager.get_device_delta(render_devices, active_devices)
- print('devices_to_start', devices_to_start)
- print('devices_to_stop', devices_to_stop)
+ log.debug(f'devices_to_start: {devices_to_start}')
+ log.debug(f'devices_to_stop: {devices_to_stop}')
for device in devices_to_stop:
if is_alive(device) <= 0:
- print(device, 'is not alive')
+ log.debug(f'{device} is not alive')
continue
if not stop_render_thread(device):
- print(device, 'could not stop render thread')
+ log.warn(f'{device} could not stop render thread')
for device in devices_to_start:
if is_alive(device) >= 1:
- print(device, 'already registered.')
+ log.debug(f'{device} already registered.')
continue
if not start_render_thread(device):
- print(device, 'failed to start.')
+ log.warn(f'{device} failed to start.')
if is_alive() <= 0: # No running devices, probably invalid user config.
raise EnvironmentError('ERROR: No active render devices! Please verify the "render_devices" value in config.json')
- print('active devices', get_devices()['active'])
+ log.debug(f"active devices: {get_devices()['active']}")
def shutdown_event(): # Signal render thread to close on shutdown
global current_state_error
current_state_error = SystemExit('Application shutting down.')
- def render(req : ImageRequest):
- if is_alive() <= 0: # Render thread is dead
+ def render(render_req: GenerateImageRequest, task_data: TaskData):
+ current_thread_count = is_alive()
+ if current_thread_count <= 0: # Render thread is dead
raise ChildProcessError('Rendering thread has died.')
# Alive, check if task in cache
- task = task_cache.tryGet(req.session_id)
- if task and not task.response and not task.error and not task.lock.locked():
- # Unstarted task pending, deny queueing more than one.
- raise ConnectionRefusedError(f'Session {req.session_id} has an already pending task.')
- #
- from . import runtime
- r = Request()
- r.session_id = req.session_id
- r.prompt = req.prompt
- r.negative_prompt = req.negative_prompt
- r.init_image = req.init_image
- r.mask = req.mask
- r.num_outputs = req.num_outputs
- r.num_inference_steps = req.num_inference_steps
- r.guidance_scale = req.guidance_scale
- r.width = req.width
- r.height = req.height
- r.seed = req.seed
- r.prompt_strength = req.prompt_strength
- r.sampler = req.sampler
- # r.allow_nsfw = req.allow_nsfw
- r.turbo = req.turbo
- r.use_full_precision = req.use_full_precision
- r.save_to_disk_path = req.save_to_disk_path
- r.use_upscale: str = req.use_upscale
- r.use_face_correction = req.use_face_correction
- r.use_stable_diffusion_model = req.use_stable_diffusion_model
- r.use_vae_model = req.use_vae_model
- r.show_only_filtered_image = req.show_only_filtered_image
- r.output_format = req.output_format
- r.stream_progress_updates = True # the underlying implementation only supports streaming
- r.stream_image_progress = req.stream_image_progress
- if not req.stream_progress_updates:
- r.stream_image_progress = False
- new_task = RenderTask(r)
- if task_cache.put(r.session_id, new_task, TASK_TTL):
+ session = get_cached_session(task_data.session_id, update_ttl=True)
+ pending_tasks = list(filter(lambda t: t.is_pending, session.tasks))
+ if current_thread_count < len(pending_tasks):
+ raise ConnectionRefusedError(f'Session {task_data.session_id} already has {len(pending_tasks)} pending tasks out of {current_thread_count}.')
+ new_task = RenderTask(render_req, task_data)
+ if session.put(new_task, TASK_TTL):
# Use twice the normal timeout for adding user requests.
- # Tries to force task_cache.put to fail before tasks_queue.put would.
+ # Tries to force session.put to fail before tasks_queue.put would.
if manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT * 2):
try:
tasks_queue.append(new_task)
+ idle_event.set()
return new_task
finally:
manager_lock.release()
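The old time.sleep(0.05) polling in the render loop is replaced by idle_event. A standalone sketch of this wake-up pattern, not the project's code:

# Generic producer/consumer wake-up: the worker waits on an Event instead of
# busy-polling, and the producer sets the Event when a task is queued.
import threading, queue, time

idle_event = threading.Event()
tasks = queue.Queue()

def worker():
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            idle_event.clear()
            idle_event.wait(timeout=1)  # sleep until woken, or 1 second at most
            continue
        if task is None:                # sentinel: shut down
            return
        print('processing', task)

def enqueue(task):
    tasks.put(task)
    idle_event.set()                    # wake an idle worker immediately

t = threading.Thread(target=worker, daemon=True); t.start()
enqueue('demo-task'); enqueue(None); time.sleep(0.5)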

ui/easydiffusion/types.py (new file, 87 lines)

@@ -0,0 +1,87 @@
from pydantic import BaseModel
from typing import Any
class GenerateImageRequest(BaseModel):
prompt: str = ""
negative_prompt: str = ""
seed: int = 42
width: int = 512
height: int = 512
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
init_image: Any = None
init_image_mask: Any = None
prompt_strength: float = 0.8
preserve_init_image_color_profile = False
sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
hypernetwork_strength: float = 0
class TaskData(BaseModel):
request_id: str = None
session_id: str = "session"
save_to_disk_path: str = None
vram_usage_level: str = "balanced" # or "low" or "high"
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
use_stable_diffusion_config: str = "v1-inference"
use_vae_model: str = None
use_hypernetwork_model: str = None
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
output_quality: int = 75
metadata_output_format: str = "txt" # or "json"
stream_image_progress: bool = False
class Image:
data: str # base64
seed: int
is_nsfw: bool
path_abs: str = None
def __init__(self, data, seed):
self.data = data
self.seed = seed
def json(self):
return {
"data": self.data,
"seed": self.seed,
"path_abs": self.path_abs,
}
class Response:
render_request: GenerateImageRequest
task_data: TaskData
images: list
def __init__(self, render_request: GenerateImageRequest, task_data: TaskData, images: list):
self.render_request = render_request
self.task_data = task_data
self.images = images
def json(self):
del self.render_request.init_image
del self.render_request.init_image_mask
res = {
"status": 'succeeded',
"render_request": self.render_request.dict(),
"task_data": self.task_data.dict(),
"output": [],
}
for image in self.images:
res["output"].append(image.json())
return res
class UserInitiatedStop(Exception):
pass
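For reference, an illustrative sketch of the JSON that Response.json() produces; the field values are made up, the render_request and task_data dicts are abridged, and the base64 payload is elided:

# Illustrative shape of Response.json(); abridged and with the image data elided.
example_response = {
    "status": "succeeded",
    "render_request": {"prompt": "a castle on a hill", "seed": 42, "width": 512, "height": 512},  # init_image / init_image_mask removed
    "task_data": {"session_id": "demo-session", "output_format": "jpeg"},
    "output": [
        {"data": "<base64-encoded jpeg>", "seed": 42, "path_abs": None},
    ],
}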


@@ -0,0 +1,8 @@
import logging
log = logging.getLogger('easydiffusion')
from .save_utils import (
save_images_to_disk,
get_printable_request,
)


@@ -0,0 +1,79 @@
import os
import time
import base64
import re

from easydiffusion.types import TaskData, GenerateImageRequest

from sdkit.utils import save_images, save_dicts

filename_regex = re.compile('[^a-zA-Z0-9]')

# keep in sync with `ui/media/js/dnd.js`
TASK_TEXT_MAPPING = {
    'prompt': 'Prompt',
    'width': 'Width',
    'height': 'Height',
    'seed': 'Seed',
    'num_inference_steps': 'Steps',
    'guidance_scale': 'Guidance Scale',
    'prompt_strength': 'Prompt Strength',
    'use_face_correction': 'Use Face Correction',
    'use_upscale': 'Use Upscaling',
    'sampler_name': 'Sampler',
    'negative_prompt': 'Negative Prompt',
    'use_stable_diffusion_model': 'Stable Diffusion model',
    'use_hypernetwork_model': 'Hypernetwork model',
    'hypernetwork_strength': 'Hypernetwork Strength'
}

def save_images_to_disk(images: list, filtered_images: list, req: GenerateImageRequest, task_data: TaskData):
    save_dir_path = os.path.join(task_data.save_to_disk_path, filename_regex.sub('_', task_data.session_id))
    metadata_entries = get_metadata_entries_for_request(req, task_data)

    if task_data.show_only_filtered_image or filtered_images == images:
        save_images(filtered_images, save_dir_path, file_name=make_filename_callback(req), output_format=task_data.output_format, output_quality=task_data.output_quality)
        save_dicts(metadata_entries, save_dir_path, file_name=make_filename_callback(req), output_format=task_data.metadata_output_format)
    else:
        save_images(images, save_dir_path, file_name=make_filename_callback(req), output_format=task_data.output_format, output_quality=task_data.output_quality)
        save_images(filtered_images, save_dir_path, file_name=make_filename_callback(req, suffix='filtered'), output_format=task_data.output_format, output_quality=task_data.output_quality)
        save_dicts(metadata_entries, save_dir_path, file_name=make_filename_callback(req, suffix='filtered'), output_format=task_data.metadata_output_format)

def get_metadata_entries_for_request(req: GenerateImageRequest, task_data: TaskData):
    metadata = get_printable_request(req)
    metadata.update({
        'use_stable_diffusion_model': task_data.use_stable_diffusion_model,
        'use_vae_model': task_data.use_vae_model,
        'use_hypernetwork_model': task_data.use_hypernetwork_model,
        'use_face_correction': task_data.use_face_correction,
        'use_upscale': task_data.use_upscale,
    })

    # if text, format it in the text format expected by the UI
    is_txt_format = (task_data.metadata_output_format.lower() == 'txt')
    if is_txt_format:
        metadata = {TASK_TEXT_MAPPING[key]: val for key, val in metadata.items() if key in TASK_TEXT_MAPPING}

    entries = [metadata.copy() for _ in range(req.num_outputs)]
    for i, entry in enumerate(entries):
        entry['Seed' if is_txt_format else 'seed'] = req.seed + i

    return entries

def get_printable_request(req: GenerateImageRequest):
    metadata = req.dict()
    del metadata['init_image']
    del metadata['init_image_mask']
    return metadata

def make_filename_callback(req: GenerateImageRequest, suffix=None):
    def make_filename(i):
        img_id = base64.b64encode(int(time.time()+i).to_bytes(8, 'big')).decode() # Generate unique ID based on time.
        img_id = img_id.translate({43:None, 47:None, 61:None})[-8:] # Remove + / = and keep last 8 chars.

        prompt_flattened = filename_regex.sub('_', req.prompt)[:50]
        name = f"{prompt_flattened}_{img_id}"
        name = name if suffix is None else f'{name}_{suffix}'
        return name

    return make_filename
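A short usage sketch (not from the diff) of the helpers above. The package path easydiffusion.utils, the request/task values, and the use of PIL images are assumptions for illustration; the __init__.py shown earlier re-exports these helpers from whatever package this file actually lives in.

# Hypothetical example: save two images plus one .txt metadata file per image.
from PIL import Image
from easydiffusion.types import GenerateImageRequest, TaskData
from easydiffusion.utils import save_images_to_disk  # package path assumed

req = GenerateImageRequest(prompt="an astronaut riding a horse", seed=42, num_outputs=2)
task_data = TaskData(session_id="session", save_to_disk_path="/tmp/easydiffusion",
                     output_format="png", metadata_output_format="txt")

images = [Image.new("RGB", (512, 512)) for _ in range(req.num_outputs)]  # PIL images assumed

# filenames look like "an_astronaut_riding_a_horse_<8-char id>.png"; the metadata
# entries get seeds 42 and 43, keyed by the human-readable TASK_TEXT_MAPPING labels
# because the metadata format is 'txt'
save_images_to_disk(images, images, req, task_data)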


@ -3,6 +3,7 @@
<head> <head>
<title>Stable Diffusion UI</title> <title>Stable Diffusion UI</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="theme-color" content="#673AB6">
<link rel="icon" type="image/png" href="/media/images/favicon-16x16.png" sizes="16x16"> <link rel="icon" type="image/png" href="/media/images/favicon-16x16.png" sizes="16x16">
<link rel="icon" type="image/png" href="/media/images/favicon-32x32.png" sizes="32x32"> <link rel="icon" type="image/png" href="/media/images/favicon-32x32.png" sizes="32x32">
<link rel="stylesheet" href="/media/css/fonts.css"> <link rel="stylesheet" href="/media/css/fonts.css">
@ -11,16 +12,21 @@
<link rel="stylesheet" href="/media/css/auto-save.css"> <link rel="stylesheet" href="/media/css/auto-save.css">
<link rel="stylesheet" href="/media/css/modifier-thumbnails.css"> <link rel="stylesheet" href="/media/css/modifier-thumbnails.css">
<link rel="stylesheet" href="/media/css/fontawesome-all.min.css"> <link rel="stylesheet" href="/media/css/fontawesome-all.min.css">
<link rel="stylesheet" href="/media/css/drawingboard.min.css"> <link rel="stylesheet" href="/media/css/image-editor.css">
<link rel="stylesheet" href="/media/css/jquery-confirm.min.css">
<link rel="manifest" href="/media/manifest.webmanifest">
<script src="/media/js/jquery-3.6.1.min.js"></script> <script src="/media/js/jquery-3.6.1.min.js"></script>
<script src="/media/js/drawingboard.min.js"></script> <script src="/media/js/jquery-confirm.min.js"></script>
<script src="/media/js/marked.min.js"></script> <script src="/media/js/marked.min.js"></script>
</head> </head>
<body> <body>
<div id="container"> <div id="container">
<div id="top-nav"> <div id="top-nav">
<div id="logo"> <div id="logo">
<h1>Stable Diffusion UI <small>v2.4.13 <span id="updateBranchLabel"></span></small></h1> <h1>
Easy Diffusion
<small>v2.5.0 <span id="updateBranchLabel"></span></small>
</h1>
</div> </div>
<div id="server-status"> <div id="server-status">
<div id="server-status-color"></div> <div id="server-status-color"></div>
@ -49,7 +55,7 @@
<input id="prompt_from_file" name="prompt_from_file" type="file" /> <!-- hidden --> <input id="prompt_from_file" name="prompt_from_file" type="file" /> <!-- hidden -->
<label for="negative_prompt" class="collapsible" id="negative_prompt_handle"> <label for="negative_prompt" class="collapsible" id="negative_prompt_handle">
Negative Prompt Negative Prompt
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about Negative Prompts</span></i></a> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Negative Prompts</span></i></a>
<small>(optional)</small> <small>(optional)</small>
</label> </label>
<div class="collapsible-content"> <div class="collapsible-content">
@ -58,33 +64,47 @@
</div> </div>
<div id="editor-inputs-init-image" class="row"> <div id="editor-inputs-init-image" class="row">
<label for="init_image">Initial Image (img2img) <small>(optional)</small> </label> <input id="init_image" name="init_image" type="file" /><br/> <label for="init_image">Initial Image (img2img) <small>(optional)</small> </label>
<div id="init_image_preview_container" class="image_preview_container"> <div id="init_image_preview_container" class="image_preview_container">
<div id="init_image_wrapper"> <div id="init_image_wrapper">
<img id="init_image_preview" src="" /> <img id="init_image_preview" src="" />
<span id="init_image_size_box"></span> <span id="init_image_size_box"></span>
<button class="init_image_clear image_clear_btn">X</button> <button class="init_image_clear image_clear_btn"><i class="fa-solid fa-xmark"></i></button>
</div>
<div id="init_image_buttons">
<div class="button">
<i class="fa-regular fa-folder-open"></i>
Browse
<input id="init_image" name="init_image" type="file" />
</div>
<div id="init_image_button_draw" class="button">
<i class="fa-solid fa-pencil"></i>
Draw
</div>
<div id="inpaint_button_container">
<div id="init_image_button_inpaint" class="button">
<i class="fa-solid fa-paintbrush"></i>
Inpaint
</div>
<input id="enable_mask" name="enable_mask" type="checkbox">
</div>
</div> </div>
<br/>
<input id="enable_mask" name="enable_mask" type="checkbox">
<label for="enable_mask">
In-Painting (beta)
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Inpainting" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about InPainting</span></i></a>
<small>(select the area which the AI will paint into)</small>
</label>
<div id="inpaintingEditor"></div>
</div> </div>
</div> </div>
<div id="editor-inputs-tags-container" class="row"> <div id="editor-inputs-tags-container" class="row">
<label>Image Modifiers: <small>(click an Image Modifier to remove it)</small></label> <label>Image Modifiers <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">click an Image Modifier to remove it, use Ctrl+Mouse Wheel to adjust its weight</span></i>:</label>
<div id="editor-inputs-tags-list"></div> <div id="editor-inputs-tags-list"></div>
</div> </div>
<button id="makeImage" class="primaryButton">Make Image</button> <button id="makeImage" class="primaryButton">Make Image</button>
<button id="stopImage" class="secondaryButton">Stop All</button> <div id="render-buttons">
<button id="stopImage" class="secondaryButton">Stop All</button>
<button id="pause"><i class="fa-solid fa-pause"></i> Pause All</button>
<button id="resume"><i class="fa-solid fa-play"></i> Resume</button>
</div>
</div> </div>
<span class="line-separator"></span> <span class="line-separator"></span>
@ -93,7 +113,7 @@
<h4 class="collapsible"> <h4 class="collapsible">
Image Settings Image Settings
<i id="reset-image-settings" class="fa-solid fa-arrow-rotate-left section-button"> <i id="reset-image-settings" class="fa-solid fa-arrow-rotate-left section-button">
<span class="simple-tooltip right"> <span class="simple-tooltip top-left">
Reset Image Settings Reset Image Settings
</span> </span>
</i> </i>
@ -101,32 +121,42 @@
<div id="editor-settings-entries" class="collapsible-content"> <div id="editor-settings-entries" class="collapsible-content">
<div><table> <div><table>
<tr><b class="settings-subheader">Image Settings</b></tr> <tr><b class="settings-subheader">Image Settings</b></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="30000" onkeypress="preventNonNumericalInput(event)"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr> <tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="0" onkeypress="preventNonNumericalInput(event)"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td><td><input id="num_outputs_total" name="num_outputs_total" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label><small>(total)</small></label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label for="num_outputs_parallel"><small>(in parallel)</small></label></td></tr> <tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td><td><input id="num_outputs_total" name="num_outputs_total" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label><small>(total)</small></label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label for="num_outputs_parallel"><small>(in parallel)</small></label></td></tr>
<tr class="pl-5"><td><label for="stable_diffusion_model">Model:</label></td><td> <tr class="pl-5"><td><label for="stable_diffusion_model">Model:</label></td><td>
<select id="stable_diffusion_model" name="stable_diffusion_model"> <select id="stable_diffusion_model" name="stable_diffusion_model">
<!-- <option value="sd-v1-4" selected>sd-v1-4</option> --> <!-- <option value="sd-v1-4" selected>sd-v1-4</option> -->
</select> </select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about custom models</span></i></a> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about custom models</span></i></a>
</td></tr> </td></tr>
<!-- <tr id="modelConfigSelection" class="pl-5"><td><label for="model_config">Model Config:</i></label></td><td>
<select id="model_config" name="model_config">
</select>
</td></tr> -->
<tr class="pl-5"><td><label for="vae_model">Custom VAE:</i></label></td><td> <tr class="pl-5"><td><label for="vae_model">Custom VAE:</i></label></td><td>
<select id="vae_model" name="vae_model"> <select id="vae_model" name="vae_model">
<!-- <option value="" selected>None</option> --> <!-- <option value="" selected>None</option> -->
</select> </select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about VAEs</span></i></a> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about VAEs</span></i></a>
</td></tr> </td></tr>
<tr id="samplerSelection" class="pl-5"><td><label for="sampler">Sampler:</label></td><td> <tr id="samplerSelection" class="pl-5"><td><label for="sampler_name">Sampler:</label></td><td>
<select id="sampler" name="sampler"> <select id="sampler_name" name="sampler_name">
<option value="plms">plms</option> <option value="plms">PLMS</option>
<option value="ddim">ddim</option> <option value="ddim">DDIM</option>
<option value="heun">heun</option> <option value="heun">Heun</option>
<option value="euler">euler</option> <option value="euler">Euler</option>
<option value="euler_a" selected>euler_a</option> <option value="euler_a" selected>Euler Ancestral</option>
<option value="dpm2">dpm2</option> <option value="dpm2">DPM2</option>
<option value="dpm2_a">dpm2_a</option> <option value="dpm2_a">DPM2 Ancestral</option>
<option value="lms">lms</option> <option value="lms">LMS</option>
<option value="dpm_solver_stability">DPM Solver (Stability AI)</option>
<option value="dpmpp_2s_a" selected>DPM++ 2s Ancestral</option>
<option value="dpmpp_2m">DPM++ 2m</option>
<option value="dpmpp_sde">DPM++ SDE</option>
<option value="dpm_fast">DPM Fast</option>
<option value="dpm_adaptive">DPM Adaptive</option>
</select> </select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about samplers</span></i></a> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
</td></tr> </td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td> <tr class="pl-5"><td><label>Image Size: </label></td><td>
<select id="width" name="width" value="512"> <select id="width" name="width" value="512">
@ -175,19 +205,32 @@
<label for="height"><small>(height)</small></label> <label for="height"><small>(height)</small></label>
</td></tr> </td></tr>
<tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" size="4" value="25" onkeypress="preventNonNumericalInput(event)"></td></tr> <tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" size="4" value="25" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="10" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr> <tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="11" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td></tr></span> <tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td></tr>
<tr class="pl-5"><td><label for="hypernetwork_model">Hypernetwork:</i></label></td><td>
<select id="hypernetwork_model" name="hypernetwork_model">
<!-- <option value="" selected>None</option> -->
</select>
</td></tr>
<tr id="hypernetwork_strength_container" class="pl-5">
<td><label for="hypernetwork_strength_slider">Hypernetwork Strength:</label></td>
<td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td>
</tr>
<tr class="pl-5"><td><label for="output_format">Output Format:</label></td><td> <tr class="pl-5"><td><label for="output_format">Output Format:</label></td><td>
<select id="output_format" name="output_format"> <select id="output_format" name="output_format">
<option value="jpeg" selected>jpeg</option> <option value="jpeg" selected>jpeg</option>
<option value="png">png</option> <option value="png">png</option>
</select> </select>
</td></tr> </td></tr>
<tr class="pl-5" id="output_quality_row"><td><label for="output_quality">JPEG Quality:</label></td><td>
<input id="output_quality_slider" name="output_quality" class="editor-slider" value="75" type="range" min="10" max="95"> <input id="output_quality" name="output_quality" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)">
</td></tr>
</table></div> </table></div>
<div><ul> <div><ul>
<li><b class="settings-subheader">Render Settings</b></li> <li><b class="settings-subheader">Render Settings</b></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, and slower image creation)</small></label></li> <li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, slower images)</small></label></li>
<li id="apply_color_correction_setting" class="pl-5"><input id="apply_color_correction" name="apply_color_correction" type="checkbox"> <label for="apply_color_correction">Preserve color profile <small>(helps during inpainting)</small></label></li>
<li class="pl-5"><input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes <small>(uses GFPGAN)</small></label></li> <li class="pl-5"><input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes <small>(uses GFPGAN)</small></label></li>
<li class="pl-5"> <li class="pl-5">
<input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale image by 4x with </label> <input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale image by 4x with </label>
@ -247,8 +290,17 @@
<br/><br/> <br/><br/>
<div> <div>
<h3><i class="fa fa-microchip icon"></i> System Info</h3> <h3><i class="fa fa-microchip icon"></i> System Info</h3>
<div id="system-info"></div> <div id="system-info">
<table>
<tr><td><label>Processor:</label></td><td id="system-info-cpu" class="value"></td></tr>
<tr><td><label>Compatible Graphics Cards (all):</label></td><td id="system-info-gpus-all" class="value"></td></tr>
<tr><td></td><td>&nbsp;</td></tr>
<tr><td><label>Used for rendering 🔥:</label></td><td id="system-info-rendering-devices" class="value"></td></tr>
<tr><td><label>Server Addresses <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">You can access Stable Diffusion UI from other devices using these addresses</span></i> :</label></td><td id="system-info-server-hosts" class="value"></td></tr>
</table>
</div>
</div> </div>
</div> </div>
</div> </div>
<div id="tab-content-about" class="tab-content"> <div id="tab-content-about" class="tab-content">
@ -314,6 +366,38 @@
</div> </div>
</div> </div>
<div id="image-editor" class="popup image-editor-popup">
<div>
<i class="close-button fa-solid fa-xmark"></i>
<h1>Image Editor</h1>
<div class="flex-container">
<div class="editor-controls-left"></div>
<div class="editor-controls-center">
<div></div>
</div>
<div class="editor-controls-right">
<div></div>
</div>
</div>
</div>
</div>
<div id="image-inpainter" class="popup image-editor-popup">
<div>
<i class="close-button fa-solid fa-xmark"></i>
<h1>Inpainter</h1>
<div class="flex-container">
<div class="editor-controls-left"></div>
<div class="editor-controls-center">
<div></div>
</div>
<div class="editor-controls-right">
<div></div>
</div>
</div>
</div>
</div>
<div id="footer-spacer"></div> <div id="footer-spacer"></div>
<div id="footer"> <div id="footer">
<div class="line-separator">&nbsp;</div> <div class="line-separator">&nbsp;</div>
@ -327,28 +411,33 @@
</div> </div>
</div> </div>
</body> </body>
<script src="media/js/utils.js"></script> <script src="media/js/utils.js"></script>
<script src="media/js/engine.js"></script>
<script src="media/js/parameters.js"></script> <script src="media/js/parameters.js"></script>
<script src="media/js/plugins.js"></script> <script src="media/js/plugins.js"></script>
<script src="media/js/inpainting-editor.js"></script>
<script src="media/js/image-modifiers.js"></script> <script src="media/js/image-modifiers.js"></script>
<script src="media/js/auto-save.js"></script> <script src="media/js/auto-save.js"></script>
<script src="media/js/main.js"></script> <script src="media/js/main.js"></script>
<script src="media/js/themes.js"></script> <script src="media/js/themes.js"></script>
<script src="media/js/dnd.js"></script> <script src="media/js/dnd.js"></script>
<script src="media/js/image-editor.js"></script>
<script> <script>
async function init() { async function init() {
await initSettings() await initSettings()
await getModels() await getModels()
await getDiskPath()
await getAppConfig() await getAppConfig()
await loadModifiers()
await loadUIPlugins() await loadUIPlugins()
await getDevices() await loadModifiers()
await getSystemInfo()
setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000) SD.init({
healthCheck() events: {
statusChange: setServerStatus
, idle: onIdle
}
})
playSound() playSound()
} }

ui/main.py (new file, 10 lines)

@ -0,0 +1,10 @@
from easydiffusion import model_manager, app, server
from easydiffusion.server import server_api # required for uvicorn

# Init the app
model_manager.init()
app.init()
server.init()

# start the browser ui
app.open_browser()
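The "required for uvicorn" comment above suggests this module is loaded by uvicorn as the ASGI entry point. Below is a hedged sketch of an equivalent programmatic launch; the host and port are illustrative guesses, not taken from the diff.

# Roughly equivalent to: uvicorn main:server_api --host 0.0.0.0 --port 9000
# (host/port are assumptions; main.py itself only initializes and opens the browser)
import uvicorn

uvicorn.run("main:server_api", host="0.0.0.0", port=9000)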

File diff suppressed because one or more lines are too long


@ -0,0 +1,215 @@
.editor-controls-left {
padding-left: 32px;
text-align: left;
padding-bottom: 20px;
}
.editor-options-container {
display: flex;
row-gap: 10px;
max-width: 210px;
}
.editor-options-container > * {
flex: 1;
display: flex;
justify-content: center;
align-items: center;
}
.editor-options-container > * > * {
position: inherit;
width: 32px;
height: 32px;
border-radius: 16px;
background: var(--background-color3);
cursor: pointer;
transition: opacity 0.25s;
}
.editor-options-container > * > *:hover {
opacity: 0.75;
}
.editor-options-container > * > *.active {
border: 2px solid #3584e4;
}
.image_editor_opacity .editor-options-container > * > *:not(.active) {
border: 1px solid var(--background-color3);
}
.image_editor_color .editor-options-container {
flex-wrap: wrap;
}
.image_editor_color .editor-options-container > * {
flex: 20%;
}
.image_editor_color .editor-options-container > * > * {
position: relative;
}
.image_editor_color .editor-options-container > * > *.active::before {
content: "\f00c";
display: var(--fa-display,inline-block);
font-style: normal;
font-variant: normal;
line-height: 1;
text-rendering: auto;
font-family: var(--fa-style-family, "Font Awesome 6 Free");
font-weight: var(--fa-style, 900);
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%) scale(125%);
color: black;
}
.image_editor_color .editor-options-container > *:first-child {
flex: 100%;
}
.image_editor_color .editor-options-container > *:first-child > * {
width: 100%;
}
.image_editor_color .editor-options-container > *:first-child > * > input {
width: 100%;
height: 100%;
opacity: 0;
cursor: pointer;
}
.image_editor_color .editor-options-container > *:first-child > * > span {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
opacity: 0.5;
}
.image_editor_color .editor-options-container > *:first-child > *.active > span {
opacity: 0;
}
.image_editor_tool .editor-options-container {
flex-wrap: wrap;
}
.image_editor_tool .editor-options-container > * {
padding: 2px;
flex: 50%;
}
.editor-controls-center {
/* background: var(--background-color2); */
flex: 1;
display: flex;
justify-content: center;
align-items: center;
}
.editor-controls-center > div {
position: relative;
background: black;
}
.editor-controls-center canvas {
position: absolute;
left: 0;
top: 0;
}
.editor-controls-right {
padding: 32px;
display: flex;
flex-direction: column;
}
.editor-controls-right > div:last-child {
flex: 1;
display: flex;
flex-direction: column;
min-width: 200px;
gap: 5px;
justify-content: end;
}
.image-editor-button {
width: 100%;
height: 32px;
border-radius: 16px;
background: var(--background-color3);
}
.editor-controls-right .image-editor-button {
margin-bottom: 4px;
}
#init_image_button_inpaint .input-toggle {
position: absolute;
left: 16px;
}
#init_image_button_inpaint .input-toggle input:not(:checked) ~ label {
pointer-events: none;
}
.image-editor-popup {
--popup-margin: 16px;
--popup-padding: 24px;
}
.image-editor-popup > div {
margin: var(--popup-margin);
padding: var(--popup-padding);
min-height: calc(100vh - (2 * var(--popup-margin)));
max-width: none;
}
.image-editor-popup h1 {
position: absolute;
top: 32px;
left: 50%;
transform: translateX(-50%);
}
@media screen and (max-width: 700px) {
.image-editor-popup > div {
margin: 0px;
padding: 0px;
}
.image-editor-popup h1 {
position: relative;
transform: none;
left: auto;
}
}
.image-editor-popup > div > div {
min-height: calc(100vh - (2 * var(--popup-margin)) - (2 * var(--popup-padding)));
}
.inpainter .image_editor_color {
display: none;
}
.inpainter .editor-canvas-background {
opacity: 0.75;
}
#init_image_preview_container .button {
display: flex;
padding: 6px;
height: 24px;
box-shadow: 2px 2px 1px 1px #00000088;
}
#init_image_preview_container .button:hover {
background: var(--background-color4)
}
.image-editor-popup .button {
display: flex;
}
.image-editor-popup h4 {
text-align: left;
}

ui/media/css/jquery-confirm.min.css (vendored new file, 9 lines): diff suppressed because one or more lines are too long


@ -44,9 +44,6 @@ code {
margin-top: 5px; margin-top: 5px;
display: block; display: block;
} }
.image_preview_container {
margin-top: 10pt;
}
.image_clear_btn { .image_clear_btn {
position: absolute; position: absolute;
transform: translate(30%, -30%); transform: translate(30%, -30%);
@ -64,6 +61,11 @@ code {
top: 0px; top: 0px;
right: 0px; right: 0px;
} }
.image_clear_btn:active {
position: absolute;
top: 0px;
left: auto;
}
.settings-box ul { .settings-box ul {
font-size: 9pt; font-size: 9pt;
margin-bottom: 5px; margin-bottom: 5px;
@ -137,7 +139,7 @@ code {
padding: 16px; padding: 16px;
display: flex; display: flex;
flex-direction: column; flex-direction: column;
flex: 0 0 370pt; flex: 0 0 380pt;
} }
#editor label { #editor label {
font-weight: normal; font-weight: normal;
@ -189,15 +191,29 @@ code {
background: rgb(132, 8, 0); background: rgb(132, 8, 0);
border: 2px solid rgb(122, 29, 0); border: 2px solid rgb(122, 29, 0);
color: rgb(255, 221, 255); color: rgb(255, 221, 255);
width: 100%;
height: 30pt; height: 30pt;
border-radius: 6px; border-radius: 6px;
display: none; flex-grow: 2;
margin-top: 2pt;
} }
#stopImage:hover { #stopImage:hover {
background: rgb(177, 27, 0); background: rgb(177, 27, 0);
} }
div#render-buttons {
gap: 3px;
margin-top: 4px;
display: none;
}
button#pause {
flex-grow: 1;
background: var(--accent-color);
}
button#resume {
flex-grow: 1;
background: var(--accent-color);
display: none;
}
.flex-container { .flex-container {
display: flex; display: flex;
width: 100%; width: 100%;
@ -210,7 +226,7 @@ code {
} }
.collapsible-content { .collapsible-content {
display: block; display: block;
padding-left: 15px; padding-left: 10px;
} }
.collapsible-content h5 { .collapsible-content h5 {
padding: 5pt 0pt; padding: 5pt 0pt;
@ -263,39 +279,13 @@ img {
} }
.preview-prompt { .preview-prompt {
font-size: 13pt; font-size: 13pt;
margin-bottom: 10pt; display: inline;
} }
#coffeeButton { #coffeeButton {
height: 23px; height: 23px;
transform: translateY(25%); transform: translateY(25%);
} }
#inpaintingEditor {
width: 300pt;
height: 300pt;
margin-top: 5pt;
}
.drawing-board-canvas-wrapper {
background-size: 100% 100%;
}
.drawing-board-controls {
min-width: 273px;
}
.drawing-board-control > button {
background-color: #eee;
border-radius: 3pt;
}
.drawing-board-control-inner {
background-color: #eee;
border-radius: 3pt;
}
#inpaintingEditor canvas {
opacity: 0.6;
}
#enable_mask {
margin-top: 8pt;
}
#top-nav { #top-nav {
position: relative; position: relative;
background: var(--background-color4); background: var(--background-color4);
@ -415,14 +405,34 @@ img {
.imageTaskContainer > div > .collapsible-handle { .imageTaskContainer > div > .collapsible-handle {
display: none; display: none;
} }
.dropTargetBefore::before{
content: "";
border: 1px solid #fff;
margin-bottom: -2px;
display: block;
box-shadow: 0 0 5px #fff;
transform: translate(0px, -14px);
}
.dropTargetAfter::after{
content: "";
border: 1px solid #fff;
margin-bottom: -2px;
display: block;
box-shadow: 0 0 5px #fff;
transform: translate(0px, 14px);
}
.drag-handle {
margin-right: 6px;
cursor: move;
}
.taskStatusLabel { .taskStatusLabel {
float: left;
font-size: 8pt; font-size: 8pt;
background:var(--background-color2); background:var(--background-color2);
border: 1px solid rgb(61, 62, 66); border: 1px solid rgb(61, 62, 66);
padding: 2pt 4pt; padding: 2pt 4pt;
border-radius: 2pt; border-radius: 2pt;
margin-right: 5pt; margin-right: 5pt;
display: inline;
} }
.activeTaskLabel { .activeTaskLabel {
background:rgb(0, 90, 30); background:rgb(0, 90, 30);
@ -472,6 +482,7 @@ img {
font-size: 10pt; font-size: 10pt;
color: #aaa; color: #aaa;
margin-bottom: 5pt; margin-bottom: 5pt;
margin-top: 5pt;
} }
.img-batch { .img-batch {
display: inline; display: inline;
@ -479,8 +490,58 @@ img {
#prompt_from_file { #prompt_from_file {
display: none; display: none;
} }
#init_image_preview_container {
display: flex;
margin-top: 6px;
margin-bottom: 8px;
}
#init_image_preview_container:not(.has-image) #init_image_wrapper,
#init_image_preview_container:not(.has-image) #inpaint_button_container {
display: none;
}
#init_image_buttons {
display: flex;
gap: 8px;
}
#init_image_preview_container.has-image #init_image_buttons {
flex-direction: column;
padding-left: 8px;
}
#init_image_buttons .button {
position: relative;
height: 32px;
width: 150px;
}
#init_image_buttons .button > input {
position: absolute;
left: 0;
top: 0;
right: 0;
bottom: 0;
opacity: 0;
}
#inpaint_button_container {
display: flex;
align-items: center;
gap: 8px;
}
#init_image_wrapper {
grid-row: span 3;
position: relative;
width: fit-content;
max-height: 150px;
}
#init_image_preview { #init_image_preview {
max-width: 150px;
max-height: 150px; max-height: 150px;
height: 100%; height: 100%;
width: 100%; width: 100%;
@ -488,23 +549,18 @@ img {
border-radius: 6px; border-radius: 6px;
transition: all 1s ease-in-out; transition: all 1s ease-in-out;
} }
/*
#init_image_preview:hover { #init_image_preview:hover {
max-width: 500px; max-width: 500px;
max-height: 1000px; max-height: 1000px;
transition: all 1s 0.5s ease-in-out; transition: all 1s 0.5s ease-in-out;
} } */
#init_image_wrapper {
position: relative;
width: fit-content;
}
#init_image_size_box { #init_image_size_box {
position: absolute; position: absolute;
right: 0px; right: 0px;
bottom: 3px; bottom: 0px;
padding: 3px; padding: 3px;
background: black; background: black;
color: white; color: white;
@ -556,6 +612,10 @@ option {
cursor: pointer; cursor: pointer;
} }
input[type="file"] * {
cursor: pointer;
}
input, input,
select, select,
textarea { textarea {
@ -594,12 +654,26 @@ input[type="file"] {
} }
button, button,
input::file-selector-button { input::file-selector-button,
.button {
padding: 2px 4px; padding: 2px 4px;
border-radius: 4px; border-radius: var(--input-border-radius);
background: var(--button-color); background: var(--button-color);
color: var(--button-text-color); color: var(--button-text-color);
border: var(--button-border); border: var(--button-border);
align-items: center;
justify-content: center;
cursor: pointer;
}
.button i {
margin-right: 8px;
}
button:hover,
.button:hover {
transition-duration: 0.1s;
background: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 6%));
} }
input::file-selector-button { input::file-selector-button {
@ -658,11 +732,15 @@ input::file-selector-button {
opacity: 1; opacity: 1;
} }
/* MOBILE SUPPORT */ /* Small screens */
@media screen and (max-width: 700px) { @media screen and (max-width: 1265px) {
#top-nav { #top-nav {
flex-direction: column; flex-direction: column;
} }
}
/* MOBILE SUPPORT */
@media screen and (max-width: 700px) {
body { body {
margin: 0px; margin: 0px;
} }
@ -712,7 +790,7 @@ input::file-selector-button {
padding-right: 0px; padding-right: 0px;
} }
#server-status { #server-status {
display: none; top: 75%;
} }
.popup > div { .popup > div {
padding-left: 5px !important; padding-left: 5px !important;
@ -730,6 +808,15 @@ input::file-selector-button {
} }
} }
@media screen and (max-width: 500px) {
#server-status #server-status-msg {
display: none;
}
#server-status:hover #server-status-msg {
display: inline;
}
}
@media (min-width: 700px) { @media (min-width: 700px) {
/* #editor { /* #editor {
max-width: 480px; max-width: 480px;
@ -750,6 +837,8 @@ input::file-selector-button {
#promptsFromFileBtn { #promptsFromFileBtn {
font-size: 9pt; font-size: 9pt;
display: inline;
background-color: var(--accent-color);
} }
.section-button { .section-button {
@ -839,6 +928,15 @@ input::file-selector-button {
transform: translate(-50%, 100%); transform: translate(-50%, 100%);
} }
.simple-tooltip.top-left {
top: 0px;
left: 0px;
transform: translate(calc(-100% + 15%), calc(-100% + 15%));
}
:hover > .simple-tooltip.top-left {
transform: translate(-80%, -100%);
}
/* PROGRESS BAR */ /* PROGRESS BAR */
.progress-bar { .progress-bar {
background: var(--background-color3); background: var(--background-color3);
@ -847,6 +945,7 @@ input::file-selector-button {
height: 16px; height: 16px;
position: relative; position: relative;
transition: 0.25s 1s border, 0.25s 1s height; transition: 0.25s 1s border, 0.25s 1s height;
clear: both;
} }
.progress-bar > div { .progress-bar > div {
background: var(--accent-color); background: var(--accent-color);
@ -951,8 +1050,8 @@ input::file-selector-button {
display: none; display: none;
} }
#tab-content-wrapper { #tab-content-wrapper > * {
border-top: 8px solid var(--background-color1); padding-top: 8px;
} }
.tab-content-inner { .tab-content-inner {
@ -989,16 +1088,89 @@ i.active {
float: right; float: right;
font-weight: bold; font-weight: bold;
} }
button:hover {
transition-duration: 0.1s;
background: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 6%));
}
button:active { button:active {
transition-duration: 0.1s; transition-duration: 0.1s;
background-color: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 24%)); background-color: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 24%));
position: relative;
top: 1px;
left: 1px;
}
div.task-initimg > img {
margin-right: 6px;
display: block;
}
div.task-fs-initimage {
display: none;
# position: absolute;
}
div.task-initimg:hover div.task-fs-initimage {
display: block;
position: absolute;
z-index: 9999;
box-shadow: 0 0 30px #000;
margin-top:-64px;
}
div.top-right {
position: absolute;
top: 8px;
right: 8px;
} }
button#save-system-settings-btn { button#save-system-settings-btn {
padding: 4pt 8pt; padding: 4pt 8pt;
} }
#ip-info a {
color:var(--text-color)
}
#ip-info div {
line-height: 200%;
}
/* SCROLLBARS */
:root {
--scrollbar-width: 14px;
--scrollbar-radius: 10px;
}
.scrollbar-editor::-webkit-scrollbar {
width: 8px;
}
.scrollbar-editor::-webkit-scrollbar-track {
border-radius: 10px;
}
.scrollbar-editor::-webkit-scrollbar-thumb {
background: --background-color2;
border-radius: 10px;
}
::-webkit-scrollbar {
width: var(--scrollbar-width);
}
::-webkit-scrollbar-track {
box-shadow: inset 0 0 5px var(--input-border-color);
border-radius: var(--input-border-radius);
}
::-webkit-scrollbar-thumb {
background: var(--background-color2);
border-radius: var(--scrollbar-radius);
}
body.pause {
border: solid 12px var(--accent-color);
}
body.wait-pause {
animation: blinker 2s linear infinite;
}
@keyframes blinker {
0% { border: solid 12px var(--accent-color); }
50% { border: solid 12px var(--background-color1); }
100% { border: solid 12px var(--accent-color); }
}


@ -19,7 +19,7 @@
--input-border-color: var(--background-color4); --input-border-color: var(--background-color4);
--button-text-color: var(--input-text-color); --button-text-color: var(--input-text-color);
--button-color: var(--accent-color); --button-color: var(--input-background-color);
--button-border: none; --button-border: none;
/* other */ /* other */
@ -30,6 +30,9 @@
--primary-button-border: none; --primary-button-border: none;
--input-switch-padding: 1px; --input-switch-padding: 1px;
--input-height: 18px; --input-height: 18px;
/* Main theme color, hex color fallback. */
--theme-color-fallback: #673AB6;
} }
.theme-light { .theme-light {
@ -39,11 +42,12 @@
--background-color4: #cccccc; --background-color4: #cccccc;
--text-color: black; --text-color: black;
--button-text-color: white;
--input-text-color: black; --input-text-color: black;
--input-background-color: #f8f9fa; --input-background-color: #f8f9fa;
--input-border-color: grey; --input-border-color: grey;
--theme-color-fallback: #aaaaaa;
} }
.theme-discord { .theme-discord {
@ -58,6 +62,8 @@
--input-border-size: 2px; --input-border-size: 2px;
--input-background-color: #202225; --input-background-color: #202225;
--input-border-color: var(--input-background-color); --input-border-color: var(--input-background-color);
--theme-color-fallback: #202225;
} }
.theme-cool-blue { .theme-cool-blue {
@ -71,8 +77,10 @@
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step)))); --background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--input-background-color: var(--background-color3); --input-background-color: var(--background-color3);
--accent-hue: 212; --accent-hue: 212;
--theme-color-fallback: #0056b8;
} }
@ -87,6 +95,8 @@
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step)))); --background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--input-background-color: var(--background-color3); --input-background-color: var(--background-color3);
--theme-color-fallback: #5300b8;
} }
.theme-super-dark { .theme-super-dark {
@ -101,6 +111,8 @@
--input-background-color: var(--background-color3); --input-background-color: var(--background-color3);
--input-border-size: 0px; --input-border-size: 0px;
--theme-color-fallback: #000000;
} }
.theme-wild { .theme-wild {
@ -117,10 +129,11 @@
--input-border-size: 1px; --input-border-size: 1px;
--input-background-color: hsl(222, var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step)))); --input-background-color: hsl(222, var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--input-text-color: red; --input-text-color: #FF0000;
--input-border-color: green; --input-border-color: #005E05;
} }
.theme-gnomie { .theme-gnomie {
--background-color1: #242424; --background-color1: #242424;
--background-color2: #353535; --background-color2: #353535;
@ -136,11 +149,12 @@
--input-background-color: #2a2a2a; --input-background-color: #2a2a2a;
--input-border-size: 0px; --input-border-size: 0px;
--input-border-color: var(--input-background-color); --input-border-color: var(--input-background-color);
--theme-color-fallback: #2168bf;
} }
.theme-gnomie .panel-box { .theme-gnomie .panel-box {
border: none; border: none;
box-shadow: 0px 1px 2px rgba(0, 0, 0, 0.25); box-shadow: 0px 1px 2px rgba(0, 0, 0, 0.25);
border-radius: 10px; border-radius: 10px;
} }

Binary file not shown (after: 11 KiB)

Binary file not shown (after: 12 KiB)

Binary file not shown (after: 10 KiB)


@ -14,13 +14,16 @@ const SETTINGS_IDS_LIST = [
"num_outputs_parallel", "num_outputs_parallel",
"stable_diffusion_model", "stable_diffusion_model",
"vae_model", "vae_model",
"sampler", "hypernetwork_model",
"sampler_name",
"width", "width",
"height", "height",
"num_inference_steps", "num_inference_steps",
"guidance_scale", "guidance_scale",
"prompt_strength", "prompt_strength",
"hypernetwork_strength",
"output_format", "output_format",
"output_quality",
"negative_prompt", "negative_prompt",
"stream_image_progress", "stream_image_progress",
"use_face_correction", "use_face_correction",
@ -33,9 +36,11 @@ const SETTINGS_IDS_LIST = [
"save_to_disk", "save_to_disk",
"diskPath", "diskPath",
"sound_toggle", "sound_toggle",
"turbo", "vram_usage_level",
"use_full_precision", "confirm_dangerous_actions",
"auto_save_settings" "metadata_output_format",
"auto_save_settings",
"apply_color_correction"
] ]
const IGNORE_BY_DEFAULT = [ const IGNORE_BY_DEFAULT = [
@ -55,6 +60,9 @@ async function initSettings() {
if (!element) { if (!element) {
console.error(`Missing settings element ${id}`) console.error(`Missing settings element ${id}`)
} }
if (id in SETTINGS) { // don't create it again
return
}
SETTINGS[id] = { SETTINGS[id] = {
key: id, key: id,
element: element, element: element,
@ -124,7 +132,7 @@ function loadSettings() {
var saved_settings_text = localStorage.getItem(SETTINGS_KEY) var saved_settings_text = localStorage.getItem(SETTINGS_KEY)
if (saved_settings_text) { if (saved_settings_text) {
var saved_settings = JSON.parse(saved_settings_text) var saved_settings = JSON.parse(saved_settings_text)
if (saved_settings.find(s => s.key == "auto_save_settings").value == false) { if (saved_settings.find(s => s.key == "auto_save_settings")?.value == false) {
setSetting("auto_save_settings", false) setSetting("auto_save_settings", false)
return return
} }
@ -270,7 +278,6 @@ function tryLoadOldSettings() {
"soundEnabled": "sound_toggle", "soundEnabled": "sound_toggle",
"saveToDisk": "save_to_disk", "saveToDisk": "save_to_disk",
"useCPU": "use_cpu", "useCPU": "use_cpu",
"useFullPrecision": "use_full_precision",
"useTurboMode": "turbo", "useTurboMode": "turbo",
"diskPath": "diskPath", "diskPath": "diskPath",
"useFaceCorrection": "use_face_correction", "useFaceCorrection": "use_face_correction",


@ -25,6 +25,7 @@ function parseBoolean(stringValue) {
case "no": case "no":
case "off": case "off":
case "0": case "0":
case "none":
case null: case null:
case undefined: case undefined:
return false; return false;
@ -51,6 +52,13 @@ const TASK_MAPPING = {
readUI: () => negativePromptField.value, readUI: () => negativePromptField.value,
parse: (val) => val parse: (val) => val
}, },
active_tags: { name: "Image Modifiers",
setUI: (active_tags) => {
refreshModifiersState(active_tags)
},
readUI: () => activeTags.map(x => x.name),
parse: (val) => val
},
width: { name: 'Width', width: { name: 'Width',
setUI: (width) => { setUI: (width) => {
const oldVal = widthField.value const oldVal = widthField.value
@ -78,13 +86,14 @@ const TASK_MAPPING = {
if (!seed) { if (!seed) {
randomSeedField.checked = true randomSeedField.checked = true
seedField.disabled = true seedField.disabled = true
seedField.value = 0
return return
} }
randomSeedField.checked = false randomSeedField.checked = false
seedField.disabled = false seedField.disabled = false
seedField.value = seed seedField.value = seed
}, },
readUI: () => (randomSeedField.checked ? Math.floor(Math.random() * 10000000) : parseInt(seedField.value)), readUI: () => parseInt(seedField.value), // just return the value the user is seeing in the UI
parse: (val) => parseInt(val) parse: (val) => parseInt(val)
}, },
num_inference_steps: { name: 'Steps', num_inference_steps: { name: 'Steps',
@ -120,10 +129,12 @@ const TASK_MAPPING = {
}, },
mask: { name: 'Mask', mask: { name: 'Mask',
setUI: (mask) => { setUI: (mask) => {
inpaintingEditor.setImg(mask) setTimeout(() => { // add a delay to insure this happens AFTER the main image loads (which reloads the inpainter)
imageInpainter.setImg(mask)
}, 250)
maskSetting.checked = Boolean(mask) maskSetting.checked = Boolean(mask)
}, },
readUI: () => (maskSetting.checked ? inpaintingEditor.getImg() : undefined), readUI: () => (maskSetting.checked ? imageInpainter.getImg() : undefined),
parse: (val) => val parse: (val) => val
}, },
@ -150,9 +161,9 @@ const TASK_MAPPING = {
readUI: () => (useUpscalingField.checked ? upscaleModelField.value : undefined), readUI: () => (useUpscalingField.checked ? upscaleModelField.value : undefined),
parse: (val) => val parse: (val) => val
}, },
sampler: { name: 'Sampler', sampler_name: { name: 'Sampler',
setUI: (sampler) => { setUI: (sampler_name) => {
samplerField.value = sampler samplerField.value = sampler_name
}, },
readUI: () => samplerField.value, readUI: () => samplerField.value,
parse: (val) => val parse: (val) => val
@ -161,7 +172,7 @@ const TASK_MAPPING = {
setUI: (use_stable_diffusion_model) => { setUI: (use_stable_diffusion_model) => {
const oldVal = stableDiffusionModelField.value const oldVal = stableDiffusionModelField.value
use_stable_diffusion_model = getModelPath(use_stable_diffusion_model, ['.ckpt']) use_stable_diffusion_model = getModelPath(use_stable_diffusion_model, ['.ckpt', '.safetensors'])
stableDiffusionModelField.value = use_stable_diffusion_model stableDiffusionModelField.value = use_stable_diffusion_model
if (!stableDiffusionModelField.value) { if (!stableDiffusionModelField.value) {
@ -174,6 +185,7 @@ const TASK_MAPPING = {
use_vae_model: { name: 'VAE model', use_vae_model: { name: 'VAE model',
setUI: (use_vae_model) => { setUI: (use_vae_model) => {
const oldVal = vaeModelField.value const oldVal = vaeModelField.value
use_vae_model = (use_vae_model === undefined || use_vae_model === null || use_vae_model === 'None' ? '' : use_vae_model)
if (use_vae_model !== '') { if (use_vae_model !== '') {
use_vae_model = getModelPath(use_vae_model, ['.vae.pt', '.ckpt']) use_vae_model = getModelPath(use_vae_model, ['.vae.pt', '.ckpt'])
@ -184,10 +196,33 @@ const TASK_MAPPING = {
readUI: () => vaeModelField.value, readUI: () => vaeModelField.value,
parse: (val) => val parse: (val) => val
}, },
use_hypernetwork_model: { name: 'Hypernetwork model',
setUI: (use_hypernetwork_model) => {
const oldVal = hypernetworkModelField.value
use_hypernetwork_model = (use_hypernetwork_model === undefined || use_hypernetwork_model === null || use_hypernetwork_model === 'None' ? '' : use_hypernetwork_model)
numOutputsParallel: { name: 'Parallel Images', if (use_hypernetwork_model !== '') {
setUI: (numOutputsParallel) => { use_hypernetwork_model = getModelPath(use_hypernetwork_model, ['.pt'])
numOutputsParallelField.value = numOutputsParallel use_hypernetwork_model = use_hypernetwork_model !== '' ? use_hypernetwork_model : oldVal
}
hypernetworkModelField.value = use_hypernetwork_model
hypernetworkModelField.dispatchEvent(new Event('change'))
},
readUI: () => hypernetworkModelField.value,
parse: (val) => val
},
hypernetwork_strength: { name: 'Hypernetwork Strength',
setUI: (hypernetwork_strength) => {
hypernetworkStrengthField.value = hypernetwork_strength
updateHypernetworkStrengthSlider()
},
readUI: () => parseFloat(hypernetworkStrengthField.value),
parse: (val) => parseFloat(val)
},
num_outputs: { name: 'Parallel Images',
setUI: (num_outputs) => {
numOutputsParallelField.value = num_outputs
}, },
readUI: () => parseInt(numOutputsParallelField.value), readUI: () => parseInt(numOutputsParallelField.value),
parse: (val) => val parse: (val) => val
@ -207,13 +242,6 @@ const TASK_MAPPING = {
readUI: () => turboField.checked, readUI: () => turboField.checked,
parse: (val) => Boolean(val) parse: (val) => Boolean(val)
}, },
use_full_precision: { name: 'Use Full Precision',
setUI: (use_full_precision) => {
useFullPrecisionField.checked = use_full_precision
},
readUI: () => useFullPrecisionField.checked,
parse: (val) => Boolean(val)
},
stream_image_progress: { name: 'Stream Image Progress', stream_image_progress: { name: 'Stream Image Progress',
setUI: (stream_image_progress) => { setUI: (stream_image_progress) => {
@ -267,11 +295,6 @@ function restoreTaskToUI(task, fieldsToSkip) {
// restore the original tag // restore the original tag
promptField.value = task.reqBody.original_prompt || task.reqBody.prompt promptField.value = task.reqBody.original_prompt || task.reqBody.prompt
// Restore modifiers
if (task.reqBody.active_tags) {
refreshModifiersState(task.reqBody.active_tags)
}
// properly reset checkboxes // properly reset checkboxes
if (!('use_face_correction' in task.reqBody)) { if (!('use_face_correction' in task.reqBody)) {
useFaceCorrectionField.checked = false useFaceCorrectionField.checked = false
@ -287,18 +310,11 @@ function restoreTaskToUI(task, fieldsToSkip) {
// Show the source picture if present // Show the source picture if present
initImagePreview.src = (task.reqBody.init_image == undefined ? '' : task.reqBody.init_image) initImagePreview.src = (task.reqBody.init_image == undefined ? '' : task.reqBody.init_image)
if (IMAGE_REGEX.test(initImagePreview.src)) { if (IMAGE_REGEX.test(initImagePreview.src)) {
Boolean(task.reqBody.mask) ? inpaintingEditor.setImg(task.reqBody.mask) : inpaintingEditor.resetBackground() if (Boolean(task.reqBody.mask)) {
initImagePreviewContainer.style.display = 'block' setTimeout(() => { // add a delay to insure this happens AFTER the main image loads (which reloads the inpainter)
inpaintingEditorContainer.style.display = 'none' imageInpainter.setImg(task.reqBody.mask)
promptStrengthContainer.style.display = 'table-row' }, 250)
//samplerSelectionContainer.style.display = 'none' }
// maskSetting.checked = false
inpaintingEditorContainer.style.display = maskSetting.checked ? 'block' : 'none'
} else {
initImagePreviewContainer.style.display = 'none'
// inpaintingEditorContainer.style.display = 'none'
promptStrengthContainer.style.display = 'none'
// maskSetting.style.display = 'none'
} }
} }
function readUI() { function readUI() {
@ -326,9 +342,11 @@ function getModelPath(filename, extensions)
filename = filename.slice(0, filename.length - ext.length) filename = filename.slice(0, filename.length - ext.length)
} }
}) })
return filename
} }
const TASK_TEXT_MAPPING = { const TASK_TEXT_MAPPING = {
prompt: 'Prompt',
width: 'Width', width: 'Width',
height: 'Height', height: 'Height',
seed: 'Seed', seed: 'Seed',
@ -337,9 +355,11 @@ const TASK_TEXT_MAPPING = {
prompt_strength: 'Prompt Strength', prompt_strength: 'Prompt Strength',
use_face_correction: 'Use Face Correction', use_face_correction: 'Use Face Correction',
use_upscale: 'Use Upscaling', use_upscale: 'Use Upscaling',
sampler: 'Sampler', sampler_name: 'Sampler',
negative_prompt: 'Negative Prompt', negative_prompt: 'Negative Prompt',
use_stable_diffusion_model: 'Stable Diffusion model' use_stable_diffusion_model: 'Stable Diffusion model',
use_hypernetwork_model: 'Hypernetwork model',
hypernetwork_strength: 'Hypernetwork Strength'
} }
const afterPromptRe = /^\s*Width\s*:\s*\d+\s*(?:\r\n|\r|\n)+\s*Height\s*:\s*\d+\s*(\r\n|\r|\n)+Seed\s*:\s*\d+\s*$/igm const afterPromptRe = /^\s*Width\s*:\s*\d+\s*(?:\r\n|\r|\n)+\s*Height\s*:\s*\d+\s*(\r\n|\r|\n)+Seed\s*:\s*\d+\s*$/igm
function parseTaskFromText(str) { function parseTaskFromText(str) {
@ -387,6 +407,9 @@ async function parseContent(text) {
if (text.startsWith('{') && text.endsWith('}')) { if (text.startsWith('{') && text.endsWith('}')) {
try { try {
const task = JSON.parse(text) const task = JSON.parse(text)
if (!('reqBody' in task)) { // support the format saved to the disk, by the UI
task.reqBody = Object.assign({}, task)
}
restoreTaskToUI(task) restoreTaskToUI(task)
return true return true
} catch (e) { } catch (e) {
@ -406,7 +429,7 @@ async function parseContent(text) {
} }
async function readFile(file, i) { async function readFile(file, i) {
console.log(`Event %o reading file[${i}]:${file.name}...`, e) console.log(`Event %o reading file[${i}]:${file.name}...`)
const fileContent = (await file.text()).trim() const fileContent = (await file.text()).trim()
return await parseContent(fileContent) return await parseContent(fileContent)
} }
@ -454,7 +477,6 @@ document.addEventListener("dragover", dragOverHandler)
const TASK_REQ_NO_EXPORT = [ const TASK_REQ_NO_EXPORT = [
"use_cpu", "use_cpu",
"turbo", "turbo",
"use_full_precision",
"save_to_disk_path" "save_to_disk_path"
] ]
const resetSettings = document.getElementById('reset-image-settings') const resetSettings = document.getElementById('reset-image-settings')
@ -466,7 +488,7 @@ function checkReadTextClipboardPermission (result) {
// PASTE ICON // PASTE ICON
const pasteIcon = document.createElement('i') const pasteIcon = document.createElement('i')
pasteIcon.className = 'fa-solid fa-paste section-button' pasteIcon.className = 'fa-solid fa-paste section-button'
pasteIcon.innerHTML = `<span class="simple-tooltip right">Paste Image Settings</span>` pasteIcon.innerHTML = `<span class="simple-tooltip top-left">Paste Image Settings</span>`
pasteIcon.addEventListener('click', async (event) => { pasteIcon.addEventListener('click', async (event) => {
event.stopPropagation() event.stopPropagation()
// Add css class 'active' // Add css class 'active'
@ -506,7 +528,7 @@ function checkWriteToClipboardPermission (result) {
// COPY ICON // COPY ICON
const copyIcon = document.createElement('i') const copyIcon = document.createElement('i')
copyIcon.className = 'fa-solid fa-clipboard section-button' copyIcon.className = 'fa-solid fa-clipboard section-button'
copyIcon.innerHTML = `<span class="simple-tooltip right">Copy Image Settings</span>` copyIcon.innerHTML = `<span class="simple-tooltip top-left">Copy Image Settings</span>`
copyIcon.addEventListener('click', (event) => { copyIcon.addEventListener('click', (event) => {
event.stopPropagation() event.stopPropagation()
// Add css class 'active' // Add css class 'active'

File diff suppressed because one or more lines are too long

ui/media/js/engine.js (new file, 1310 lines): diff suppressed because it is too large

ui/media/js/image-editor.js (new file, 706 lines)

@ -0,0 +1,706 @@
var editorControlsLeft = document.getElementById("image-editor-controls-left")
const IMAGE_EDITOR_MAX_SIZE = 800
const IMAGE_EDITOR_BUTTONS = [
{
name: "Cancel",
icon: "fa-regular fa-circle-xmark",
handler: editor => {
editor.hide()
}
},
{
name: "Save",
icon: "fa-solid fa-floppy-disk",
handler: editor => {
editor.saveImage()
}
}
]
const defaultToolBegin = (editor, ctx, x, y, is_overlay = false) => {
ctx.beginPath()
ctx.moveTo(x, y)
}
const defaultToolMove = (editor, ctx, x, y, is_overlay = false) => {
ctx.lineTo(x, y)
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
ctx.stroke()
}
}
const defaultToolEnd = (editor, ctx, x, y, is_overlay = false) => {
ctx.stroke()
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
}
}
const IMAGE_EDITOR_TOOLS = [
{
id: "draw",
name: "Draw",
icon: "fa-solid fa-pencil",
cursor: "url(/media/images/fa-pencil.png) 0 24, pointer",
begin: defaultToolBegin,
move: defaultToolMove,
end: defaultToolEnd
},
{
id: "erase",
name: "Erase",
icon: "fa-solid fa-eraser",
cursor: "url(/media/images/fa-eraser.png) 0 18, pointer",
begin: defaultToolBegin,
move: (editor, ctx, x, y, is_overlay = false) => {
ctx.lineTo(x, y)
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
ctx.globalCompositeOperation = "source-over"
ctx.globalAlpha = 1
ctx.filter = "none"
ctx.drawImage(editor.canvas_current, 0, 0)
editor.setBrush(editor.layers.overlay)
ctx.stroke()
editor.canvas_current.style.opacity = 0
}
},
end: (editor, ctx, x, y, is_overlay = false) => {
ctx.stroke()
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
editor.canvas_current.style.opacity = ""
}
},
setBrush: (editor, layer) => {
layer.ctx.globalCompositeOperation = "destination-out"
}
},
{
id: "colorpicker",
name: "Color Picker",
icon: "fa-solid fa-eye-dropper",
cursor: "url(/media/images/fa-eye-dropper.png) 0 24, pointer",
begin: (editor, ctx, x, y, is_overlay = false) => {
var img_rgb = editor.layers.background.ctx.getImageData(x, y, 1, 1).data
var drawn_rgb = editor.ctx_current.getImageData(x, y, 1, 1).data
var drawn_opacity = drawn_rgb[3] / 255
editor.custom_color_input.value = rgbToHex({
r: (drawn_rgb[0] * drawn_opacity) + (img_rgb[0] * (1 - drawn_opacity)),
g: (drawn_rgb[1] * drawn_opacity) + (img_rgb[1] * (1 - drawn_opacity)),
b: (drawn_rgb[2] * drawn_opacity) + (img_rgb[2] * (1 - drawn_opacity)),
})
editor.custom_color_input.dispatchEvent(new Event("change"))
},
move: (editor, ctx, x, y, is_overlay = false) => {},
end: (editor, ctx, x, y, is_overlay = false) => {}
}
]
const IMAGE_EDITOR_ACTIONS = [
{
id: "clear",
name: "Clear",
icon: "fa-solid fa-xmark",
handler: (editor) => {
editor.ctx_current.clearRect(0, 0, editor.width, editor.height)
},
trackHistory: true
},
{
id: "undo",
name: "Undo",
icon: "fa-solid fa-rotate-left",
handler: (editor) => {
editor.history.undo()
},
trackHistory: false
},
{
id: "redo",
name: "Redo",
icon: "fa-solid fa-rotate-right",
handler: (editor) => {
editor.history.redo()
},
trackHistory: false
}
]
var IMAGE_EDITOR_SECTIONS = [
{
name: "tool",
title: "Tool",
default: "draw",
options: Array.from(IMAGE_EDITOR_TOOLS.map(t => t.id)),
initElement: (element, option) => {
var tool_info = IMAGE_EDITOR_TOOLS.find(t => t.id == option)
element.className = "image-editor-button button"
var sub_element = document.createElement("div")
var icon = document.createElement("i")
tool_info.icon.split(" ").forEach(c => icon.classList.add(c))
sub_element.appendChild(icon)
sub_element.append(tool_info.name)
element.appendChild(sub_element)
}
},
{
name: "color",
title: "Color",
default: "#f1c232",
options: [
"custom",
"#ea9999", "#e06666", "#cc0000", "#990000", "#660000",
"#f9cb9c", "#f6b26b", "#e69138", "#b45f06", "#783f04",
"#ffe599", "#ffd966", "#f1c232", "#bf9000", "#7f6000",
"#b6d7a8", "#93c47d", "#6aa84f", "#38761d", "#274e13",
"#a4c2f4", "#6d9eeb", "#3c78d8", "#1155cc", "#1c4587",
"#b4a7d6", "#8e7cc3", "#674ea7", "#351c75", "#20124d",
"#d5a6bd", "#c27ba0", "#a64d79", "#741b47", "#4c1130",
"#ffffff", "#c0c0c0", "#838383", "#525252", "#000000",
],
initElement: (element, option) => {
if (option == "custom") {
var input = document.createElement("input")
input.type = "color"
element.appendChild(input)
var span = document.createElement("span")
span.textContent = "Custom"
span.onclick = function(e) {
input.click()
}
element.appendChild(span)
}
else {
element.style.background = option
}
},
getCustom: editor => {
var input = editor.popup.querySelector(".image_editor_color input")
return input.value
}
},
{
name: "brush_size",
title: "Brush Size",
default: 48,
options: [ 6, 12, 16, 24, 30, 40, 48, 64 ],
initElement: (element, option) => {
element.parentElement.style.flex = option
element.style.width = option + "px"
element.style.height = option + "px"
element.style['margin-right'] = '2px'
element.style["border-radius"] = (option / 2).toFixed() + "px"
}
},
{
name: "opacity",
title: "Opacity",
default: 0,
options: [ 0, 0.2, 0.4, 0.6, 0.8 ],
initElement: (element, option) => {
element.style.background = `repeating-conic-gradient(rgba(0, 0, 0, ${option}) 0% 25%, rgba(255, 255, 255, ${option}) 0% 50%) 50% / 10px 10px`
}
},
{
name: "sharpness",
title: "Sharpness",
default: 0,
options: [ 0, 0.05, 0.1, 0.2, 0.3 ],
initElement: (element, option) => {
var size = 32
var blur_amount = parseInt(option * size)
var sub_element = document.createElement("div")
sub_element.style.background = `var(--background-color3)`
sub_element.style.filter = `blur(${blur_amount}px)`
sub_element.style.width = `${size - 4}px`
sub_element.style.height = `${size - 4}px`
sub_element.style['border-radius'] = `${size}px`
element.style.background = "none"
element.appendChild(sub_element)
}
}
]
class EditorHistory {
constructor(editor) {
this.editor = editor
this.events = [] // stack of all events (actions/edits)
this.current_edit = null
this.rewind_index = 0 // how many events back into the history we've rewound to. (current state is just after event at index 'length - this.rewind_index - 1')
}
push(event) {
// probably add something here eventually to save state every x events
if (this.rewind_index != 0) {
this.events = this.events.slice(0, 0 - this.rewind_index)
this.rewind_index = 0
}
var snapshot_frequency = 20 // (every x edits, take a snapshot of the current drawing state, for faster rewinding)
if (this.events.length > 0 && this.events.length % snapshot_frequency == 0) {
event.snapshot = this.editor.layers.drawing.ctx.getImageData(0, 0, this.editor.width, this.editor.height)
}
this.events.push(event)
}
pushAction(action) {
this.push({
type: "action",
id: action
});
}
editBegin(x, y) {
this.current_edit = {
type: "edit",
id: this.editor.getOptionValue("tool"),
options: Object.assign({}, this.editor.options),
points: [ { x: x, y: y } ]
}
}
editMove(x, y) {
if (this.current_edit) {
this.current_edit.points.push({ x: x, y: y })
}
}
editEnd(x, y) {
if (this.current_edit) {
this.push(this.current_edit)
this.current_edit = null
}
}
clear() {
this.events = []
}
undo() {
this.rewindTo(this.rewind_index + 1)
}
redo() {
this.rewindTo(this.rewind_index - 1)
}
rewindTo(new_rewind_index) {
if (new_rewind_index < 0 || new_rewind_index > this.events.length) {
return; // do nothing if target index is out of bounds
}
var ctx = this.editor.layers.drawing.ctx
ctx.clearRect(0, 0, this.editor.width, this.editor.height)
var target_index = this.events.length - 1 - new_rewind_index
var snapshot_index = target_index
while (snapshot_index > -1) {
if (this.events[snapshot_index].snapshot) {
break
}
snapshot_index--
}
if (snapshot_index != -1) {
ctx.putImageData(this.events[snapshot_index].snapshot, 0, 0);
}
for (var i = (snapshot_index + 1); i <= target_index; i++) {
var event = this.events[i]
if (event.type == "action") {
var action = IMAGE_EDITOR_ACTIONS.find(a => a.id == event.id)
action.handler(this.editor)
}
else if (event.type == "edit") {
var tool = IMAGE_EDITOR_TOOLS.find(t => t.id == event.id)
this.editor.setBrush(this.editor.layers.drawing, event.options)
var first_point = event.points[0]
tool.begin(this.editor, ctx, first_point.x, first_point.y)
for (var point_i = 1; point_i < event.points.length; point_i++) {
tool.move(this.editor, ctx, event.points[point_i].x, event.points[point_i].y)
}
var last_point = event.points[event.points.length - 1]
tool.end(this.editor, ctx, last_point.x, last_point.y)
}
}
// re-set brush to current settings
this.editor.setBrush(this.editor.layers.drawing)
this.rewind_index = new_rewind_index
}
}
class ImageEditor {
constructor(popup, inpainter = false) {
this.inpainter = inpainter
this.popup = popup
this.history = new EditorHistory(this)
if (inpainter) {
this.popup.classList.add("inpainter")
}
this.drawing = false
this.temp_previous_tool = null // used for the ctrl-colorpicker functionality
this.container = popup.querySelector(".editor-controls-center > div")
this.layers = {}
var layer_names = [
"background",
"drawing",
"overlay"
]
layer_names.forEach(name => {
let canvas = document.createElement("canvas")
canvas.className = `editor-canvas-${name}`
this.container.appendChild(canvas)
this.layers[name] = {
name: name,
canvas: canvas,
ctx: canvas.getContext("2d")
}
})
// add mouse handlers
this.container.addEventListener("mousedown", this.mouseHandler.bind(this))
this.container.addEventListener("mouseup", this.mouseHandler.bind(this))
this.container.addEventListener("mousemove", this.mouseHandler.bind(this))
this.container.addEventListener("mouseout", this.mouseHandler.bind(this))
this.container.addEventListener("mouseenter", this.mouseHandler.bind(this))
this.container.addEventListener("touchstart", this.mouseHandler.bind(this))
this.container.addEventListener("touchmove", this.mouseHandler.bind(this))
this.container.addEventListener("touchcancel", this.mouseHandler.bind(this))
this.container.addEventListener("touchend", this.mouseHandler.bind(this))
// initialize editor controls
this.options = {}
this.optionElements = {}
IMAGE_EDITOR_SECTIONS.forEach(section => {
section.id = `image_editor_${section.name}`
var sectionElement = document.createElement("div")
sectionElement.className = section.id
var title = document.createElement("h4")
title.innerText = section.title
sectionElement.appendChild(title)
var optionsContainer = document.createElement("div")
optionsContainer.classList.add("editor-options-container")
this.optionElements[section.name] = []
section.options.forEach((option, index) => {
var optionHolder = document.createElement("div")
var optionElement = document.createElement("div")
optionHolder.appendChild(optionElement)
section.initElement(optionElement, option)
optionElement.addEventListener("click", target => this.selectOption(section.name, index))
optionsContainer.appendChild(optionHolder)
this.optionElements[section.name].push(optionElement)
})
this.selectOption(section.name, section.options.indexOf(section.default))
sectionElement.appendChild(optionsContainer)
this.popup.querySelector(".editor-controls-left").appendChild(sectionElement)
})
this.custom_color_input = this.popup.querySelector(`input[type="color"]`)
this.custom_color_input.addEventListener("change", () => {
this.custom_color_input.parentElement.style.background = this.custom_color_input.value
this.selectOption("color", 0)
})
if (this.inpainter) {
this.selectOption("color", IMAGE_EDITOR_SECTIONS.find(s => s.name == "color").options.indexOf("#ffffff"))
this.selectOption("opacity", IMAGE_EDITOR_SECTIONS.find(s => s.name == "opacity").options.indexOf(0.4))
}
// initialize the right-side controls
var buttonContainer = document.createElement("div")
IMAGE_EDITOR_BUTTONS.forEach(button => {
var element = document.createElement("div")
var icon = document.createElement("i")
element.className = "image-editor-button button"
icon.className = button.icon
element.appendChild(icon)
element.append(button.name)
buttonContainer.appendChild(element)
element.addEventListener("click", event => button.handler(this))
})
var actionsContainer = document.createElement("div")
var actionsTitle = document.createElement("h4")
actionsTitle.textContent = "Actions"
actionsContainer.appendChild(actionsTitle);
IMAGE_EDITOR_ACTIONS.forEach(action => {
var element = document.createElement("div")
var icon = document.createElement("i")
element.className = "image-editor-button button"
icon.className = action.icon
element.appendChild(icon)
element.append(action.name)
actionsContainer.appendChild(element)
element.addEventListener("click", event => this.runAction(action.id))
})
this.popup.querySelector(".editor-controls-right").appendChild(actionsContainer)
this.popup.querySelector(".editor-controls-right").appendChild(buttonContainer)
this.keyHandlerBound = this.keyHandler.bind(this)
this.setSize(512, 512)
}
show() {
this.popup.classList.add("active")
document.addEventListener("keydown", this.keyHandlerBound)
document.addEventListener("keyup", this.keyHandlerBound)
}
hide() {
this.popup.classList.remove("active")
document.removeEventListener("keydown", this.keyHandlerBound)
document.removeEventListener("keyup", this.keyHandlerBound)
}
setSize(width, height) {
if (width == this.width && height == this.height) {
return
}
if (width > height) {
var max_size = Math.min(parseInt(window.innerWidth * 0.9), width, 768)
var multiplier = max_size / width
width = (multiplier * width).toFixed()
height = (multiplier * height).toFixed()
}
else {
var max_size = Math.min(parseInt(window.innerHeight * 0.9), height, 768)
var multiplier = max_size / height
width = (multiplier * width).toFixed()
height = (multiplier * height).toFixed()
}
this.width = width
this.height = height
this.container.style.width = width + "px"
this.container.style.height = height + "px"
Object.values(this.layers).forEach(layer => {
layer.canvas.width = width
layer.canvas.height = height
})
if (this.inpainter) {
this.saveImage() // We've reset the size of the image so inpainting is different
}
this.setBrush()
this.history.clear()
}
get tool() {
var tool_id = this.getOptionValue("tool")
return IMAGE_EDITOR_TOOLS.find(t => t.id == tool_id);
}
loadTool() {
this.drawing = false
this.container.style.cursor = this.tool.cursor;
}
setImage(url, width, height) {
this.setSize(width, height)
this.layers.drawing.ctx.clearRect(0, 0, this.width, this.height)
this.layers.background.ctx.clearRect(0, 0, this.width, this.height)
if (url) {
var image = new Image()
image.onload = () => {
this.layers.background.ctx.drawImage(image, 0, 0, this.width, this.height)
}
image.src = url
}
else {
this.layers.background.ctx.fillStyle = "#ffffff"
this.layers.background.ctx.beginPath()
this.layers.background.ctx.rect(0, 0, this.width, this.height)
this.layers.background.ctx.fill()
}
this.history.clear()
}
saveImage() {
if (!this.inpainter) {
// This is not an inpainter, so save the image as the new img2img input
this.layers.background.ctx.drawImage(this.layers.drawing.canvas, 0, 0, this.width, this.height)
var base64 = this.layers.background.canvas.toDataURL()
initImagePreview.src = base64 // this will trigger the rest of the app to use it
}
else {
// This is an inpainter, so make sure the toggle is set accordingly
var is_blank = !this.layers.drawing.ctx
.getImageData(0, 0, this.width, this.height).data
.some(channel => channel !== 0)
maskSetting.checked = !is_blank
}
this.hide()
}
getImg() { // a drop-in replacement of the drawingboard version
return this.layers.drawing.canvas.toDataURL()
}
setImg(dataUrl) { // a drop-in replacement of the drawingboard version
var image = new Image()
image.onload = () => {
var ctx = this.layers.drawing.ctx;
ctx.clearRect(0, 0, this.width, this.height)
ctx.globalCompositeOperation = "source-over"
ctx.globalAlpha = 1
ctx.filter = "none"
ctx.drawImage(image, 0, 0, this.width, this.height)
this.setBrush(this.layers.drawing)
}
image.src = dataUrl
}
runAction(action_id) {
var action = IMAGE_EDITOR_ACTIONS.find(a => a.id == action_id)
if (action.trackHistory) {
this.history.pushAction(action_id)
}
action.handler(this)
}
setBrush(layer = null, options = null) {
if (options == null) {
options = this.options
}
if (layer) {
layer.ctx.lineCap = "round"
layer.ctx.lineJoin = "round"
layer.ctx.lineWidth = options.brush_size
layer.ctx.fillStyle = options.color
layer.ctx.strokeStyle = options.color
var sharpness = parseInt(options.sharpness * options.brush_size)
layer.ctx.filter = sharpness == 0 ? `none` : `blur(${sharpness}px)`
layer.ctx.globalAlpha = (1 - options.opacity)
layer.ctx.globalCompositeOperation = "source-over"
var tool = IMAGE_EDITOR_TOOLS.find(t => t.id == options.tool)
if (tool && tool.setBrush) {
tool.setBrush(this, layer) // e.g. the erase tool switches the layer to destination-out
}
}
else {
Object.values([ "drawing", "overlay" ]).map(name => this.layers[name]).forEach(l => {
this.setBrush(l)
})
}
}
get ctx_overlay() {
return this.layers.overlay.ctx
}
get ctx_current() { // the idea is this will help support having custom layers and editing each one
return this.layers.drawing.ctx
}
get canvas_current() {
return this.layers.drawing.canvas
}
keyHandler(event) { // handles keybinds like ctrl+z, ctrl+y
if (!this.popup.classList.contains("active")) {
document.removeEventListener("keydown", this.keyHandlerBound)
document.removeEventListener("keyup", this.keyHandlerBound)
return // this catches if something else closes the window but doesn't properly unbind the key handler
}
// keybindings
if (event.type == "keydown") {
if ((event.key == "z" || event.key == "Z") && event.ctrlKey) {
if (!event.shiftKey) {
this.history.undo()
}
else {
this.history.redo()
}
}
if (event.key == "y" && event.ctrlKey) {
this.history.redo()
}
}
// dropper ctrl holding handler stuff
var dropper_active = this.temp_previous_tool != null;
if (dropper_active && !event.ctrlKey) {
this.selectOption("tool", IMAGE_EDITOR_TOOLS.findIndex(t => t.id == this.temp_previous_tool))
this.temp_previous_tool = null
}
else if (!dropper_active && event.ctrlKey) {
this.temp_previous_tool = this.getOptionValue("tool")
this.selectOption("tool", IMAGE_EDITOR_TOOLS.findIndex(t => t.id == "colorpicker"))
}
}
mouseHandler(event) {
var bbox = this.layers.overlay.canvas.getBoundingClientRect()
var x = (event.clientX || 0) - bbox.left
var y = (event.clientY || 0) - bbox.top
var type = event.type;
var touchmap = {
touchstart: "mousedown",
touchmove: "mousemove",
touchend: "mouseup",
touchcancel: "mouseup"
}
if (type in touchmap) {
type = touchmap[type]
if (event.touches && event.touches[0]) {
var touch = event.touches[0]
var x = (touch.clientX || 0) - bbox.left
var y = (touch.clientY || 0) - bbox.top
}
}
event.preventDefault()
// do drawing-related stuff
if (type == "mousedown" || (type == "mouseenter" && event.buttons == 1)) {
this.drawing = true
this.tool.begin(this, this.ctx_current, x, y)
this.tool.begin(this, this.ctx_overlay, x, y, true)
this.history.editBegin(x, y)
}
if (type == "mouseup" || type == "mousemove") {
if (this.drawing) {
if (x > 0 && y > 0) {
this.tool.move(this, this.ctx_current, x, y)
this.tool.move(this, this.ctx_overlay, x, y, true)
this.history.editMove(x, y)
}
}
}
if (type == "mouseup" || type == "mouseout") {
if (this.drawing) {
this.drawing = false
this.tool.end(this, this.ctx_current, x, y)
this.tool.end(this, this.ctx_overlay, x, y, true)
this.history.editEnd(x, y)
}
}
}
getOptionValue(section_name) {
var section = IMAGE_EDITOR_SECTIONS.find(s => s.name == section_name)
return this.options && section_name in this.options ? this.options[section_name] : section.default
}
selectOption(section_name, option_index) {
var section = IMAGE_EDITOR_SECTIONS.find(s => s.name == section_name)
var value = section.options[option_index]
this.options[section_name] = value == "custom" ? section.getCustom(this) : value
this.optionElements[section_name].forEach(element => element.classList.remove("active"))
this.optionElements[section_name][option_index].classList.add("active")
// change the editor
this.setBrush()
if (section.name == "tool") {
this.loadTool()
}
}
}
function rgbToHex(rgb) {
function componentToHex(c) {
var hex = parseInt(c).toString(16)
return hex.length == 1 ? "0" + hex : hex
}
return "#" + componentToHex(rgb.r) + componentToHex(rgb.g) + componentToHex(rgb.b)
}
const imageEditor = new ImageEditor(document.getElementById("image-editor"))
const imageInpainter = new ImageEditor(document.getElementById("image-inpainter"), true)
imageEditor.setImage(null, 512, 512)
imageInpainter.setImage(null, 512, 512)
document.getElementById("init_image_button_draw").addEventListener("click", () => {
imageEditor.show()
})
document.getElementById("init_image_button_inpaint").addEventListener("click", () => {
imageInpainter.show()
})
img2imgUnload() // no init image when the app starts
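
For readers skimming the new file above: the undo/redo model in EditorHistory boils down to an event list plus a rewind index (with an ImageData snapshot taken every 20 edits so a rewind does not have to replay every stroke from the beginning). A stripped-down sketch of the same bookkeeping, with made-up names and no snapshotting, purely for illustration:

// Illustrative sketch only, not part of the commit. Mirrors EditorHistory's
// push/undo/redo bookkeeping; replayFn stands in for the canvas redraw.
class TinyHistory {
    constructor(replayFn) {
        this.events = []     // every edit/action, oldest first
        this.rewind = 0      // how many events have been undone
        this.replayFn = replayFn
    }
    push(event) {
        if (this.rewind !== 0) {            // drawing after an undo discards the undone tail
            this.events = this.events.slice(0, this.events.length - this.rewind)
            this.rewind = 0
        }
        this.events.push(event)
        this.replayFn(this.events)
    }
    undo() { this.rewindTo(this.rewind + 1) }
    redo() { this.rewindTo(this.rewind - 1) }
    rewindTo(n) {
        if (n < 0 || n > this.events.length) return   // same bounds check as rewindTo() above
        this.rewind = n
        this.replayFn(this.events.slice(0, this.events.length - n))
    }
}
// usage: const h = new TinyHistory(evts => console.log('redraw', evts.length))
// h.push('stroke A'); h.push('stroke B'); h.undo()   // logs: redraw 1, redraw 2, redraw 1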


@@ -85,14 +85,13 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
        if(typeof modifierCard == 'object') {
            modifiersEl.appendChild(modifierCard)
+            const trimmedName = trimModifiers(modifierName)

            modifierCard.addEventListener('click', () => {
-                if (activeTags.map(x => x.name).includes(modifierName)) {
+                if (activeTags.map(x => trimModifiers(x.name)).includes(trimmedName)) {
                    // remove modifier from active array
-                    activeTags = activeTags.filter(x => x.name != modifierName)
-                    modifierCard.classList.remove(activeCardClass)
-                    modifierCard.querySelector('.modifier-card-image-overlay').innerText = '+'
+                    activeTags = activeTags.filter(x => trimModifiers(x.name) != trimmedName)
+                    toggleCardState(trimmedName, false)
                } else {
                    // add modifier to active array
                    activeTags.push({
@@ -101,10 +100,7 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
                        'originElement': modifierCard,
                        'previews': modifierPreviews
                    })
-                    modifierCard.classList.add(activeCardClass)
-                    modifierCard.querySelector('.modifier-card-image-overlay').innerText = '-'
+                    toggleCardState(trimmedName, true)
                }

                refreshTagsList()
@@ -125,6 +121,10 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
    return e
}

+function trimModifiers(tag) {
+    return tag.replace(/^\(+|\)+$/g, '').replace(/^\[+|\]+$/g, '')
+}
+
async function loadModifiers() {
    try {
        let res = await fetch('/get/modifiers')
@@ -219,11 +219,10 @@ function refreshTagsList() {
        editorModifierTagsList.appendChild(tag.element)

        tag.element.addEventListener('click', () => {
-            let idx = activeTags.indexOf(tag)
-            if (idx !== -1 && activeTags[idx].originElement !== undefined) {
-                activeTags[idx].originElement.classList.remove(activeCardClass)
-                activeTags[idx].originElement.querySelector('.modifier-card-image-overlay').innerText = '+'
+            let idx = activeTags.findIndex(o => { return o.name === tag.name })
+            if (idx !== -1) {
+                toggleCardState(activeTags[idx].name, false)

                activeTags.splice(idx, 1)
                refreshTagsList()
@@ -236,6 +235,23 @@ function refreshTagsList() {
    editorModifierTagsList.appendChild(brk)
}

+function toggleCardState(modifierName, makeActive) {
+    document.querySelector('#editor-modifiers').querySelectorAll('.modifier-card').forEach(card => {
+        const name = card.querySelector('.modifier-card-label').innerText
+        if ( trimModifiers(modifierName) == trimModifiers(name)
+            || trimModifiers(modifierName) == 'by ' + trimModifiers(name)) {
+            if(makeActive) {
+                card.classList.add(activeCardClass)
+                card.querySelector('.modifier-card-image-overlay').innerText = '-'
+            }
+            else{
+                card.classList.remove(activeCardClass)
+                card.querySelector('.modifier-card-image-overlay').innerText = '+'
+            }
+        }
+    })
+}
+
function changePreviewImages(val) {
    const previewImages = document.querySelectorAll('.modifier-card-image-container img')
@@ -310,31 +326,7 @@ function saveCustomModifiers() {
}

function loadCustomModifiers() {
-    let customModifiers = localStorage.getItem(CUSTOM_MODIFIERS_KEY, '')
-    customModifiersTextBox.value = customModifiers
-
-    if (customModifiersGroupElement !== undefined) {
-        customModifiersGroupElement.remove()
-    }
-
-    if (customModifiers && customModifiers.trim() !== '') {
-        customModifiers = customModifiers.split('\n')
-        customModifiers = customModifiers.filter(m => m.trim() !== '')
-        customModifiers = customModifiers.map(function(m) {
-            return {
-                "modifier": m
-            }
-        })
-
-        let customGroup = {
-            'category': 'Custom Modifiers',
-            'modifiers': customModifiers
-        }
-
-        customModifiersGroupElement = createModifierGroup(customGroup, true)
-
-        createCollapsibles(customModifiersGroupElement)
-    }
+    PLUGINS['MODIFIERS_LOAD'].forEach(fn=>fn.loader.call())
}

customModifiersTextBox.addEventListener('change', saveCustomModifiers)
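
A quick illustration (not part of the diff) of what the new trimModifiers() helper strips, so that the click handlers and toggleCardState() above can match a card regardless of the emphasis brackets around its name:

// Illustrative only: trimModifiers() removes leading/trailing emphasis brackets.
trimModifiers("((oil painting))")     // -> "oil painting"
trimModifiers("[[pencil sketch]]")    // -> "pencil sketch"
trimModifiers("by Leonardo da Vinci") // -> "by Leonardo da Vinci" (unchanged; toggleCardState also tries the 'by ' prefix)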


@@ -1,41 +0,0 @@
const INPAINTING_EDITOR_SIZE = 450
let inpaintingEditorContainer = document.querySelector('#inpaintingEditor')
let inpaintingEditor = new DrawingBoard.Board('inpaintingEditor', {
color: "#ffffff",
background: false,
size: 30,
webStorage: false,
controls: [{'DrawingMode': {'filler': false}}, 'Size', 'Navigation']
})
let inpaintingEditorCanvasBackground = document.querySelector('.drawing-board-canvas-wrapper')
function resizeInpaintingEditor(widthValue, heightValue) {
if (widthValue === heightValue) {
widthValue = INPAINTING_EDITOR_SIZE
heightValue = INPAINTING_EDITOR_SIZE
} else if (widthValue > heightValue) {
heightValue = (heightValue / widthValue) * INPAINTING_EDITOR_SIZE
widthValue = INPAINTING_EDITOR_SIZE
} else {
widthValue = (widthValue / heightValue) * INPAINTING_EDITOR_SIZE
heightValue = INPAINTING_EDITOR_SIZE
}
if (inpaintingEditor.opts.aspectRatio === (widthValue / heightValue).toFixed(3)) {
// Same ratio, don't reset the canvas.
return
}
inpaintingEditor.opts.aspectRatio = (widthValue / heightValue).toFixed(3)
inpaintingEditorContainer.style.width = widthValue + 'px'
inpaintingEditorContainer.style.height = heightValue + 'px'
inpaintingEditor.opts.enlargeYourContainer = true
inpaintingEditor.opts.size = inpaintingEditor.ctx.lineWidth
inpaintingEditor.resize()
inpaintingEditor.ctx.lineCap = "round"
inpaintingEditor.ctx.lineJoin = "round"
inpaintingEditor.ctx.lineWidth = inpaintingEditor.opts.size
inpaintingEditor.setColor(inpaintingEditor.opts.color)
}
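
The DrawingBoard-based inpainting editor above is deleted in favour of the new ImageEditor popup. For reference, the sizing rule it applied was a plain aspect-fit into a 450px box; as a standalone sketch (illustrative only, the helper name is made up):

// Illustrative only: the scaling rule the removed resizeInpaintingEditor() applied.
function fitToBox(width, height, box = 450) {   // 450 was INPAINTING_EDITOR_SIZE
    if (width === height) return { width: box, height: box }
    if (width > height) return { width: box, height: (height / width) * box }
    return { width: (width / height) * box, height: box }
}
// fitToBox(768, 512) -> { width: 450, height: 300 }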

ui/media/js/jquery-confirm.min.js (new vendored file, 10 lines; diff suppressed because one or more lines are too long)

File diff suppressed because it is too large

@@ -5,9 +5,9 @@
 */
var ParameterType = {
    checkbox: "checkbox",
    select: "select",
    select_multiple: "select_multiple",
    custom: "custom",
};

/**
@@ -23,184 +23,217 @@
/** @type {Array.<Parameter>} */
var PARAMETERS = [
    {
        id: "theme",
        type: ParameterType.select,
        label: "Theme",
        default: "theme-default",
        note: "customize the look and feel of the ui",
        options: [ // Note: options expanded dynamically
            {
                value: "theme-default",
                label: "Default"
            }
        ],
        icon: "fa-palette"
    },
    {
        id: "save_to_disk",
        type: ParameterType.checkbox,
        label: "Auto-Save Images",
        note: "automatically saves images to the specified location",
        icon: "fa-download",
        default: false,
    },
    {
        id: "diskPath",
        type: ParameterType.custom,
        label: "Save Location",
        render: (parameter) => {
            return `<input id="${parameter.id}" name="${parameter.id}" size="30" disabled>`
        }
    },
    {
-        id: "sound_toggle",
-        type: ParameterType.checkbox,
-        label: "Enable Sound",
-        note: "plays a sound on task completion",
-        icon: "fa-volume-low",
-        default: true,
-    },
-    {
-        id: "ui_open_browser_on_start",
-        type: ParameterType.checkbox,
-        label: "Open browser on startup",
-        note: "starts the default browser on startup",
-        icon: "fa-window-restore",
-        default: true,
-    },
-    {
-        id: "turbo",
-        type: ParameterType.checkbox,
-        label: "Turbo Mode",
-        note: "generates images faster, but uses an additional 1 GB of GPU memory",
-        icon: "fa-forward",
-        default: true,
-    },
-    {
-        id: "use_cpu",
-        type: ParameterType.checkbox,
-        label: "Use CPU (not GPU)",
-        note: "warning: this will be *very* slow",
-        icon: "fa-microchip",
-        default: false,
-    },
-    {
-        id: "auto_pick_gpus",
-        type: ParameterType.checkbox,
-        label: "Automatically pick the GPUs (experimental)",
-        default: false,
-    },
-    {
-        id: "use_gpus",
-        type: ParameterType.select_multiple,
-        label: "GPUs to use (experimental)",
-        note: "to process in parallel",
-        default: false,
-    },
-    {
-        id: "use_full_precision",
-        type: ParameterType.checkbox,
-        label: "Use Full Precision",
-        note: "for GPU-only. warning: this will consume more VRAM",
-        icon: "fa-crosshairs",
-        default: false,
-    },
-    {
-        id: "auto_save_settings",
-        type: ParameterType.checkbox,
-        label: "Auto-Save Settings",
-        note: "restores settings on browser load",
-        icon: "fa-gear",
-        default: true,
-    },
-    {
-        id: "listen_to_network",
-        type: ParameterType.checkbox,
-        label: "Make Stable Diffusion available on your network",
-        note: "Other devices on your network can access this web page",
-        icon: "fa-network-wired",
-        default: true,
-    },
-    {
-        id: "listen_port",
-        type: ParameterType.custom,
-        label: "Network port",
-        note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'",
-        icon: "fa-anchor",
-        render: (parameter) => {
-            return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">`
-        }
-    },
-    {
-        id: "use_beta_channel",
-        type: ParameterType.checkbox,
-        label: "Beta channel",
-        note: "Get the latest features immediately (but could be less stable). Please restart the program after changing this.",
-        icon: "fa-fire",
-        default: false,
-    },
+        id: "metadata_output_format",
+        type: ParameterType.select,
+        label: "Metadata format",
+        note: "will be saved to disk in this format",
+        default: "txt",
+        options: [
+            {
+                value: "txt",
+                label: "txt"
+            },
+            {
+                value: "json",
+                label: "json"
+            }
+        ],
+    },
+    {
+        id: "sound_toggle",
+        type: ParameterType.checkbox,
+        label: "Enable Sound",
+        note: "plays a sound on task completion",
+        icon: "fa-volume-low",
+        default: true,
+    },
+    {
+        id: "process_order_toggle",
+        type: ParameterType.checkbox,
+        label: "Process newest jobs first",
+        note: "reverse the normal processing order",
+        default: false,
+    },
+    {
+        id: "ui_open_browser_on_start",
+        type: ParameterType.checkbox,
+        label: "Open browser on startup",
+        note: "starts the default browser on startup",
+        icon: "fa-window-restore",
+        default: true,
+    },
+    {
+        id: "vram_usage_level",
+        type: ParameterType.select,
+        label: "GPU Memory Usage",
+        note: "Faster performance requires more GPU memory (VRAM)<br/><br/>" +
+              "<b>Balanced:</b> nearly as fast as High, much lower VRAM usage<br/>" +
+              "<b>High:</b> fastest, maximum GPU memory usage</br>" +
+              "<b>Low:</b> slowest, force-used for GPUs with 4 GB (or less) memory",
+        icon: "fa-forward",
+        default: "balanced",
+        options: [
+            {value: "balanced", label: "Balanced"},
+            {value: "high", label: "High"},
+            {value: "low", label: "Low"}
+        ],
+    },
+    {
+        id: "use_cpu",
+        type: ParameterType.checkbox,
+        label: "Use CPU (not GPU)",
+        note: "warning: this will be *very* slow",
+        icon: "fa-microchip",
+        default: false,
+    },
+    {
+        id: "auto_pick_gpus",
+        type: ParameterType.checkbox,
+        label: "Automatically pick the GPUs (experimental)",
+        default: false,
+    },
+    {
+        id: "use_gpus",
+        type: ParameterType.select_multiple,
+        label: "GPUs to use (experimental)",
+        note: "to process in parallel",
+        default: false,
+    },
+    {
+        id: "auto_save_settings",
+        type: ParameterType.checkbox,
+        label: "Auto-Save Settings",
+        note: "restores settings on browser load",
+        icon: "fa-gear",
+        default: true,
+    },
+    {
+        id: "confirm_dangerous_actions",
+        type: ParameterType.checkbox,
+        label: "Confirm dangerous actions",
+        note: "Actions that might lead to data loss must either be clicked with the shift key pressed, or confirmed in an 'Are you sure?' dialog",
+        icon: "fa-check-double",
+        default: true,
+    },
+    {
+        id: "listen_to_network",
+        type: ParameterType.checkbox,
+        label: "Make Stable Diffusion available on your network",
+        note: "Other devices on your network can access this web page",
+        icon: "fa-network-wired",
+        default: true,
+    },
+    {
+        id: "listen_port",
+        type: ParameterType.custom,
+        label: "Network port",
+        note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'",
+        icon: "fa-anchor",
+        render: (parameter) => {
+            return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">`
+        }
+    },
+    {
+        id: "use_beta_channel",
+        type: ParameterType.checkbox,
+        label: "Beta channel",
+        note: "Get the latest features immediately (but could be less stable). Please restart the program after changing this.",
+        icon: "fa-fire",
+        default: false,
+    },
];
function getParameterSettingsEntry(id) {
    let parameter = PARAMETERS.filter(p => p.id === id)
    if (parameter.length === 0) {
        return
    }
    return parameter[0].settingsEntry
}

function getParameterElement(parameter) {
    switch (parameter.type) {
        case ParameterType.checkbox:
            var is_checked = parameter.default ? " checked" : "";
            return `<input id="${parameter.id}" name="${parameter.id}"${is_checked} type="checkbox">`
        case ParameterType.select:
        case ParameterType.select_multiple:
            var options = (parameter.options || []).map(option => `<option value="${option.value}">${option.label}</option>`).join("")
            var multiple = (parameter.type == ParameterType.select_multiple ? 'multiple' : '')
            return `<select id="${parameter.id}" name="${parameter.id}" ${multiple}>${options}</select>`
        case ParameterType.custom:
            return parameter.render(parameter)
        default:
            console.error(`Invalid type for parameter ${parameter.id}`);
            return "ERROR: Invalid Type"
    }
}

let parametersTable = document.querySelector("#system-settings .parameters-table")
/* fill in the system settings popup table */
function initParameters() {
    PARAMETERS.forEach(parameter => {
        var element = getParameterElement(parameter)
        var note = parameter.note ? `<small>${parameter.note}</small>` : "";
        var icon = parameter.icon ? `<i class="fa ${parameter.icon}"></i>` : "";
        var newrow = document.createElement('div')
        newrow.innerHTML = `
            <div>${icon}</div>
            <div><label for="${parameter.id}">${parameter.label}</label>${note}</div>
            <div>${element}</div>`
        parametersTable.appendChild(newrow)
        parameter.settingsEntry = newrow
    })
}

initParameters()

-let turboField = document.querySelector('#turbo')
+let vramUsageLevelField = document.querySelector('#vram_usage_level')
let useCPUField = document.querySelector('#use_cpu')
let autoPickGPUsField = document.querySelector('#auto_pick_gpus')
let useGPUsField = document.querySelector('#use_gpus')
-let useFullPrecisionField = document.querySelector('#use_full_precision')
let saveToDiskField = document.querySelector('#save_to_disk')
let diskPathField = document.querySelector('#diskPath')
let listenToNetworkField = document.querySelector("#listen_to_network")
let listenPortField = document.querySelector("#listen_port")
let useBetaChannelField = document.querySelector("#use_beta_channel")
let uiOpenBrowserOnStartField = document.querySelector("#ui_open_browser_on_start")
+let confirmDangerousActionsField = document.querySelector("#confirm_dangerous_actions")
let saveSettingsBtn = document.querySelector('#save-system-settings-btn')

async function changeAppConfig(configDelta) {
    try {
        let res = await fetch('/app_config', {
@@ -230,12 +263,12 @@ async function getAppConfig() {
        if (config.ui && config.ui.open_browser_on_start === false) {
            uiOpenBrowserOnStartField.checked = false
        }
        if (config.net && config.net.listen_to_network === false) {
            listenToNetworkField.checked = false
        }
        if (config.net && config.net.listen_port !== undefined) {
            listenPortField.value = config.net.listen_port
        }
        console.log('get config status response', config)
    } catch (e) {
@@ -263,7 +296,6 @@ function getCurrentRenderDeviceSelection() {
useCPUField.addEventListener('click', function() {
    let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
    let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
-    console.log("hello", this.checked);
    if (this.checked) {
        gpuSettingEntry.style.display = 'none'
        autoPickGPUSettingEntry.style.display = 'none'
@@ -296,86 +328,107 @@ autoPickGPUsField.addEventListener('click', function() {
    gpuSettingEntry.style.display = (this.checked ? 'none' : '')
})

-async function getDiskPath() {
-    try {
-        var diskPath = getSetting("diskPath")
-        if (diskPath == '' || diskPath == undefined || diskPath == "undefined") {
-            let res = await fetch('/get/output_dir')
-            if (res.status === 200) {
-                res = await res.json()
-                res = res.output_dir
-                setSetting("diskPath", res)
-            }
-        }
-    } catch (e) {
-        console.log('error fetching output dir path', e)
-    }
-}
-
-async function getDevices() {
-    try {
-        let res = await fetch('/get/devices')
-        if (res.status === 200) {
-            res = await res.json()
-            let allDeviceIds = Object.keys(res['all']).filter(d => d !== 'cpu')
-            let activeDeviceIds = Object.keys(res['active']).filter(d => d !== 'cpu')
-            if (activeDeviceIds.length === 0) {
-                useCPUField.checked = true
-            }
-            if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION || useCPUField.checked) {
-                let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
-                gpuSettingEntry.style.display = 'none'
-                let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
-                autoPickGPUSettingEntry.style.display = 'none'
-            }
-            if (allDeviceIds.length === 0) {
-                useCPUField.checked = true
-                useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
-            }
-            autoPickGPUsField.checked = (res['config'] === 'auto')
-            useGPUsField.innerHTML = ''
-            allDeviceIds.forEach(device => {
-                let deviceName = res['all'][device]['name']
-                let deviceOption = `<option value="${device}">${deviceName} (${device})</option>`
-                useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
-            })
-            if (autoPickGPUsField.checked) {
-                let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
-                gpuSettingEntry.style.display = 'none'
-            } else {
-                $('#use_gpus').val(activeDeviceIds)
-            }
-        }
+async function setDiskPath(defaultDiskPath) {
+    var diskPath = getSetting("diskPath")
+    if (diskPath == '' || diskPath == undefined || diskPath == "undefined") {
+        setSetting("diskPath", defaultDiskPath)
+    }
+}
+
+function setDeviceInfo(devices) {
+    let cpu = devices.all.cpu.name
+    let allGPUs = Object.keys(devices.all).filter(d => d != 'cpu')
+    let activeGPUs = Object.keys(devices.active)
+
+    function ID_TO_TEXT(d) {
+        let info = devices.all[d]
+        if ("mem_free" in info && "mem_total" in info) {
+            return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(1)} Gb total)</small>`
+        } else {
+            return `${info.name} <small>(${d}) (no memory info)</small>`
+        }
+    }
+
+    allGPUs = allGPUs.map(ID_TO_TEXT)
+    activeGPUs = activeGPUs.map(ID_TO_TEXT)
+
+    let systemInfoEl = document.querySelector('#system-info')
+    systemInfoEl.querySelector('#system-info-cpu').innerText = cpu
+    systemInfoEl.querySelector('#system-info-gpus-all').innerHTML = allGPUs.join('</br>')
+    systemInfoEl.querySelector('#system-info-rendering-devices').innerHTML = activeGPUs.join('</br>')
+}
+
+function setHostInfo(hosts) {
+    let port = listenPortField.value
+    hosts = hosts.map(addr => `http://${addr}:${port}/`).map(url => `<div><a href="${url}">${url}</a></div>`)
+    document.querySelector('#system-info-server-hosts').innerHTML = hosts.join('')
+}
+
+async function getSystemInfo() {
+    try {
+        const res = await SD.getSystemInfo()
+        let devices = res['devices']
+        let allDeviceIds = Object.keys(devices['all']).filter(d => d !== 'cpu')
+        let activeDeviceIds = Object.keys(devices['active']).filter(d => d !== 'cpu')
+        if (activeDeviceIds.length === 0) {
+            useCPUField.checked = true
+        }
+        if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION || useCPUField.checked) {
+            let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
+            gpuSettingEntry.style.display = 'none'
+            let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
+            autoPickGPUSettingEntry.style.display = 'none'
+        }
+        if (allDeviceIds.length === 0) {
+            useCPUField.checked = true
+            useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
+        }
+        autoPickGPUsField.checked = (devices['config'] === 'auto')
+        useGPUsField.innerHTML = ''
+        allDeviceIds.forEach(device => {
+            let deviceName = devices['all'][device]['name']
+            let deviceOption = `<option value="${device}">${deviceName} (${device})</option>`
+            useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
+        })
+        if (autoPickGPUsField.checked) {
+            let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
+            gpuSettingEntry.style.display = 'none'
+        } else {
+            $('#use_gpus').val(activeDeviceIds)
+        }
+        setDeviceInfo(devices)
+        setHostInfo(res['hosts'])
+        setDiskPath(res['default_output_dir'])
    } catch (e) {
        console.log('error fetching devices', e)
    }
}

saveSettingsBtn.addEventListener('click', function() {
-    let updateBranch = (useBetaChannelField.checked ? 'beta' : 'main')
-    if (listenPortField.value == '') {
-        alert('The network port field must not be empty.')
-    } else if (listenPortField.value<1 || listenPortField.value>65535) {
-        alert('The network port must be a number from 1 to 65535')
-    } else {
-        changeAppConfig({
-            'render_devices': getCurrentRenderDeviceSelection(),
-            'update_branch': updateBranch,
-            'ui_open_browser_on_start': uiOpenBrowserOnStartField.checked,
-            'listen_to_network': listenToNetworkField.checked,
-            'listen_port': listenPortField.value
-        })
-    }
+    if (listenPortField.value == '') {
+        alert('The network port field must not be empty.')
+        return
+    }
+    if (listenPortField.value < 1 || listenPortField.value > 65535) {
+        alert('The network port must be a number from 1 to 65535')
+        return
+    }
+    let updateBranch = (useBetaChannelField.checked ? 'beta' : 'main')
+    changeAppConfig({
+        'render_devices': getCurrentRenderDeviceSelection(),
+        'update_branch': updateBranch,
+        'ui_open_browser_on_start': uiOpenBrowserOnStartField.checked,
+        'listen_to_network': listenToNetworkField.checked,
+        'listen_port': listenPortField.value
+    })
    saveSettingsBtn.classList.add('active')
    asyncDelay(300).then(() => saveSettingsBtn.classList.remove('active'))
})
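
As a reading aid (not part of the diff): every entry in PARAMETERS is turned into a settings row by initParameters(), and getParameterElement() produces the input markup from the entry's type. A hypothetical entry, just to show the rendering path:

// Illustrative only; "example_toggle" is a made-up parameter id.
const exampleParam = {
    id: "example_toggle",
    type: ParameterType.checkbox,
    label: "Example toggle",
    note: "does nothing, just shows the rendering path",
    default: true,
}
getParameterElement(exampleParam)
// -> '<input id="example_toggle" name="example_toggle" checked type="checkbox">'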


@@ -24,23 +24,48 @@ const PLUGINS = {
 * }
 * })
 */
-    IMAGE_INFO_BUTTONS: []
+    IMAGE_INFO_BUTTONS: [],
+    MODIFIERS_LOAD: [],
+    TASK_CREATE: [],
+    OUTPUTS_FORMATS: new ServiceContainer(
+        function png() { return (reqBody) => new SD.RenderTask(reqBody) }
+        , function jpeg() { return (reqBody) => new SD.RenderTask(reqBody) }
+    ),
}
+
+PLUGINS.OUTPUTS_FORMATS.register = function(...args) {
+    const service = ServiceContainer.prototype.register.apply(this, args)
+    if (typeof outputFormatField !== 'undefined') {
+        const newOption = document.createElement("option")
+        newOption.setAttribute("value", service.name)
+        newOption.innerText = service.name
+        outputFormatField.appendChild(newOption)
+    }
+    return service
+}
+
+function loadScript(url) {
+    const script = document.createElement('script')
+    const promiseSrc = new PromiseSource()
+    script.addEventListener('error', () => promiseSrc.reject(new Error(`Script "${url}" couldn't be loaded.`)))
+    script.addEventListener('load', () => promiseSrc.resolve(url))
+    script.src = url + '?t=' + Date.now()
+    console.log('loading script', url)
+    document.head.appendChild(script)
+    return promiseSrc.promise
+}

async function loadUIPlugins() {
    try {
-        let res = await fetch('/get/ui_plugins')
-        if (res.status === 200) {
-            res = await res.json()
-            res.forEach(pluginPath => {
-                let script = document.createElement('script')
-                script.src = pluginPath + '?t=' + Date.now()
-                console.log('loading plugin', pluginPath)
-                document.head.appendChild(script)
-            })
+        const res = await fetch('/get/ui_plugins')
+        if (!res.ok) {
+            console.error(`Error HTTP${res.status} while loading plugins list. - ${res.statusText}`)
+            return
        }
+        const plugins = await res.json()
+        const loadingPromises = plugins.map(loadScript)
+        return await Promise.allSettled(loadingPromises)
    } catch (e) {
        console.log('error fetching plugin paths', e)
    }
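
A small usage note (not in the diff): the reworked loader turns each path returned by /get/ui_plugins into a loadScript() call and waits on all of them with Promise.allSettled, so one broken plugin no longer blocks the rest. Roughly, with a hypothetical plugin path:

// Illustrative only: "/plugins/ui/example.plugin.js" is a made-up path.
loadScript("/plugins/ui/example.plugin.js")
    .then(url => console.log("plugin loaded:", url))
    .catch(err => console.error("plugin failed:", err))  // rejects when the <script> 'error' event fires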


@@ -60,6 +60,7 @@ function themeFieldChanged() {
    body.style = "";

    var theme = THEMES.find(t => t.key == theme_key);
+    let borderColor = undefined
    if (theme) {
        // refresh variables incase they are back referencing
        Array.from(DEFAULT_THEME.rule.style)
@@ -67,7 +68,14 @@ function themeFieldChanged() {
            .forEach(cssVariable => {
                body.style.setProperty(cssVariable, DEFAULT_THEME.rule.style.getPropertyValue(cssVariable));
            });
+        borderColor = theme.rule.style.getPropertyValue('--input-border-color').trim()
+        if (!borderColor.startsWith('#')) {
+            borderColor = theme.rule.style.getPropertyValue('--theme-color-fallback')
+        }
+    } else {
+        borderColor = DEFAULT_THEME.rule.style.getPropertyValue('--theme-color-fallback')
    }
+    document.querySelector('meta[name="theme-color"]').setAttribute("content", borderColor)
}

themeField.addEventListener('change', themeFieldChanged);
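
The net effect of the themes.js change above is that the browser's theme-color meta tag follows the selected theme, falling back to --theme-color-fallback whenever the theme's --input-border-color is not a plain hex value. In isolation (illustrative only; themeStyle and defaultStyle are stand-ins for the CSSStyleDeclaration objects used above):

// Illustrative only.
function pickThemeColor(themeStyle, defaultStyle) {
    let color = themeStyle ? themeStyle.getPropertyValue('--input-border-color').trim() : ''
    if (!color.startsWith('#')) {
        color = (themeStyle || defaultStyle).getPropertyValue('--theme-color-fallback')
    }
    return color
}
// document.querySelector('meta[name="theme-color"]').setAttribute("content", pickThemeColor(themeStyle, defaultStyle))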


@@ -1,32 +1,37 @@
+"use strict";
+
// https://gomakethings.com/finding-the-next-and-previous-sibling-elements-that-match-a-selector-with-vanilla-js/
function getNextSibling(elem, selector) {
    // Get the next sibling element
-    var sibling = elem.nextElementSibling
+    let sibling = elem.nextElementSibling

    // If there's no selector, return the first sibling
-    if (!selector) return sibling
+    if (!selector) {
+        return sibling
+    }

    // If the sibling matches our selector, use it
    // If not, jump to the next sibling and continue the loop
    while (sibling) {
-        if (sibling.matches(selector)) return sibling
-        sibling = sibling.nextElementSibling
+        if (sibling.matches(selector)) {
+            return sibling
+        }
+        sibling = sibling.nextElementSibling
    }
}

/* Panel Stuff */

// true = open
-var COLLAPSIBLES_INITIALIZED = false;
+let COLLAPSIBLES_INITIALIZED = false;
const COLLAPSIBLES_KEY = "collapsibles";
const COLLAPSIBLE_PANELS = []; // filled in by createCollapsibles with all the elements matching .collapsible
// on-init call this for any panels that are marked open
function toggleCollapsible(element) {
-    var collapsibleHeader = element.querySelector(".collapsible");
-    var handle = element.querySelector(".collapsible-handle");
+    const collapsibleHeader = element.querySelector(".collapsible");
+    const handle = element.querySelector(".collapsible-handle");
    collapsibleHeader.classList.toggle("active")
    let content = getNextSibling(collapsibleHeader, '.collapsible-content')
    if (!collapsibleHeader.classList.contains("active")) {
@@ -40,6 +45,7 @@ function toggleCollapsible(element) {
            handle.innerHTML = '&#x2796;' // minus
        }
    }
+    document.dispatchEvent(new CustomEvent('collapsibleClick', { detail: collapsibleHeader }))

    if (COLLAPSIBLES_INITIALIZED && COLLAPSIBLE_PANELS.includes(element)) {
        saveCollapsibles()
@@ -47,16 +53,16 @@ function toggleCollapsible(element) {
}

function saveCollapsibles() {
-    var values = {}
+    let values = {}
    COLLAPSIBLE_PANELS.forEach(element => {
-        var value = element.querySelector(".collapsible").className.indexOf("active") !== -1
+        let value = element.querySelector(".collapsible").className.indexOf("active") !== -1
        values[element.id] = value
    })
    localStorage.setItem(COLLAPSIBLES_KEY, JSON.stringify(values))
}

function createCollapsibles(node) {
-    var save = false
+    let save = false
    if (!node) {
        node = document
        save = true
@@ -81,7 +87,7 @@ function createCollapsibles(node) {
        })
    })
    if (save) {
-        var saved = localStorage.getItem(COLLAPSIBLES_KEY)
+        let saved = localStorage.getItem(COLLAPSIBLES_KEY)
        if (!saved) {
            saved = tryLoadOldCollapsibles();
        }
@@ -89,9 +95,9 @@ function createCollapsibles(node) {
            saveCollapsibles()
            saved = localStorage.getItem(COLLAPSIBLES_KEY)
        }
-        var values = JSON.parse(saved)
+        let values = JSON.parse(saved)
        COLLAPSIBLE_PANELS.forEach(element => {
-            var value = element.querySelector(".collapsible").className.indexOf("active") !== -1
+            let value = element.querySelector(".collapsible").className.indexOf("active") !== -1
            if (values[element.id] != value) {
                toggleCollapsible(element)
            }
@@ -101,17 +107,17 @@ function createCollapsibles(node) {
}

function tryLoadOldCollapsibles() {
-    var old_map = {
+    const old_map = {
        "advancedPanelOpen": "editor-settings",
        "modifiersPanelOpen": "editor-modifiers",
        "negativePromptPanelOpen": "editor-inputs-prompt"
    };
    if (localStorage.getItem(Object.keys(old_map)[0])) {
-        var result = {};
+        let result = {};
        Object.keys(old_map).forEach(key => {
-            var value = localStorage.getItem(key);
+            const value = localStorage.getItem(key);
            if (value !== null) {
-                result[old_map[key]] = value == true || value == "true"
+                result[old_map[key]] = (value == true || value == "true")
                localStorage.removeItem(key)
            }
        });
@@ -150,17 +156,17 @@ function millisecondsToStr(milliseconds) {
        return (number > 1) ? 's' : ''
    }

-    var temp = Math.floor(milliseconds / 1000)
-    var hours = Math.floor((temp %= 86400) / 3600)
-    var s = ''
+    let temp = Math.floor(milliseconds / 1000)
+    let hours = Math.floor((temp %= 86400) / 3600)
+    let s = ''
    if (hours) {
        s += hours + ' hour' + numberEnding(hours) + ' '
    }
-    var minutes = Math.floor((temp %= 3600) / 60)
+    let minutes = Math.floor((temp %= 3600) / 60)
    if (minutes) {
        s += minutes + ' minute' + numberEnding(minutes) + ' '
    }
-    var seconds = temp % 60
+    let seconds = temp % 60
    if (!hours && minutes < 4 && seconds) {
        s += seconds + ' second' + numberEnding(seconds)
    }
@@ -178,7 +184,7 @@ function BraceExpander() {
    function bracePair(tkns, iPosn, iNest, lstCommas) {
        if (iPosn >= tkns.length || iPosn < 0) return null;

-        var t = tkns[iPosn],
+        let t = tkns[iPosn],
            n = (t === '{') ? (
                iNest + 1
            ) : (t === '}' ? (
@@ -198,7 +204,7 @@ function BraceExpander() {
    function andTree(dctSofar, tkns) {
        if (!tkns.length) return [dctSofar, []];

-        var dctParse = dctSofar ? dctSofar : {
+        let dctParse = dctSofar ? dctSofar : {
                fn: and,
                args: []
            },
@@ -231,14 +237,14 @@ function BraceExpander() {
    // Parse of a PARADIGM subtree
    function orTree(dctSofar, tkns, lstCommas) {
        if (!tkns.length) return [dctSofar, []];
-        var iLast = lstCommas.length;
+        let iLast = lstCommas.length;

        return {
            fn: or,
            args: splitsAt(
                lstCommas, tkns
            ).map(function (x, i) {
-                var ts = x.slice(
+                let ts = x.slice(
                    1, i === iLast ? (
                        -1
                    ) : void 0
@@ -256,7 +262,7 @@ function BraceExpander() {
    // List of unescaped braces and commas, and remaining strings
    function tokens(str) {
        // Filter function excludes empty splitting artefacts
-        var toS = function (x) {
+        let toS = function (x) {
            return x.toString();
        };
@@ -270,7 +276,7 @@ function BraceExpander() {
    // PARSE TREE OPERATOR (1 of 2)
    // Each possible head * each possible tail
    function and(args) {
-        var lng = args.length,
+        let lng = args.length,
            head = lng ? args[0] : null,
            lstHead = "string" === typeof head ? (
                [head]
@@ -330,7 +336,7 @@ function BraceExpander() {
    // s -> [s]
    this.expand = function(s) {
        // BRACE EXPRESSION PARSED
-        var dctParse = andTree(null, tokens(s))[0];
+        let dctParse = andTree(null, tokens(s))[0];

        // ABSTRACT SYNTAX TREE LOGGED
        // console.log(pp(dctParse));
@@ -341,12 +347,76 @@ function BraceExpander() {
}

+/** Pause the execution of an async function until timer elapse.
+ * @Returns a promise that will resolve after the specified timeout.
+ */
function asyncDelay(timeout) {
    return new Promise(function(resolve, reject) {
        setTimeout(resolve, timeout, true)
    })
}
function PromiseSource() {
const srcPromise = new Promise((resolve, reject) => {
Object.defineProperties(this, {
resolve: { value: resolve, writable: false }
, reject: { value: reject, writable: false }
})
})
Object.defineProperties(this, {
promise: {value: makeQuerablePromise(srcPromise), writable: false}
})
}
/** A debounce is a higher-order function, which is a function that returns another function
* that, as long as it continues to be invoked, will not be triggered.
* The function will be called after it stops being called for N milliseconds.
* If `immediate` is passed, trigger the function on the leading edge, instead of the trailing.
* @Returns a promise that will resolve to func return value.
*/
function debounce (func, wait, immediate) {
if (typeof wait === "undefined") {
wait = 40
}
if (typeof wait !== "number") {
throw new Error("wait is not an number.")
}
let timeout = null
let lastPromiseSrc = new PromiseSource()
const applyFn = function(context, args) {
let result = undefined
try {
result = func.apply(context, args)
} catch (err) {
lastPromiseSrc.reject(err)
}
if (result instanceof Promise) {
result.then(lastPromiseSrc.resolve, lastPromiseSrc.reject)
} else {
lastPromiseSrc.resolve(result)
}
}
return function(...args) {
const callNow = Boolean(immediate && !timeout)
const context = this;
if (timeout) {
clearTimeout(timeout)
}
timeout = setTimeout(function () {
if (!immediate) {
applyFn(context, args)
}
lastPromiseSrc = new PromiseSource()
timeout = null
}, wait)
if (callNow) {
applyFn(context, args)
}
return lastPromiseSrc.promise
}
}
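A minimal usage sketch of debounce(), illustrative only and not part of the diff (the element and settings names are made up):
    // Collapse a burst of 'input' events into one save, fired after 300ms of silence.
    const saveSettings = debounce(function() {
        localStorage.setItem('ui_settings', JSON.stringify(currentSettings))   // hypothetical state object
        return true
    }, 300)
    someTextInput.addEventListener('input', () => {
        saveSettings().then(() => console.log('saved'))   // resolves with the wrapped function's return value
    })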
function preventNonNumericalInput(e) {
    e = e || window.event;
    let charCode = (typeof e.which == "undefined") ? e.keyCode : e.which;
@ -359,6 +429,83 @@ function preventNonNumericalInput(e) {
    }
}
/** Returns the global object for the current execution environment.
 * @Returns window in a browser, global in Node.js and self in a ServiceWorker.
* @Notes Allows unit testing and use of the engine outside of a browser.
*/
function getGlobal() {
if (typeof globalThis === 'object') {
return globalThis
} else if (typeof global === 'object') {
return global
} else if (typeof self === 'object') {
return self
}
try {
return Function('return this')()
} catch {
// If the Function constructor fails, we're in a browser with eval disabled by CSP headers.
return window
} // Returns undefined if global can't be found.
}
/** Check if x is an Array or a TypedArray.
* @Returns true if x is an Array or a TypedArray, false otherwise.
*/
function isArrayOrTypedArray(x) {
return Boolean(typeof x === 'object' && (Array.isArray(x) || (ArrayBuffer.isView(x) && !(x instanceof DataView))))
}
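For illustration only (not part of the diff), the distinction this check draws:
    isArrayOrTypedArray([1, 2, 3])                              // true  (plain Array)
    isArrayOrTypedArray(new Uint8Array(4))                      // true  (TypedArray view)
    isArrayOrTypedArray(new DataView(new ArrayBuffer(4)))       // false (DataView is excluded)
    isArrayOrTypedArray('abc')                                  // false (not an object)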
function makeQuerablePromise(promise) {
if (typeof promise !== 'object') {
throw new Error('promise is not an object.')
}
if (!(promise instanceof Promise)) {
throw new Error('Argument is not a promise.')
}
// Don't modify a promise that has already been wrapped.
if ('isResolved' in promise || 'isRejected' in promise || 'isPending' in promise) {
return promise
}
let isPending = true
let isRejected = false
let rejectReason = undefined
let isResolved = false
let resolvedValue = undefined
const qurPro = promise.then(
function(val){
isResolved = true
isPending = false
resolvedValue = val
return val
}
, function(reason) {
rejectReason = reason
isRejected = true
isPending = false
throw reason
}
)
Object.defineProperties(qurPro, {
'isResolved': {
get: () => isResolved
}
, 'resolvedValue': {
get: () => resolvedValue
}
, 'isPending': {
get: () => isPending
}
, 'isRejected': {
get: () => isRejected
}
, 'rejectReason': {
get: () => rejectReason
}
})
return qurPro
}
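A short usage sketch of makeQuerablePromise, illustrative only and not part of the diff (the URL is made up):
    // Wrap a promise so its state can be inspected synchronously.
    const p = makeQuerablePromise(fetch('/ping'))
    console.log(p.isPending)                                    // true right after creation
    p.then(() => console.log(p.isResolved, p.resolvedValue))    // true, and the resolved Response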
/* inserts custom html to allow prettifying of inputs */
function prettifyInputs(root_element) {
    root_element.querySelectorAll(`input[type="checkbox"]`).forEach(element => {
@ -374,3 +521,156 @@ function prettifyInputs(root_element) {
        }
    })
}
class GenericEventSource {
#events = {};
#types = []
constructor(...eventsTypes) {
if (Array.isArray(eventsTypes) && eventsTypes.length === 1 && Array.isArray(eventsTypes[0])) {
eventsTypes = eventsTypes[0]
}
this.#types.push(...eventsTypes)
}
get eventTypes() {
return this.#types
}
/** Add a new event listener
*/
addEventListener(name, handler) {
if (!this.#types.includes(name)) {
throw new Error('Invalid event name.')
}
if (this.#events.hasOwnProperty(name)) {
this.#events[name].push(handler)
} else {
this.#events[name] = [handler]
}
}
/** Remove the event listener
*/
removeEventListener(name, handler) {
if (!this.#events.hasOwnProperty(name)) {
return
}
const index = this.#events[name].indexOf(handler)
if (index != -1) {
this.#events[name].splice(index, 1)
}
}
fireEvent(name, ...args) {
if (!this.#types.includes(name)) {
throw new Error(`Event ${String(name)} missing from Events.types`)
}
if (!this.#events.hasOwnProperty(name)) {
return Promise.resolve()
}
if (!args || !args.length) {
args = []
}
const evs = this.#events[name]
if (evs.length <= 0) {
return Promise.resolve()
}
return Promise.allSettled(evs.map((callback) => {
try {
return Promise.resolve(callback.apply(SD, args))
} catch (ex) {
return Promise.reject(ex)
}
}))
}
}
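Illustrative only, not part of the diff: handlers are registered per declared event type, and fireEvent() returns a Promise.allSettled over all of them (each handler is applied with the global SD object as `this`).
    const events = new GenericEventSource('statusChange', 'error')
    events.addEventListener('statusChange', (newStatus) => console.log('status is now', newStatus))
    events.fireEvent('statusChange', 'online')   // resolves once every registered handler has settled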
class ServiceContainer {
#services = new Map()
#singletons = new Map()
constructor(...servicesParams) {
servicesParams.forEach(this.register.bind(this))
}
get services () {
return this.#services
}
get singletons() {
return this.#singletons
}
register(params) {
if (ServiceContainer.isConstructor(params)) {
if (typeof params.name !== 'string') {
throw new Error('params.name is not a string.')
}
params = {name:params.name, definition:params}
}
if (typeof params !== 'object') {
throw new Error('params is not an object.')
}
[ 'name',
'definition',
].forEach((key) => {
if (!(key in params)) {
console.error('Invalid service %o registration.', params)
throw new Error(`params.${key} is not defined.`)
}
})
const opts = {definition: params.definition}
if ('dependencies' in params) {
if (Array.isArray(params.dependencies)) {
params.dependencies.forEach((dep) => {
if (typeof dep !== 'string') {
throw new Error('dependency name is not a string.')
}
})
opts.dependencies = params.dependencies
} else {
throw new Error('params.dependencies is not an array.')
}
}
if (params.singleton) {
opts.singleton = true
}
this.#services.set(params.name, opts)
return Object.assign({name: params.name}, opts)
}
get(name) {
const ctorInfos = this.#services.get(name)
if (!ctorInfos) {
return
}
if(!ServiceContainer.isConstructor(ctorInfos.definition)) {
return ctorInfos.definition
}
if(!ctorInfos.singleton) {
return this._createInstance(ctorInfos)
}
const singletonInstance = this.#singletons.get(name)
if(singletonInstance) {
return singletonInstance
}
const newSingletonInstance = this._createInstance(ctorInfos)
this.#singletons.set(name, newSingletonInstance)
return newSingletonInstance
}
_getResolvedDependencies(service) {
let classDependencies = []
if(service.dependencies) {
classDependencies = service.dependencies.map(this.get.bind(this))
}
return classDependencies
}
_createInstance(service) {
if (!ServiceContainer.isClass(service.definition)) {
// Call as normal function.
return service.definition(...this._getResolvedDependencies(service))
}
// Use new
return new service.definition(...this._getResolvedDependencies(service))
}
static isClass(definition) {
return typeof definition === 'function' && Boolean(definition.prototype) && definition.prototype.constructor === definition
}
static isConstructor(definition) {
return typeof definition === 'function'
}
}
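A minimal usage sketch of the container, illustrative only and not part of the diff (the service names are made up): factories receive their declared dependencies as arguments, and singletons are built once and cached.
    const container = new ServiceContainer(
        { name: 'config', definition: () => ({ apiRoot: '/api' }), singleton: true },
        { name: 'client',
          definition: (config) => ({ get: (path) => fetch(config.apiRoot + path) }),
          dependencies: ['config'] }
    )
    const client = container.get('client')   // resolves 'config' first, then injects it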

View File

@ -0,0 +1,8 @@
{
"name": "Stable Diffusion UI",
"display": "standalone",
"display_override": [
"window-controls-overlay"
],
"theme_color": "#000000"
}
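For context (not part of the diff): a web app manifest like this only takes effect once the page references it. A hypothetical injection in the style of the project's other UI plugins could look like the following; the manifest path is an assumption.
    const manifestLink = document.createElement('link')
    manifestLink.rel = 'manifest'
    manifestLink.href = '/media/manifest.webmanifest'   // hypothetical path
    document.head.appendChild(manifestLink)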

View File

@ -0,0 +1,45 @@
(function () {
"use strict"
var styleSheet = document.createElement("style");
styleSheet.textContent = `
.auto-scroll {
float: right;
}
`;
document.head.appendChild(styleSheet);
const autoScrollControl = document.createElement('div');
autoScrollControl.innerHTML = `<input id="auto_scroll" name="auto_scroll" type="checkbox">
<label for="auto_scroll">Scroll to generated image</label>`
autoScrollControl.className = "auto-scroll"
clearAllPreviewsBtn.parentNode.insertBefore(autoScrollControl, clearAllPreviewsBtn.nextSibling)
prettifyInputs(document);
let autoScroll = document.querySelector("#auto_scroll")
// save/restore the toggle state
autoScroll.addEventListener('click', (e) => {
localStorage.setItem('auto_scroll', autoScroll.checked)
})
autoScroll.checked = localStorage.getItem('auto_scroll') == "true"
// observe for changes in the preview pane
var observer = new MutationObserver(function (mutations) {
mutations.forEach(function (mutation) {
if (mutation.target.className == 'img-batch') {
Autoscroll(mutation.target)
}
})
})
observer.observe(document.getElementById('preview'), {
childList: true,
subtree: true
})
function Autoscroll(target) {
if (autoScroll.checked && target !== null) {
target.parentElement.parentElement.parentElement.scrollIntoView();
}
}
})()

View File

@ -1,7 +1,10 @@
-(function () {
-    "use strict"
+(function () { "use strict"
+    if (typeof editorModifierTagsList !== 'object') {
+        console.error('editorModifierTagsList missing...')
+        return
+    }
-    var styleSheet = document.createElement("style");
+    const styleSheet = document.createElement("style");
    styleSheet.textContent = `
        .modifier-card-tiny.drag-sort-active {
            background: transparent;
@ -12,7 +15,7 @@
    document.head.appendChild(styleSheet);
    // observe for changes in tag list
-    var observer = new MutationObserver(function (mutations) {
+    const observer = new MutationObserver(function (mutations) {
        // mutations.forEach(function (mutation) {
        if (editorModifierTagsList.childNodes.length > 0) {
            ModifierDragAndDrop(editorModifierTagsList)
View File

@ -1,8 +1,11 @@
-(function () {
-    "use strict"
+(function () { "use strict"
+    if (typeof editorModifierTagsList !== 'object') {
+        console.error('editorModifierTagsList missing...')
+        return
+    }
    // observe for changes in tag list
-    var observer = new MutationObserver(function (mutations) {
+    const observer = new MutationObserver(function (mutations) {
        // mutations.forEach(function (mutation) {
        if (editorModifierTagsList.childNodes.length > 0) {
            ModifierMouseWheel(editorModifierTagsList)
@ -18,40 +21,42 @@
    let overlays = document.querySelector('#editor-inputs-tags-list').querySelectorAll('.modifier-card-overlay')
    overlays.forEach (i => {
        i.onwheel = (e) => {
-            e.preventDefault()
+            if (e.ctrlKey == true) {
+                e.preventDefault()

                const delta = Math.sign(event.deltaY)
                let s = i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText
                if (delta < 0) {
                    // wheel scrolling up
                    if (s.substring(0, 1) == '[' && s.substring(s.length-1) == ']') {
                        s = s.substring(1, s.length - 1)
                    }
                    else
                    {
                        if (s.substring(0, 10) !== '('.repeat(10) && s.substring(s.length-10) !== ')'.repeat(10)) {
                            s = '(' + s + ')'
                        }
                    }
                }
                else{
                    // wheel scrolling down
                    if (s.substring(0, 1) == '(' && s.substring(s.length-1) == ')') {
                        s = s.substring(1, s.length - 1)
                    }
                    else
                    {
                        if (s.substring(0, 10) !== '['.repeat(10) && s.substring(s.length-10) !== ']'.repeat(10)) {
                            s = '[' + s + ']'
                        }
                    }
                }
                i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText = s
                // update activeTags
                for (let it = 0; it < overlays.length; it++) {
                    if (i == overlays[it]) {
                        activeTags[it].name = s
                        break
                    }
                }
+            }
        }

View File

@ -0,0 +1,29 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Jasmine Spec Runner v4.5.0</title>
<link rel="shortcut icon" type="image/png" href="./jasmine/jasmine_favicon.png">
<link rel="stylesheet" href="./jasmine/jasmine.css">
<script src="./jasmine/jasmine.js"></script>
<script src="./jasmine/jasmine-html.js"></script>
<script src="./jasmine/boot0.js"></script>
<!-- optional: include a file here that configures the Jasmine env -->
<script src="./jasmine/boot1.js"></script>
<!-- include source files here... -->
<script src="/media/js/utils.js?v=4"></script>
<script src="/media/js/engine.js?v=1"></script>
<!-- <script src="./engine.js?v=1"></script> -->
<script src="/media/js/plugins.js?v=1"></script>
<!-- include spec files here... -->
<script src="./jasmineSpec.js"></script>
</head>
<body>
</body>
</html>

View File

@ -0,0 +1,31 @@
(function() {
PLUGINS['MODIFIERS_LOAD'].push({
loader: function() {
let customModifiers = localStorage.getItem(CUSTOM_MODIFIERS_KEY, '')
customModifiersTextBox.value = customModifiers
if (customModifiersGroupElement !== undefined) {
customModifiersGroupElement.remove()
}
if (customModifiers && customModifiers.trim() !== '') {
customModifiers = customModifiers.split('\n')
customModifiers = customModifiers.filter(m => m.trim() !== '')
customModifiers = customModifiers.map(function(m) {
return {
"modifier": m
}
})
let customGroup = {
'category': 'Custom Modifiers',
'modifiers': customModifiers
}
customModifiersGroupElement = createModifierGroup(customGroup, true)
createCollapsibles(customModifiersGroupElement)
}
}
})
})()

View File

@ -0,0 +1,64 @@
/*
Copyright (c) 2008-2022 Pivotal Labs
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/**
This file starts the process of "booting" Jasmine. It initializes Jasmine,
makes its globals available, and creates the env. This file should be loaded
after `jasmine.js` and `jasmine_html.js`, but before `boot1.js` or any project
source files or spec files are loaded.
*/
(function() {
const jasmineRequire = window.jasmineRequire || require('./jasmine.js');
/**
* ## Require &amp; Instantiate
*
* Require Jasmine's core files. Specifically, this requires and attaches all of Jasmine's code to the `jasmine` reference.
*/
const jasmine = jasmineRequire.core(jasmineRequire),
global = jasmine.getGlobal();
global.jasmine = jasmine;
/**
* Since this is being run in a browser and the results should populate to an HTML page, require the HTML-specific Jasmine code, injecting the same reference.
*/
jasmineRequire.html(jasmine);
/**
* Create the Jasmine environment. This is used to run all specs in a project.
*/
const env = jasmine.getEnv();
/**
* ## The Global Interface
*
* Build up the functions that will be exposed as the Jasmine public interface. A project can customize, rename or alias any of these functions as desired, provided the implementation remains unchanged.
*/
const jasmineInterface = jasmineRequire.interface(jasmine, env);
/**
* Add all of the Jasmine global/public interface to the global scope, so a project can use the public interface directly. For example, calling `describe` in specs instead of `jasmine.getEnv().describe`.
*/
for (const property in jasmineInterface) {
global[property] = jasmineInterface[property];
}
})();

View File

@ -0,0 +1,132 @@
/*
Copyright (c) 2008-2022 Pivotal Labs
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/**
This file finishes 'booting' Jasmine, performing all of the necessary
initialization before executing the loaded environment and all of a project's
specs. This file should be loaded after `boot0.js` but before any project
source files or spec files are loaded. Thus this file can also be used to
customize Jasmine for a project.
If a project is using Jasmine via the standalone distribution, this file can
be customized directly. If you only wish to configure the Jasmine env, you
can load another file that calls `jasmine.getEnv().configure({...})`
after `boot0.js` is loaded and before this file is loaded.
*/
(function() {
const env = jasmine.getEnv();
/**
* ## Runner Parameters
*
* More browser specific code - wrap the query string in an object and to allow for getting/setting parameters from the runner user interface.
*/
const queryString = new jasmine.QueryString({
getWindowLocation: function() {
return window.location;
}
});
const filterSpecs = !!queryString.getParam('spec');
const config = {
stopOnSpecFailure: queryString.getParam('stopOnSpecFailure'),
stopSpecOnExpectationFailure: queryString.getParam(
'stopSpecOnExpectationFailure'
),
hideDisabled: queryString.getParam('hideDisabled')
};
const random = queryString.getParam('random');
if (random !== undefined && random !== '') {
config.random = random;
}
const seed = queryString.getParam('seed');
if (seed) {
config.seed = seed;
}
/**
* ## Reporters
* The `HtmlReporter` builds all of the HTML UI for the runner page. This reporter paints the dots, stars, and x's for specs, as well as all spec names and all failures (if any).
*/
const htmlReporter = new jasmine.HtmlReporter({
env: env,
navigateWithNewParam: function(key, value) {
return queryString.navigateWithNewParam(key, value);
},
addToExistingQueryString: function(key, value) {
return queryString.fullStringWithNewParam(key, value);
},
getContainer: function() {
return document.body;
},
createElement: function() {
return document.createElement.apply(document, arguments);
},
createTextNode: function() {
return document.createTextNode.apply(document, arguments);
},
timer: new jasmine.Timer(),
filterSpecs: filterSpecs
});
/**
* The `jsApiReporter` also receives spec results, and is used by any environment that needs to extract the results from JavaScript.
*/
env.addReporter(jsApiReporter);
env.addReporter(htmlReporter);
/**
* Filter which specs will be run by matching the start of the full name against the `spec` query param.
*/
const specFilter = new jasmine.HtmlSpecFilter({
filterString: function() {
return queryString.getParam('spec');
}
});
config.specFilter = function(spec) {
return specFilter.matches(spec.getFullName());
};
env.configure(config);
/**
* ## Execution
*
* Replace the browser window's `onload`, ensure it's called, and then run all of the loaded specs. This includes initializing the `HtmlReporter` instance and then executing the loaded Jasmine environment. All of this will happen after all of the specs are loaded.
*/
const currentWindowOnload = window.onload;
window.onload = function() {
if (currentWindowOnload) {
currentWindowOnload();
}
htmlReporter.initialize();
env.execute();
};
})();

View File

@ -0,0 +1,964 @@
/*
Copyright (c) 2008-2022 Pivotal Labs
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
// eslint-disable-next-line no-var
var jasmineRequire = window.jasmineRequire || require('./jasmine.js');
jasmineRequire.html = function(j$) {
j$.ResultsNode = jasmineRequire.ResultsNode();
j$.HtmlReporter = jasmineRequire.HtmlReporter(j$);
j$.QueryString = jasmineRequire.QueryString();
j$.HtmlSpecFilter = jasmineRequire.HtmlSpecFilter();
};
jasmineRequire.HtmlReporter = function(j$) {
function ResultsStateBuilder() {
this.topResults = new j$.ResultsNode({}, '', null);
this.currentParent = this.topResults;
this.specsExecuted = 0;
this.failureCount = 0;
this.pendingSpecCount = 0;
}
ResultsStateBuilder.prototype.suiteStarted = function(result) {
this.currentParent.addChild(result, 'suite');
this.currentParent = this.currentParent.last();
};
ResultsStateBuilder.prototype.suiteDone = function(result) {
this.currentParent.updateResult(result);
if (this.currentParent !== this.topResults) {
this.currentParent = this.currentParent.parent;
}
if (result.status === 'failed') {
this.failureCount++;
}
};
ResultsStateBuilder.prototype.specStarted = function(result) {};
ResultsStateBuilder.prototype.specDone = function(result) {
this.currentParent.addChild(result, 'spec');
if (result.status !== 'excluded') {
this.specsExecuted++;
}
if (result.status === 'failed') {
this.failureCount++;
}
if (result.status == 'pending') {
this.pendingSpecCount++;
}
};
ResultsStateBuilder.prototype.jasmineDone = function(result) {
if (result.failedExpectations) {
this.failureCount += result.failedExpectations.length;
}
};
function HtmlReporter(options) {
function config() {
return (options.env && options.env.configuration()) || {};
}
const getContainer = options.getContainer;
const createElement = options.createElement;
const createTextNode = options.createTextNode;
const navigateWithNewParam = options.navigateWithNewParam || function() {};
const addToExistingQueryString =
options.addToExistingQueryString || defaultQueryString;
const filterSpecs = options.filterSpecs;
let htmlReporterMain;
let symbols;
const deprecationWarnings = [];
const failures = [];
this.initialize = function() {
clearPrior();
htmlReporterMain = createDom(
'div',
{ className: 'jasmine_html-reporter' },
createDom(
'div',
{ className: 'jasmine-banner' },
createDom('a', {
className: 'jasmine-title',
href: 'http://jasmine.github.io/',
target: '_blank'
}),
createDom('span', { className: 'jasmine-version' }, j$.version)
),
createDom('ul', { className: 'jasmine-symbol-summary' }),
createDom('div', { className: 'jasmine-alert' }),
createDom(
'div',
{ className: 'jasmine-results' },
createDom('div', { className: 'jasmine-failures' })
)
);
getContainer().appendChild(htmlReporterMain);
};
let totalSpecsDefined;
this.jasmineStarted = function(options) {
totalSpecsDefined = options.totalSpecsDefined || 0;
};
const summary = createDom('div', { className: 'jasmine-summary' });
const stateBuilder = new ResultsStateBuilder();
this.suiteStarted = function(result) {
stateBuilder.suiteStarted(result);
};
this.suiteDone = function(result) {
stateBuilder.suiteDone(result);
if (result.status === 'failed') {
failures.push(failureDom(result));
}
addDeprecationWarnings(result, 'suite');
};
this.specStarted = function(result) {
stateBuilder.specStarted(result);
};
this.specDone = function(result) {
stateBuilder.specDone(result);
if (noExpectations(result)) {
const noSpecMsg = "Spec '" + result.fullName + "' has no expectations.";
if (result.status === 'failed') {
console.error(noSpecMsg);
} else {
console.warn(noSpecMsg);
}
}
if (!symbols) {
symbols = find('.jasmine-symbol-summary');
}
symbols.appendChild(
createDom('li', {
className: this.displaySpecInCorrectFormat(result),
id: 'spec_' + result.id,
title: result.fullName
})
);
if (result.status === 'failed') {
failures.push(failureDom(result));
}
addDeprecationWarnings(result, 'spec');
};
this.displaySpecInCorrectFormat = function(result) {
return noExpectations(result) && result.status === 'passed'
? 'jasmine-empty'
: this.resultStatus(result.status);
};
this.resultStatus = function(status) {
if (status === 'excluded') {
return config().hideDisabled
? 'jasmine-excluded-no-display'
: 'jasmine-excluded';
}
return 'jasmine-' + status;
};
this.jasmineDone = function(doneResult) {
stateBuilder.jasmineDone(doneResult);
const banner = find('.jasmine-banner');
const alert = find('.jasmine-alert');
const order = doneResult && doneResult.order;
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-duration' },
'finished in ' + doneResult.totalTime / 1000 + 's'
)
);
banner.appendChild(optionsMenu(config()));
if (stateBuilder.specsExecuted < totalSpecsDefined) {
const skippedMessage =
'Ran ' +
stateBuilder.specsExecuted +
' of ' +
totalSpecsDefined +
' specs - run all';
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
const skippedLink =
(window.location.pathname || '') +
addToExistingQueryString('spec', '');
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-bar jasmine-skipped' },
createDom(
'a',
{ href: skippedLink, title: 'Run all specs' },
skippedMessage
)
)
);
}
let statusBarMessage = '';
let statusBarClassName = 'jasmine-overall-result jasmine-bar ';
const globalFailures =
(doneResult && doneResult.failedExpectations) || [];
const failed = stateBuilder.failureCount + globalFailures.length > 0;
if (totalSpecsDefined > 0 || failed) {
statusBarMessage +=
pluralize('spec', stateBuilder.specsExecuted) +
', ' +
pluralize('failure', stateBuilder.failureCount);
if (stateBuilder.pendingSpecCount) {
statusBarMessage +=
', ' + pluralize('pending spec', stateBuilder.pendingSpecCount);
}
}
if (doneResult.overallStatus === 'passed') {
statusBarClassName += ' jasmine-passed ';
} else if (doneResult.overallStatus === 'incomplete') {
statusBarClassName += ' jasmine-incomplete ';
statusBarMessage =
'Incomplete: ' +
doneResult.incompleteReason +
', ' +
statusBarMessage;
} else {
statusBarClassName += ' jasmine-failed ';
}
let seedBar;
if (order && order.random) {
seedBar = createDom(
'span',
{ className: 'jasmine-seed-bar' },
', randomized with seed ',
createDom(
'a',
{
title: 'randomized with seed ' + order.seed,
href: seedHref(order.seed)
},
order.seed
)
);
}
alert.appendChild(
createDom(
'span',
{ className: statusBarClassName },
statusBarMessage,
seedBar
)
);
const errorBarClassName = 'jasmine-bar jasmine-errored';
const afterAllMessagePrefix = 'AfterAll ';
for (let i = 0; i < globalFailures.length; i++) {
alert.appendChild(
createDom(
'span',
{ className: errorBarClassName },
globalFailureMessage(globalFailures[i])
)
);
}
function globalFailureMessage(failure) {
if (failure.globalErrorType === 'load') {
const prefix = 'Error during loading: ' + failure.message;
if (failure.filename) {
return (
prefix + ' in ' + failure.filename + ' line ' + failure.lineno
);
} else {
return prefix;
}
} else if (failure.globalErrorType === 'afterAll') {
return afterAllMessagePrefix + failure.message;
} else {
return failure.message;
}
}
addDeprecationWarnings(doneResult);
for (let i = 0; i < deprecationWarnings.length; i++) {
const children = [];
let context;
switch (deprecationWarnings[i].runnableType) {
case 'spec':
context = '(in spec: ' + deprecationWarnings[i].runnableName + ')';
break;
case 'suite':
context = '(in suite: ' + deprecationWarnings[i].runnableName + ')';
break;
default:
context = '';
}
deprecationWarnings[i].message.split('\n').forEach(function(line) {
children.push(line);
children.push(createDom('br'));
});
children[0] = 'DEPRECATION: ' + children[0];
children.push(context);
if (deprecationWarnings[i].stack) {
children.push(createExpander(deprecationWarnings[i].stack));
}
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-bar jasmine-warning' },
children
)
);
}
const results = find('.jasmine-results');
results.appendChild(summary);
summaryList(stateBuilder.topResults, summary);
if (failures.length) {
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-menu jasmine-bar jasmine-spec-list' },
createDom('span', {}, 'Spec List | '),
createDom(
'a',
{ className: 'jasmine-failures-menu', href: '#' },
'Failures'
)
)
);
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-menu jasmine-bar jasmine-failure-list' },
createDom(
'a',
{ className: 'jasmine-spec-list-menu', href: '#' },
'Spec List'
),
createDom('span', {}, ' | Failures ')
)
);
find('.jasmine-failures-menu').onclick = function() {
setMenuModeTo('jasmine-failure-list');
return false;
};
find('.jasmine-spec-list-menu').onclick = function() {
setMenuModeTo('jasmine-spec-list');
return false;
};
setMenuModeTo('jasmine-failure-list');
const failureNode = find('.jasmine-failures');
for (let i = 0; i < failures.length; i++) {
failureNode.appendChild(failures[i]);
}
}
};
return this;
function failureDom(result) {
const failure = createDom(
'div',
{ className: 'jasmine-spec-detail jasmine-failed' },
failureDescription(result, stateBuilder.currentParent),
createDom('div', { className: 'jasmine-messages' })
);
const messages = failure.childNodes[1];
for (let i = 0; i < result.failedExpectations.length; i++) {
const expectation = result.failedExpectations[i];
messages.appendChild(
createDom(
'div',
{ className: 'jasmine-result-message' },
expectation.message
)
);
messages.appendChild(
createDom(
'div',
{ className: 'jasmine-stack-trace' },
expectation.stack
)
);
}
if (result.failedExpectations.length === 0) {
messages.appendChild(
createDom(
'div',
{ className: 'jasmine-result-message' },
'Spec has no expectations'
)
);
}
if (result.debugLogs) {
messages.appendChild(debugLogTable(result.debugLogs));
}
return failure;
}
function debugLogTable(debugLogs) {
const tbody = createDom('tbody');
debugLogs.forEach(function(entry) {
tbody.appendChild(
createDom(
'tr',
{},
createDom('td', {}, entry.timestamp.toString()),
createDom('td', {}, entry.message)
)
);
});
return createDom(
'div',
{ className: 'jasmine-debug-log' },
createDom(
'div',
{ className: 'jasmine-debug-log-header' },
'Debug logs'
),
createDom(
'table',
{},
createDom(
'thead',
{},
createDom(
'tr',
{},
createDom('th', {}, 'Time (ms)'),
createDom('th', {}, 'Message')
)
),
tbody
)
);
}
function summaryList(resultsTree, domParent) {
let specListNode;
for (let i = 0; i < resultsTree.children.length; i++) {
const resultNode = resultsTree.children[i];
if (filterSpecs && !hasActiveSpec(resultNode)) {
continue;
}
if (resultNode.type === 'suite') {
const suiteListNode = createDom(
'ul',
{ className: 'jasmine-suite', id: 'suite-' + resultNode.result.id },
createDom(
'li',
{
className:
'jasmine-suite-detail jasmine-' + resultNode.result.status
},
createDom(
'a',
{ href: specHref(resultNode.result) },
resultNode.result.description
)
)
);
summaryList(resultNode, suiteListNode);
domParent.appendChild(suiteListNode);
}
if (resultNode.type === 'spec') {
if (domParent.getAttribute('class') !== 'jasmine-specs') {
specListNode = createDom('ul', { className: 'jasmine-specs' });
domParent.appendChild(specListNode);
}
let specDescription = resultNode.result.description;
if (noExpectations(resultNode.result)) {
specDescription = 'SPEC HAS NO EXPECTATIONS ' + specDescription;
}
if (
resultNode.result.status === 'pending' &&
resultNode.result.pendingReason !== ''
) {
specDescription =
specDescription +
' PENDING WITH MESSAGE: ' +
resultNode.result.pendingReason;
}
specListNode.appendChild(
createDom(
'li',
{
className: 'jasmine-' + resultNode.result.status,
id: 'spec-' + resultNode.result.id
},
createDom(
'a',
{ href: specHref(resultNode.result) },
specDescription
)
)
);
}
}
}
function optionsMenu(config) {
const optionsMenuDom = createDom(
'div',
{ className: 'jasmine-run-options' },
createDom('span', { className: 'jasmine-trigger' }, 'Options'),
createDom(
'div',
{ className: 'jasmine-payload' },
createDom(
'div',
{ className: 'jasmine-stop-on-failure' },
createDom('input', {
className: 'jasmine-fail-fast',
id: 'jasmine-fail-fast',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-fail-fast' },
'stop execution on spec failure'
)
),
createDom(
'div',
{ className: 'jasmine-throw-failures' },
createDom('input', {
className: 'jasmine-throw',
id: 'jasmine-throw-failures',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-throw-failures' },
'stop spec on expectation failure'
)
),
createDom(
'div',
{ className: 'jasmine-random-order' },
createDom('input', {
className: 'jasmine-random',
id: 'jasmine-random-order',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-random-order' },
'run tests in random order'
)
),
createDom(
'div',
{ className: 'jasmine-hide-disabled' },
createDom('input', {
className: 'jasmine-disabled',
id: 'jasmine-hide-disabled',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-hide-disabled' },
'hide disabled tests'
)
)
)
);
const failFastCheckbox = optionsMenuDom.querySelector(
'#jasmine-fail-fast'
);
failFastCheckbox.checked = config.stopOnSpecFailure;
failFastCheckbox.onclick = function() {
navigateWithNewParam('stopOnSpecFailure', !config.stopOnSpecFailure);
};
const throwCheckbox = optionsMenuDom.querySelector(
'#jasmine-throw-failures'
);
throwCheckbox.checked = config.stopSpecOnExpectationFailure;
throwCheckbox.onclick = function() {
navigateWithNewParam(
'stopSpecOnExpectationFailure',
!config.stopSpecOnExpectationFailure
);
};
const randomCheckbox = optionsMenuDom.querySelector(
'#jasmine-random-order'
);
randomCheckbox.checked = config.random;
randomCheckbox.onclick = function() {
navigateWithNewParam('random', !config.random);
};
const hideDisabled = optionsMenuDom.querySelector(
'#jasmine-hide-disabled'
);
hideDisabled.checked = config.hideDisabled;
hideDisabled.onclick = function() {
navigateWithNewParam('hideDisabled', !config.hideDisabled);
};
const optionsTrigger = optionsMenuDom.querySelector('.jasmine-trigger'),
optionsPayload = optionsMenuDom.querySelector('.jasmine-payload'),
isOpen = /\bjasmine-open\b/;
optionsTrigger.onclick = function() {
if (isOpen.test(optionsPayload.className)) {
optionsPayload.className = optionsPayload.className.replace(
isOpen,
''
);
} else {
optionsPayload.className += ' jasmine-open';
}
};
return optionsMenuDom;
}
function failureDescription(result, suite) {
const wrapper = createDom(
'div',
{ className: 'jasmine-description' },
createDom(
'a',
{ title: result.description, href: specHref(result) },
result.description
)
);
let suiteLink;
while (suite && suite.parent) {
wrapper.insertBefore(createTextNode(' > '), wrapper.firstChild);
suiteLink = createDom(
'a',
{ href: suiteHref(suite) },
suite.result.description
);
wrapper.insertBefore(suiteLink, wrapper.firstChild);
suite = suite.parent;
}
return wrapper;
}
function suiteHref(suite) {
const els = [];
while (suite && suite.parent) {
els.unshift(suite.result.description);
suite = suite.parent;
}
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
return (
(window.location.pathname || '') +
addToExistingQueryString('spec', els.join(' '))
);
}
function addDeprecationWarnings(result, runnableType) {
if (result && result.deprecationWarnings) {
for (let i = 0; i < result.deprecationWarnings.length; i++) {
const warning = result.deprecationWarnings[i].message;
deprecationWarnings.push({
message: warning,
stack: result.deprecationWarnings[i].stack,
runnableName: result.fullName,
runnableType: runnableType
});
}
}
}
function createExpander(stackTrace) {
const expandLink = createDom('a', { href: '#' }, 'Show stack trace');
const root = createDom(
'div',
{ className: 'jasmine-expander' },
expandLink,
createDom(
'div',
{ className: 'jasmine-expander-contents jasmine-stack-trace' },
stackTrace
)
);
expandLink.addEventListener('click', function(e) {
e.preventDefault();
if (root.classList.contains('jasmine-expanded')) {
root.classList.remove('jasmine-expanded');
expandLink.textContent = 'Show stack trace';
} else {
root.classList.add('jasmine-expanded');
expandLink.textContent = 'Hide stack trace';
}
});
return root;
}
function find(selector) {
return getContainer().querySelector('.jasmine_html-reporter ' + selector);
}
function clearPrior() {
const oldReporter = find('');
if (oldReporter) {
getContainer().removeChild(oldReporter);
}
}
function createDom(type, attrs, childrenArrayOrVarArgs) {
const el = createElement(type);
let children;
if (j$.isArray_(childrenArrayOrVarArgs)) {
children = childrenArrayOrVarArgs;
} else {
children = [];
for (let i = 2; i < arguments.length; i++) {
children.push(arguments[i]);
}
}
for (let i = 0; i < children.length; i++) {
const child = children[i];
if (typeof child === 'string') {
el.appendChild(createTextNode(child));
} else {
if (child) {
el.appendChild(child);
}
}
}
for (const attr in attrs) {
if (attr == 'className') {
el[attr] = attrs[attr];
} else {
el.setAttribute(attr, attrs[attr]);
}
}
return el;
}
function pluralize(singular, count) {
const word = count == 1 ? singular : singular + 's';
return '' + count + ' ' + word;
}
function specHref(result) {
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
return (
(window.location.pathname || '') +
addToExistingQueryString('spec', result.fullName)
);
}
function seedHref(seed) {
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
return (
(window.location.pathname || '') +
addToExistingQueryString('seed', seed)
);
}
function defaultQueryString(key, value) {
return '?' + key + '=' + value;
}
function setMenuModeTo(mode) {
htmlReporterMain.setAttribute('class', 'jasmine_html-reporter ' + mode);
}
function noExpectations(result) {
const allExpectations =
result.failedExpectations.length + result.passedExpectations.length;
return (
allExpectations === 0 &&
(result.status === 'passed' || result.status === 'failed')
);
}
function hasActiveSpec(resultNode) {
if (resultNode.type == 'spec' && resultNode.result.status != 'excluded') {
return true;
}
if (resultNode.type == 'suite') {
for (let i = 0, j = resultNode.children.length; i < j; i++) {
if (hasActiveSpec(resultNode.children[i])) {
return true;
}
}
}
}
}
return HtmlReporter;
};
jasmineRequire.HtmlSpecFilter = function() {
function HtmlSpecFilter(options) {
const filterString =
options &&
options.filterString() &&
options.filterString().replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&');
const filterPattern = new RegExp(filterString);
this.matches = function(specName) {
return filterPattern.test(specName);
};
}
return HtmlSpecFilter;
};
jasmineRequire.ResultsNode = function() {
function ResultsNode(result, type, parent) {
this.result = result;
this.type = type;
this.parent = parent;
this.children = [];
this.addChild = function(result, type) {
this.children.push(new ResultsNode(result, type, this));
};
this.last = function() {
return this.children[this.children.length - 1];
};
this.updateResult = function(result) {
this.result = result;
};
}
return ResultsNode;
};
jasmineRequire.QueryString = function() {
function QueryString(options) {
this.navigateWithNewParam = function(key, value) {
options.getWindowLocation().search = this.fullStringWithNewParam(
key,
value
);
};
this.fullStringWithNewParam = function(key, value) {
const paramMap = queryStringToParamMap();
paramMap[key] = value;
return toQueryString(paramMap);
};
this.getParam = function(key) {
return queryStringToParamMap()[key];
};
return this;
function toQueryString(paramMap) {
const qStrPairs = [];
for (const prop in paramMap) {
qStrPairs.push(
encodeURIComponent(prop) + '=' + encodeURIComponent(paramMap[prop])
);
}
return '?' + qStrPairs.join('&');
}
function queryStringToParamMap() {
const paramStr = options.getWindowLocation().search.substring(1);
let params = [];
const paramMap = {};
if (paramStr.length > 0) {
params = paramStr.split('&');
for (let i = 0; i < params.length; i++) {
const p = params[i].split('=');
let value = decodeURIComponent(p[1]);
if (value === 'true' || value === 'false') {
value = JSON.parse(value);
}
paramMap[decodeURIComponent(p[0])] = value;
}
}
return paramMap;
}
}
return QueryString;
};

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

Binary file not shown (added image, 1.5 KiB).

View File

@ -0,0 +1,412 @@
"use strict"
const JASMINE_SESSION_ID = `jasmine-${String(Date.now()).slice(8)}`
beforeEach(function () {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 15 * 60 * 1000 // Test timeout after 15 minutes
jasmine.addMatchers({
toBeOneOf: function () {
return {
compare: function (actual, expected) {
return {
pass: expected.includes(actual)
}
}
}
}
})
})
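The custom toBeOneOf matcher registered above can then be used from any spec; a brief illustrative example, not part of the diff:
    it('accepts any of the allowed statuses', function() {
        expect('succeeded').toBeOneOf(['pending', 'succeeded', 'failed'])
    })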
describe('stable-diffusion-ui', function() {
beforeEach(function() {
expect(typeof SD).toBe('object')
expect(typeof SD.serverState).toBe('object')
expect(typeof SD.serverState.status).toBe('string')
})
it('should be able to reach the backend', async function() {
expect(SD.serverState.status).toBe(SD.ServerStates.unavailable)
SD.sessionId = JASMINE_SESSION_ID
await SD.init()
expect(SD.isServerAvailable()).toBeTrue()
})
it('enforces the current task state', function() {
const task = new SD.Task()
expect(task.status).toBe(SD.TaskStatus.init)
expect(task.isPending).toBeTrue()
task._setStatus(SD.TaskStatus.pending)
expect(task.status).toBe(SD.TaskStatus.pending)
expect(task.isPending).toBeTrue()
expect(function() {
task._setStatus(SD.TaskStatus.init)
}).toThrowError()
task._setStatus(SD.TaskStatus.waiting)
expect(task.status).toBe(SD.TaskStatus.waiting)
expect(task.isPending).toBeTrue()
expect(function() {
task._setStatus(SD.TaskStatus.pending)
}).toThrowError()
task._setStatus(SD.TaskStatus.processing)
expect(task.status).toBe(SD.TaskStatus.processing)
expect(task.isPending).toBeTrue()
expect(function() {
task._setStatus(SD.TaskStatus.pending)
}).toThrowError()
task._setStatus(SD.TaskStatus.failed)
expect(task.status).toBe(SD.TaskStatus.failed)
expect(task.isPending).toBeFalse()
expect(function() {
task._setStatus(SD.TaskStatus.processing)
}).toThrowError()
expect(function() {
task._setStatus(SD.TaskStatus.completed)
}).toThrowError()
})
it('should be able to run tasks', async function() {
expect(typeof SD.Task.run).toBe('function')
const promiseGenerator = (function*(val) {
expect(val).toBe('start')
expect(yield 1 + 1).toBe(4)
expect(yield 2 + 2).toBe(8)
yield asyncDelay(500)
expect(yield 3 + 3).toBe(12)
expect(yield 4 + 4).toBe(16)
return 8 + 8
})('start')
const callback = function({value, done}) {
return {value: 2 * value, done}
}
expect(await SD.Task.run(promiseGenerator, {callback})).toBe(32)
})
it('should be able to queue tasks', async function() {
expect(typeof SD.Task.enqueue).toBe('function')
const promiseGenerator = (function*(val) {
expect(val).toBe('start')
expect(yield 1 + 1).toBe(4)
expect(yield 2 + 2).toBe(8)
yield asyncDelay(500)
expect(yield 3 + 3).toBe(12)
expect(yield 4 + 4).toBe(16)
return 8 + 8
})('start')
const callback = function({value, done}) {
return {value: 2 * value, done}
}
const gen = SD.Task.asGenerator({generator: promiseGenerator, callback})
expect(await SD.Task.enqueue(gen)).toBe(32)
})
it('should be able to chain handlers', async function() {
expect(typeof SD.Task.enqueue).toBe('function')
const promiseGenerator = (function*(val) {
expect(val).toBe('start')
expect(yield {test: '1'}).toEqual({test: '1', foo: 'bar'})
expect(yield 2 + 2).toEqual(8)
yield asyncDelay(500)
expect(yield 3 + 3).toEqual(12)
expect(yield {test: 4}).toEqual({test: 8, foo: 'bar'})
return {test: 8}
})('start')
const gen1 = SD.Task.asGenerator({generator: promiseGenerator, callback: function({value, done}) {
if (typeof value === "object") {
value['foo'] = 'bar'
}
return {value, done}
}})
const gen2 = SD.Task.asGenerator({generator: gen1, callback: function({value, done}) {
if (typeof value === 'number') {
value = 2 * value
}
if (typeof value === 'object' && typeof value.test === 'number') {
value.test = 2 * value.test
}
return {value, done}
}})
expect(await SD.Task.enqueue(gen2)).toEqual({test:32, foo: 'bar'})
})
describe('ServiceContainer', function() {
it('should be able to register providers', function() {
const cont = new ServiceContainer(
function foo() {
this.bar = ''
},
function bar() {
return () => 0
},
{ name: 'zero', definition: 0 },
{ name: 'ctx', definition: () => Object.create(null), singleton: true },
{ name: 'test',
definition: (ctx, missing, one, foo) => {
expect(ctx).toEqual({ran: true})
expect(one).toBe(1)
expect(typeof foo).toBe('object')
expect(foo.bar).toBeDefined()
expect(typeof missing).toBe('undefined')
return {foo: 'bar'}
}, dependencies: ['ctx', 'missing', 'one', 'foo']
}
)
const fooObj = cont.get('foo')
expect(typeof fooObj).toBe('object')
fooObj.ran = true
const ctx = cont.get('ctx')
expect(ctx).toEqual({})
ctx.ran = true
const bar = cont.get('bar')
expect(typeof bar).toBe('function')
expect(bar()).toBe(0)
cont.register({name: 'one', definition: 1})
const test = cont.get('test')
expect(typeof test).toBe('object')
expect(test.foo).toBe('bar')
})
})
it('should be able to stream data in chunks', async function() {
expect(SD.isServerAvailable()).toBeTrue()
const nbr_steps = 15
let res = await fetch('/render', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"prompt": "a photograph of an astronaut riding a horse",
"negative_prompt": "",
"width": 128,
"height": 128,
"seed": Math.floor(Math.random() * 10000000),
"sampler": "plms",
"use_stable_diffusion_model": "sd-v1-4",
"num_inference_steps": nbr_steps,
"guidance_scale": 7.5,
"numOutputsParallel": 1,
"stream_image_progress": true,
"show_only_filtered_image": true,
"output_format": "jpeg",
"session_id": JASMINE_SESSION_ID,
}),
})
expect(res.ok).toBeTruthy()
const renderRequest = await res.json()
expect(typeof renderRequest.stream).toBe('string')
expect(renderRequest.task).toBeDefined()
// Wait for server status to update.
await SD.waitUntil(() => {
console.log('Waiting for %s to be received...', renderRequest.task)
return (!SD.serverState.tasks || SD.serverState.tasks[String(renderRequest.task)])
}, 250, 10 * 60 * 1000)
// Wait for task to start on server.
await SD.waitUntil(() => {
console.log('Waiting for %s to start...', renderRequest.task)
return !SD.serverState.tasks || SD.serverState.tasks[String(renderRequest.task)] !== 'pending'
}, 250)
const reader = new SD.ChunkedStreamReader(renderRequest.stream)
const parseToString = reader.parse
reader.parse = function(value) {
value = parseToString.call(this, value)
if (!value || value.length <= 0) {
return
}
return reader.readStreamAsJSON(value.join(''))
}
reader.onNext = function({done, value}) {
console.log(value)
if (typeof value === 'object' && 'status' in value) {
done = true
}
return {done, value}
}
let lastUpdate = undefined
let stepCount = 0
let complete = false
//for await (const stepUpdate of reader) {
for await (const stepUpdate of reader.open()) {
console.log('ChunkedStreamReader received ', stepUpdate)
lastUpdate = stepUpdate
if (complete) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(1)
} else {
expect(stepUpdate.total_steps).toBe(nbr_steps)
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
for(let i=1; i <= 5; ++i) {
res = await fetch(renderRequest.stream)
expect(res.ok).toBeTruthy()
const cachedResponse = await res.json()
console.log('Cache test %s received %o', i, cachedResponse)
expect(lastUpdate).toEqual(cachedResponse)
}
})
describe('should be able to make renders', function() {
beforeEach(function() {
expect(SD.isServerAvailable()).toBeTrue()
})
it('basic inline request', async function() {
let stepCount = 0
let complete = false
const result = await SD.render({
"prompt": "a photograph of an astronaut riding a horse",
"width": 128,
"height": 128,
"num_inference_steps": 10,
"show_only_filtered_image": false,
//"use_face_correction": 'GFPGANv1.3',
"use_upscale": "RealESRGAN_x4plus",
"session_id": JASMINE_SESSION_ID,
}, function(event) {
console.log(this, event)
if ('update' in event) {
const stepUpdate = event.update
if (complete || (stepUpdate.status && stepUpdate.step === stepUpdate.total_steps)) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(2)
} else {
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
})
console.log(result)
expect(result.status).toBe('succeeded')
expect(result.output).toHaveSize(2)
})
it('post and reader request', async function() {
const renderTask = new SD.RenderTask({
"prompt": "a photograph of an astronaut riding a horse",
"width": 128,
"height": 128,
"seed": SD.MAX_SEED_VALUE,
"num_inference_steps": 10,
"session_id": JASMINE_SESSION_ID,
})
expect(renderTask.status).toBe(SD.TaskStatus.init)
const timeout = -1
const renderRequest = await renderTask.post(timeout)
expect(typeof renderRequest.stream).toBe('string')
expect(renderTask.status).toBe(SD.TaskStatus.waiting)
expect(renderTask.streamUrl).toBe(renderRequest.stream)
await renderTask.waitUntil({state: SD.TaskStatus.processing, callback: () => console.log('Waiting for render task to start...') })
expect(renderTask.status).toBe(SD.TaskStatus.processing)
let stepCount = 0
let complete = false
//for await (const stepUpdate of renderTask.reader) {
for await (const stepUpdate of renderTask.reader.open()) {
console.log(stepUpdate)
if (complete || (stepUpdate.status && stepUpdate.step === stepUpdate.total_steps)) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(1)
} else {
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
expect(renderTask.status).toBe(SD.TaskStatus.completed)
expect(renderTask.result.status).toBe('succeeded')
expect(renderTask.result.output).toHaveSize(1)
})
it('queued request', async function() {
let stepCount = 0
let complete = false
const renderTask = new SD.RenderTask({
"prompt": "a photograph of an astronaut riding a horse",
"width": 128,
"height": 128,
"num_inference_steps": 10,
"show_only_filtered_image": false,
//"use_face_correction": 'GFPGANv1.3',
"use_upscale": "RealESRGAN_x4plus",
"session_id": JASMINE_SESSION_ID,
})
await renderTask.enqueue(function(event) {
console.log(this, event)
if ('update' in event) {
const stepUpdate = event.update
if (complete || (stepUpdate.status && stepUpdate.step === stepUpdate.total_steps)) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(2)
} else {
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
})
console.log(renderTask.result)
expect(renderTask.result.status).toBe('succeeded')
expect(renderTask.result.output).toHaveSize(2)
})
})
describe('# Special cases', function() {
it('should throw an exception on set for invalid sessionId', function() {
expect(function() {
SD.sessionId = undefined
}).toThrowError("Can't set sessionId to undefined.")
})
})
})
const loadCompleted = window.onload
let loadEvent = undefined
window.onload = function(evt) {
loadEvent = evt
}
if (!PLUGINS.SELFTEST) {
PLUGINS.SELFTEST = {}
}
loadUIPlugins().then(function() {
console.log('loadCompleted', loadEvent)
describe('@Plugins', function() {
it('exposes hooks to override', function() {
expect(typeof PLUGINS.IMAGE_INFO_BUTTONS).toBe('object')
expect(typeof PLUGINS.TASK_CREATE).toBe('object')
})
describe('supports selftests', function() { // Hook to allow plugins to define tests.
const pluginsTests = Object.keys(PLUGINS.SELFTEST).filter((key) => PLUGINS.SELFTEST.hasOwnProperty(key))
if (!pluginsTests || pluginsTests.length <= 0) {
it('but nothing loaded...', function() {
expect(true).toBeTruthy()
})
return
}
for (const pTest of pluginsTests) {
describe(pTest, function() {
const testFn = PLUGINS.SELFTEST[pTest]
return Promise.resolve(testFn.call(jasmine, pTest))
})
}
})
})
loadCompleted.call(window, loadEvent)
})

View File

@ -0,0 +1,53 @@
(function () {
"use strict"
var styleSheet = document.createElement("style");
styleSheet.textContent = `
.modifier-card-tiny.modifier-toggle-inactive {
background: transparent;
border: 2px dashed red;
opacity:0.2;
}
`;
document.head.appendChild(styleSheet);
// observe for changes in tag list
var observer = new MutationObserver(function (mutations) {
// mutations.forEach(function (mutation) {
if (editorModifierTagsList.childNodes.length > 0) {
ModifierToggle()
}
// })
})
observer.observe(editorModifierTagsList, {
childList: true
})
function ModifierToggle() {
let overlays = document.querySelector('#editor-inputs-tags-list').querySelectorAll('.modifier-card-overlay')
overlays.forEach (i => {
i.oncontextmenu = (e) => {
e.preventDefault()
if (i.parentElement.classList.contains('modifier-toggle-inactive')) {
i.parentElement.classList.remove('modifier-toggle-inactive')
}
else
{
i.parentElement.classList.add('modifier-toggle-inactive')
}
// refresh activeTags
let modifierName = i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText
activeTags = activeTags.map(obj => {
if (obj.name === modifierName) {
return {...obj, inactive: (obj.element.classList.contains('modifier-toggle-inactive'))};
}
return obj;
});
console.log(activeTags)
}
})
}
})()

View File

@ -1,11 +1,21 @@
(function() {
-    document.querySelector('#tab-container').insertAdjacentHTML('beforeend', `
+    // Register selftests when loaded by jasmine.
+    if (typeof PLUGINS?.SELFTEST === 'object') {
+        PLUGINS.SELFTEST["release-notes"] = function() {
+            it('should be able to fetch CHANGES.md', async function() {
+                let releaseNotes = await fetch(`https://raw.githubusercontent.com/cmdr2/stable-diffusion-ui/main/CHANGES.md`)
+                expect(releaseNotes.status).toBe(200)
+            })
+        }
+    }
+    document.querySelector('#tab-container')?.insertAdjacentHTML('beforeend', `
        <span id="tab-news" class="tab">
            <span><i class="fa fa-bolt icon"></i> What's new?</span>
        </span>
    `)
-    document.querySelector('#tab-content-wrapper').insertAdjacentHTML('beforeend', `
+    document.querySelector('#tab-content-wrapper')?.insertAdjacentHTML('beforeend', `
        <div id="tab-content-news" class="tab-content">
            <div id="news" class="tab-content-inner">
                Loading..
@ -13,6 +23,16 @@
            </div>
    `)
+    const tabNews = document.querySelector('#tab-news')
+    if (tabNews) {
+        linkTabContents(tabNews)
+    }
+    const news = document.querySelector('#news')
+    if (!news) {
+        // news tab not found, don't exec plugin code.
+        return
+    }
    document.querySelector('body').insertAdjacentHTML('beforeend', `
        <style>
            #tab-content-news .tab-content-inner {
@ -23,25 +43,22 @@
        </style>
    `)
-    linkTabContents(document.querySelector('#tab-news'))
-    let markedScript = document.createElement('script')
-    markedScript.src = '/media/js/marked.min.js'
-    markedScript.onload = async function() {
+    loadScript('/media/js/marked.min.js').then(async function() {
        let appConfig = await fetch('/get/app_config')
+        if (!appConfig.ok) {
+            console.error('[release-notes] Failed to get app_config.')
+            return
+        }
        appConfig = await appConfig.json()
-        let updateBranch = appConfig.update_branch || 'main'
-        let news = document.querySelector('#news')
+        const updateBranch = appConfig.update_branch || 'main'
        let releaseNotes = await fetch(`https://raw.githubusercontent.com/cmdr2/stable-diffusion-ui/${updateBranch}/CHANGES.md`)
-        if (releaseNotes.status != 200) {
+        if (!releaseNotes.ok) {
+            console.error('[release-notes] Failed to get CHANGES.md.')
            return
        }
        releaseNotes = await releaseNotes.text()
        news.innerHTML = marked.parse(releaseNotes)
-    }
-    document.querySelector('body').appendChild(markedScript)
+    })
})()


@ -0,0 +1,25 @@
/* SD-UI Selftest Plugin.js
*/
(function() { "use strict"
const ID_PREFIX = "selftest-plugin"
const links = document.getElementById("community-links")
if (!links) {
console.error('%s the ID "community-links" cannot be found.', ID_PREFIX)
return
}
// Add link to Jasmine SpecRunner
const pluginLink = document.createElement('li')
const options = {
'stopSpecOnExpectationFailure': "true",
'stopOnSpecFailure': 'false',
'random': 'false',
'hideDisabled': 'false'
}
const optStr = Object.entries(options).map(([key, val]) => `${key}=${val}`).join('&')
pluginLink.innerHTML = `<a id="${ID_PREFIX}-starttest" href="${location.protocol}/plugins/core/SpecRunner.html?${optStr}" target="_blank"><i class="fa-solid fa-vial-circle-check"></i> Start SelfTest</a>`
links.appendChild(pluginLink)
console.log('%s loaded!', ID_PREFIX)
})()
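For illustration, the options above serialize in insertion order, so the injected link ultimately resolves to a URL of this shape (host and port are hypothetical; the href is built from location.protocol, which the browser resolves against the current origin):

// http://localhost:9000/plugins/core/SpecRunner.html?stopSpecOnExpectationFailure=true&stopOnSpecFailure=false&random=false&hideDisabled=false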


@ -1,108 +0,0 @@
import json
class Request:
session_id: str = "session"
prompt: str = ""
negative_prompt: str = ""
init_image: str = None # base64
mask: str = None # base64
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
width: int = 512
height: int = 512
seed: int = 42
prompt_strength: float = 0.8
sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
# allow_nsfw: bool = False
precision: str = "autocast" # or "full"
save_to_disk_path: str = None
turbo: bool = True
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
use_vae_model: str = None
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
stream_progress_updates: bool = False
stream_image_progress: bool = False
def json(self):
return {
"session_id": self.session_id,
"prompt": self.prompt,
"negative_prompt": self.negative_prompt,
"num_outputs": self.num_outputs,
"num_inference_steps": self.num_inference_steps,
"guidance_scale": self.guidance_scale,
"width": self.width,
"height": self.height,
"seed": self.seed,
"prompt_strength": self.prompt_strength,
"sampler": self.sampler,
"use_face_correction": self.use_face_correction,
"use_upscale": self.use_upscale,
"use_stable_diffusion_model": self.use_stable_diffusion_model,
"use_vae_model": self.use_vae_model,
"output_format": self.output_format,
}
def __str__(self):
return f'''
session_id: {self.session_id}
prompt: {self.prompt}
negative_prompt: {self.negative_prompt}
seed: {self.seed}
num_inference_steps: {self.num_inference_steps}
sampler: {self.sampler}
guidance_scale: {self.guidance_scale}
w: {self.width}
h: {self.height}
precision: {self.precision}
save_to_disk_path: {self.save_to_disk_path}
turbo: {self.turbo}
use_full_precision: {self.use_full_precision}
use_face_correction: {self.use_face_correction}
use_upscale: {self.use_upscale}
use_stable_diffusion_model: {self.use_stable_diffusion_model}
use_vae_model: {self.use_vae_model}
show_only_filtered_image: {self.show_only_filtered_image}
output_format: {self.output_format}
stream_progress_updates: {self.stream_progress_updates}
stream_image_progress: {self.stream_image_progress}'''
class Image:
data: str # base64
seed: int
is_nsfw: bool
path_abs: str = None
def __init__(self, data, seed):
self.data = data
self.seed = seed
def json(self):
return {
"data": self.data,
"seed": self.seed,
"path_abs": self.path_abs,
}
class Response:
request: Request
images: list
def json(self):
res = {
"status": 'succeeded',
"request": self.request.json(),
"output": [],
}
for image in self.images:
res["output"].append(image.json())
return res
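For context on the API this (now removed) module exposed: a minimal sketch of filling in a Request and serializing it, using only the fields defined above; the values are illustrative.

# Minimal sketch (illustrative values); Request is the class defined above.
req = Request()
req.prompt = "a photograph of an astronaut riding a horse"
req.seed = 42
req.sampler = "euler_a"
req.use_stable_diffusion_model = "sd-v1-4"
print(json.dumps(req.json(), indent=2))  # `json` is imported at the top of this module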


@ -1,332 +0,0 @@
diff --git a/optimizedSD/ddpm.py b/optimizedSD/ddpm.py
index b967b55..35ef520 100644
--- a/optimizedSD/ddpm.py
+++ b/optimizedSD/ddpm.py
@@ -22,7 +22,7 @@ from ldm.util import exists, default, instantiate_from_config
from ldm.modules.diffusionmodules.util import make_beta_schedule
from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like
from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from samplers import CompVisDenoiser, get_ancestral_step, to_d, append_dims,linear_multistep_coeff
+from .samplers import CompVisDenoiser, get_ancestral_step, to_d, append_dims,linear_multistep_coeff
def disabled_train(self):
"""Overwrite model.train with this function to make sure train/eval mode
@@ -506,6 +506,8 @@ class UNet(DDPM):
x_latent = noise if x0 is None else x0
# sampling
+ if sampler in ('ddim', 'dpm2', 'heun', 'dpm2_a', 'lms') and not hasattr(self, 'ddim_timesteps'):
+ self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
if sampler == "plms":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
@@ -528,39 +530,46 @@ class UNet(DDPM):
elif sampler == "ddim":
samples = self.ddim_sampling(x_latent, conditioning, S, unconditional_guidance_scale=unconditional_guidance_scale,
unconditional_conditioning=unconditional_conditioning,
- mask = mask,init_latent=x_T,use_original_steps=False)
+ mask = mask,init_latent=x_T,use_original_steps=False,
+ callback=callback, img_callback=img_callback)
elif sampler == "euler":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
samples = self.euler_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "euler_a":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
samples = self.euler_ancestral_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "dpm2":
samples = self.dpm_2_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "heun":
samples = self.heun_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "dpm2_a":
samples = self.dpm_2_ancestral_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "lms":
samples = self.lms_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
+
+ yield from samples
if(self.turbo):
self.model1.to("cpu")
self.model2.to("cpu")
- return samples
-
@torch.no_grad()
def plms_sampling(self, cond,b, img,
ddim_use_original_steps=False,
@@ -599,10 +608,10 @@ class UNet(DDPM):
old_eps.append(e_t)
if len(old_eps) >= 4:
old_eps.pop(0)
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(pred_x0, i)
- return img
+ yield from img_callback(img, len(iterator)-1)
@torch.no_grad()
def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
@@ -706,7 +715,8 @@ class UNet(DDPM):
@torch.no_grad()
def ddim_sampling(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- mask = None,init_latent=None,use_original_steps=False):
+ mask = None,init_latent=None,use_original_steps=False,
+ callback=None, img_callback=None):
timesteps = self.ddim_timesteps
timesteps = timesteps[:t_start]
@@ -730,10 +740,13 @@ class UNet(DDPM):
unconditional_guidance_scale=unconditional_guidance_scale,
unconditional_conditioning=unconditional_conditioning)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x_dec, i)
+
if mask is not None:
- return x0 * mask + (1. - mask) * x_dec
+ x_dec = x0 * mask + (1. - mask) * x_dec
- return x_dec
+ yield from img_callback(x_dec, len(iterator)-1)
@torch.no_grad()
@@ -779,13 +792,16 @@ class UNet(DDPM):
@torch.no_grad()
- def euler_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None,callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def euler_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None,callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""Implements Algorithm 2 (Euler steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
cvd = CompVisDenoiser(ac)
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Euler Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
@@ -807,13 +823,18 @@ class UNet(DDPM):
d = to_d(x, sigma_hat, denoised)
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
dt = sigmas[i + 1] - sigma_hat
# Euler method
x = x + d * dt
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None):
+ def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None,
+ img_callback=None):
"""Ancestral sampling with Euler method steps."""
extra_args = {} if extra_args is None else extra_args
@@ -822,6 +843,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Euler Ancestral Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -837,17 +860,22 @@ class UNet(DDPM):
sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
d = to_d(x, sigmas[i], denoised)
# Euler method
dt = sigma_down - sigmas[i]
x = x + d * dt
x = x + torch.randn_like(x) * sigma_up
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def heun_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def heun_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""Implements Algorithm 2 (Heun steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
@@ -855,6 +883,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Heun Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -876,6 +906,9 @@ class UNet(DDPM):
d = to_d(x, sigma_hat, denoised)
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
dt = sigmas[i + 1] - sigma_hat
if sigmas[i + 1] == 0:
# Euler method
@@ -895,11 +928,13 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigmas[i + 1], denoised_2)
d_prime = (d + d_2) / 2
x = x + d_prime * dt
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def dpm_2_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def dpm_2_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""A sampler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
@@ -907,6 +942,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running DPM2 Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
@@ -924,7 +961,7 @@ class UNet(DDPM):
e_t_uncond, e_t = (x_in + eps * c_out).chunk(2)
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
+ if img_callback: yield from img_callback(x, i)
d = to_d(x, sigma_hat, denoised)
# Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule
@@ -945,11 +982,13 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigma_mid, denoised_2)
x = x + d_2 * dt_2
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def dpm_2_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None):
+ def dpm_2_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None,
+ img_callback=None):
"""Ancestral sampling with DPM-Solver inspired second-order steps."""
extra_args = {} if extra_args is None else extra_args
@@ -957,6 +996,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running DPM2 Ancestral Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -973,6 +1014,9 @@ class UNet(DDPM):
sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+ if img_callback: yield from img_callback(x, i)
+
d = to_d(x, sigmas[i], denoised)
# Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule
sigma_mid = ((sigmas[i] ** (1 / 3) + sigma_down ** (1 / 3)) / 2) ** 3
@@ -993,11 +1037,13 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigma_mid, denoised_2)
x = x + d_2 * dt_2
x = x + torch.randn_like(x) * sigma_up
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def lms_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, order=4):
+ def lms_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, order=4,
+ img_callback=None):
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
@@ -1005,6 +1051,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running LMS Sampling with {len(sigmas) - 1} timesteps")
+
ds = []
for i in trange(len(sigmas) - 1, disable=disable):
@@ -1017,6 +1065,7 @@ class UNet(DDPM):
e_t_uncond, e_t = (x_in + eps * c_out).chunk(2)
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
+ if img_callback: yield from img_callback(x, i)
d = to_d(x, sigmas[i], denoised)
ds.append(d)
@@ -1027,4 +1076,5 @@ class UNet(DDPM):
cur_order = min(i + 1, order)
coeffs = [linear_multistep_coeff(cur_order, sigmas.cpu(), i, j) for j in range(cur_order)]
x = x + sum(coeff * d for coeff, d in zip(coeffs, reversed(ds)))
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
diff --git a/optimizedSD/openaimodelSplit.py b/optimizedSD/openaimodelSplit.py
index abc3098..7a32ffe 100644
--- a/optimizedSD/openaimodelSplit.py
+++ b/optimizedSD/openaimodelSplit.py
@@ -13,7 +13,7 @@ from ldm.modules.diffusionmodules.util import (
normalization,
timestep_embedding,
)
-from splitAttention import SpatialTransformer
+from .splitAttention import SpatialTransformer
class AttentionPool2d(nn.Module):
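The thrust of this (now removed) patch is that every sampler stops returning its final latent and instead becomes a generator: intermediate latents are pushed out through img_callback with `yield from`, and the final latent is emitted the same way. A minimal, self-contained sketch of that pattern; the names below are illustrative and not from the repository.

# Illustrative sketch of the generator/callback pattern introduced by the patch above.
def img_callback(latent, step):
    # runtime.py's img_callback yields a JSON progress string; a dict stands in for it here.
    yield {"step": step, "latent": latent}

def toy_sampler(steps, img_callback):
    latent = 0.0
    for i in range(steps):
        latent += 1.0                       # stand-in for one denoising step
        yield from img_callback(latent, i)  # mirrors `if img_callback: yield from img_callback(x, i)`

for progress in toy_sampler(3, img_callback):
    print(progress)                         # the web layer streams these updates to the client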


@ -1,13 +0,0 @@
diff --git a/environment.yaml b/environment.yaml
index 7f25da8..306750f 100644
--- a/environment.yaml
+++ b/environment.yaml
@@ -23,6 +23,8 @@ dependencies:
- torch-fidelity==0.3.0
- transformers==4.19.2
- torchmetrics==0.6.0
+ - pywavelets==1.3.0
+ - pandas==1.4.4
- kornia==0.6
- -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
- -e git+https://github.com/openai/CLIP.git@main#egg=clip


@ -1,797 +0,0 @@
"""runtime.py: torch device owned by a thread.
Notes:
Avoid device switching; transferring all the models would get too complex.
To use a different device, signal the current render device to exit
and then start a new, clean thread for the new device.
"""
import json
import os, re
import traceback
import torch
import numpy as np
from gc import collect as gc_collect
from omegaconf import OmegaConf
from PIL import Image, ImageOps
from tqdm import tqdm, trange
from itertools import islice
from einops import rearrange
import time
from pytorch_lightning import seed_everything
from torch import autocast
from contextlib import nullcontext
from einops import rearrange, repeat
from ldm.util import instantiate_from_config
from optimizedSD.optimUtils import split_weighted_subprompts
from transformers import logging
from gfpgan import GFPGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
import uuid
logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
filename_regex = re.compile('[^a-zA-Z0-9]')
force_gfpgan_to_cuda0 = True # workaround: gfpgan currently works only on cuda:0
# api stuff
from sd_internal import device_manager
from . import Request, Response, Image as ResponseImage
import base64
from io import BytesIO
#from colorama import Fore
from threading import local as LocalThreadVars
thread_data = LocalThreadVars()
def thread_init(device):
# Thread bound properties
thread_data.stop_processing = False
thread_data.temp_images = {}
thread_data.ckpt_file = None
thread_data.vae_file = None
thread_data.gfpgan_file = None
thread_data.real_esrgan_file = None
thread_data.model = None
thread_data.modelCS = None
thread_data.modelFS = None
thread_data.model_gfpgan = None
thread_data.model_real_esrgan = None
thread_data.model_is_half = False
thread_data.model_fs_is_half = False
thread_data.device = None
thread_data.device_name = None
thread_data.unet_bs = 1
thread_data.precision = 'autocast'
thread_data.sampler_plms = None
thread_data.sampler_ddim = None
thread_data.turbo = False
thread_data.force_full_precision = False
thread_data.reduced_memory = True
device_manager.device_init(thread_data, device)
def load_model_ckpt():
if not thread_data.ckpt_file: raise ValueError(f'Thread ckpt_file is undefined.')
if not os.path.exists(thread_data.ckpt_file + '.ckpt'): raise FileNotFoundError(f'Cannot find {thread_data.ckpt_file}.ckpt')
if not thread_data.precision:
thread_data.precision = 'full' if thread_data.force_full_precision else 'autocast'
if not thread_data.unet_bs:
thread_data.unet_bs = 1
if thread_data.device == 'cpu':
thread_data.precision = 'full'
print('loading', thread_data.ckpt_file + '.ckpt', 'to device', thread_data.device, 'using precision', thread_data.precision)
sd = load_model_from_config(thread_data.ckpt_file + '.ckpt')
li, lo = [], []
for key, value in sd.items():
sp = key.split(".")
if (sp[0]) == "model":
if "input_blocks" in sp:
li.append(key)
elif "middle_block" in sp:
li.append(key)
elif "time_embed" in sp:
li.append(key)
else:
lo.append(key)
for key in li:
sd["model1." + key[6:]] = sd.pop(key)
for key in lo:
sd["model2." + key[6:]] = sd.pop(key)
config = OmegaConf.load(f"{config_yaml}")
model = instantiate_from_config(config.modelUNet)
_, _ = model.load_state_dict(sd, strict=False)
model.eval()
model.cdevice = torch.device(thread_data.device)
model.unet_bs = thread_data.unet_bs
model.turbo = thread_data.turbo
# if thread_data.device != 'cpu':
# model.to(thread_data.device)
#if thread_data.reduced_memory:
#model.model1.to("cpu")
#model.model2.to("cpu")
thread_data.model = model
modelCS = instantiate_from_config(config.modelCondStage)
_, _ = modelCS.load_state_dict(sd, strict=False)
modelCS.eval()
modelCS.cond_stage_model.device = torch.device(thread_data.device)
# if thread_data.device != 'cpu':
# if thread_data.reduced_memory:
# modelCS.to('cpu')
# else:
# modelCS.to(thread_data.device) # Preload on device if not already there.
thread_data.modelCS = modelCS
modelFS = instantiate_from_config(config.modelFirstStage)
_, _ = modelFS.load_state_dict(sd, strict=False)
if thread_data.vae_file is not None:
try:
loaded = False
for model_extension in ['.ckpt', '.vae.pt']:
if os.path.exists(thread_data.vae_file + model_extension):
print(f"Loading VAE weights from: {thread_data.vae_file}{model_extension}")
vae_ckpt = torch.load(thread_data.vae_file + model_extension, map_location="cpu")
vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
modelFS.first_stage_model.load_state_dict(vae_dict, strict=False)
loaded = True
break
if not loaded:
print(f'Cannot find VAE: {thread_data.vae_file}')
thread_data.vae_file = None
except:
print(traceback.format_exc())
print(f'Could not load VAE: {thread_data.vae_file}')
thread_data.vae_file = None
modelFS.eval()
# if thread_data.device != 'cpu':
# if thread_data.reduced_memory:
# modelFS.to('cpu')
# else:
# modelFS.to(thread_data.device) # Preload on device if not already there.
thread_data.modelFS = modelFS
del sd
if thread_data.device != "cpu" and thread_data.precision == "autocast":
thread_data.model.half()
thread_data.modelCS.half()
thread_data.modelFS.half()
thread_data.model_is_half = True
thread_data.model_fs_is_half = True
else:
thread_data.model_is_half = False
thread_data.model_fs_is_half = False
print(f'''loaded model
model file: {thread_data.ckpt_file}.ckpt
model.device: {model.device}
modelCS.device: {modelCS.cond_stage_model.device}
modelFS.device: {thread_data.modelFS.device}
using precision: {thread_data.precision}''')
def unload_filters():
if thread_data.model_gfpgan is not None:
if thread_data.device != 'cpu': thread_data.model_gfpgan.gfpgan.to('cpu')
del thread_data.model_gfpgan
thread_data.model_gfpgan = None
if thread_data.model_real_esrgan is not None:
if thread_data.device != 'cpu': thread_data.model_real_esrgan.model.to('cpu')
del thread_data.model_real_esrgan
thread_data.model_real_esrgan = None
gc()
def unload_models():
if thread_data.model is not None:
print('Unloading models...')
if thread_data.device != 'cpu':
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
del thread_data.model
del thread_data.modelCS
del thread_data.modelFS
thread_data.model = None
thread_data.modelCS = None
thread_data.modelFS = None
gc()
# def wait_model_move_to(model, target_device): # Send to target_device and wait until complete.
# if thread_data.device == target_device: return
# start_mem = torch.cuda.memory_allocated(thread_data.device) / 1e6
# if start_mem <= 0: return
# model_name = model.__class__.__name__
# print(f'Device {thread_data.device} - Sending model {model_name} to {target_device} | Memory transfer starting. Memory Used: {round(start_mem)}Mb')
# start_time = time.time()
# model.to(target_device)
# time_step = start_time
# WARNING_TIMEOUT = 1.5 # seconds - Show activity in console after timeout.
# last_mem = start_mem
# is_transfering = True
# while is_transfering:
# time.sleep(0.5) # 500ms
# mem = torch.cuda.memory_allocated(thread_data.device) / 1e6
# is_transfering = bool(mem > 0 and mem < last_mem) # still stuff loaded, but less than last time.
# last_mem = mem
# if not is_transfering:
# break;
# if time.time() - time_step > WARNING_TIMEOUT: # Long delay, print to console to show activity.
# print(f'Device {thread_data.device} - Waiting for Memory transfer. Memory Used: {round(mem)}Mb, Transfered: {round(start_mem - mem)}Mb')
# time_step = time.time()
# print(f'Device {thread_data.device} - {model_name} Moved: {round(start_mem - last_mem)}Mb in {round(time.time() - start_time, 3)} seconds to {target_device}')
def move_to_cpu(model):
if thread_data.device != "cpu":
d = torch.device(thread_data.device)
mem = torch.cuda.memory_allocated(d) / 1e6
model.to("cpu")
while torch.cuda.memory_allocated(d) / 1e6 >= mem:
time.sleep(1)
def load_model_gfpgan():
if thread_data.gfpgan_file is None: raise ValueError(f'Thread gfpgan_file is undefined.')
# hack for a bug in facexlib: https://github.com/xinntao/facexlib/pull/19/files
from facexlib.detection import retinaface
retinaface.device = torch.device(thread_data.device)
print('forced retinaface.device to', thread_data.device)
model_path = thread_data.gfpgan_file + ".pth"
thread_data.model_gfpgan = GFPGANer(device=torch.device(thread_data.device), model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
print('loaded', thread_data.gfpgan_file, 'to', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
def load_model_real_esrgan():
if thread_data.real_esrgan_file is None: raise ValueError(f'Thread real_esrgan_file is undefined.')
model_path = thread_data.real_esrgan_file + ".pth"
RealESRGAN_models = {
'RealESRGAN_x4plus': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4),
'RealESRGAN_x4plus_anime_6B': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
}
model_to_use = RealESRGAN_models[thread_data.real_esrgan_file]
if thread_data.device == 'cpu':
thread_data.model_real_esrgan = RealESRGANer(device=torch.device(thread_data.device), scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=False) # cpu does not support half
#thread_data.model_real_esrgan.device = torch.device(thread_data.device)
thread_data.model_real_esrgan.model.to('cpu')
else:
thread_data.model_real_esrgan = RealESRGANer(device=torch.device(thread_data.device), scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=thread_data.model_is_half)
thread_data.model_real_esrgan.model.name = thread_data.real_esrgan_file
print('loaded ', thread_data.real_esrgan_file, 'to', thread_data.model_real_esrgan.device, 'precision', thread_data.precision)
def get_session_out_path(disk_path, session_id):
if disk_path is None: return None
if session_id is None: return None
session_out_path = os.path.join(disk_path, filename_regex.sub('_',session_id))
os.makedirs(session_out_path, exist_ok=True)
return session_out_path
def get_base_path(disk_path, session_id, prompt, img_id, ext, suffix=None):
if disk_path is None: return None
if session_id is None: return None
if ext is None: raise Exception('Missing ext')
session_out_path = get_session_out_path(disk_path, session_id)
prompt_flattened = filename_regex.sub('_', prompt)[:50]
if suffix is not None:
return os.path.join(session_out_path, f"{prompt_flattened}_{img_id}_{suffix}.{ext}")
return os.path.join(session_out_path, f"{prompt_flattened}_{img_id}.{ext}")
def apply_filters(filter_name, image_data, model_path=None):
print(f'Applying filter {filter_name}...')
gc() # Free space before loading new data.
if isinstance(image_data, torch.Tensor):
image_data.to(thread_data.device)
if filter_name == 'gfpgan':
if model_path is not None and model_path != thread_data.gfpgan_file:
thread_data.gfpgan_file = model_path
load_model_gfpgan()
elif not thread_data.model_gfpgan:
load_model_gfpgan()
if thread_data.model_gfpgan is None: raise Exception('Model "gfpgan" not loaded.')
print('enhance with', thread_data.gfpgan_file, 'on', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
_, _, output = thread_data.model_gfpgan.enhance(image_data[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
image_data = output[:,:,::-1]
if filter_name == 'real_esrgan':
if model_path is not None and model_path != thread_data.real_esrgan_file:
thread_data.real_esrgan_file = model_path
load_model_real_esrgan()
elif not thread_data.model_real_esrgan:
load_model_real_esrgan()
if thread_data.model_real_esrgan is None: raise Exception('Model "real_esrgan" not loaded.')
print('enhance with', thread_data.real_esrgan_file, 'on', thread_data.model_real_esrgan.device, 'precision', thread_data.precision)
output, _ = thread_data.model_real_esrgan.enhance(image_data[:,:,::-1])
image_data = output[:,:,::-1]
return image_data
def mk_img(req: Request):
try:
yield from do_mk_img(req)
except Exception as e:
print(traceback.format_exc())
if thread_data.device != 'cpu':
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
gc() # Release from memory.
yield json.dumps({
"status": 'failed',
"detail": str(e)
})
def update_temp_img(req, x_samples):
partial_images = []
for i in range(req.num_outputs):
x_sample_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_sample_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
buf = BytesIO()
img.save(buf, format='JPEG')
buf.seek(0)
del img, x_sample, x_sample_ddim
# don't delete x_samples, it is used in the code that called this callback
thread_data.temp_images[str(req.session_id) + '/' + str(i)] = buf
partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
return partial_images
# Build and return the appropriate generator for do_mk_img
def get_image_progress_generator(req, extra_props=None):
if not req.stream_progress_updates:
def empty_callback(x_samples, i): return x_samples
return empty_callback
thread_data.partial_x_samples = None
last_callback_time = -1
def img_callback(x_samples, i):
nonlocal last_callback_time
thread_data.partial_x_samples = x_samples
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "step_time": step_time}
if extra_props is not None:
progress.update(extra_props)
if req.stream_image_progress and i % 5 == 0:
progress['output'] = update_temp_img(req, x_samples)
yield json.dumps(progress)
if thread_data.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return img_callback
def do_mk_img(req: Request):
thread_data.stop_processing = False
res = Response()
res.request = req
res.images = []
thread_data.temp_images.clear()
# custom model support:
# the req.use_stable_diffusion_model needs to be a valid path
# to the ckpt file (without the extension).
if not os.path.exists(req.use_stable_diffusion_model + '.ckpt'): raise FileNotFoundError(f'Cannot find {req.use_stable_diffusion_model}.ckpt')
needs_model_reload = False
if not thread_data.model or thread_data.ckpt_file != req.use_stable_diffusion_model or thread_data.vae_file != req.use_vae_model:
thread_data.ckpt_file = req.use_stable_diffusion_model
thread_data.vae_file = req.use_vae_model
needs_model_reload = True
if thread_data.device != 'cpu':
if (thread_data.precision == 'autocast' and (req.use_full_precision or not thread_data.model_is_half)) or \
(thread_data.precision == 'full' and not req.use_full_precision and not thread_data.force_full_precision):
thread_data.precision = 'full' if req.use_full_precision else 'autocast'
needs_model_reload = True
if needs_model_reload:
unload_models()
unload_filters()
load_model_ckpt()
if thread_data.turbo != req.turbo:
thread_data.turbo = req.turbo
thread_data.model.turbo = req.turbo
# Start by cleaning memory, loading and unloading things can leave memory allocated.
gc()
opt_prompt = req.prompt
opt_seed = req.seed
opt_n_iter = 1
opt_C = 4
opt_f = 8
opt_ddim_eta = 0.0
print(req, '\n device', torch.device(thread_data.device), "as", thread_data.device_name)
print('\n\n Using precision:', thread_data.precision)
seed_everything(opt_seed)
batch_size = req.num_outputs
prompt = opt_prompt
assert prompt is not None
data = [batch_size * [prompt]]
if thread_data.precision == "autocast" and thread_data.device != "cpu":
precision_scope = autocast
else:
precision_scope = nullcontext
mask = None
if req.init_image is None:
handler = _txt2img
init_latent = None
t_enc = None
else:
handler = _img2img
init_image = load_img(req.init_image, req.width, req.height)
init_image = init_image.to(thread_data.device)
if thread_data.device != "cpu" and thread_data.precision == "autocast":
init_image = init_image.half()
thread_data.modelFS.to(thread_data.device)
init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
init_latent = thread_data.modelFS.get_first_stage_encoding(thread_data.modelFS.encode_first_stage(init_image)) # move to latent space
if req.mask is not None:
mask = load_mask(req.mask, req.width, req.height, init_latent.shape[2], init_latent.shape[3], True).to(thread_data.device)
mask = mask[0][0].unsqueeze(0).repeat(4, 1, 1).unsqueeze(0)
mask = repeat(mask, '1 ... -> b ...', b=batch_size)
if thread_data.device != "cpu" and thread_data.precision == "autocast":
mask = mask.half()
# Send to CPU and wait until complete.
# wait_model_move_to(thread_data.modelFS, 'cpu')
move_to_cpu(thread_data.modelFS)
assert 0. <= req.prompt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
t_enc = int(req.prompt_strength * req.num_inference_steps)
print(f"target t_enc is {t_enc} steps")
if req.save_to_disk_path is not None:
session_out_path = get_session_out_path(req.save_to_disk_path, req.session_id)
else:
session_out_path = None
with torch.no_grad():
for n in trange(opt_n_iter, desc="Sampling"):
for prompts in tqdm(data, desc="data"):
with precision_scope("cuda"):
if thread_data.reduced_memory:
thread_data.modelCS.to(thread_data.device)
uc = None
if req.guidance_scale != 1.0:
uc = thread_data.modelCS.get_learned_conditioning(batch_size * [req.negative_prompt])
if isinstance(prompts, tuple):
prompts = list(prompts)
subprompts, weights = split_weighted_subprompts(prompts[0])
if len(subprompts) > 1:
c = torch.zeros_like(uc)
totalWeight = sum(weights)
# normalize each "sub prompt" and add it
for i in range(len(subprompts)):
weight = weights[i]
# if not skip_normalize:
weight = weight / totalWeight
c = torch.add(c, thread_data.modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = thread_data.modelCS.get_learned_conditioning(prompts)
if thread_data.reduced_memory:
thread_data.modelFS.to(thread_data.device)
n_steps = req.num_inference_steps if req.init_image is None else t_enc
img_callback = get_image_progress_generator(req, {"total_steps": n_steps})
# run the handler
try:
print('Running handler...')
if handler == _txt2img:
x_samples = _txt2img(req.width, req.height, req.num_outputs, req.num_inference_steps, req.guidance_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, req.sampler)
else:
x_samples = _img2img(init_latent, t_enc, batch_size, req.guidance_scale, c, uc, req.num_inference_steps, opt_ddim_eta, opt_seed, img_callback, mask)
if req.stream_progress_updates:
yield from x_samples
if hasattr(thread_data, 'partial_x_samples'):
if thread_data.partial_x_samples is not None:
x_samples = thread_data.partial_x_samples
del thread_data.partial_x_samples
except UserInitiatedStop:
if not hasattr(thread_data, 'partial_x_samples'):
continue
if thread_data.partial_x_samples is None:
del thread_data.partial_x_samples
continue
x_samples = thread_data.partial_x_samples
del thread_data.partial_x_samples
print("decoding images")
img_data = [None] * batch_size
for i in range(batch_size):
x_samples_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img_data[i] = x_sample
del x_samples, x_samples_ddim, x_sample
print("saving images")
for i in range(batch_size):
img = Image.fromarray(img_data[i])
img_id = base64.b64encode(int(time.time()+i).to_bytes(8, 'big')).decode() # Generate unique ID based on time.
img_id = img_id.translate({43:None, 47:None, 61:None})[-8:] # Remove + / = and keep last 8 chars.
has_filters = (req.use_face_correction is not None and req.use_face_correction.startswith('GFPGAN')) or \
(req.use_upscale is not None and req.use_upscale.startswith('RealESRGAN'))
return_orig_img = not has_filters or not req.show_only_filtered_image
if thread_data.stop_processing:
return_orig_img = True
if req.save_to_disk_path is not None:
if return_orig_img:
img_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, req.output_format)
save_image(img, img_out_path)
meta_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, 'txt')
save_metadata(meta_out_path, req, prompts[0], opt_seed)
if return_orig_img:
img_str = img_to_base64_str(img, req.output_format)
res_image_orig = ResponseImage(data=img_str, seed=opt_seed)
res.images.append(res_image_orig)
if req.save_to_disk_path is not None:
res_image_orig.path_abs = img_out_path
del img
if has_filters and not thread_data.stop_processing:
filters_applied = []
if req.use_face_correction:
img_data[i] = apply_filters('gfpgan', img_data[i], req.use_face_correction)
filters_applied.append(req.use_face_correction)
if req.use_upscale:
img_data[i] = apply_filters('real_esrgan', img_data[i], req.use_upscale)
filters_applied.append(req.use_upscale)
if (len(filters_applied) > 0):
filtered_image = Image.fromarray(img_data[i])
filtered_img_data = img_to_base64_str(filtered_image, req.output_format)
response_image = ResponseImage(data=filtered_img_data, seed=opt_seed)
res.images.append(response_image)
if req.save_to_disk_path is not None:
filtered_img_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, req.output_format, "_".join(filters_applied))
save_image(filtered_image, filtered_img_out_path)
response_image.path_abs = filtered_img_out_path
del filtered_image
# Filter Applied, move to next seed
opt_seed += 1
# if thread_data.reduced_memory:
# unload_filters()
move_to_cpu(thread_data.modelFS)
del img_data
gc()
if thread_data.device != 'cpu':
print(f'memory_final = {round(torch.cuda.memory_allocated(thread_data.device) / 1e6, 2)}Mb')
print('Task completed')
yield json.dumps(res.json())
def save_image(img, img_out_path):
try:
img.save(img_out_path)
except:
print('could not save the file', traceback.format_exc())
def save_metadata(meta_out_path, req, prompt, opt_seed):
metadata = f'''{prompt}
Width: {req.width}
Height: {req.height}
Seed: {opt_seed}
Steps: {req.num_inference_steps}
Guidance Scale: {req.guidance_scale}
Prompt Strength: {req.prompt_strength}
Use Face Correction: {req.use_face_correction}
Use Upscaling: {req.use_upscale}
Sampler: {req.sampler}
Negative Prompt: {req.negative_prompt}
Stable Diffusion model: {req.use_stable_diffusion_model + '.ckpt'}
VAE model: {req.use_vae_model}
'''
try:
with open(meta_out_path, 'w', encoding='utf-8') as f:
f.write(metadata)
except:
print('could not save the file', traceback.format_exc())
def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, sampler_name):
shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]
# Send to CPU and wait until complete.
# wait_model_move_to(thread_data.modelCS, 'cpu')
move_to_cpu(thread_data.modelCS)
if sampler_name == 'ddim':
thread_data.model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
samples_ddim = thread_data.model.sample(
S=opt_ddim_steps,
conditioning=c,
seed=opt_seed,
shape=shape,
verbose=False,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
eta=opt_ddim_eta,
x_T=start_code,
img_callback=img_callback,
mask=mask,
sampler = sampler_name,
)
yield from samples_ddim
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask):
# encode (scaled latent)
z_enc = thread_data.model.stochastic_encode(
init_latent,
torch.tensor([t_enc] * batch_size).to(thread_data.device),
opt_seed,
opt_ddim_eta,
opt_ddim_steps,
)
x_T = None if mask is None else init_latent
# decode it
samples_ddim = thread_data.model.sample(
t_enc,
c,
z_enc,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
img_callback=img_callback,
mask=mask,
x_T=x_T,
sampler = 'ddim'
)
yield from samples_ddim
def gc():
gc_collect()
if thread_data.device == 'cpu':
return
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
# internal
def chunk(it, size):
it = iter(it)
return iter(lambda: tuple(islice(it, size)), ())
def load_model_from_config(ckpt, verbose=False):
print(f"Loading model from {ckpt}")
pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
return sd
# utils
class UserInitiatedStop(Exception):
pass
def load_img(img_str, w0, h0):
image = base64_str_to_img(img_str).convert("RGB")
w, h = image.size
print(f"loaded input image of size ({w}, {h}) from base64")
if h0 is not None and w0 is not None:
h, w = h0, w0
w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
image = image.resize((w, h), resample=Image.Resampling.LANCZOS)
image = np.array(image).astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
return 2.*image - 1.
def load_mask(mask_str, h0, w0, newH, newW, invert=False):
image = base64_str_to_img(mask_str).convert("RGB")
w, h = image.size
print(f"loaded input mask of size ({w}, {h})")
if invert:
print("inverted")
image = ImageOps.invert(image)
# where_0, where_1 = np.where(image == 0), np.where(image == 255)
# image[where_0], image[where_1] = 255, 0
if h0 is not None and w0 is not None:
h, w = h0, w0
w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
print(f"New mask size ({w}, {h})")
image = image.resize((newW, newH), resample=Image.Resampling.LANCZOS)
image = np.array(image)
image = image.astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
return image
# https://stackoverflow.com/a/61114178
def img_to_base64_str(img, output_format="PNG"):
buffered = BytesIO()
img.save(buffered, format=output_format)
buffered.seek(0)
img_byte = buffered.getvalue()
mime_type = "image/png" if output_format.lower() == "png" else "image/jpeg"
img_str = f"data:{mime_type};base64," + base64.b64encode(img_byte).decode()
return img_str
def base64_str_to_buffer(img_str):
mime_type = "image/png" if img_str.startswith("data:image/png;") else "image/jpeg"
img_str = img_str[len(f"data:{mime_type};base64,"):]
data = base64.b64decode(img_str)
buffered = BytesIO(data)
return buffered
def base64_str_to_img(img_str):
buffered = base64_str_to_buffer(img_str)
img = Image.open(buffered)
return img
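A quick sanity-check sketch for the two base64 helpers above (purely illustrative; not part of the original module):

# Round-trip an image through img_to_base64_str / base64_str_to_img.
_test_img = Image.new("RGB", (64, 64), color="black")
_img_str = img_to_base64_str(_test_img, output_format="PNG")
assert _img_str.startswith("data:image/png;base64,")
assert base64_str_to_img(_img_str).size == (64, 64)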


@ -1,461 +0,0 @@
"""server.py: FastAPI SD-UI Web Host.
Notes:
async endpoints always run on the main thread. Otherwise they run on the thread pool.
"""
import json
import traceback
import sys
import os
import picklescan.scanner
import rich
SD_DIR = os.getcwd()
print('started in ', SD_DIR)
SD_UI_DIR = os.getenv('SD_UI_PATH', None)
sys.path.append(os.path.dirname(SD_UI_DIR))
CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, '..', 'scripts'))
MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'models'))
USER_UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'plugins', 'ui'))
CORE_UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_UI_DIR, 'plugins', 'ui'))
UI_PLUGINS_SOURCES = ((CORE_UI_PLUGINS_DIR, 'core'), (USER_UI_PLUGINS_DIR, 'user'))
STABLE_DIFFUSION_MODEL_EXTENSIONS = ['.ckpt']
VAE_MODEL_EXTENSIONS = ['.vae.pt', '.ckpt']
OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
TASK_TTL = 15 * 60 # seconds before the last session's task is discarded
APP_CONFIG_DEFAULTS = {
# auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
'render_devices': 'auto', # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
'update_branch': 'main',
'ui': {
'open_browser_on_start': True,
},
}
APP_CONFIG_DEFAULT_MODELS = [
# needed to support the legacy installations
'custom-model', # Check if user has a custom model, use it first.
'sd-v1-4', # Default fallback.
]
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
from starlette.responses import FileResponse, JSONResponse, StreamingResponse
from pydantic import BaseModel
import logging
#import queue, threading, time
from typing import Any, Generator, Hashable, List, Optional, Union
from sd_internal import Request, Response, task_manager
app = FastAPI()
modifiers_cache = None
outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)
os.makedirs(USER_UI_PLUGINS_DIR, exist_ok=True)
# don't show access log entries for URLs that start with the given prefix
ACCESS_LOG_SUPPRESS_PATH_PREFIXES = ['/ping', '/image', '/modifier-thumbnails']
NOCACHE_HEADERS={"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
class NoCacheStaticFiles(StaticFiles):
def is_not_modified(self, response_headers, request_headers) -> bool:
if 'content-type' in response_headers and ('javascript' in response_headers['content-type'] or 'css' in response_headers['content-type']):
response_headers.update(NOCACHE_HEADERS)
return False
return super().is_not_modified(response_headers, request_headers)
app.mount('/media', NoCacheStaticFiles(directory=os.path.join(SD_UI_DIR, 'media')), name="media")
for plugins_dir, dir_prefix in UI_PLUGINS_SOURCES:
app.mount(f'/plugins/{dir_prefix}', NoCacheStaticFiles(directory=plugins_dir), name=f"plugins-{dir_prefix}")
def getConfig(default_val=APP_CONFIG_DEFAULTS):
try:
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
if not os.path.exists(config_json_path):
return default_val
with open(config_json_path, 'r', encoding='utf-8') as f:
config = json.load(f)
if 'net' not in config:
config['net'] = {}
if os.getenv('SD_UI_BIND_PORT') is not None:
config['net']['listen_port'] = int(os.getenv('SD_UI_BIND_PORT'))
if os.getenv('SD_UI_BIND_IP') is not None:
config['net']['listen_to_network'] = ( os.getenv('SD_UI_BIND_IP') == '0.0.0.0' )
return config
except Exception as e:
print(str(e))
print(traceback.format_exc())
return default_val
def setConfig(config):
print( json.dumps(config) )
try: # config.json
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
with open(config_json_path, 'w', encoding='utf-8') as f:
json.dump(config, f)
except:
print(traceback.format_exc())
try: # config.bat
config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
config_bat = []
if 'update_branch' in config:
config_bat.append(f"@set update_branch={config['update_branch']}")
config_bat.append(f"@set SD_UI_BIND_PORT={config['net']['listen_port']}")
bind_ip = '0.0.0.0' if config['net']['listen_to_network'] else '127.0.0.1'
config_bat.append(f"@set SD_UI_BIND_IP={bind_ip}")
if len(config_bat) > 0:
with open(config_bat_path, 'w', encoding='utf-8') as f:
f.write('\r\n'.join(config_bat))
except:
print(traceback.format_exc())
try: # config.sh
config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
config_sh = ['#!/bin/bash']
if 'update_branch' in config:
config_sh.append(f"export update_branch={config['update_branch']}")
config_sh.append(f"export SD_UI_BIND_PORT={config['net']['listen_port']}")
bind_ip = '0.0.0.0' if config['net']['listen_to_network'] else '127.0.0.1'
config_sh.append(f"export SD_UI_BIND_IP={bind_ip}")
if len(config_sh) > 1:
with open(config_sh_path, 'w', encoding='utf-8') as f:
f.write('\n'.join(config_sh))
except:
print(traceback.format_exc())
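As an illustration of what setConfig() emits (hypothetical values): for config = {'update_branch': 'beta', 'net': {'listen_port': 9000, 'listen_to_network': True}} it would write a config.sh containing the lines shown in the comment below, plus a config.bat with the equivalent @set lines.
# #!/bin/bash
# export update_branch=beta
# export SD_UI_BIND_PORT=9000
# export SD_UI_BIND_IP=0.0.0.0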
def resolve_model_to_use(model_name:str, model_type:str, model_dir:str, model_extensions:list, default_models=[]):
model_dirs = [os.path.join(MODELS_DIR, model_dir), SD_DIR]
if not model_name: # When None try user configured model.
config = getConfig()
if 'model' in config and model_type in config['model']:
model_name = config['model'][model_type]
if model_name:
# Check models directory
models_dir_path = os.path.join(MODELS_DIR, model_dir, model_name)
for model_extension in model_extensions:
if os.path.exists(models_dir_path + model_extension):
return models_dir_path
if os.path.exists(model_name + model_extension):
# Direct Path to file
model_name = os.path.abspath(model_name)
return model_name
# Default locations
if model_name in default_models:
default_model_path = os.path.join(SD_DIR, model_name)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
return default_model_path
# Can't find requested model, check the default paths.
for default_model in default_models:
for model_dir in model_dirs:
default_model_path = os.path.join(model_dir, default_model)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
if model_name is not None:
print(f'Could not find the configured custom model {model_name}{model_extension}. Using the default one: {default_model_path}{model_extension}')
return default_model_path
raise Exception('No valid models found.')
def resolve_ckpt_to_use(model_name:str=None):
return resolve_model_to_use(model_name, model_type='stable-diffusion', model_dir='stable-diffusion', model_extensions=STABLE_DIFFUSION_MODEL_EXTENSIONS, default_models=APP_CONFIG_DEFAULT_MODELS)
def resolve_vae_to_use(model_name:str=None):
try:
return resolve_model_to_use(model_name, model_type='vae', model_dir='vae', model_extensions=VAE_MODEL_EXTENSIONS, default_models=[])
except:
return None
class SetAppConfigRequest(BaseModel):
update_branch: str = None
render_devices: Union[List[str], List[int], str, int] = None
model_vae: str = None
ui_open_browser_on_start: bool = None
listen_to_network: bool = None
listen_port: int = None
@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
config = getConfig()
if req.update_branch is not None:
config['update_branch'] = req.update_branch
if req.render_devices is not None:
update_render_devices_in_config(config, req.render_devices)
if req.ui_open_browser_on_start is not None:
if 'ui' not in config:
config['ui'] = {}
config['ui']['open_browser_on_start'] = req.ui_open_browser_on_start
if req.listen_to_network is not None:
if 'net' not in config:
config['net'] = {}
config['net']['listen_to_network'] = bool(req.listen_to_network)
if req.listen_port is not None:
if 'net' not in config:
config['net'] = {}
config['net']['listen_port'] = int(req.listen_port)
try:
setConfig(config)
if req.render_devices:
update_render_threads()
return JSONResponse({'status': 'OK'}, headers=NOCACHE_HEADERS)
except Exception as e:
print(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def is_malicious_model(file_path):
try:
scan_result = picklescan.scanner.scan_file_path(file_path)
if scan_result.issues_count > 0 or scan_result.infected_files > 0:
rich.print(":warning: [bold red]Scan %s: %d scanned, %d issue, %d infected.[/bold red]" % (file_path, scan_result.scanned_files, scan_result.issues_count, scan_result.infected_files))
return True
else:
rich.print("Scan %s: [green]%d scanned, %d issue, %d infected.[/green]" % (file_path, scan_result.scanned_files, scan_result.issues_count, scan_result.infected_files))
return False
except Exception as e:
print('error while scanning', file_path, 'error:', e)
return False
def getModels():
models = {
'active': {
'stable-diffusion': 'sd-v1-4',
'vae': '',
},
'options': {
'stable-diffusion': ['sd-v1-4'],
'vae': [],
},
}
def listModels(models_dirname, model_type, model_extensions):
models_dir = os.path.join(MODELS_DIR, models_dirname)
if not os.path.exists(models_dir):
os.makedirs(models_dir)
for file in os.listdir(models_dir):
for model_extension in model_extensions:
if not file.endswith(model_extension):
continue
if is_malicious_model(os.path.join(models_dir, file)):
models['scan-error'] = file
return
model_name = file[:-len(model_extension)]
models['options'][model_type].append(model_name)
models['options'][model_type] = [*set(models['options'][model_type])] # remove duplicates
models['options'][model_type].sort()
# custom models
listModels(models_dirname='stable-diffusion', model_type='stable-diffusion', model_extensions=STABLE_DIFFUSION_MODEL_EXTENSIONS)
listModels(models_dirname='vae', model_type='vae', model_extensions=VAE_MODEL_EXTENSIONS)
# legacy
custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt')
if os.path.exists(custom_weight_path):
models['options']['stable-diffusion'].append('custom-model')
return models
def getUIPlugins():
plugins = []
for plugins_dir, dir_prefix in UI_PLUGINS_SOURCES:
for file in os.listdir(plugins_dir):
if file.endswith('.plugin.js'):
plugins.append(f'/plugins/{dir_prefix}/{file}')
return plugins
@app.get('/get/{key:path}')
def read_web_data(key:str=None):
if not key: # /get without parameters, stable-diffusion easter egg.
raise HTTPException(status_code=418, detail="StableDiffusion is drawing a teapot!") # HTTP418 I'm a teapot
elif key == 'app_config':
config = getConfig(default_val=None)
if config is None:
config = APP_CONFIG_DEFAULTS
return JSONResponse(config, headers=NOCACHE_HEADERS)
elif key == 'devices':
config = getConfig()
devices = task_manager.get_devices()
devices['config'] = config.get('render_devices', "auto")
return JSONResponse(devices, headers=NOCACHE_HEADERS)
elif key == 'models':
return JSONResponse(getModels(), headers=NOCACHE_HEADERS)
elif key == 'modifiers': return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=NOCACHE_HEADERS)
elif key == 'output_dir': return JSONResponse({ 'output_dir': outpath }, headers=NOCACHE_HEADERS)
elif key == 'ui_plugins': return JSONResponse(getUIPlugins(), headers=NOCACHE_HEADERS)
else:
raise HTTPException(status_code=404, detail=f'Request for unknown {key}') # HTTP404 Not Found
@app.get('/ping') # Get server and optionally session status.
def ping(session_id:str=None):
if task_manager.is_alive() <= 0: # Check that render threads are alive.
if task_manager.current_state_error: raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
raise HTTPException(status_code=500, detail='Render thread is dead.')
if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration): raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
# Alive
response = {'status': str(task_manager.current_state)}
if session_id:
task = task_manager.get_cached_task(session_id, update_ttl=True)
if task:
response['task'] = id(task)
if task.lock.locked():
response['session'] = 'running'
elif isinstance(task.error, StopAsyncIteration):
response['session'] = 'stopped'
elif task.error:
response['session'] = 'error'
elif not task.buffer_queue.empty():
response['session'] = 'buffer'
elif task.response:
response['session'] = 'completed'
else:
response['session'] = 'pending'
response['devices'] = task_manager.get_devices()
return JSONResponse(response, headers=NOCACHE_HEADERS)
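# Remember the most recently used checkpoint and VAE in the config file. An empty
# or missing VAE name removes the 'vae' entry entirely.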
def save_model_to_config(ckpt_model_name, vae_model_name):
config = getConfig()
if 'model' not in config:
config['model'] = {}
config['model']['stable-diffusion'] = ckpt_model_name
config['model']['vae'] = vae_model_name
if vae_model_name is None or vae_model_name == "":
del config['model']['vae']
setConfig(config)
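# Validate the requested render device setting ('cpu', 'auto', or comma-separated
# 'cuda:N' entries) and store it in the config (cuda entries are stored as a list).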
def update_render_devices_in_config(config, render_devices):
if render_devices not in ('cpu', 'auto') and not render_devices.startswith('cuda:'):
raise HTTPException(status_code=400, detail=f'Invalid render device requested: {render_devices}')
if render_devices.startswith('cuda:'):
render_devices = render_devices.split(',')
config['render_devices'] = render_devices
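# Queue a new render task: persist the requested models, resolve them to files on
# disk, hand the request to the task_manager, and return the /image/stream URL from
# which the client can read the results.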
@app.post('/render')
def render(req : task_manager.ImageRequest):
try:
save_model_to_config(req.use_stable_diffusion_model, req.use_vae_model)
req.use_stable_diffusion_model = resolve_ckpt_to_use(req.use_stable_diffusion_model)
req.use_vae_model = resolve_vae_to_use(req.use_vae_model)
new_task = task_manager.render(req)
response = {
'status': str(task_manager.current_state),
'queue': len(task_manager.tasks_queue),
'stream': f'/image/stream/{req.session_id}/{id(new_task)}',
'task': id(new_task)
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
        raise HTTPException(status_code=500, detail='Rendering thread has died.') # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending, deny queueing more than one.
raise HTTPException(status_code=503, detail=f'Session {req.session_id} has an already pending task.') # HTTP503 Service Unavailable
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
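# Stream a task's results as they are produced. Returns the cached response if the
# task has already finished, or HTTP 425 if it hasn't started yet.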
@app.get('/image/stream/{session_id:str}/{task_id:int}')
def stream(session_id:str, task_id:int):
#TODO Move to WebSockets ??
task = task_manager.get_cached_task(session_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail='No request received.') # HTTP410 Gone
    if id(task) != task_id: raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected: {id(task)}, Received: {task_id}') # HTTP409 Conflict
if task.buffer_queue.empty() and not task.lock.locked():
if task.response:
#print(f'Session {session_id} sending cached response')
return JSONResponse(task.response, headers=NOCACHE_HEADERS)
raise HTTPException(status_code=425, detail='Too Early, task not started yet.') # HTTP425 Too Early
#print(f'Session {session_id} opened live render stream {id(task.buffer_queue)}')
return StreamingResponse(task.read_buffer_generator(), media_type='application/json')
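# Stop rendering. Without a session_id this stops whichever task is currently running;
# with a session_id it stops that session's task, by setting a StopAsyncIteration error.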
@app.get('/image/stop')
def stop(session_id:str=None):
if not session_id:
        if task_manager.current_state in (task_manager.ServerStates.Online, task_manager.ServerStates.Unavailable):
raise HTTPException(status_code=409, detail='Not currently running any tasks.') # HTTP409 Conflict
task_manager.current_state_error = StopAsyncIteration('')
return {'OK'}
task = task_manager.get_cached_task(session_id, update_ttl=False)
if not task: raise HTTPException(status_code=404, detail=f'Session {session_id} has no active task.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Session {session_id} task is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration('')
return {'OK'}
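# Serve a temporary image (JPEG) held in memory for a session's task, such as the
# intermediate previews generated while the task is still rendering.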
@app.get('/image/tmp/{session_id}/{img_id:int}')
def get_image(session_id, img_id):
task = task_manager.get_cached_task(session_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail=f'Session {session_id} has not submitted a task.') # HTTP410 Gone
if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
try:
img_data = task.temp_images[img_id]
img_data.seek(0)
return StreamingResponse(img_data, media_type='image/jpeg')
except KeyError as e:
raise HTTPException(status_code=500, detail=str(e))
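# Serve the web UI's index page at the root URL.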
@app.get('/')
def read_root():
return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=NOCACHE_HEADERS)
@app.on_event("shutdown")
def shutdown_event(): # Signal render thread to close on shutdown
task_manager.current_state_error = SystemExit('Application shutting down.')
# Suppress access-log entries for requests whose path matches ACCESS_LOG_SUPPRESS_PATH_PREFIXES
class LogSuppressFilter(logging.Filter):
def filter(self, record: logging.LogRecord) -> bool:
path = record.getMessage()
for prefix in ACCESS_LOG_SUPPRESS_PATH_PREFIXES:
            if prefix in path:
return False
return True
logging.getLogger('uvicorn.access').addFilter(LogSuppressFilter())
# Start the task_manager
task_manager.default_model_to_load = resolve_ckpt_to_use()
task_manager.default_vae_to_load = resolve_vae_to_use()
def update_render_threads():
config = getConfig()
render_devices = config.get('render_devices', 'auto')
active_devices = task_manager.get_devices()['active'].keys()
    print('requesting render_devices:', render_devices)
task_manager.update_render_threads(render_devices, active_devices)
update_render_threads()
# start the browser ui
def open_browser():
config = getConfig()
ui = config.get('ui', {})
net = config.get('net', {'listen_port':9000})
port = net.get('listen_port', 9000)
if ui.get('open_browser_on_start', True):
        import webbrowser
        webbrowser.open(f"http://localhost:{port}")
open_browser()