Compare commits

296 Commits

Author SHA1 Message Date
fb0c9405cf changelog 2023-01-09 19:40:17 +05:30
a17a9044ad Check whether the browser supports performance.measure/mark before calling them. Fixes https://github.com/cmdr2/stable-diffusion-ui/pull/757 2023-01-09 19:33:23 +05:30
64ced3b3f6 Tag v2.4.23, to be able to revert back in case of an emergency 2022-12-29 13:04:44 +05:30
493526c478 If downgrading to 2.4 (from 2.5), move the default models back to the legacy location 2022-12-29 13:00:57 +05:30
8cedeb349d Changes to allow rolling back from the upcoming sdkit-based system 2022-12-26 23:04:45 +05:30
72b3598687 Merge pull request #703 from JeLuF/patch-5
Bring back Linux download link
2022-12-26 17:36:43 +05:30
b1a2d36c2d Bring back Linux download link 2022-12-26 10:16:43 +01:00
e636dd3649 Merge pull request #694 from cmdr2/beta
Beta
2022-12-24 19:18:28 +05:30
9137f3793e Merge pull request #693 from madrang/mobile-fixes
Add a debounce delay to allow mobile to double tap.
2022-12-24 15:53:31 +05:30
04d67a24b6 Don't allow the results to be collapsed when clicking draghandle 2022-12-24 04:55:28 -05:00
55049ba9d2 Add a debounce delay to allow mobile to double tap. 2022-12-24 04:42:43 -05:00
c5d343750c Merge pull request #691 from JeLuF/patch-4
Avoid guidance scale "1.0"
2022-12-23 17:55:41 +05:30
09b76dcd93 Avoid guidance scale "1.0"
Using a guidance scale of 1.0 will cause an exception in the renderer and return a very confusing error message.
https://discord.com/channels/1014774730907209781/1028195513377509376
2022-12-23 13:18:08 +01:00
b87bc033f5 Merge pull request #690 from cmdr2/beta
Update CHANGES.md
2022-12-23 11:26:42 +05:30
fb95d76e34 Update CHANGES.md 2022-12-23 11:26:14 +05:30
4e765a7948 Merge pull request #689 from cmdr2/beta
Speed up image creation, by removing a delay (regression) of 4-5 seconds between clicking Make Image and calling the server
2022-12-23 11:25:14 +05:30
cf2408013e Measure the click-to-render-request latency, only if the click button was used 2022-12-23 10:54:40 +05:30
5cb24f992c Bump version 2022-12-22 15:23:07 +05:30
21394b7d45 Reduce the delay between clicking 'Make Image' and making the render call to the server. Was nearly 4-5 seconds, now it's about 250-300ms. This is a hacky workaround until a better solution is found 2022-12-22 15:22:25 +05:30
6d08082693 Merge branch 'beta' 2022-12-22 13:43:50 +05:30
768fb2583a Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-22 13:43:09 +05:30
6e07b2354f Fix an unnecessary error when a task header is clicked 2022-12-22 13:42:47 +05:30
00597879bc Merge pull request #688 from cmdr2/beta
Update CHANGES.md
2022-12-22 13:26:15 +05:30
0cd0d6aadf Update CHANGES.md 2022-12-22 13:25:57 +05:30
9d201f82f1 Merge pull request #687 from cmdr2/beta
Undo/redo buttons in the image editor, Drag handle to reorder tasks, Pause button to pause all the tasks
2022-12-22 13:23:50 +05:30
d6c535c45c Merge branch 'main' into beta 2022-12-22 13:23:07 +05:30
babdb5b718 Prompt Matrix is in main 2022-12-22 12:26:32 +05:30
0ea8d038be Merge pull request #679 from SpecificKnot/main
Changes to Front Docs
2022-12-22 12:25:48 +05:30
4d7f6e4236 Change version number in beta 2022-12-22 10:32:40 +05:30
6036ccdc1c Style Adjustments
Made a few adjustments to fit the needs of the project for new users.
2022-12-20 12:44:48 +00:00
bacf266f0d Merge pull request #651 from madrang/release-notes
Update 'release-notes' to use loadScript
2022-12-20 10:21:07 +05:30
ba5c54043b Merge pull request #680 from AssassinJN/beta
Drag and Drop Styles
2022-12-20 10:19:30 +05:30
e33c858829 Merge pull request #1 from JeLuF/AJNdrag
Only activate the dragOver event listener when dragging tasks
2022-12-19 14:39:27 -05:00
e47e54de3f Only activate the dragOver event listener when dragging tasks 2022-12-19 20:34:06 +01:00
54f9e9bfe9 adding drag and drop styles
Add functions required for adding styles to imageTaskContainer to show where images will be dropped.
2022-12-19 13:45:42 -05:00
e1875c872c classes for drag and drop
Added classes for drag and drop.
2022-12-19 13:44:15 -05:00
27b8e173e8 Changes to Front Docs 2022-12-19 14:28:05 +00:00
af090cb289 Update README.md 2022-12-19 12:24:19 +05:30
9bbb25f16c Update README.md 2022-12-19 12:23:32 +05:30
3007f00c9b Update README.md 2022-12-19 12:22:27 +05:30
352dcfbe30 Update README.md 2022-12-19 12:20:46 +05:30
60b181a545 Update README.md 2022-12-19 12:11:01 +05:30
600482e2d7 Update README.md 2022-12-19 12:10:15 +05:30
39ccbbd72e Update README.md 2022-12-19 12:09:13 +05:30
6e69cbcdaf Merge pull request #674 from SpecificKnot/main
Simplified README
2022-12-19 11:55:04 +05:30
bf6c222a3b Merge pull request #641 from JeLuF/pause
Pause button
2022-12-19 11:52:55 +05:30
6afcf7570a Merge pull request #671 from patriceac/allow-empty-prompts
Allow empty prompts (image modifiers only)
2022-12-19 11:50:18 +05:30
c3126f7b4d Merge pull request #673 from jsuelwald/patch-1
Change time display on job
2022-12-19 11:48:38 +05:30
cb3b542363 Merge pull request #675 from JeLuF/drag
Add drag handle
2022-12-19 09:36:44 +05:30
1a5e15608c Merge pull request #676 from JeLuF/ipfix
Return empty list if hostname lookup fails
2022-12-19 09:14:40 +05:30
64a751ad79 Merge branch 'beta' into pause 2022-12-19 00:55:56 +01:00
57efe31959 Return empty list if hostname lookup fails 2022-12-19 00:42:48 +01:00
39350d554b Remove old code 2022-12-19 00:32:13 +01:00
8f4e03550c Add drag handle 2022-12-19 00:14:57 +01:00
d03823fb20 Last minute changes 2022-12-18 21:16:32 +00:00
00ec2b9d6f README Updates
Updates to README to make it easier to follow along.
2022-12-18 21:13:12 +00:00
70e4bc4582 Update README.md 2022-12-18 20:52:38 +00:00
5e56a437ef Update README.md 2022-12-18 20:48:19 +00:00
22ffd25619 Change time display on job
Change "Processed 1 image in 150.65 seconds" to "Processed 1 Image in 2 minutes 30 seconds" to be consistent with the approx. time remaining while rendering
2022-12-18 07:20:42 +01:00
127949c56b Allow empty prompts (image modifiers only)
Allows empty prompts as long as there are image modifiers. This allows the user to craft prompts just by using image modifiers if they so wish.
2022-12-17 17:06:07 -08:00
cdfef16a0e Merge pull request #670 from patriceac/collapsible-toggle-event
Fire an event when a collapsible is toggled
2022-12-17 16:49:51 +05:30
1cae39b105 Fire an event when a collapsible is toggled
Need an event to know that a collapsible got toggled to be able to resize the panels accordingly. Thanks!
2022-12-17 03:05:43 -08:00
c240d6932a Update CHANGES.md 2022-12-17 10:13:23 +05:30
c4548d9396 Merge pull request #669 from JeLuF/hover
CSS only initimg hover, 'use as input' button
2022-12-17 09:50:46 +05:30
aea70e3dd4 Merge pull request #668 from JeLuF/imgedit
Fix img resize issues, add redo/undo buttons
2022-12-17 09:50:07 +05:30
3b01e65e11 CSS only initimg hover, 'use as input' button 2022-12-17 01:30:30 +01:00
341c810bbb Fix img resize issues, add redo/undo buttons 2022-12-17 00:29:54 +01:00
85fd2dfaaa Merge pull request #664 from patriceac/tab-change-trigger
Fire an event upon tab change
2022-12-16 18:24:10 +05:30
bf4bc38c6c Merge pull request #662 from JeLuF/patch-7
Linux uses .zip, not .tar.xz (Fixes #657)
2022-12-16 18:23:44 +05:30
62553dc0fa Fire an event upon tab change
Fire an event upon tab change.
2022-12-16 01:45:58 -08:00
ef7e1575bd Linux uses .zip, not .tar.xz (Fixes #657) 2022-12-15 16:44:43 +01:00
7eb29fa91b Fix: errors were overwritten by the time taken in the UI 2022-12-14 16:52:46 +05:30
34c00fb77f Fix: errors were overwritten by the time taken in the UI 2022-12-14 16:51:30 +05:30
7965318d9f Update task_manager.py 2022-12-14 16:49:59 +05:30
e73a514e29 Revert a recent change to task error reporting, seems unstable 2022-12-14 16:37:45 +05:30
35571eb14d Don't hang the task if something other than the renderer fails (e.g. model loading) 2022-12-13 12:03:34 +05:30
8e6102ad9a removeTask() 2022-12-13 12:03:30 +05:30
80bc80dc2c removeTask() 2022-12-13 12:02:43 +05:30
f00e1a92d8 Don't hang the task if something other than the renderer fails (e.g. model loading) 2022-12-13 11:44:20 +05:30
a289945e8e Merge pull request #654 from jsuelwald/beta
The exception should also mention dpm2
2022-12-12 21:05:03 +05:30
b750c0d7c3 The exception should also mention dpm2 2022-12-12 16:24:03 +01:00
0307114c8e Merge pull request #653 from cmdr2/beta
Don't collapse the task entry if 'Stop Task' is pressed
2022-12-12 19:56:49 +05:30
92030a3917 Don't collapse the task entry if 'Stop Task' is pressed 2022-12-12 19:56:27 +05:30
73ace121a4 Merge pull request #652 from cmdr2/beta
Beta
2022-12-12 19:49:21 +05:30
44d5809e46 Changelog 2022-12-12 19:46:13 +05:30
5c4e6f7e96 Tweak editor width 2022-12-12 19:42:43 +05:30
8c032579b8 Hide the hypernetwork strength slider if no hypernetwork model is selected; Support drag-n-drop for hypernetwork models 2022-12-12 19:31:59 +05:30
b53935bfd4 Revert "Scrolling panes (#632)"
This reverts commit e3184622e8.
2022-12-12 19:03:16 +05:30
d4db027cfa Move the hypernetwork options below the sampler settings; Whitespace fixes 2022-12-12 19:02:34 +05:30
1f44a283b3 Update 'release-notes' to use loadScript 2022-12-12 02:47:42 -05:00
9947c3bcfb Start timer to IDLE_COOLDOWN before idleEventPromise completes. (#649) 2022-12-12 11:12:11 +05:30
8faf6b9f52 Don't allow to make zero images, make at least one. (#647) 2022-12-12 11:11:33 +05:30
bd1bc78953 Use onIdle(), move pause button, quick resume without using the promise 2022-12-11 14:57:01 +01:00
e6346775e7 Merge branch 'beta' into pause 2022-12-11 11:19:48 +01:00
af5c68051a Fix for the tooltips being cutoff (#636) 2022-12-11 12:59:23 +05:30
5b7cd11de8 Added support for Async events (#643)
* Added support for async events callbacks

* Don't fire IDLE event if the first callback hasn't completed execution.
2022-12-11 11:22:52 +05:30
d3c3496e55 Merge pull request #639 from madrang/newEngine
Check if window is defined. Not all JS execution environments have it.
2022-12-11 11:19:11 +05:30
c08c8b2789 Merge pull request #638 from JeLuF/initimg
show initimg in task list
2022-12-11 11:18:10 +05:30
069315e434 Merge pull request #642 from patriceac/patch-5
Fixing a typo
2022-12-11 11:16:24 +05:30
7e4ad83a1c Merge pull request #637 from madrang/mainjs_fixes
Fix (typeof stepUpdate !== 'object') not completing the task on stop.
2022-12-11 11:15:31 +05:30
400f9fd680 Merge pull request #635 from patriceac/patch-4
Store the auto-scroll checkbox setting in localStorage instead of using the auto-save framework
2022-12-11 11:06:19 +05:30
38951f5581 Pause button - check whether function is defined before calling it 2022-12-11 02:49:49 +01:00
b5329ee93d Fixing a typo
Yeah, I know... What can I say? I have my OCD too. 👀
2022-12-10 17:45:14 -08:00
c568bca69e Pause button 2022-12-11 02:31:23 +01:00
7b2be12587 Check if window is defined. Not all JS execution environments have it. 2022-12-10 18:26:48 -05:00
099fde2652 show initimg in task list 2022-12-10 17:17:37 +01:00
83e5410945 Fix (typeof stepUpdate !== 'object') not completing the task on stop. 2022-12-10 00:52:27 -05:00
b330c34b29 Fix auto-scroll setting management
After thinking about it, the auto-save toggle is meant for the *Editor* fields listed behind the Configure button. The auto-scroll toggle is not part of the Editor, and is more akin to a system setting, although it's placed in the main UI for convenience reasons related to its nature. As such, and especially considering it's a plugin, I lean towards decoupling auto-scroll from the auto-save settings, and just storing it independently.
2022-12-09 19:34:41 -08:00
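(Editor's illustration — a minimal sketch of the approach this commit describes, persisting a toggle directly in localStorage instead of the auto-save framework. The storage key and element id below are assumptions, not the project's actual code.)

```js
// Hypothetical sketch: persist the auto-scroll checkbox independently in
// localStorage, rather than routing it through the auto-save framework.
const AUTO_SCROLL_KEY = "auto_scroll_enabled" // assumed storage key

function bindAutoScrollToggle(checkbox) {
    // restore the saved state (localStorage stores strings, hence the comparison)
    checkbox.checked = localStorage.getItem(AUTO_SCROLL_KEY) === "true"
    // save every change immediately
    checkbox.addEventListener("change", () => {
        localStorage.setItem(AUTO_SCROLL_KEY, checkbox.checked)
    })
}

// assumed element id, for illustration only
const autoScrollEl = document.querySelector("#auto_scroll")
if (autoScrollEl) {
    bindAutoScrollToggle(autoScrollEl)
}
```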
e3184622e8 Scrolling panes (#632)
Decouple the editor and the preview panes. Scrollbars color updated as well as requested.
2022-12-09 23:11:39 +05:30
28f822afe0 Fix tags not being properly applied to prompt matrix (#610)
There is an issue on the beta where if you use pipe ( | ) in the prompt to make a prompt matrix, the optional prompts are only applied when the last prompt in the matrix is used.
2022-12-09 23:04:25 +05:30
854e3d3576 Fix reading value from undefined. (#631) 2022-12-09 16:34:59 +05:30
ba2c966329 First draft of multi-task in a single session. (#622) 2022-12-08 11:12:46 +05:30
f8dee7e25f Add test sample to one of the plugin. (#626)
* Added test example from a plugin.

* Only load style if #news was created.
2022-12-08 10:57:50 +05:30
a8151176d7 SD 2.1 2022-12-08 10:04:33 +05:30
9ee0b7fe2e SD 2.1 2022-12-08 10:04:14 +05:30
bfdf487d52 SD2 models no longer need to be prefixed with 'sd2_'. The model loader now checks for a key that only SD2 models seem to have, to deduce which config file to use 2022-12-07 16:19:46 +05:30
b7aac1501d Don't show prompt strength when the app starts 2022-12-07 13:12:35 +05:30
273525e6f9 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-07 13:12:02 +05:30
064a4938c1 Don't show prompt strength when the app starts 2022-12-07 13:11:49 +05:30
182236e742 Hypernets mergefixes (#625)
* Add hypernetwork args definition in the engine.

* Add the values to reqBody

* Don't load hypernetwork.py with SD2 until it's compatible.
2022-12-07 12:35:36 +05:30
75cb052cca Paint editor - translucent mask, more brush size options 2022-12-07 12:28:28 +05:30
d4a378827f Paint editor - translucent mask, more brush size options 2022-12-07 12:27:40 +05:30
592d5e8c40 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-07 11:41:52 +05:30
733150111d Changelog 2022-12-07 11:41:36 +05:30
cbe91251ac Hypernetwork support (#619)
* Update README.md

* Update README.md

* Make on_sd_start.sh executable

* Merge pull request #542 from patriceac/patch-1

Fix restoration of model and VAE

* Merge pull request #541 from patriceac/patch-2

Fix restoration of parallel output setting

* Hypernetwork support

Adds support for hypernetworks. Hypernetworks are stored in /models/hypernetworks

* forgot to remove unused code

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
2022-12-07 11:24:16 +05:30
1283c6483d Use the reqBody exposed to events to allow plugins to change the request. (#620) 2022-12-07 09:34:04 +05:30
f24d3d69af Fix download pictures (#616)
Old link was broken. Apparently the "develop" branch was deleted.
2022-12-07 09:33:11 +05:30
7984327d81 Fixed tasks buttons by replacing the error with a warning when setting properties to undefined. (#618) 2022-12-06 21:49:05 +05:30
ef90832aea engine.js (#615)
* New engine.js first draft.

* Small fixes...

* Bump version for cache...

* Improved cancellation code.

* Cleaning

* Wrong argument used in Task.waitUntil

* session_id needs to always match SD.sessionId

* Removed passing explicit Session ID from UI.
Use SD.sessionID to replace.

* Cleaning... Removed a disabled line and a hardcoded value.

* Fix return if tasks are still waiting.

* Added checkbox to reverse processing order.

* Fixed progress not displaying properly.

* Renamed reverse label.

* Only hide progress bar inside onCompleted.

* Thanks to rbertus2000 for helping testing and debugging!

* Resolve async promises when used optionally.

* when removed var should have used let, not const.

* Renamed getTaskErrorHandler to onTaskErrorHandler to better reflect actual implementation.

* Switched to the less safe and less git-friendly end-of-line comma style, as requested in review.

* Raised SERVER_STATE_VALIDITY_DURATION to 90 seconds to match the changes to Beta.

* Added logging.

* Added one more hook before those inside the SD engine.

* Added selftest.plugin.js as part of core.

* Removed a test that wasn't yet implemented...

* Grouped task stopping and abort into a single function.

* Added optional test for plugins.

* Allow prompt text to be selected.

* Added comment.

* Improved isServerAvailable for better mobile usage and added comments for easier debugging.

* Comments...

* Normalized EVENT_STATUS_CHANGED to follow the same pattern as the other events.

* Disable plugins if editorModifierTagsList is not defined.

* Adds a new ServiceContainer to register IOC handlers.

* Added expect test for a missing dependency in a ServiceContainer

* Moved all event code into its own subclass for easier reuse.

* Removed forgotten unused var...

* Allow getPrompts to be reused by plugins.

* Renamed EventSource to GenericEventSource to avoid redefining an existing class name.

* Added missing time argument to debounce

* Added output_quality to engine.js

* output_quality need to be an int.

* Fixed typo.

* Replaced the default euler_a by dpm2 to work with both SD1.# and SD2

* Remove generic completed tasks from plugins on generator complete.

* dpm2 starts at step 2, replaced with plms to start at step 1.

* Merge error

* Merge error

* changelog

Co-authored-by: Marc-Andre Ferland <madrang@gmail.com>
2022-12-06 17:04:08 +05:30
9571b8addc Merge pull request #614 from cmdr2/beta
Beta
2022-12-06 16:18:24 +05:30
9601f304a5 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-12-06 16:17:55 +05:30
ff43dac2a7 Open the color box when the custom label is clicked 2022-12-06 16:17:45 +05:30
0a43305455 Merge pull request #613 from cmdr2/beta
Beta
2022-12-06 16:11:38 +05:30
54d8224de2 Update CHANGES.md 2022-12-06 16:09:58 +05:30
c9e34457cd Tweak text in editor 2022-12-06 15:39:27 +05:30
47c8eb304f Revert the button styling 2022-12-06 15:36:52 +05:30
2dd39fa218 Disable auto-save for the auto-scroll toggle, until a better way to save it is figured out. It currently breaks a few UI fields, since it calls initSettings() a second time 2022-12-06 15:20:31 +05:30
cb618efb98 Image Editor Updates (#612)
* fixed tools for image editor to be more modular and made cursor an actual cursor change

* fixed eraser cursor positioning

* updated opacity to not have a 100 option

* separated clear into an actions section

* added history support for image editor. ctrl-z and ctrl-y both work now

* removed extra console log debugging stuff

* updated buttons style

* updated the button ui on the main page as requested

* updated with a bunch of bugfixes
2022-12-06 13:56:51 +05:30
e7ca8090fd Make JPEG Output quality user controllable (#607)
Add a slider to the image options for the JPEG quality
For PNG images, the slider is hidden.
2022-12-05 11:02:33 +05:30
7861c57317 Safetensor support (Fixes #599) (#608)
* safetensors support
Add support for checkpoints in safetensors format: https://github.com/huggingface/safetensors

This format shall be safer than pickle files

* pip install safetensors
2022-12-05 10:59:48 +05:30
f701b8dc29 Simplify onUpscaleClick (#602)
* Simplified onUpscaleClick code.

* Updated fix with comment as to what it's fixing.

* Move the fix to enqueueImageVariationTask
2022-12-05 10:46:10 +05:30
bd10a850fa Fix upscaling when a source image is set (#593)
* Fix upscaling when a source image is set

If you have an image selected (img2img) then clicking Upscale on another unrelated image, the image for img2img is used and you get something very unexpected.

* Fix for img2img and mask gens
2022-12-03 22:25:14 +05:30
0f96688a54 Highlight artist modifiers when clicked (#596)
Artist modifiers, with the exception of Artstation (the first one), don't have the outline when selected. All the other modifiers, above or below, seem to work as intended

https://discord.com/channels/1014774730907209781/1014774732018683927/1048343258775949322
2022-12-03 22:18:57 +05:30
8eeca90d55 Fix weird scrolling when using a pen (#588)
With a pen, typing on a browser page, waiting a short moment, and then moving the pen scrolls the page.
Call event.preventDefault() to disable this default behaviour for events in the canvas area.
2022-12-02 14:40:21 +05:30
367e7f7065 Add dpm2 (#592)
* Move cond_stage_model to the right device

* Removed unused vars.

* Added 'dpm2'
2022-12-02 12:58:00 +05:30
ee19eaae62 Fix for RuntimeError, missing lines. (#591)
* Move cond_stage_model to the right device

* Removed unused vars.
2022-12-02 12:57:26 +05:30
8eb3a3536b Update on_sd_start.bat 2022-12-02 12:06:41 +05:30
cfd50231e1 Update on_sd_start.sh 2022-12-02 12:06:39 +05:30
1c8ab9e1b4 Temporarily set the display: flex style only on the image editor buttons 2022-12-01 16:59:12 +05:30
6094cd8578 Fix the 'load from file' button that had moved to the next line 2022-12-01 16:10:20 +05:30
353c49a40b Bump version 2022-12-01 16:05:35 +05:30
277140f218 Image Editor (#574)
* started implementing hamunii's image editor, and added a hamunii theme

* fixed so active tab is main tab

* added some testing stuff for the image editor

* re-implemented canvas drawing myself. just need to add layer stuff now

* moved everything to an image editor class and implement it so it actually works nicely now

* fixed a couple weird bugs and cleaned up the background image and sharpness stuff

* cleaned up a lot of stuff about the editor, added tools, buttons, made it mostly work in the current ui

* added inpainting support

* updated with more nice changes/updates to the inpainting and drawing editor

* made some more fixes and touchups to the image editor

* removed a bunch of semicolons

* remove old image inpainting system

* updated to work properly on mobile

* made a minor bugfix

* fixed img_size_box alignment

* Update index.html

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
2022-12-01 16:01:09 +05:30
ca9413ccf4 Toggle image modifiers plugin (#558)
* Toggle image modifiers plugin

Right-click on image modifiers to temporarily turn them off without removing them, to quickly iterate and experiment with various combinations.

Please note this plugin required a minor tweak in getPrompts() to add support for image modifier inactive state.

* Fix tag matching

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
2022-12-01 15:10:36 +05:30
c9a0d090cb Merge pull request #569 from patriceac/Fix-seed-behavior
Tweak the seed behavior
2022-12-01 15:03:21 +05:30
1cd783d3a3 Merge pull request #534 from patriceac/Custom-modifiers-as-a-plugin
Custom modifiers as a plugin
2022-12-01 14:59:29 +05:30
1ead764a02 Merge branch 'beta' into Custom-modifiers-as-a-plugin 2022-12-01 14:57:39 +05:30
45f7b35954 Merge pull request #581 from patriceac/Hotfix-for-repeat-image-modifiers-handling
Hotfix for repeat image modifiers
2022-12-01 14:47:15 +05:30
6a41540749 Remove unused scripts from the previous installer 2022-12-01 14:44:20 +05:30
5b47da67f6 Merge pull request #582 from cmdr2/beta
Beta
2022-12-01 13:59:13 +05:30
292f68ff97 Typo in css path 2022-12-01 13:57:38 +05:30
3b554d881a Styling changes for the confirm dialog 2022-12-01 13:54:49 +05:30
40ebf468d3 Hotfix for repeat image modifiers
As per the Discord conversation, this PR fixes the image modifier behavior when a modifier appears more than once, and also fixes a regression introduced by ((weighted modifiers)).
2022-11-30 22:13:13 -08:00
4bc6e51862 Merge pull request #580 from JeLuF/patch-5
Add Quadro T2000 to force_full_precision list.
2022-12-01 10:35:32 +05:30
427861cf13 Add Quadro T2000 to force_full_precision list. 2022-12-01 00:59:12 +01:00
da3e7a2eb8 Fix the broken image close button 2022-11-30 21:14:18 +05:30
2979f04c82 Use socket.gethostname() instead of socket.getfqdn() 2022-11-30 20:17:18 +05:30
1949d8a50c Tweak modifiers help msg 2022-11-30 16:32:43 +05:30
ee66c799e0 Merge pull request #563 from patriceac/Mouse-wheel-behavior-fixes
Improved Mouse Wheel UX with Image Modifiers
2022-11-30 16:24:05 +05:30
7c50b8bf94 Merge branch 'beta' into Mouse-wheel-behavior-fixes 2022-11-30 16:22:45 +05:30
141ff74ece Merge pull request #557 from madrang/webmanifest
Added web manifest to allow installing the Url as a web app.
2022-11-30 16:19:04 +05:30
321e5f1ed6 Merge pull request #564 from patriceac/Fix-UI-display-when-removing-the-last-task
Fix UI display when removing the last task
2022-11-30 16:14:59 +05:30
6d131d9d8e Merge branch 'beta' into Fix-UI-display-when-removing-the-last-task 2022-11-30 16:14:28 +05:30
7e69b8eb31 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-30 16:11:22 +05:30
4e0b33e6a4 Merge pull request #566 from patriceac/Visual-feedback-on-buttons
Visual feedback on button click
2022-11-30 16:11:08 +05:30
54f7e6fcb8 SD2 fix - register buffer on the correct device 2022-11-30 16:05:06 +05:30
529169c4da Merge pull request #541 from patriceac/patch-2
Fix restoration of parallel output setting
2022-11-30 15:54:04 +05:30
a2c8c99215 Merge pull request #541 from patriceac/patch-2
Fix restoration of parallel output setting
2022-11-30 15:53:30 +05:30
e8bf3fd009 Merge pull request #542 from patriceac/patch-1
Fix restoration of model and VAE
2022-11-30 15:52:26 +05:30
465676e9ea Merge pull request #542 from patriceac/patch-1
Fix restoration of model and VAE
2022-11-30 15:51:31 +05:30
af53b57047 Changelog 2022-11-30 15:49:47 +05:30
54b5f75905 Rename auto-scroll to reflect its purpose 2022-11-30 15:47:24 +05:30
4348333497 Don't register listeners for an autosave setting, if they've already been registered 2022-11-30 15:45:30 +05:30
cc31110bcf Merge pull request #537 from patriceac/Generate-screen-layout
Auto-scroll plugin
2022-11-30 15:44:31 +05:30
f7c04bf7a6 bump version 2022-11-30 14:34:42 +05:30
029509ebad Unify IP info with devices, into a system_info table 2022-11-30 14:34:24 +05:30
65102bb64d Merge pull request #536 from JeLuF/serverip
Show network addresses in system settings
2022-11-30 14:00:18 +05:30
b96b55c5ce Merge branch 'beta' into serverip 2022-11-30 14:00:12 +05:30
1f5aba010e Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into webmanifest
# Conflicts:
#	ui/index.html
2022-11-30 03:29:46 -05:00
f0b3bea4e3 Also confirm before the 'Stop All' button acts; Tweak wording of confirm dialog 2022-11-30 13:54:42 +05:30
84fae2d9e0 Merge pull request #531 from JeLuF/confirm
Confirm 'Clear All' and 'Stop Task'
2022-11-30 13:48:14 +05:30
0b96fa112d Merge branch 'beta' into confirm 2022-11-30 13:47:08 +05:30
c64bcd23d3 Picklescanner is mandatory 2022-11-30 13:38:22 +05:30
efd9a22bb5 Merge pull request #530 from madrang/list-models
Scan model once as start, then only if changed.
2022-11-30 13:37:27 +05:30
159c3edfe3 Simplify the logic for toggling modifier cards, no need to loop through the cards, since we already have the card object in hand 2022-11-30 13:33:20 +05:30
f74fa8657b Merge pull request #518 from patriceac/patch-6
Fix duplicate custom modifiers activation states
2022-11-30 13:27:14 +05:30
648b142a4b Merge pull request #571 from madrang/tabs-css
Add a new css rule for screens smaller than 500px.
2022-11-30 13:24:38 +05:30
426f92595e Merge pull request #520 from madrang/fix-gfpgan
Fix the gfpgan fix for multi-gpu
2022-11-30 13:08:10 +05:30
82a8d9b644 Merge pull request #577 from cmdr2/beta
Beta
2022-11-30 12:19:44 +05:30
ff9430b8a2 Tabs to 4 spaces 2022-11-30 12:18:34 +05:30
2e69ffcb5e Merge pull request #576 from cmdr2/beta
v2.4.16 - Remove the use of git-apply
2022-11-30 12:12:17 +05:30
0ea38db7ef Show the SD 2.0 setting only to beta users 2022-11-30 12:05:46 +05:30
a69d4c279e Make seed field behavior deterministic
Copying the image settings while 'Random' is enabled would cause the seed to be randomized. This was misleading as what I see wasn't what I would get.
2022-11-29 19:04:42 -08:00
2706149399 Tweak left padding of editor panel 2022-11-29 15:27:13 +05:30
3d0cdc1cb6 Bump version 2022-11-29 13:32:29 +05:30
ac605e9352 Typos and minor fixes for sd 2 2022-11-29 13:30:08 +05:30
5432297691 Default to sd-v1-4 when trying to use a SD2 model with SD 1.4, and warn the user. This will eventually be unnecessary 2022-11-29 13:14:58 +05:30
e37be0f954 Remove the need to use yield in the core loop for streaming results. This removes the need to patch the Stable Diffusion code, which can be fragile 2022-11-29 13:03:57 +05:30
a99209b674 Add a new css rule for screens smaller than 500px. 2022-11-28 20:23:17 -05:00
cb02b5ba18 Merge pull request #567 from madrang/tabs-css
Improved tabs flow on small screens.
2022-11-28 18:27:29 +05:30
69f14edd80 Tweak the seed behavior
Update the seed *before* starting the processing, so interrupting the processing retains the seed being used for the batch being currently processed.

The idea behind that is that if I like the gen I'm currently seeing and want to build on top of it, I can create a new task with the same seed without having to wait for the current task to complete.
2022-11-28 01:19:31 -08:00
14714b950d Slight improvement of detection logic 2022-11-28 00:14:12 -08:00
13654cb8c0 Make on_sd_start.sh executable 2022-11-28 13:00:02 +05:30
00276228cf Make on_sd_start.sh executable 2022-11-28 12:59:33 +05:30
8583bb8d7b Improved tabs flow on small screens. 2022-11-27 20:37:20 -05:00
d48951fe00 Visual feedback on button click
When there are too many tasks and the top of the list is not visible, there is no visual feedback that a task has been successfully added to the queue.

Adding a subtle visual feedback on buttons upon click to reflect that the mouse event was taken into account.
2022-11-27 16:26:01 -08:00
99bdcfa0a5 Set theme-color from the current selected theme. 2022-11-27 15:49:54 -05:00
e64e1a92e6 Fix UI display when removing the last task
The Clear All button properly shows the "welcome message", but removing the last task would just result in a blank Preview pane.
2022-11-27 12:42:51 -08:00
e278e639a3 Fix removal of image modifiers with non-zero weights
Properly handles removal of image modifiers that had (((modifiers))) or [[[modifiers]]] updated at runtime.
2022-11-27 03:00:19 -08:00
c4bad5c454 Conciseness
Shortening the sentence.
2022-11-27 01:42:39 -08:00
da41a74efc Require Ctrl+Mouse Wheel for modifier weight adjustment
The current behavior is just too annoying, and scrolling the page is a much more frequent activity than tweaking the weights.
2022-11-27 01:35:47 -08:00
0dc970562a Reverting this unnecessary change 2022-11-26 18:27:04 -08:00
2d8401473d Revert "Update custom-modifiers.plugin.js"
This reverts commit e5c11ea214.
2022-11-26 16:57:54 -08:00
9c91f57b19 Added web manifest to allow installing the Url as a web app. 2022-11-26 15:51:26 -05:00
f14afcd129 Update README.md 2022-11-26 12:44:19 +05:30
5c1a3d82d7 Update README.md 2022-11-26 12:23:03 +05:30
e02a917569 Improved logic for auto-scroll toggle insertion
Updating the insertion logic to prepare for future UI improvements.
2022-11-25 22:51:49 -08:00
347fa0fda1 Update on_sd_start.bat 2022-11-26 01:50:30 +05:30
6510d4cb02 Merge pull request #553 from cmdr2/revert-552-beta
Revert "Patching patch again"
2022-11-26 01:45:57 +05:30
91e4ccf6f8 Update on_sd_start.bat 2022-11-26 01:43:41 +05:30
36249874bc Revert "Patching patch again" 2022-11-26 01:42:16 +05:30
d2b5d6cce9 Merge pull request #552 from jsuelwald/beta
Patching patch again
2022-11-26 01:40:02 +05:30
b2922741c9 Patching patch again 2022-11-25 21:06:19 +01:00
300f3e27db Merge pull request #551 from cmdr2/revert-549-patch-2
Revert "Update ddim_callback_sd2.patch"
2022-11-26 01:25:17 +05:30
d7330b80a9 Revert "Update ddim_callback_sd2.patch" 2022-11-26 01:22:35 +05:30
acdd7667b7 Merge pull request #549 from jsuelwald/patch-2
Update ddim_callback_sd2.patch
2022-11-26 01:19:08 +05:30
8114fa3f5d Update ddim_callback_sd2.patch 2022-11-25 20:46:24 +01:00
4bc5508f38 Rollback 2022-11-26 01:07:55 +05:30
e503c6092e Ddim decode for img2img 2022-11-26 00:55:39 +05:30
6a8985d8dd Update ddim_callback_sd2.patch 2022-11-26 00:49:15 +05:30
bee67fd883 Shape 2022-11-25 23:54:08 +05:30
a1d75d40aa Update runtime.py 2022-11-25 23:36:43 +05:30
29484867ca Typo 2022-11-25 23:32:56 +05:30
7fa983b971 Img2img sd2 attempt 2 2022-11-25 23:28:31 +05:30
617a8b2814 Fix for make_schedule error in sd2 2022-11-25 23:15:22 +05:30
b924d323d4 img2img attempt for sd2 2022-11-25 22:36:02 +05:30
a2efda41d3 Cleaning up the code 2022-11-25 03:50:47 -08:00
642c114501 Working txt2img 2022-11-25 14:29:24 +05:30
02dd3e457d Tweaks to load sd1 models in sd2 code, typos 2022-11-25 13:57:15 +05:30
ea7b28c9d5 Placeholder changes for SD 2.0 support, haven't tested yet 2022-11-25 12:17:44 +05:30
472ab4a9ce Fix restoration of parallel output setting 2022-11-24 14:15:27 -08:00
fca84e3edf Fix restoration of model and VAE
😅
2022-11-24 13:47:35 -08:00
b70235ff92 Set the PYTHONPATH in the developer console, before the prompt shows up 2022-11-24 11:48:27 +05:30
6eff591df7 System settings to disable the 'Are you sure?'-dialogs 2022-11-23 23:05:30 +01:00
d0b2bf736e Auto-scroll off by default 2022-11-23 03:23:51 -08:00
e5c11ea214 Update custom-modifiers.plugin.js
Removing the redundant initialization of the array.
2022-11-23 03:00:19 -08:00
6b6443406d Create Autoscroll.plugin.js 2022-11-23 02:57:07 -08:00
3452d7852a Merge branch 'beta' into serverip 2022-11-23 11:28:05 +01:00
f1fa10badd Show network addresses in system settings
Users sometimes struggle to get the IP address of their PC. This PR adds a button to the system settings pane that will list the server's IP
addresses.
2022-11-23 11:25:36 +01:00
1267621424 Merge pull request #535 from cmdr2/beta
Switch to new custom backend
2022-11-23 15:09:09 +05:30
8a0ec95fe1 Merge branch 'main' into beta 2022-11-23 15:08:34 +05:30
ba30a63407 Update custom-modifiers.plugin.js
Add a carriage return at the end
2022-11-22 23:07:44 -08:00
c56a2adbcb Custom modifiers as a plugin 2022-11-22 19:04:20 -08:00
2de96d4dc9 Scan model once as start, then only if changed. 2022-11-22 20:41:08 -05:00
a486f20892 Merge branch 'beta' into confirm 2022-11-22 21:33:18 +01:00
49535deb2e Confirm 'Clear All' and 'Stop Task'
Ask for a confirmation before clearing the results pane or stopping a render task. The dialog can be skipped by holding down the shift key while clicking on the button.
2022-11-22 21:27:36 +01:00
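(Editor's illustration — a rough sketch of the behavior described above: confirm a destructive action unless Shift is held. Function and element names are assumptions, not the project's actual implementation.)

```js
// Hypothetical sketch: confirm before a destructive action, unless Shift is held.
function withConfirmation(button, message, action) {
    button.addEventListener("click", (event) => {
        // holding Shift while clicking skips the dialog entirely
        if (event.shiftKey || window.confirm(message)) {
            action()
        }
    })
}

// assumed ids and handler names, for illustration only:
// withConfirmation(document.querySelector("#clear-all-btn"),
//                  "Clear all the results?", clearAllResults)
```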
7cbf62cf12 Revert whitespace fix 2022-11-22 23:30:03 +05:30
3b0ace3410 Revert whitespace fix 2022-11-22 23:27:46 +05:30
5a9c8e1d87 Warn but don't fix whitespaces in a patch 2022-11-22 23:21:11 +05:30
daaa65dc0a Warn but don't fix whitespaces in a patch 2022-11-22 23:20:24 +05:30
ab4e371524 Fix whitespace during git apply 2022-11-22 22:25:36 +05:30
927fd304b0 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-22 22:22:07 +05:30
5af84b8e90 Fix whitespace during git apply 2022-11-22 22:21:54 +05:30
d425dac499 Merge pull request #529 from madrang/dragNdrop
Fixing file drag and drop.
2022-11-22 21:57:34 +05:30
d056459e76 Merge pull request #529 from madrang/dragNdrop
Fixing file drag and drop.
2022-11-22 21:56:07 +05:30
3169485f33 Fixing file drag and drop. 2022-11-22 11:11:06 -05:00
d9b9f80a93 diffusion-kit upgrade 2022-11-22 17:39:51 +05:30
d429505b71 Update version of diffusion-kit 2022-11-22 17:14:20 +05:30
72ee708917 Remove the need to install realesrgan, gfpgan and certain specific package versions, since the new backend should install them directly 2022-11-22 16:50:10 +05:30
93bbfac29a Change the backend to a custom fork of SD, since basujindal's fork is no longer under development. This fork is intended to include the common models/tools used like RealESRGAN, GFPGAN, Codeformer etc, and is meant to be a community-developed project 2022-11-22 16:38:39 +05:30
040d7a6563 Merge pull request #528 from patriceac/patch-1
Add support for custom modifiers to d&d and clipboard
2022-11-22 16:06:26 +05:30
e8dd930a50 Add support for custom modifiers to d&d and clipboard
Add support for custom modifiers to d&d and clipboard and remove now-redundant code in restoreTaskToUI.
2022-11-22 00:06:43 -08:00
31c049ebfe Version css 2022-11-22 11:09:01 +05:30
d343a37fb2 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-22 11:08:07 +05:30
7097175c6f CSS tweak for logo and version 2022-11-22 11:07:50 +05:30
8e57c49043 Merge pull request #527 from cmdr2/beta
Beta
2022-11-22 11:00:25 +05:30
9f036ceefd Merge branch 'main' into beta 2022-11-22 10:59:51 +05:30
ff3ca8b36b link to new downloads 2022-11-22 10:48:43 +05:30
87a7b70a27 Shell error code check 2022-11-22 10:40:20 +05:30
9c71c966ca Shell error code check 2022-11-22 10:39:47 +05:30
6dc99e676e Reduce the width of the editor sidebar, regression 2022-11-21 18:45:37 +05:30
3bf5e11f94 Nowarn for fresh installation (git apply whitespace) 2022-11-21 17:19:55 +05:30
eef9af2266 Typo 2022-11-21 17:14:54 +05:30
8316a002da Don't warn about whitespace in the git patch application 2022-11-21 17:11:38 +05:30
c3bf767024 Merge pull request #525 from cmdr2/beta
Beta
2022-11-21 14:08:47 +05:30
0a21a69a9f Updated facexlib fix for usage on multi-gpu. 2022-11-20 13:04:22 -05:00
cbc48e31e1 Fix duplicate custom modifiers activation states
Fixing activation state for custom modifier cards sharing the same tag where only one of the cards gets (de)activated.
2022-11-19 19:25:28 -08:00
59 changed files with 17932 additions and 1733 deletions

3rd-PARTY-LICENSES (new file)

@@ -0,0 +1,27 @@
jquery-confirm
==============
https://craftpip.github.io/jquery-confirm/
jquery-confirm is licensed under the MIT license:
The MIT License (MIT)
Copyright (c) 2019 Boniface Pereira
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

CHANGES.md

@@ -2,7 +2,10 @@
## v2.4
### Major Changes
- **Automatic scanning for malicious model files** - using `picklescan`. Thanks @JeLuf
- **Allow reordering the task queue** (by dragging and dropping tasks). Thanks @madrang
- **Automatic scanning for malicious model files** - using `picklescan`, and support for `safetensor` model format. Thanks @JeLuf
- **Image Editor** - for drawing simple images for guiding the AI. Thanks @mdiller
- **Use pre-trained hypernetworks** - for improving the quality of images. Thanks @C0bra5
- **Support for custom VAE models**. You can place your VAE files in the `models/vae` folder, and refresh the browser page to use them. More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder
- **Experimental support for multiple GPUs!** It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. For e.g. open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, otherwise let it automatically pick the best GPUs. Thanks @madrang . More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Run-on-Multiple-GPUs
- **Cleaner UI design** - Show settings and help in new tabs, instead of dropdown popups (which were buggy). Thanks @mdiller
@@ -19,8 +22,34 @@
- Configuration to prevent the browser from opening on startup
- Lots of minor bug fixes
- A `What's New?` tab in the UI
- Ask for a confirmation before clearing the results pane or stopping a render task. The dialog can be skipped by holding down the shift key while clicking on the button.
- Show the network addresses of the server in the systems setting dialog
- Support loading models in the safetensor format, for improved safety
### Detailed changelog
* 2.4.24 - 9 Jan 2023 - Urgent fix for failures on old/long-term-support browsers. Thanks @JeLuf.
* 2.4.23/22 - 29 Dec 2022 - Allow rolling back from the upcoming v2.5 change (in beta).
* 2.4.21 - 23 Dec 2022 - Speed up image creation, by removing a delay (regression) of 4-5 seconds between clicking the `Make Image` button and calling the server.
* 2.4.20 - 22 Dec 2022 - `Pause All` button to pause all the pending tasks. Thanks @JeLuf
* 2.4.20 - 22 Dec 2022 - `Undo`/`Redo` buttons in the image editor. Thanks @JeLuf
* 2.4.20 - 22 Dec 2022 - Drag handle to reorder the tasks. This fixed a bug where the metadata was no longer selectable (for copying). Thanks @JeLuf
* 2.4.19 - 17 Dec 2022 - Add Undo/Redo buttons in the Image Editor. Thanks @JeLuf
* 2.4.19 - 10 Dec 2022 - Show init img in task list
* 2.4.19 - 7 Dec 2022 - Use pre-trained hypernetworks while generating images. Thanks @C0bra5
* 2.4.19 - 6 Dec 2022 - Allow processing new tasks first. Thanks @madrang
* 2.4.19 - 6 Dec 2022 - Allow reordering the task queue (by dragging tasks). Thanks @madrang
* 2.4.19 - 6 Dec 2022 - Re-organize the code, to make it easier to write user plugins. Thanks @madrang
* 2.4.18 - 5 Dec 2022 - Make JPEG Output quality user controllable. Thanks @JeLuf
* 2.4.18 - 5 Dec 2022 - Support loading models in the safetensor format, for improved safety. Thanks @JeLuf
* 2.4.18 - 1 Dec 2022 - Image Editor, for drawing simple images for guiding the AI. Thanks @mdiller
* 2.4.18 - 1 Dec 2022 - Disable an image modifier temporarily by right-clicking it. Thanks @patriceac
* 2.4.17 - 30 Nov 2022 - Scroll to generated image. Thanks @patriceac
* 2.4.17 - 30 Nov 2022 - Show the network addresses of the server in the systems setting dialog. Thanks @JeLuf
* 2.4.17 - 30 Nov 2022 - Fix a bug where GFPGAN wouldn't work properly when multiple GPUs tried to run it at the same time. Thanks @madrang
* 2.4.17 - 30 Nov 2022 - Confirm before stopping or clearing all the tasks. Thanks @JeLuf
* 2.4.16 - 29 Nov 2022 - Bug fixes for SD 2.0 - remove the need for patching, default to SD 1.4 model if trying to load an SD2 model in SD1.4.
* 2.4.15 - 25 Nov 2022 - Experimental support for SD 2.0. Uses lots of memory, not optimized, probably GPU-only.
* 2.4.14 - 22 Nov 2022 - Change the backend to a custom fork of Stable Diffusion
* 2.4.13 - 21 Nov 2022 - Change the modifier weight via mouse wheel, drag to reorder selected modifiers, and some more modifier-related fixes. Thanks @patriceac
* 2.4.12 - 21 Nov 2022 - Another fix for improving how long images take to generate. Reduces the time taken for an enqueued task to start processing.
* 2.4.11 - 21 Nov 2022 - Installer improvements: avoid crashing if the username contains a space or special characters, allow moving/renaming the folder after installation on Windows, whitespace fix on git apply


@@ -6,7 +6,7 @@ Thanks
# For developers:
If you would like to contribute to this project, there is a discord for dicussion:
If you would like to contribute to this project, there is a discord for discussion:
[![Discord Server](https://badgen.net/badge/icon/discord?icon=discord&label)](https://discord.com/invite/u9yhsFmEkB)
## Development environment for UI (frontend and server) changes

README.md

@@ -1,66 +1,107 @@
# Stable Diffusion UI
### Easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.
### The easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community.
[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](Troubleshooting.md)
[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting)
### New:
Experimental support for Stable Diffusion 2.0 is available in beta!
----
## Step 1: Download the installer
# Step 1: Download and prepare the installer
Click the download button for your operating system:
<p float="left">
<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-win.png" width="200" /></a>
<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-linux.png" width="200" /></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.4.13/stable-diffusion-ui-windows.zip"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-win.png" width="200" /></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.4.13/stable-diffusion-ui-linux.zip"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/download-linux.png" width="200" /></a>
</p>
## Step 2: Run the program
- On Windows: Double-click `Start Stable Diffusion UI.cmd`
- On Linux: Run `./start.sh` in a terminal
## On Windows:
1. Unzip/extract the folder `stable-diffusion-ui` which should be in your downloads folder, unless you changed your default downloads destination.
2. Move the `stable-diffusion-ui` folder to your `C:` drive (or any other drive like `D:`, at the top root level). `C:\stable-diffusion-ui` or `D:\stable-diffusion-ui` as examples. This will avoid a common problem with Windows (file path length limits).
## On Linux:
1. Unzip/extract the folder `stable-diffusion-ui` which should be in your downloads folder, unless you changed your default downloads destination.
2. Open a terminal window, and navigate to the `stable-diffusion-ui` directory.
## Step 3: There is no step 3!
It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.
# Step 2: Run the program
## On Windows:
Double-click `Start Stable Diffusion UI.cmd`.
If Windows SmartScreen prevents you from running the program click `More info` and then `Run anyway`.
## On Linux:
Run `./start.sh` (or `bash start.sh`) in a terminal.
The installer will take care of whatever is needed. A friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) will help you if you face any problems.
The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance.
# Step 3: There is no Step 3. It's that simple!
**To Uninstall:** Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
----
# Easy for new users, powerful features for advanced users
### Features:
- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
- **Clutter-free UI**: a friendly and simple UI, while providing a lot of powerful features
- Supports "*Text to Image*" and "*Image to Image*"
- **Custom Models**: Use your own `.ckpt` file, by placing it inside the `models/stable-diffusion` folder!
- **Live Preview**: See the image as the AI is drawing it
- **Task Queue**: Queue up all your ideas, without waiting for the current task to finish
- **In-Painting**: Specify areas of your image to paint into
- **Face Correction (GFPGAN) and Upscaling (RealESRGAN)**
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **Loopback**: Use the output image as the input image for the next img2img task
## Features:
### User experience
- **Hassle-free installation**: Does not require technical knowledge, does not require pre-installed software. Just download and run!
- **Clutter-free UI**: A friendly and simple UI, while providing a lot of powerful features.
### Image generation
- **Supports**: "*Text to Image*" and "*Image to Image*".
- **In-Painting**: Specify areas of your image to paint into.
- **Simple Drawing Tool**: Draw basic images to guide the AI, without needing an external drawing program.
- **Face Correction (GFPGAN)**
- **Upscaling (RealESRGAN)**
- **Loopback**: Use the output image as the input image for the next img2img task.
- **Negative Prompt**: Specify aspects of the image to *remove*.
- **Attention/Emphasis:** () in the prompt increases the model's attention to enclosed words, and [] decreases it
- **Weighted Prompts:** Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`
- **Prompt Matrix:** (in beta) Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`
- **Lots of Samplers:** ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
- **Multiple Prompts File:** Queue multiple prompts by entering one prompt per line, or by running a text file
- **NSFW Setting**: A setting in the UI to control *NSFW content*
- **JPEG/PNG output**
- **Save generated images to disk**
- **Attention/Emphasis**: () in the prompt increases the model's attention to enclosed words, and [] decreases it.
- **Weighted Prompts**: Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`.
- **Prompt Matrix**: Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`.
- **Lots of Samplers**: ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms.
- **1-click Upscale/Face Correction**: Upscale or correct an image after it has been generated.
- **Make Similar Images**: Click to generate multiple variations of a generated image.
- **NSFW Setting**: A setting in the UI to control *NSFW content*.
- **JPEG/PNG output**: Multiple file formats.
### Advanced features
- **Custom Models**: Use your own `.ckpt` or `.safetensors` file, by placing it inside the `models/stable-diffusion` folder!
- **Stable Diffusion 2.0 support (experimental)**: available in beta channel.
- **Use custom VAE models**
- **Use pre-trained Hypernetworks**
- **UI Plugins**: Choose from a growing list of [community-generated UI plugins](https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Plugins), or write your own plugin to add features to the project!
### Performance and security
- **Low Memory Usage**: Creates 512x512 images with less than 4GB of GPU RAM!
- **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
- **Multi-GPU support**: Automatically spreads your tasks across multiple GPUs (if available), for faster performance!
- **Auto scan for malicious models**: Uses picklescan to prevent malicious models.
- **Safetensors support**: Support loading models in the safetensor format, for improved safety.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- **Low Memory Usage**: Creates 512x512 images with less than 4GB of VRAM!
- **Developer Console**: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
### Easy for new users:
### Usability:
- **Live Preview**: See the image as the AI is drawing it.
- **Task Queue**: Queue up all your ideas, without waiting for the current task to finish.
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **Multiple Prompts File**: Queue multiple prompts by entering one prompt per line, or by running a text file.
- **Save generated images to disk**: Save your images to your PC!
- **UI Themes**: Customize the program to your liking.
**(and a lot more)**
----
## Easy for new users:
![Screenshot of the initial UI](media/shot-v10-simple.jpg?raw=true)
### Powerful features for advanced users:
## Powerful features for advanced users:
![Screenshot of advanced settings](media/shot-v10.jpg?raw=true)
### Live Preview
## Live Preview
Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.
![live-512](https://user-images.githubusercontent.com/844287/192097249-729a0a1e-a677-485e-9ccc-16a9e848fabe.gif)
### Task Queue
## Task Queue
![Screenshot of task queue](media/task-queue-v1.jpg?raw=true)
# System Requirements
@@ -70,23 +111,10 @@ Useful for judging (and stopping) an image quickly, without waiting for it to fi
You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.
# Installation
1. **Download** [for Windows](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.3.5/stable-diffusion-ui-windows.zip) or [for Linux](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.3.5/stable-diffusion-ui-linux.zip).
2. **Extract**:
- For Windows: After unzipping the file, please move the `stable-diffusion-ui` folder to your `C:` (or any drive like D:, at the top root level), e.g. `C:\stable-diffusion-ui`. This will avoid a common problem with Windows (file path length limits).
- For Linux: After extracting the .tar.xz file, please open a terminal, and go to the `stable-diffusion-ui` directory.
3. **Run**:
- For Windows: `Start Stable Diffusion UI.cmd` by double-clicking it.
- For Linux: In the terminal, run `./start.sh` (or `bash start.sh`)
This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.
**To Uninstall:** Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
----
# How to use?
Please use our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI.
Please refer to our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI.
# Bugs reports and code contributions welcome
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
@@ -102,4 +130,11 @@ If you have any code contributions in mind, please feel free to say Hi to us on
# Disclaimer
The authors of this project are not responsible for any content generated using this interface.
The license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation, or target vulnerable groups. For the full list of restrictions please read [the license](LICENSE). You agree to these terms by using this software.
The license of this software forbids you from sharing any content that:
- Violates any laws.
- Produces any harm to a person or persons.
- Disseminates (spreads) any personal information that would be meant for harm.
- Spreads misinformation.
- Targets vulnerable groups.
For the full list of restrictions please read [the License](LICENSE). You agree to these terms by using this software.

Troubleshooting.md (deleted)

@@ -1 +0,0 @@
Moved to https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting


@@ -29,6 +29,18 @@ call conda activate .\stable-diffusion\env
call where python
call python --version
@rem set the PYTHONPATH
cd stable-diffusion
set SD_DIR=%cd%
cd env\lib\site-packages
set PYTHONPATH=%SD_DIR%;%cd%
cd ..\..\..
echo PYTHONPATH=%PYTHONPATH%
cd ..
@rem done
echo.
cmd /k


@@ -42,11 +42,11 @@ if "%PACKAGES_TO_INSTALL%" NEQ "" (
mkdir "%MAMBA_ROOT_PREFIX%"
call curl -Lk "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe"
@REM if "%ERRORLEVEL%" NEQ "0" (
@REM echo "There was a problem downloading micromamba. Cannot continue."
@REM pause
@REM exit /b
@REM )
if "%ERRORLEVEL%" NEQ "0" (
echo "There was a problem downloading micromamba. Cannot continue."
pause
exit /b
)
mkdir "%APPDATA%"
mkdir "%USERPROFILE%"


@@ -35,6 +35,15 @@ if [ "$0" == "bash" ]; then
which python
python --version
# set the PYTHONPATH
cd stable-diffusion
SD_PATH=`pwd`
export PYTHONPATH="$SD_PATH:$SD_PATH/env/lib/python3.8/site-packages"
echo "PYTHONPATH=$PYTHONPATH"
cd ..
# done
echo ""
else
file_name=$(basename "${BASH_SOURCE[0]}")

scripts/on_sd_start.bat

@@ -16,35 +16,42 @@ if exist "%cd%\profile" (
@rem activate the installer env
call conda activate
@rem @if "%ERRORLEVEL%" NEQ "0" (
@rem @echo. & echo "Error activating conda for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@rem pause
@rem exit /b
@rem )
@if "%ERRORLEVEL%" NEQ "0" (
@echo. & echo "Error activating conda for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@REM remove the old version of the dev console script, if it's still present
if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
if NOT DEFINED test_sd2 set test_sd2=N
@>nul findstr /m "sd_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Stable Diffusion's git repository was already installed. Updating.."
@cd stable-diffusion
@call git remote set-url origin https://github.com/easydiffusion/diffusion-kit.git
@call git reset --hard
@call git pull
@call git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git apply --whitespace=nowarn ..\ui\sd_internal\ddim_callback.patch
@call git apply --whitespace=nowarn ..\ui\sd_internal\env_yaml.patch
if "%test_sd2%" == "N" (
@call git -c advice.detachedHead=false checkout 7f32368ed1030a6e710537047bacd908adea183a
)
if "%test_sd2%" == "Y" (
@call git -c advice.detachedHead=false checkout 733a1f6f9cae9b9a9b83294bf3281b123378cb1f
)
@cd ..
) else (
@echo. & echo "Downloading Stable Diffusion.." & echo.
@call git clone https://github.com/basujindal/stable-diffusion.git && (
@call git clone https://github.com/easydiffusion/diffusion-kit.git stable-diffusion && (
@echo sd_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
@ -53,10 +60,7 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
)
@cd stable-diffusion
@call git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git apply --whitespace=nowarn ..\ui\sd_internal\ddim_callback.patch
@call git apply --whitespace=nowarn ..\ui\sd_internal\env_yaml.patch
@call git -c advice.detachedHead=false checkout 7f32368ed1030a6e710537047bacd908adea183a
@cd ..
)
@ -88,12 +92,6 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@call conda activate .\env
@call conda install -c conda-forge -y --prefix env antlr4-python3-runtime=4.8 || (
@echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
@ -103,6 +101,19 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@echo conda_sd_env_created >> ..\scripts\install_status.txt
)
@rem allow rolling back the sdkit-based changes
if exist "src-old" (
if not exist "src" (
rename "src-old" "src"
if exist "ldm-old" (
rd /s /q "ldm-old"
)
call pip uninstall -y sdkit stable-diffusion-sdkit
)
)
set PATH=C:\Windows\System32;%PATH%
@>nul findstr /m "conda_sd_gfpgan_deps_installed" ..\scripts\install_status.txt
@ -117,18 +128,6 @@ set PATH=C:\Windows\System32;%PATH%
set PYTHONPATH=%cd%;%cd%\env\lib\site-packages
@call pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN || (
@echo. & echo "Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@call pip install basicsr==1.4.2 || (
@echo. & echo "Error installing the basicsr package necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from gfpgan import GFPGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
@ -150,12 +149,6 @@ set PATH=C:\Windows\System32;%PATH%
set PYTHONPATH=%cd%;%cd%\env\lib\site-packages
@call pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan || (
@echo. & echo "Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
@ -202,6 +195,16 @@ call WHERE uvicorn > .tmp
)
)
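@rem install the safetensors package if it isn't importable yet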
@>nul 2>nul call python -c "import safetensors"
@if "%ERRORLEVEL%" NEQ "0" (
@echo. & echo SafeTensors not found. Installing
@call pip install safetensors || (
echo "Error installing the safetensors package necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
exit /b
)
)
@>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
@ -211,8 +214,16 @@ call WHERE uvicorn > .tmp
if not exist "..\models\stable-diffusion" mkdir "..\models\stable-diffusion"
if not exist "..\models\vae" mkdir "..\models\vae"
if not exist "..\models\hypernetwork" mkdir "..\models\hypernetwork"
echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
echo. > "..\models\vae\Put your VAE files here.txt"
echo. > "..\models\hypernetwork\Put your hypernetwork files here.txt"
@rem if downgrading from v2.5, migrate the models to the legacy path (if already downloaded)
if exist "..\models\stable-diffusion\sd-v1-4.ckpt" if not exist "sd-v1-4.ckpt" move ..\models\stable-diffusion\sd-v1-4.ckpt .
if exist "..\models\gfpgan\GFPGANv1.3.pth" if not exist "GFPGANv1.3.pth" move ..\models\gfpgan\GFPGANv1.3.pth .
if exist "..\models\realesrgan\RealESRGAN_x4plus.pth" if not exist "RealESRGAN_x4plus.pth" move ..\models\realesrgan\RealESRGAN_x4plus.pth .
if exist "..\models\realesrgan\RealESRGAN_x4plus_anime_6B.pth" if not exist "RealESRGAN_x4plus_anime_6B.pth" move ..\models\realesrgan\RealESRGAN_x4plus_anime_6B.pth .
@if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" (
@ -370,7 +381,9 @@ echo. > "..\models\vae\Put your VAE files here.txt"
)
)
if "%test_sd2%" == "Y" (
@call pip install open_clip_torch==2.0.2
)
@>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (


@ -21,33 +21,38 @@ python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d
# Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
# Note to self: Please rewrite this in Python. For the sake of your own sanity.
if [ "$test_sd2" == "" ]; then
export test_sd2="N"
fi
if [ -e "scripts/install_status.txt" ] && [ `grep -c sd_git_cloned scripts/install_status.txt` -gt "0" ]; then
echo "Stable Diffusion's git repository was already installed. Updating.."
cd stable-diffusion
git remote set-url origin https://github.com/easydiffusion/diffusion-kit.git
git reset --hard
git pull
git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply --whitespace=nowarn ../ui/sd_internal/ddim_callback.patch || fail "ddim patch failed"
git apply --whitespace=nowarn ../ui/sd_internal/env_yaml.patch || fail "yaml patch failed"
if [ "$test_sd2" == "N" ]; then
git -c advice.detachedHead=false checkout 7f32368ed1030a6e710537047bacd908adea183a
elif [ "$test_sd2" == "Y" ]; then
git -c advice.detachedHead=false checkout 733a1f6f9cae9b9a9b83294bf3281b123378cb1f
fi
cd ..
else
printf "\n\nDownloading Stable Diffusion..\n\n"
if git clone https://github.com/basujindal/stable-diffusion.git ; then
if git clone https://github.com/easydiffusion/diffusion-kit.git stable-diffusion ; then
echo sd_git_cloned >> scripts/install_status.txt
else
fail "git clone of basujindal/stable-diffusion.git failed"
fi
cd stable-diffusion
git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply --whitespace=nowarn ../ui/sd_internal/ddim_callback.patch || fail "ddim patch failed"
git apply --whitespace=nowarn ../ui/sd_internal/env_yaml.patch || fail "yaml patch failed"
git -c advice.detachedHead=false checkout 7f32368ed1030a6e710537047bacd908adea183a
cd ..
fi
@ -74,12 +79,6 @@ else
conda activate ./env || fail "conda activate failed"
if conda install -c conda-forge --prefix ./env -y antlr4-python3-runtime=4.8 ; then
echo "Installed. Testing.."
else
fail "Error installing antlr4-python3-runtime"
fi
out_test=`python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"`
if [ "$out_test" != "42" ]; then
fail "Dependency test failed"
@ -88,6 +87,15 @@ else
echo conda_sd_env_created >> ../scripts/install_status.txt
fi
# allow rolling back the sdkit-based changes
if [ -e "src-old" ] && [ ! -e "src" ]; then
mv src-old src
if [ -e "ldm-old" ]; then rm -r ldm-old; fi
pip uninstall -y sdkit stable-diffusion-sdkit
fi
if [ `grep -c conda_sd_gfpgan_deps_installed ../scripts/install_status.txt` -gt "0" ]; then
echo "Packages necessary for GFPGAN (Face Correction) were already installed"
else
@ -96,12 +104,6 @@ else
export PYTHONNOUSERSITE=1
export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"
if pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN ; then
echo "Installed. Testing.."
else
fail "Error installing the packages necessary for GFPGAN (Face Correction)."
fi
out_test=`python -c "from gfpgan import GFPGANer; print(42)"`
if [ "$out_test" != "42" ]; then
echo "EE The dependency check has failed. This usually means that some system libraries are missing."
@ -121,12 +123,6 @@ else
export PYTHONNOUSERSITE=1
export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"
if pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan ; then
echo "Installed. Testing.."
else
fail "Error installing the packages necessary for ESRGAN"
fi
out_test=`python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"`
if [ "$out_test" != "42" ]; then
fail "ESRGAN dependency test failed"
@ -163,12 +159,27 @@ else
pip install picklescan || fail "Picklescan installation failed."
fi
if python -c "import safetensors" --help >/dev/null 2>&1; then
echo "SafeTensors is already installed."
else
echo "SafeTensors not found, installing."
pip install safetensors || fail "SafeTensors installation failed."
fi
mkdir -p "../models/stable-diffusion"
mkdir -p "../models/vae"
mkdir -p "../models/hypernetwork"
echo "" > "../models/stable-diffusion/Put your custom ckpt files here.txt"
echo "" > "../models/vae/Put your VAE files here.txt"
echo "" > "../models/hypernetwork/Put your hypernetwork files here.txt"
# if downgrading from v2.5, migrate the models to the legacy path (if already downloaded)
if [ -e "../models/stable-diffusion/sd-v1-4.ckpt" ] && [ ! -e "sd-v1-4.ckpt" ]; then mv ../models/stable-diffusion/sd-v1-4.ckpt . ; fi
if [ -e "../models/gfpgan/GFPGANv1.3.pth" ] && [ ! -e "GFPGANv1.3.pth" ]; then mv ../models/gfpgan/GFPGANv1.3.pth . ; fi
if [ -e "../models/realesrgan/RealESRGAN_x4plus.pth" ] && [ ! -e "RealESRGAN_x4plus.pth" ]; then mv ../models/realesrgan/RealESRGAN_x4plus.pth . ; fi
if [ -e "../models/realesrgan/RealESRGAN_x4plus_anime_6B.pth" ] && [ ! -e "RealESRGAN_x4plus_anime_6B.pth" ]; then mv ../models/realesrgan/RealESRGAN_x4plus_anime_6B.pth . ; fi
if [ -f "sd-v1-4.ckpt" ]; then
model_size=`find "sd-v1-4.ckpt" -printf "%s"`
@ -309,6 +320,9 @@ if [ ! -f "../models/vae/vae-ft-mse-840000-ema-pruned.ckpt" ]; then
fi
fi
if [ "$test_sd2" == "Y" ]; then
pip install open_clip_torch==2.0.2
fi
if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then
echo sd_weights_downloaded >> ../scripts/install_status.txt


@ -1,6 +0,0 @@
@call conda --version
@call git --version
cd %CONDA_PREFIX%\..\scripts
on_env_start.bat


@ -1,12 +0,0 @@
#!/bin/bash
conda-unpack
source $CONDA_PREFIX/etc/profile.d/conda.sh
conda --version
git --version
cd $CONDA_PREFIX/../scripts
./on_env_start.sh


@ -3,6 +3,7 @@
<head>
<title>Stable Diffusion UI</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="theme-color" content="#673AB6">
<link rel="icon" type="image/png" href="/media/images/favicon-16x16.png" sizes="16x16">
<link rel="icon" type="image/png" href="/media/images/favicon-32x32.png" sizes="32x32">
<link rel="stylesheet" href="/media/css/fonts.css">
@ -11,16 +12,21 @@
<link rel="stylesheet" href="/media/css/auto-save.css">
<link rel="stylesheet" href="/media/css/modifier-thumbnails.css">
<link rel="stylesheet" href="/media/css/fontawesome-all.min.css">
<link rel="stylesheet" href="/media/css/drawingboard.min.css">
<link rel="stylesheet" href="/media/css/image-editor.css">
<link rel="stylesheet" href="/media/css/jquery-confirm.min.css">
<link rel="manifest" href="/media/manifest.webmanifest">
<script src="/media/js/jquery-3.6.1.min.js"></script>
<script src="/media/js/drawingboard.min.js"></script>
<script src="/media/js/jquery-confirm.min.js"></script>
<script src="/media/js/marked.min.js"></script>
</head>
<body>
<div id="container">
<div id="top-nav">
<div id="logo">
<h1>Stable Diffusion UI <small>v2.4.13 <span id="updateBranchLabel"></span></small></h1>
<h1>
Stable Diffusion UI
<small>v2.4.24 <span id="updateBranchLabel"></span></small>
</h1>
</div>
<div id="server-status">
<div id="server-status-color"></div>
@ -49,7 +55,7 @@
<input id="prompt_from_file" name="prompt_from_file" type="file" /> <!-- hidden -->
<label for="negative_prompt" class="collapsible" id="negative_prompt_handle">
Negative Prompt
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about Negative Prompts</span></i></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about Negative Prompts</span></i></a>
<small>(optional)</small>
</label>
<div class="collapsible-content">
@ -58,33 +64,47 @@
</div>
<div id="editor-inputs-init-image" class="row">
<label for="init_image">Initial Image (img2img) <small>(optional)</small> </label> <input id="init_image" name="init_image" type="file" /><br/>
<label for="init_image">Initial Image (img2img) <small>(optional)</small> </label>
<div id="init_image_preview_container" class="image_preview_container">
<div id="init_image_wrapper">
<img id="init_image_preview" src="" />
<span id="init_image_size_box"></span>
<button class="init_image_clear image_clear_btn">X</button>
<button class="init_image_clear image_clear_btn"><i class="fa-solid fa-xmark"></i></button>
</div>
<div id="init_image_buttons">
<div class="button">
<i class="fa-regular fa-folder-open"></i>
Browse
<input id="init_image" name="init_image" type="file" />
</div>
<div id="init_image_button_draw" class="button">
<i class="fa-solid fa-pencil"></i>
Draw
</div>
<div id="inpaint_button_container">
<div id="init_image_button_inpaint" class="button">
<i class="fa-solid fa-paintbrush"></i>
Inpaint
</div>
<input id="enable_mask" name="enable_mask" type="checkbox">
</div>
</div>
<br/>
<input id="enable_mask" name="enable_mask" type="checkbox">
<label for="enable_mask">
In-Painting (beta)
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Inpainting" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about InPainting</span></i></a>
<small>(select the area which the AI will paint into)</small>
</label>
<div id="inpaintingEditor"></div>
</div>
</div>
<div id="editor-inputs-tags-container" class="row">
<label>Image Modifiers: <small>(click an Image Modifier to remove it)</small></label>
<label>Image Modifiers <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">click an Image Modifier to remove it, use Ctrl+Mouse Wheel to adjust its weight</span></i>:</label>
<div id="editor-inputs-tags-list"></div>
</div>
<button id="makeImage" class="primaryButton">Make Image</button>
<button id="stopImage" class="secondaryButton">Stop All</button>
<div id="render-buttons">
<button id="stopImage" class="secondaryButton">Stop All</button>
<button id="pause"><i class="fa-solid fa-pause"></i> Pause All</button>
<button id="resume"><i class="fa-solid fa-play"></i> Resume</button>
</div>
</div>
<span class="line-separator"></span>
@ -93,7 +113,7 @@
<h4 class="collapsible">
Image Settings
<i id="reset-image-settings" class="fa-solid fa-arrow-rotate-left section-button">
<span class="simple-tooltip right">
<span class="simple-tooltip top-left">
Reset Image Settings
</span>
</i>
@ -101,19 +121,19 @@
<div id="editor-settings-entries" class="collapsible-content">
<div><table>
<tr><b class="settings-subheader">Image Settings</b></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="30000" onkeypress="preventNonNumericalInput(event)"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="0" onkeypress="preventNonNumericalInput(event)"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td><td><input id="num_outputs_total" name="num_outputs_total" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label><small>(total)</small></label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label for="num_outputs_parallel"><small>(in parallel)</small></label></td></tr>
<tr class="pl-5"><td><label for="stable_diffusion_model">Model:</label></td><td>
<select id="stable_diffusion_model" name="stable_diffusion_model">
<!-- <option value="sd-v1-4" selected>sd-v1-4</option> -->
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about custom models</span></i></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about custom models</span></i></a>
</td></tr>
<tr class="pl-5"><td><label for="vae_model">Custom VAE:</i></label></td><td>
<select id="vae_model" name="vae_model">
<!-- <option value="" selected>None</option> -->
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about VAEs</span></i></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about VAEs</span></i></a>
</td></tr>
<tr id="samplerSelection" class="pl-5"><td><label for="sampler">Sampler:</label></td><td>
<select id="sampler" name="sampler">
@ -126,7 +146,7 @@
<option value="dpm2_a">dpm2_a</option>
<option value="lms">lms</option>
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about samplers</span></i></a>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
</td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td>
<select id="width" name="width" value="512">
@ -175,19 +195,31 @@
<label for="height"><small>(height)</small></label>
</td></tr>
<tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" size="4" value="25" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="10" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td></tr></span>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="11" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td></tr>
<tr class="pl-5"><td><label for="hypernetwork_model">Hypernetwork:</i></label></td><td>
<select id="hypernetwork_model" name="hypernetwork_model">
<!-- <option value="" selected>None</option> -->
</select>
</td></tr>
<tr id="hypernetwork_strength_container" class="pl-5">
<td><label for="hypernetwork_strength_slider">Hypernetwork Strength:</label></td>
<td> <input id="hypernetwork_strength_slider" name="hypernetwork_strength_slider" class="editor-slider" value="100" type="range" min="0" max="100"> <input id="hypernetwork_strength" name="hypernetwork_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)"><br/></td>
</tr>
<tr class="pl-5"><td><label for="output_format">Output Format:</label></td><td>
<select id="output_format" name="output_format">
<option value="jpeg" selected>jpeg</option>
<option value="png">png</option>
</select>
</td></tr>
<tr class="pl-5" id="output_quality_row"><td><label for="output_quality">JPEG Quality:</label></td><td>
<input id="output_quality_slider" name="output_quality" class="editor-slider" value="75" type="range" min="10" max="95"> <input id="output_quality" name="output_quality" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)">
</td></tr>
</table></div>
<div><ul>
<li><b class="settings-subheader">Render Settings</b></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, and slower image creation)</small></label></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, slower images)</small></label></li>
<li class="pl-5"><input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes <small>(uses GFPGAN)</small></label></li>
<li class="pl-5">
<input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale image by 4x with </label>
@ -247,8 +279,17 @@
<br/><br/>
<div>
<h3><i class="fa fa-microchip icon"></i> System Info</h3>
<div id="system-info"></div>
<div id="system-info">
<table>
<tr><td><label>Processor:</label></td><td id="system-info-cpu" class="value"></td></tr>
<tr><td><label>Compatible Graphics Cards (all):</label></td><td id="system-info-gpus-all" class="value"></td></tr>
<tr><td></td><td>&nbsp;</td></tr>
<tr><td><label>Used for rendering 🔥:</label></td><td id="system-info-rendering-devices" class="value"></td></tr>
<tr><td><label>Server Addresses <i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">You can access Stable Diffusion UI from other devices using these addresses</span></i> :</label></td><td id="system-info-server-hosts" class="value"></td></tr>
</table>
</div>
</div>
</div>
</div>
<div id="tab-content-about" class="tab-content">
@ -314,6 +355,38 @@
</div>
</div>
<div id="image-editor" class="popup image-editor-popup">
<div>
<i class="close-button fa-solid fa-xmark"></i>
<h1>Image Editor</h1>
<div class="flex-container">
<div class="editor-controls-left"></div>
<div class="editor-controls-center">
<div></div>
</div>
<div class="editor-controls-right">
<div></div>
</div>
</div>
</div>
</div>
<div id="image-inpainter" class="popup image-editor-popup">
<div>
<i class="close-button fa-solid fa-xmark"></i>
<h1>Inpainter</h1>
<div class="flex-container">
<div class="editor-controls-left"></div>
<div class="editor-controls-center">
<div></div>
</div>
<div class="editor-controls-right">
<div></div>
</div>
</div>
</div>
</div>
<div id="footer-spacer"></div>
<div id="footer">
<div class="line-separator">&nbsp;</div>
@ -327,28 +400,34 @@
</div>
</div>
</body>
<script src="media/js/utils.js"></script>
<script src="media/js/engine.js"></script>
<script src="media/js/parameters.js"></script>
<script src="media/js/plugins.js"></script>
<script src="media/js/inpainting-editor.js"></script>
<script src="media/js/image-modifiers.js"></script>
<script src="media/js/auto-save.js"></script>
<script src="media/js/main.js"></script>
<script src="media/js/themes.js"></script>
<script src="media/js/dnd.js"></script>
<script src="media/js/image-editor.js"></script>
<script>
async function init() {
await initSettings()
await getModels()
await getDiskPath()
await getAppConfig()
await loadModifiers()
await loadUIPlugins()
await getDevices()
await loadModifiers()
await getSystemInfo()
setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000)
healthCheck()
SD.init({
events: {
statusChange: setServerStatus
, idle: onIdle
}
})
playSound()
}



@ -0,0 +1,215 @@
.editor-controls-left {
padding-left: 32px;
text-align: left;
padding-bottom: 20px;
}
.editor-options-container {
display: flex;
row-gap: 10px;
max-width: 210px;
}
.editor-options-container > * {
flex: 1;
display: flex;
justify-content: center;
align-items: center;
}
.editor-options-container > * > * {
position: inherit;
width: 32px;
height: 32px;
border-radius: 16px;
background: var(--background-color3);
cursor: pointer;
transition: opacity 0.25s;
}
.editor-options-container > * > *:hover {
opacity: 0.75;
}
.editor-options-container > * > *.active {
border: 2px solid #3584e4;
}
.image_editor_opacity .editor-options-container > * > *:not(.active) {
border: 1px solid var(--background-color3);
}
.image_editor_color .editor-options-container {
flex-wrap: wrap;
}
.image_editor_color .editor-options-container > * {
flex: 20%;
}
.image_editor_color .editor-options-container > * > * {
position: relative;
}
.image_editor_color .editor-options-container > * > *.active::before {
content: "\f00c";
display: var(--fa-display,inline-block);
font-style: normal;
font-variant: normal;
line-height: 1;
text-rendering: auto;
font-family: var(--fa-style-family, "Font Awesome 6 Free");
font-weight: var(--fa-style, 900);
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%) scale(125%);
color: black;
}
.image_editor_color .editor-options-container > *:first-child {
flex: 100%;
}
.image_editor_color .editor-options-container > *:first-child > * {
width: 100%;
}
.image_editor_color .editor-options-container > *:first-child > * > input {
width: 100%;
height: 100%;
opacity: 0;
cursor: pointer;
}
.image_editor_color .editor-options-container > *:first-child > * > span {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
opacity: 0.5;
}
.image_editor_color .editor-options-container > *:first-child > *.active > span {
opacity: 0;
}
.image_editor_tool .editor-options-container {
flex-wrap: wrap;
}
.image_editor_tool .editor-options-container > * {
padding: 2px;
flex: 50%;
}
.editor-controls-center {
/* background: var(--background-color2); */
flex: 1;
display: flex;
justify-content: center;
align-items: center;
}
.editor-controls-center > div {
position: relative;
background: black;
}
.editor-controls-center canvas {
position: absolute;
left: 0;
top: 0;
}
.editor-controls-right {
padding: 32px;
display: flex;
flex-direction: column;
}
.editor-controls-right > div:last-child {
flex: 1;
display: flex;
flex-direction: column;
min-width: 200px;
gap: 5px;
justify-content: end;
}
.image-editor-button {
width: 100%;
height: 32px;
border-radius: 16px;
background: var(--background-color3);
}
.editor-controls-right .image-editor-button {
margin-bottom: 4px;
}
#init_image_button_inpaint .input-toggle {
position: absolute;
left: 16px;
}
#init_image_button_inpaint .input-toggle input:not(:checked) ~ label {
pointer-events: none;
}
.image-editor-popup {
--popup-margin: 16px;
--popup-padding: 24px;
}
.image-editor-popup > div {
margin: var(--popup-margin);
padding: var(--popup-padding);
min-height: calc(100vh - (2 * var(--popup-margin)));
max-width: none;
}
.image-editor-popup h1 {
position: absolute;
top: 32px;
left: 50%;
transform: translateX(-50%);
}
@media screen and (max-width: 700px) {
.image-editor-popup > div {
margin: 0px;
padding: 0px;
}
.image-editor-popup h1 {
position: relative;
transform: none;
left: auto;
}
}
.image-editor-popup > div > div {
min-height: calc(100vh - (2 * var(--popup-margin)) - (2 * var(--popup-padding)));
}
.inpainter .image_editor_color {
display: none;
}
.inpainter .editor-canvas-background {
opacity: 0.75;
}
#init_image_preview_container .button {
display: flex;
padding: 6px;
height: 24px;
box-shadow: 2px 2px 1px 1px #00000088;
}
#init_image_preview_container .button:hover {
background: var(--background-color4)
}
.image-editor-popup .button {
display: flex;
}
.image-editor-popup h4 {
text-align: left;
}

ui/media/css/jquery-confirm.min.css (vendored, new file; diff not shown because the lines are too long)


@ -44,9 +44,6 @@ code {
margin-top: 5px;
display: block;
}
.image_preview_container {
margin-top: 10pt;
}
.image_clear_btn {
position: absolute;
transform: translate(30%, -30%);
@ -64,6 +61,11 @@ code {
top: 0px;
right: 0px;
}
.image_clear_btn:active {
position: absolute;
top: 0px;
left: auto;
}
.settings-box ul {
font-size: 9pt;
margin-bottom: 5px;
@ -137,7 +139,7 @@ code {
padding: 16px;
display: flex;
flex-direction: column;
flex: 0 0 370pt;
flex: 0 0 380pt;
}
#editor label {
font-weight: normal;
@ -189,15 +191,29 @@ code {
background: rgb(132, 8, 0);
border: 2px solid rgb(122, 29, 0);
color: rgb(255, 221, 255);
width: 100%;
height: 30pt;
border-radius: 6px;
display: none;
margin-top: 2pt;
flex-grow: 2;
}
#stopImage:hover {
background: rgb(177, 27, 0);
}
div#render-buttons {
gap: 3px;
margin-top: 4px;
display: none;
}
button#pause {
flex-grow: 1;
background: var(--accent-color);
}
button#resume {
flex-grow: 1;
background: var(--accent-color);
display: none;
}
.flex-container {
display: flex;
width: 100%;
@ -210,7 +226,7 @@ code {
}
.collapsible-content {
display: block;
padding-left: 15px;
padding-left: 10px;
}
.collapsible-content h5 {
padding: 5pt 0pt;
@ -263,39 +279,13 @@ img {
}
.preview-prompt {
font-size: 13pt;
margin-bottom: 10pt;
display: inline;
}
#coffeeButton {
height: 23px;
transform: translateY(25%);
}
#inpaintingEditor {
width: 300pt;
height: 300pt;
margin-top: 5pt;
}
.drawing-board-canvas-wrapper {
background-size: 100% 100%;
}
.drawing-board-controls {
min-width: 273px;
}
.drawing-board-control > button {
background-color: #eee;
border-radius: 3pt;
}
.drawing-board-control-inner {
background-color: #eee;
border-radius: 3pt;
}
#inpaintingEditor canvas {
opacity: 0.6;
}
#enable_mask {
margin-top: 8pt;
}
#top-nav {
position: relative;
background: var(--background-color4);
@ -415,14 +405,34 @@ img {
.imageTaskContainer > div > .collapsible-handle {
display: none;
}
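/* glowing line drawn above (or below) the task that is currently the drag-and-drop target */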
.dropTargetBefore::before{
content: "";
border: 1px solid #fff;
margin-bottom: -2px;
display: block;
box-shadow: 0 0 5px #fff;
transform: translate(0px, -14px);
}
.dropTargetAfter::after{
content: "";
border: 1px solid #fff;
margin-bottom: -2px;
display: block;
box-shadow: 0 0 5px #fff;
transform: translate(0px, 14px);
}
.drag-handle {
margin-right: 6px;
cursor: move;
}
.taskStatusLabel {
float: left;
font-size: 8pt;
background:var(--background-color2);
border: 1px solid rgb(61, 62, 66);
padding: 2pt 4pt;
border-radius: 2pt;
margin-right: 5pt;
display: inline;
}
.activeTaskLabel {
background:rgb(0, 90, 30);
@ -472,6 +482,7 @@ img {
font-size: 10pt;
color: #aaa;
margin-bottom: 5pt;
margin-top: 5pt;
}
.img-batch {
display: inline;
@ -479,8 +490,58 @@ img {
#prompt_from_file {
display: none;
}
#init_image_preview_container {
display: flex;
margin-top: 6px;
margin-bottom: 8px;
}
#init_image_preview_container:not(.has-image) #init_image_wrapper,
#init_image_preview_container:not(.has-image) #inpaint_button_container {
display: none;
}
#init_image_buttons {
display: flex;
gap: 8px;
}
#init_image_preview_container.has-image #init_image_buttons {
flex-direction: column;
padding-left: 8px;
}
#init_image_buttons .button {
position: relative;
height: 32px;
width: 150px;
}
#init_image_buttons .button > input {
position: absolute;
left: 0;
top: 0;
right: 0;
bottom: 0;
opacity: 0;
}
#inpaint_button_container {
display: flex;
align-items: center;
gap: 8px;
}
#init_image_wrapper {
grid-row: span 3;
position: relative;
width: fit-content;
max-height: 150px;
}
#init_image_preview {
max-width: 150px;
max-height: 150px;
height: 100%;
width: 100%;
@ -488,23 +549,18 @@ img {
border-radius: 6px;
transition: all 1s ease-in-out;
}
/*
#init_image_preview:hover {
max-width: 500px;
max-height: 1000px;
transition: all 1s 0.5s ease-in-out;
}
#init_image_wrapper {
position: relative;
width: fit-content;
}
} */
#init_image_size_box {
position: absolute;
right: 0px;
bottom: 3px;
bottom: 0px;
padding: 3px;
background: black;
color: white;
@ -556,6 +612,10 @@ option {
cursor: pointer;
}
input[type="file"] * {
cursor: pointer;
}
input,
select,
textarea {
@ -594,12 +654,26 @@ input[type="file"] {
}
button,
input::file-selector-button {
input::file-selector-button,
.button {
padding: 2px 4px;
border-radius: 4px;
border-radius: var(--input-border-radius);
background: var(--button-color);
color: var(--button-text-color);
border: var(--button-border);
align-items: center;
justify-content: center;
cursor: pointer;
}
.button i {
margin-right: 8px;
}
button:hover,
.button:hover {
transition-duration: 0.1s;
background: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 6%));
}
input::file-selector-button {
@ -658,11 +732,15 @@ input::file-selector-button {
opacity: 1;
}
/* MOBILE SUPPORT */
@media screen and (max-width: 700px) {
/* Small screens */
@media screen and (max-width: 1265px) {
#top-nav {
flex-direction: column;
}
}
/* MOBILE SUPPORT */
@media screen and (max-width: 700px) {
body {
margin: 0px;
}
@ -712,7 +790,7 @@ input::file-selector-button {
padding-right: 0px;
}
#server-status {
display: none;
top: 75%;
}
.popup > div {
padding-left: 5px !important;
@ -730,6 +808,15 @@ input::file-selector-button {
}
}
@media screen and (max-width: 500px) {
#server-status #server-status-msg {
display: none;
}
#server-status:hover #server-status-msg {
display: inline;
}
}
@media (min-width: 700px) {
/* #editor {
max-width: 480px;
@ -750,6 +837,8 @@ input::file-selector-button {
#promptsFromFileBtn {
font-size: 9pt;
display: inline;
background-color: var(--accent-color);
}
.section-button {
@ -839,6 +928,15 @@ input::file-selector-button {
transform: translate(-50%, 100%);
}
.simple-tooltip.top-left {
top: 0px;
left: 0px;
transform: translate(calc(-100% + 15%), calc(-100% + 15%));
}
:hover > .simple-tooltip.top-left {
transform: translate(-80%, -100%);
}
/* PROGRESS BAR */
.progress-bar {
background: var(--background-color3);
@ -847,6 +945,7 @@ input::file-selector-button {
height: 16px;
position: relative;
transition: 0.25s 1s border, 0.25s 1s height;
clear: both;
}
.progress-bar > div {
background: var(--accent-color);
@ -951,8 +1050,8 @@ input::file-selector-button {
display: none;
}
#tab-content-wrapper {
border-top: 8px solid var(--background-color1);
#tab-content-wrapper > * {
padding-top: 8px;
}
.tab-content-inner {
@ -989,16 +1088,89 @@ i.active {
float: right;
font-weight: bold;
}
button:hover {
transition-duration: 0.1s;
background: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 6%));
}
button:active {
transition-duration: 0.1s;
background-color: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 24%));
position: relative;
top: 1px;
left: 1px;
}
div.task-initimg > img {
margin-right: 6px;
display: block;
}
div.task-fs-initimage {
display: none;
# position: absolute;
}
div.task-initimg:hover div.task-fs-initimage {
display: block;
position: absolute;
z-index: 9999;
box-shadow: 0 0 30px #000;
margin-top:-64px;
}
div.top-right {
position: absolute;
top: 8px;
right: 8px;
}
button#save-system-settings-btn {
padding: 4pt 8pt;
}
#ip-info a {
color:var(--text-color)
}
#ip-info div {
line-height: 200%;
}
/* SCROLLBARS */
:root {
--scrollbar-width: 14px;
--scrollbar-radius: 10px;
}
.scrollbar-editor::-webkit-scrollbar {
width: 8px;
}
.scrollbar-editor::-webkit-scrollbar-track {
border-radius: 10px;
}
.scrollbar-editor::-webkit-scrollbar-thumb {
background: var(--background-color2);
border-radius: 10px;
}
::-webkit-scrollbar {
width: var(--scrollbar-width);
}
::-webkit-scrollbar-track {
box-shadow: inset 0 0 5px var(--input-border-color);
border-radius: var(--input-border-radius);
}
::-webkit-scrollbar-thumb {
background: var(--background-color2);
border-radius: var(--scrollbar-radius);
}
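/* colored border around the whole page while rendering is paused; blinks while the pause request is still pending */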
body.pause {
border: solid 12px var(--accent-color);
}
body.wait-pause {
animation: blinker 2s linear infinite;
}
@keyframes blinker {
0% { border: solid 12px var(--accent-color); }
50% { border: solid 12px var(--background-color1); }
100% { border: solid 12px var(--accent-color); }
}


@ -19,7 +19,7 @@
--input-border-color: var(--background-color4);
--button-text-color: var(--input-text-color);
--button-color: var(--accent-color);
--button-color: var(--input-background-color);
--button-border: none;
/* other */
@ -30,6 +30,9 @@
--primary-button-border: none;
--input-switch-padding: 1px;
--input-height: 18px;
/* Main theme color, hex color fallback. */
--theme-color-fallback: #673AB6;
}
.theme-light {
@ -39,11 +42,12 @@
--background-color4: #cccccc;
--text-color: black;
--button-text-color: white;
--input-text-color: black;
--input-background-color: #f8f9fa;
--input-border-color: grey;
--theme-color-fallback: #aaaaaa;
}
.theme-discord {
@ -58,6 +62,8 @@
--input-border-size: 2px;
--input-background-color: #202225;
--input-border-color: var(--input-background-color);
--theme-color-fallback: #202225;
}
.theme-cool-blue {
@ -71,8 +77,10 @@
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--input-background-color: var(--background-color3);
--accent-hue: 212;
--theme-color-fallback: #0056b8;
}
@ -87,6 +95,8 @@
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--input-background-color: var(--background-color3);
--theme-color-fallback: #5300b8;
}
.theme-super-dark {
@ -101,6 +111,8 @@
--input-background-color: var(--background-color3);
--input-border-size: 0px;
--theme-color-fallback: #000000;
}
.theme-wild {
@ -117,10 +129,11 @@
--input-border-size: 1px;
--input-background-color: hsl(222, var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--input-text-color: red;
--input-border-color: green;
--input-text-color: #FF0000;
--input-border-color: #005E05;
}
.theme-gnomie {
--background-color1: #242424;
--background-color2: #353535;
@ -136,11 +149,12 @@
--input-background-color: #2a2a2a;
--input-border-size: 0px;
--input-border-color: var(--input-background-color);
--theme-color-fallback: #2168bf;
}
.theme-gnomie .panel-box {
border: none;
box-shadow: 0px 1px 2px rgba(0, 0, 0, 0.25);
border-radius: 10px;
}
}

Three new binary image files added (11 KiB, 12 KiB and 10 KiB); contents not shown.


@ -14,13 +14,16 @@ const SETTINGS_IDS_LIST = [
"num_outputs_parallel",
"stable_diffusion_model",
"vae_model",
"hypernetwork_model",
"sampler",
"width",
"height",
"num_inference_steps",
"guidance_scale",
"prompt_strength",
"hypernetwork_strength",
"output_format",
"output_quality",
"negative_prompt",
"stream_image_progress",
"use_face_correction",
@ -35,6 +38,7 @@ const SETTINGS_IDS_LIST = [
"sound_toggle",
"turbo",
"use_full_precision",
"confirm_dangerous_actions",
"auto_save_settings"
]
@ -55,6 +59,9 @@ async function initSettings() {
if (!element) {
console.error(`Missing settings element ${id}`)
}
if (id in SETTINGS) { // don't create it again
return
}
SETTINGS[id] = {
key: id,
element: element,
@ -124,7 +131,7 @@ function loadSettings() {
var saved_settings_text = localStorage.getItem(SETTINGS_KEY)
if (saved_settings_text) {
var saved_settings = JSON.parse(saved_settings_text)
if (saved_settings.find(s => s.key == "auto_save_settings").value == false) {
if (saved_settings.find(s => s.key == "auto_save_settings")?.value == false) {
setSetting("auto_save_settings", false)
return
}


@ -51,6 +51,13 @@ const TASK_MAPPING = {
readUI: () => negativePromptField.value,
parse: (val) => val
},
active_tags: { name: "Image Modifiers",
setUI: (active_tags) => {
refreshModifiersState(active_tags)
},
readUI: () => activeTags.map(x => x.name),
parse: (val) => val
},
width: { name: 'Width',
setUI: (width) => {
const oldVal = widthField.value
@ -78,13 +85,14 @@ const TASK_MAPPING = {
if (!seed) {
randomSeedField.checked = true
seedField.disabled = true
seedField.value = 0
return
}
randomSeedField.checked = false
seedField.disabled = false
seedField.value = seed
},
readUI: () => (randomSeedField.checked ? Math.floor(Math.random() * 10000000) : parseInt(seedField.value)),
readUI: () => parseInt(seedField.value), // just return the value the user is seeing in the UI
parse: (val) => parseInt(val)
},
num_inference_steps: { name: 'Steps',
@ -120,10 +128,12 @@ const TASK_MAPPING = {
},
mask: { name: 'Mask',
setUI: (mask) => {
inpaintingEditor.setImg(mask)
setTimeout(() => { // add a delay to ensure this happens AFTER the main image loads (which reloads the inpainter)
imageInpainter.setImg(mask)
}, 250)
maskSetting.checked = Boolean(mask)
},
readUI: () => (maskSetting.checked ? inpaintingEditor.getImg() : undefined),
readUI: () => (maskSetting.checked ? imageInpainter.getImg() : undefined),
parse: (val) => val
},
@ -184,10 +194,32 @@ const TASK_MAPPING = {
readUI: () => vaeModelField.value,
parse: (val) => val
},
use_hypernetwork_model: { name: 'Hypernetwork model',
setUI: (use_hypernetwork_model) => {
const oldVal = hypernetworkModelField.value
numOutputsParallel: { name: 'Parallel Images',
setUI: (numOutputsParallel) => {
numOutputsParallelField.value = numOutputsParallel
if (use_hypernetwork_model !== '') {
use_hypernetwork_model = getModelPath(use_hypernetwork_model, ['.pt'])
use_hypernetwork_model = use_hypernetwork_model !== '' ? use_hypernetwork_model : oldVal
}
hypernetworkModelField.value = use_hypernetwork_model
hypernetworkModelField.dispatchEvent(new Event('change'))
},
readUI: () => hypernetworkModelField.value,
parse: (val) => val
},
hypernetwork_strength: { name: 'Hypernetwork Strength',
setUI: (hypernetwork_strength) => {
hypernetworkStrengthField.value = hypernetwork_strength
updateHypernetworkStrengthSlider()
},
readUI: () => parseFloat(hypernetworkStrengthField.value),
parse: (val) => parseFloat(val)
},
num_outputs: { name: 'Parallel Images',
setUI: (num_outputs) => {
numOutputsParallelField.value = num_outputs
},
readUI: () => parseInt(numOutputsParallelField.value),
parse: (val) => val
@ -267,11 +299,6 @@ function restoreTaskToUI(task, fieldsToSkip) {
// restore the original tag
promptField.value = task.reqBody.original_prompt || task.reqBody.prompt
// Restore modifiers
if (task.reqBody.active_tags) {
refreshModifiersState(task.reqBody.active_tags)
}
// properly reset checkboxes
if (!('use_face_correction' in task.reqBody)) {
useFaceCorrectionField.checked = false
@ -287,18 +314,11 @@ function restoreTaskToUI(task, fieldsToSkip) {
// Show the source picture if present
initImagePreview.src = (task.reqBody.init_image == undefined ? '' : task.reqBody.init_image)
if (IMAGE_REGEX.test(initImagePreview.src)) {
Boolean(task.reqBody.mask) ? inpaintingEditor.setImg(task.reqBody.mask) : inpaintingEditor.resetBackground()
initImagePreviewContainer.style.display = 'block'
inpaintingEditorContainer.style.display = 'none'
promptStrengthContainer.style.display = 'table-row'
//samplerSelectionContainer.style.display = 'none'
// maskSetting.checked = false
inpaintingEditorContainer.style.display = maskSetting.checked ? 'block' : 'none'
} else {
initImagePreviewContainer.style.display = 'none'
// inpaintingEditorContainer.style.display = 'none'
promptStrengthContainer.style.display = 'none'
// maskSetting.style.display = 'none'
if (Boolean(task.reqBody.mask)) {
setTimeout(() => { // add a delay to ensure this happens AFTER the main image loads (which reloads the inpainter)
imageInpainter.setImg(task.reqBody.mask)
}, 250)
}
}
}
function readUI() {
@ -326,6 +346,7 @@ function getModelPath(filename, extensions)
filename = filename.slice(0, filename.length - ext.length)
}
})
return filename
}
const TASK_TEXT_MAPPING = {
@ -339,7 +360,9 @@ const TASK_TEXT_MAPPING = {
use_upscale: 'Use Upscaling',
sampler: 'Sampler',
negative_prompt: 'Negative Prompt',
use_stable_diffusion_model: 'Stable Diffusion model'
use_stable_diffusion_model: 'Stable Diffusion model',
use_hypernetwork_model: 'Hypernetwork model',
hypernetwork_strength: 'Hypernetwork Strength'
}
const afterPromptRe = /^\s*Width\s*:\s*\d+\s*(?:\r\n|\r|\n)+\s*Height\s*:\s*\d+\s*(\r\n|\r|\n)+Seed\s*:\s*\d+\s*$/igm
function parseTaskFromText(str) {
@ -406,7 +429,7 @@ async function parseContent(text) {
}
async function readFile(file, i) {
console.log(`Event %o reading file[${i}]:${file.name}...`, e)
console.log(`Event %o reading file[${i}]:${file.name}...`)
const fileContent = (await file.text()).trim()
return await parseContent(fileContent)
}
@ -466,7 +489,7 @@ function checkReadTextClipboardPermission (result) {
// PASTE ICON
const pasteIcon = document.createElement('i')
pasteIcon.className = 'fa-solid fa-paste section-button'
pasteIcon.innerHTML = `<span class="simple-tooltip right">Paste Image Settings</span>`
pasteIcon.innerHTML = `<span class="simple-tooltip top-left">Paste Image Settings</span>`
pasteIcon.addEventListener('click', async (event) => {
event.stopPropagation()
// Add css class 'active'
@ -506,7 +529,7 @@ function checkWriteToClipboardPermission (result) {
// COPY ICON
const copyIcon = document.createElement('i')
copyIcon.className = 'fa-solid fa-clipboard section-button'
copyIcon.innerHTML = `<span class="simple-tooltip right">Copy Image Settings</span>`
copyIcon.innerHTML = `<span class="simple-tooltip top-left">Copy Image Settings</span>`
copyIcon.addEventListener('click', (event) => {
event.stopPropagation()
// Add css class 'active'


ui/media/js/engine.js (new file, 1315 lines; diff too large to show)

ui/media/js/image-editor.js (new file, 706 lines)

@ -0,0 +1,706 @@
var editorControlsLeft = document.getElementById("image-editor-controls-left")
const IMAGE_EDITOR_MAX_SIZE = 800
const IMAGE_EDITOR_BUTTONS = [
{
name: "Cancel",
icon: "fa-regular fa-circle-xmark",
handler: editor => {
editor.hide()
}
},
{
name: "Save",
icon: "fa-solid fa-floppy-disk",
handler: editor => {
editor.saveImage()
}
}
]
const defaultToolBegin = (editor, ctx, x, y, is_overlay = false) => {
ctx.beginPath()
ctx.moveTo(x, y)
}
const defaultToolMove = (editor, ctx, x, y, is_overlay = false) => {
ctx.lineTo(x, y)
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
ctx.stroke()
}
}
const defaultToolEnd = (editor, ctx, x, y, is_overlay = false) => {
ctx.stroke()
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
}
}
const IMAGE_EDITOR_TOOLS = [
{
id: "draw",
name: "Draw",
icon: "fa-solid fa-pencil",
cursor: "url(/media/images/fa-pencil.png) 0 24, pointer",
begin: defaultToolBegin,
move: defaultToolMove,
end: defaultToolEnd
},
{
id: "erase",
name: "Erase",
icon: "fa-solid fa-eraser",
cursor: "url(/media/images/fa-eraser.png) 0 18, pointer",
begin: defaultToolBegin,
move: (editor, ctx, x, y, is_overlay = false) => {
ctx.lineTo(x, y)
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
ctx.globalCompositeOperation = "source-over"
ctx.globalAlpha = 1
ctx.filter = "none"
ctx.drawImage(editor.canvas_current, 0, 0)
editor.setBrush(editor.layers.overlay)
ctx.stroke()
editor.canvas_current.style.opacity = 0
}
},
end: (editor, ctx, x, y, is_overlay = false) => {
ctx.stroke()
if (is_overlay) {
ctx.clearRect(0, 0, editor.width, editor.height)
editor.canvas_current.style.opacity = ""
}
},
setBrush: (editor, layer) => {
layer.ctx.globalCompositeOperation = "destination-out"
}
},
{
id: "colorpicker",
name: "Color Picker",
icon: "fa-solid fa-eye-dropper",
cursor: "url(/media/images/fa-eye-dropper.png) 0 24, pointer",
begin: (editor, ctx, x, y, is_overlay = false) => {
var img_rgb = editor.layers.background.ctx.getImageData(x, y, 1, 1).data
var drawn_rgb = editor.ctx_current.getImageData(x, y, 1, 1).data
var drawn_opacity = drawn_rgb[3] / 255
editor.custom_color_input.value = rgbToHex({
r: (drawn_rgb[0] * drawn_opacity) + (img_rgb[0] * (1 - drawn_opacity)),
g: (drawn_rgb[1] * drawn_opacity) + (img_rgb[1] * (1 - drawn_opacity)),
b: (drawn_rgb[2] * drawn_opacity) + (img_rgb[2] * (1 - drawn_opacity)),
})
editor.custom_color_input.dispatchEvent(new Event("change"))
},
move: (editor, ctx, x, y, is_overlay = false) => {},
end: (editor, ctx, x, y, is_overlay = false) => {}
}
]
const IMAGE_EDITOR_ACTIONS = [
{
id: "clear",
name: "Clear",
icon: "fa-solid fa-xmark",
handler: (editor) => {
editor.ctx_current.clearRect(0, 0, editor.width, editor.height)
},
trackHistory: true
},
{
id: "undo",
name: "Undo",
icon: "fa-solid fa-rotate-left",
handler: (editor) => {
editor.history.undo()
},
trackHistory: false
},
{
id: "redo",
name: "Redo",
icon: "fa-solid fa-rotate-right",
handler: (editor) => {
editor.history.redo()
},
trackHistory: false
}
]
var IMAGE_EDITOR_SECTIONS = [
{
name: "tool",
title: "Tool",
default: "draw",
options: Array.from(IMAGE_EDITOR_TOOLS.map(t => t.id)),
initElement: (element, option) => {
var tool_info = IMAGE_EDITOR_TOOLS.find(t => t.id == option)
element.className = "image-editor-button button"
var sub_element = document.createElement("div")
var icon = document.createElement("i")
tool_info.icon.split(" ").forEach(c => icon.classList.add(c))
sub_element.appendChild(icon)
sub_element.append(tool_info.name)
element.appendChild(sub_element)
}
},
{
name: "color",
title: "Color",
default: "#f1c232",
options: [
"custom",
"#ea9999", "#e06666", "#cc0000", "#990000", "#660000",
"#f9cb9c", "#f6b26b", "#e69138", "#b45f06", "#783f04",
"#ffe599", "#ffd966", "#f1c232", "#bf9000", "#7f6000",
"#b6d7a8", "#93c47d", "#6aa84f", "#38761d", "#274e13",
"#a4c2f4", "#6d9eeb", "#3c78d8", "#1155cc", "#1c4587",
"#b4a7d6", "#8e7cc3", "#674ea7", "#351c75", "#20124d",
"#d5a6bd", "#c27ba0", "#a64d79", "#741b47", "#4c1130",
"#ffffff", "#c0c0c0", "#838383", "#525252", "#000000",
],
initElement: (element, option) => {
if (option == "custom") {
var input = document.createElement("input")
input.type = "color"
element.appendChild(input)
var span = document.createElement("span")
span.textContent = "Custom"
span.onclick = function(e) {
input.click()
}
element.appendChild(span)
}
else {
element.style.background = option
}
},
getCustom: editor => {
var input = editor.popup.querySelector(".image_editor_color input")
return input.value
}
},
{
name: "brush_size",
title: "Brush Size",
default: 48,
options: [ 6, 12, 16, 24, 30, 40, 48, 64 ],
initElement: (element, option) => {
element.parentElement.style.flex = option
element.style.width = option + "px"
element.style.height = option + "px"
element.style['margin-right'] = '2px'
element.style["border-radius"] = (option / 2).toFixed() + "px"
}
},
{
name: "opacity",
title: "Opacity",
default: 0,
options: [ 0, 0.2, 0.4, 0.6, 0.8 ],
initElement: (element, option) => {
element.style.background = `repeating-conic-gradient(rgba(0, 0, 0, ${option}) 0% 25%, rgba(255, 255, 255, ${option}) 0% 50%) 50% / 10px 10px`
}
},
{
name: "sharpness",
title: "Sharpness",
default: 0,
options: [ 0, 0.05, 0.1, 0.2, 0.3 ],
initElement: (element, option) => {
var size = 32
var blur_amount = parseInt(option * size)
var sub_element = document.createElement("div")
sub_element.style.background = `var(--background-color3)`
sub_element.style.filter = `blur(${blur_amount}px)`
sub_element.style.width = `${size - 4}px`
sub_element.style.height = `${size - 4}px`
sub_element.style['border-radius'] = `${size}px`
element.style.background = "none"
element.appendChild(sub_element)
}
}
]
class EditorHistory {
constructor(editor) {
this.editor = editor
this.events = [] // stack of all events (actions/edits)
this.current_edit = null
this.rewind_index = 0 // how many events back into the history we've rewound to. (current state is just after event at index 'length - this.rewind_index - 1')
}
push(event) {
// probably add something here eventually to save state every x events
if (this.rewind_index != 0) {
this.events = this.events.slice(0, 0 - this.rewind_index)
this.rewind_index = 0
}
var snapshot_frequency = 20 // (every x edits, take a snapshot of the current drawing state, for faster rewinding)
if (this.events.length > 0 && this.events.length % snapshot_frequency == 0) {
event.snapshot = this.editor.layers.drawing.ctx.getImageData(0, 0, this.editor.width, this.editor.height)
}
this.events.push(event)
}
pushAction(action) {
this.push({
type: "action",
id: action
});
}
editBegin(x, y) {
this.current_edit = {
type: "edit",
id: this.editor.getOptionValue("tool"),
options: Object.assign({}, this.editor.options),
points: [ { x: x, y: y } ]
}
}
editMove(x, y) {
if (this.current_edit) {
this.current_edit.points.push({ x: x, y: y })
}
}
editEnd(x, y) {
if (this.current_edit) {
this.push(this.current_edit)
this.current_edit = null
}
}
clear() {
this.events = []
}
undo() {
this.rewindTo(this.rewind_index + 1)
}
redo() {
this.rewindTo(this.rewind_index - 1)
}
rewindTo(new_rewind_index) {
if (new_rewind_index < 0 || new_rewind_index > this.events.length) {
return; // do nothing if target index is out of bounds
}
var ctx = this.editor.layers.drawing.ctx
ctx.clearRect(0, 0, this.editor.width, this.editor.height)
var target_index = this.events.length - 1 - new_rewind_index
var snapshot_index = target_index
while (snapshot_index > -1) {
if (this.events[snapshot_index].snapshot) {
break
}
snapshot_index--
}
if (snapshot_index != -1) {
ctx.putImageData(this.events[snapshot_index].snapshot, 0, 0);
}
for (var i = (snapshot_index + 1); i <= target_index; i++) {
var event = this.events[i]
if (event.type == "action") {
var action = IMAGE_EDITOR_ACTIONS.find(a => a.id == event.id)
action.handler(this.editor)
}
else if (event.type == "edit") {
var tool = IMAGE_EDITOR_TOOLS.find(t => t.id == event.id)
this.editor.setBrush(this.editor.layers.drawing, event.options)
var first_point = event.points[0]
tool.begin(this.editor, ctx, first_point.x, first_point.y)
for (var point_i = 1; point_i < event.points.length; point_i++) {
tool.move(this.editor, ctx, event.points[point_i].x, event.points[point_i].y)
}
var last_point = event.points[event.points.length - 1]
tool.end(this.editor, ctx, last_point.x, last_point.y)
}
}
// re-set brush to current settings
this.editor.setBrush(this.editor.layers.drawing)
this.rewind_index = new_rewind_index
}
}
class ImageEditor {
constructor(popup, inpainter = false) {
this.inpainter = inpainter
this.popup = popup
this.history = new EditorHistory(this)
if (inpainter) {
this.popup.classList.add("inpainter")
}
this.drawing = false
this.temp_previous_tool = null // used for the ctrl-colorpicker functionality
this.container = popup.querySelector(".editor-controls-center > div")
this.layers = {}
var layer_names = [
"background",
"drawing",
"overlay"
]
layer_names.forEach(name => {
let canvas = document.createElement("canvas")
canvas.className = `editor-canvas-${name}`
this.container.appendChild(canvas)
this.layers[name] = {
name: name,
canvas: canvas,
ctx: canvas.getContext("2d")
}
})
// add mouse handlers
this.container.addEventListener("mousedown", this.mouseHandler.bind(this))
this.container.addEventListener("mouseup", this.mouseHandler.bind(this))
this.container.addEventListener("mousemove", this.mouseHandler.bind(this))
this.container.addEventListener("mouseout", this.mouseHandler.bind(this))
this.container.addEventListener("mouseenter", this.mouseHandler.bind(this))
this.container.addEventListener("touchstart", this.mouseHandler.bind(this))
this.container.addEventListener("touchmove", this.mouseHandler.bind(this))
this.container.addEventListener("touchcancel", this.mouseHandler.bind(this))
this.container.addEventListener("touchend", this.mouseHandler.bind(this))
// initialize editor controls
this.options = {}
this.optionElements = {}
IMAGE_EDITOR_SECTIONS.forEach(section => {
section.id = `image_editor_${section.name}`
var sectionElement = document.createElement("div")
sectionElement.className = section.id
var title = document.createElement("h4")
title.innerText = section.title
sectionElement.appendChild(title)
var optionsContainer = document.createElement("div")
optionsContainer.classList.add("editor-options-container")
this.optionElements[section.name] = []
section.options.forEach((option, index) => {
var optionHolder = document.createElement("div")
var optionElement = document.createElement("div")
optionHolder.appendChild(optionElement)
section.initElement(optionElement, option)
optionElement.addEventListener("click", target => this.selectOption(section.name, index))
optionsContainer.appendChild(optionHolder)
this.optionElements[section.name].push(optionElement)
})
this.selectOption(section.name, section.options.indexOf(section.default))
sectionElement.appendChild(optionsContainer)
this.popup.querySelector(".editor-controls-left").appendChild(sectionElement)
})
this.custom_color_input = this.popup.querySelector(`input[type="color"]`)
this.custom_color_input.addEventListener("change", () => {
this.custom_color_input.parentElement.style.background = this.custom_color_input.value
this.selectOption("color", 0)
})
if (this.inpainter) {
this.selectOption("color", IMAGE_EDITOR_SECTIONS.find(s => s.name == "color").options.indexOf("#ffffff"))
this.selectOption("opacity", IMAGE_EDITOR_SECTIONS.find(s => s.name == "opacity").options.indexOf(0.4))
}
// initialize the right-side controls
var buttonContainer = document.createElement("div")
IMAGE_EDITOR_BUTTONS.forEach(button => {
var element = document.createElement("div")
var icon = document.createElement("i")
element.className = "image-editor-button button"
icon.className = button.icon
element.appendChild(icon)
element.append(button.name)
buttonContainer.appendChild(element)
element.addEventListener("click", event => button.handler(this))
})
var actionsContainer = document.createElement("div")
var actionsTitle = document.createElement("h4")
actionsTitle.textContent = "Actions"
actionsContainer.appendChild(actionsTitle);
IMAGE_EDITOR_ACTIONS.forEach(action => {
var element = document.createElement("div")
var icon = document.createElement("i")
element.className = "image-editor-button button"
icon.className = action.icon
element.appendChild(icon)
element.append(action.name)
actionsContainer.appendChild(element)
element.addEventListener("click", event => this.runAction(action.id))
})
this.popup.querySelector(".editor-controls-right").appendChild(actionsContainer)
this.popup.querySelector(".editor-controls-right").appendChild(buttonContainer)
this.keyHandlerBound = this.keyHandler.bind(this)
this.setSize(512, 512)
}
show() {
this.popup.classList.add("active")
document.addEventListener("keydown", this.keyHandlerBound)
document.addEventListener("keyup", this.keyHandlerBound)
}
hide() {
this.popup.classList.remove("active")
document.removeEventListener("keydown", this.keyHandlerBound)
document.removeEventListener("keyup", this.keyHandlerBound)
}
setSize(width, height) {
if (width == this.width && height == this.height) {
return
}
if (width > height) {
var max_size = Math.min(parseInt(window.innerWidth * 0.9), width, 768)
var multiplier = max_size / width
width = (multiplier * width).toFixed()
height = (multiplier * height).toFixed()
}
else {
var max_size = Math.min(parseInt(window.innerHeight * 0.9), height, 768)
var multiplier = max_size / height
width = (multiplier * width).toFixed()
height = (multiplier * height).toFixed()
}
this.width = width
this.height = height
this.container.style.width = width + "px"
this.container.style.height = height + "px"
Object.values(this.layers).forEach(layer => {
layer.canvas.width = width
layer.canvas.height = height
})
if (this.inpainter) {
this.saveImage() // We've reset the size of the image so inpainting is different
}
this.setBrush()
this.history.clear()
}
get tool() {
var tool_id = this.getOptionValue("tool")
return IMAGE_EDITOR_TOOLS.find(t => t.id == tool_id);
}
loadTool() {
this.drawing = false
this.container.style.cursor = this.tool.cursor;
}
setImage(url, width, height) {
this.setSize(width, height)
this.layers.drawing.ctx.clearRect(0, 0, this.width, this.height)
this.layers.background.ctx.clearRect(0, 0, this.width, this.height)
if (url) {
var image = new Image()
image.onload = () => {
this.layers.background.ctx.drawImage(image, 0, 0, this.width, this.height)
}
image.src = url
}
else {
this.layers.background.ctx.fillStyle = "#ffffff"
this.layers.background.ctx.beginPath()
this.layers.background.ctx.rect(0, 0, this.width, this.height)
this.layers.background.ctx.fill()
}
this.history.clear()
}
saveImage() {
if (!this.inpainter) {
// This is not an inpainter, so save the image as the new img2img input
this.layers.background.ctx.drawImage(this.layers.drawing.canvas, 0, 0, this.width, this.height)
var base64 = this.layers.background.canvas.toDataURL()
initImagePreview.src = base64 // this will trigger the rest of the app to use it
}
else {
// This is an inpainter, so make sure the toggle is set accordingly
var is_blank = !this.layers.drawing.ctx
.getImageData(0, 0, this.width, this.height).data
.some(channel => channel !== 0)
maskSetting.checked = !is_blank
}
this.hide()
}
getImg() { // a drop-in replacement of the drawingboard version
return this.layers.drawing.canvas.toDataURL()
}
setImg(dataUrl) { // a drop-in replacement of the drawingboard version
var image = new Image()
image.onload = () => {
var ctx = this.layers.drawing.ctx;
ctx.clearRect(0, 0, this.width, this.height)
ctx.globalCompositeOperation = "source-over"
ctx.globalAlpha = 1
ctx.filter = "none"
ctx.drawImage(image, 0, 0, this.width, this.height)
this.setBrush(this.layers.drawing)
}
image.src = dataUrl
}
runAction(action_id) {
var action = IMAGE_EDITOR_ACTIONS.find(a => a.id == action_id)
if (action.trackHistory) {
this.history.pushAction(action_id)
}
action.handler(this)
}
setBrush(layer = null, options = null) {
if (options == null) {
options = this.options
}
if (layer) {
layer.ctx.lineCap = "round"
layer.ctx.lineJoin = "round"
layer.ctx.lineWidth = options.brush_size
layer.ctx.fillStyle = options.color
layer.ctx.strokeStyle = options.color
var sharpness = parseInt(options.sharpness * options.brush_size)
layer.ctx.filter = sharpness == 0 ? `none` : `blur(${sharpness}px)`
layer.ctx.globalAlpha = (1 - options.opacity)
layer.ctx.globalCompositeOperation = "source-over"
var tool = IMAGE_EDITOR_TOOLS.find(t => t.id == options.tool)
if (tool && tool.setBrush) {
tool.setBrush(this, layer) // let the tool adjust the brush (e.g. the eraser switches to destination-out)
}
}
else {
Object.values([ "drawing", "overlay" ]).map(name => this.layers[name]).forEach(l => {
this.setBrush(l)
})
}
}
get ctx_overlay() {
return this.layers.overlay.ctx
}
get ctx_current() { // the idea is this will help support having custom layers and editing each one
return this.layers.drawing.ctx
}
get canvas_current() {
return this.layers.drawing.canvas
}
keyHandler(event) { // handles keybinds like ctrl+z, ctrl+y
if (!this.popup.classList.contains("active")) {
document.removeEventListener("keydown", this.keyHandlerBound)
document.removeEventListener("keyup", this.keyHandlerBound)
return // this catches the case where something else closes the window but doesn't properly unbind the key handler
}
// keybindings
if (event.type == "keydown") {
if ((event.key == "z" || event.key == "Z") && event.ctrlKey) {
if (!event.shiftKey) {
this.history.undo()
}
else {
this.history.redo()
}
}
if (event.key == "y" && event.ctrlKey) {
this.history.redo()
}
}
// dropper ctrl holding handler stuff
var dropper_active = this.temp_previous_tool != null;
if (dropper_active && !event.ctrlKey) {
this.selectOption("tool", IMAGE_EDITOR_TOOLS.findIndex(t => t.id == this.temp_previous_tool))
this.temp_previous_tool = null
}
else if (!dropper_active && event.ctrlKey) {
this.temp_previous_tool = this.getOptionValue("tool")
this.selectOption("tool", IMAGE_EDITOR_TOOLS.findIndex(t => t.id == "colorpicker"))
}
}
mouseHandler(event) {
var bbox = this.layers.overlay.canvas.getBoundingClientRect()
var x = (event.clientX || 0) - bbox.left
var y = (event.clientY || 0) - bbox.top
var type = event.type;
var touchmap = {
touchstart: "mousedown",
touchmove: "mousemove",
touchend: "mouseup",
touchcancel: "mouseup"
}
if (type in touchmap) {
type = touchmap[type]
if (event.touches && event.touches[0]) {
var touch = event.touches[0]
var x = (touch.clientX || 0) - bbox.left
var y = (touch.clientY || 0) - bbox.top
}
}
event.preventDefault()
// do drawing-related stuff
if (type == "mousedown" || (type == "mouseenter" && event.buttons == 1)) {
this.drawing = true
this.tool.begin(this, this.ctx_current, x, y)
this.tool.begin(this, this.ctx_overlay, x, y, true)
this.history.editBegin(x, y)
}
if (type == "mouseup" || type == "mousemove") {
if (this.drawing) {
if (x > 0 && y > 0) {
this.tool.move(this, this.ctx_current, x, y)
this.tool.move(this, this.ctx_overlay, x, y, true)
this.history.editMove(x, y)
}
}
}
if (type == "mouseup" || type == "mouseout") {
if (this.drawing) {
this.drawing = false
this.tool.end(this, this.ctx_current, x, y)
this.tool.end(this, this.ctx_overlay, x, y, true)
this.history.editEnd(x, y)
}
}
}
getOptionValue(section_name) {
var section = IMAGE_EDITOR_SECTIONS.find(s => s.name == section_name)
return this.options && section_name in this.options ? this.options[section_name] : section.default
}
selectOption(section_name, option_index) {
var section = IMAGE_EDITOR_SECTIONS.find(s => s.name == section_name)
var value = section.options[option_index]
this.options[section_name] = value == "custom" ? section.getCustom(this) : value
this.optionElements[section_name].forEach(element => element.classList.remove("active"))
this.optionElements[section_name][option_index].classList.add("active")
// change the editor
this.setBrush()
if (section.name == "tool") {
this.loadTool()
}
}
}
function rgbToHex(rgb) {
function componentToHex(c) {
var hex = parseInt(c).toString(16)
return hex.length == 1 ? "0" + hex : hex
}
return "#" + componentToHex(rgb.r) + componentToHex(rgb.g) + componentToHex(rgb.b)
}
const imageEditor = new ImageEditor(document.getElementById("image-editor"))
const imageInpainter = new ImageEditor(document.getElementById("image-inpainter"), true)
imageEditor.setImage(null, 512, 512)
imageInpainter.setImage(null, 512, 512)
document.getElementById("init_image_button_draw").addEventListener("click", () => {
imageEditor.show()
})
document.getElementById("init_image_button_inpaint").addEventListener("click", () => {
imageInpainter.show()
})
img2imgUnload() // no init image when the app starts
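// Illustrative usage sketch (not part of the original file): driving the editor instance created above.
imageEditor.setImage(null, 512, 768)          // blank white 512x768 canvas, capped to the viewport and 768px
imageEditor.show()                            // opens the popup and binds the ctrl+z / ctrl+y key handler
imageEditor.runAction("clear")                // clears the drawing layer and records the action in the history
imageEditor.history.undo()                    // rewinds past the "clear" action
const exampleSnapshot = imageEditor.getImg()  // PNG data URL of the drawing layer
imageEditor.setImg(exampleSnapshot)           // restores the drawing layer from a data URL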

@@ -85,14 +85,13 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
if(typeof modifierCard == 'object') {
modifiersEl.appendChild(modifierCard)
const trimmedName = trimModifiers(modifierName)
modifierCard.addEventListener('click', () => {
if (activeTags.map(x => x.name).includes(modifierName)) {
if (activeTags.map(x => trimModifiers(x.name)).includes(trimmedName)) {
// remove modifier from active array
activeTags = activeTags.filter(x => x.name != modifierName)
modifierCard.classList.remove(activeCardClass)
modifierCard.querySelector('.modifier-card-image-overlay').innerText = '+'
activeTags = activeTags.filter(x => trimModifiers(x.name) != trimmedName)
toggleCardState(trimmedName, false)
} else {
// add modifier to active array
activeTags.push({
@@ -101,10 +100,7 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
'originElement': modifierCard,
'previews': modifierPreviews
})
modifierCard.classList.add(activeCardClass)
modifierCard.querySelector('.modifier-card-image-overlay').innerText = '-'
toggleCardState(trimmedName, true)
}
refreshTagsList()
@@ -125,6 +121,10 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
return e
}
function trimModifiers(tag) {
return tag.replace(/^\(+|\)+$/g, '').replace(/^\[+|\]+$/g, '')
}
async function loadModifiers() {
try {
let res = await fetch('/get/modifiers')
@@ -219,11 +219,10 @@ function refreshTagsList() {
editorModifierTagsList.appendChild(tag.element)
tag.element.addEventListener('click', () => {
let idx = activeTags.indexOf(tag)
let idx = activeTags.findIndex(o => { return o.name === tag.name })
if (idx !== -1 && activeTags[idx].originElement !== undefined) {
activeTags[idx].originElement.classList.remove(activeCardClass)
activeTags[idx].originElement.querySelector('.modifier-card-image-overlay').innerText = '+'
if (idx !== -1) {
toggleCardState(activeTags[idx].name, false)
activeTags.splice(idx, 1)
refreshTagsList()
@@ -236,6 +235,23 @@ function refreshTagsList() {
editorModifierTagsList.appendChild(brk)
}
function toggleCardState(modifierName, makeActive) {
document.querySelector('#editor-modifiers').querySelectorAll('.modifier-card').forEach(card => {
const name = card.querySelector('.modifier-card-label').innerText
if ( trimModifiers(modifierName) == trimModifiers(name)
|| trimModifiers(modifierName) == 'by ' + trimModifiers(name)) {
if(makeActive) {
card.classList.add(activeCardClass)
card.querySelector('.modifier-card-image-overlay').innerText = '-'
}
else{
card.classList.remove(activeCardClass)
card.querySelector('.modifier-card-image-overlay').innerText = '+'
}
}
})
}
function changePreviewImages(val) {
const previewImages = document.querySelectorAll('.modifier-card-image-container img')
@@ -310,31 +326,7 @@ function saveCustomModifiers() {
}
function loadCustomModifiers() {
let customModifiers = localStorage.getItem(CUSTOM_MODIFIERS_KEY, '')
customModifiersTextBox.value = customModifiers
if (customModifiersGroupElement !== undefined) {
customModifiersGroupElement.remove()
}
if (customModifiers && customModifiers.trim() !== '') {
customModifiers = customModifiers.split('\n')
customModifiers = customModifiers.filter(m => m.trim() !== '')
customModifiers = customModifiers.map(function(m) {
return {
"modifier": m
}
})
let customGroup = {
'category': 'Custom Modifiers',
'modifiers': customModifiers
}
customModifiersGroupElement = createModifierGroup(customGroup, true)
createCollapsibles(customModifiersGroupElement)
}
PLUGINS['MODIFIERS_LOAD'].forEach(fn=>fn.loader.call())
}
customModifiersTextBox.addEventListener('change', saveCustomModifiers)

@@ -1,41 +0,0 @@
const INPAINTING_EDITOR_SIZE = 450
let inpaintingEditorContainer = document.querySelector('#inpaintingEditor')
let inpaintingEditor = new DrawingBoard.Board('inpaintingEditor', {
color: "#ffffff",
background: false,
size: 30,
webStorage: false,
controls: [{'DrawingMode': {'filler': false}}, 'Size', 'Navigation']
})
let inpaintingEditorCanvasBackground = document.querySelector('.drawing-board-canvas-wrapper')
function resizeInpaintingEditor(widthValue, heightValue) {
if (widthValue === heightValue) {
widthValue = INPAINTING_EDITOR_SIZE
heightValue = INPAINTING_EDITOR_SIZE
} else if (widthValue > heightValue) {
heightValue = (heightValue / widthValue) * INPAINTING_EDITOR_SIZE
widthValue = INPAINTING_EDITOR_SIZE
} else {
widthValue = (widthValue / heightValue) * INPAINTING_EDITOR_SIZE
heightValue = INPAINTING_EDITOR_SIZE
}
if (inpaintingEditor.opts.aspectRatio === (widthValue / heightValue).toFixed(3)) {
// Same ratio, don't reset the canvas.
return
}
inpaintingEditor.opts.aspectRatio = (widthValue / heightValue).toFixed(3)
inpaintingEditorContainer.style.width = widthValue + 'px'
inpaintingEditorContainer.style.height = heightValue + 'px'
inpaintingEditor.opts.enlargeYourContainer = true
inpaintingEditor.opts.size = inpaintingEditor.ctx.lineWidth
inpaintingEditor.resize()
inpaintingEditor.ctx.lineCap = "round"
inpaintingEditor.ctx.lineJoin = "round"
inpaintingEditor.ctx.lineWidth = inpaintingEditor.opts.size
inpaintingEditor.setColor(inpaintingEditor.opts.color)
}

ui/media/js/jquery-confirm.min.js (vendored, new file, 10 lines): diff suppressed because one or more lines are too long

File diff suppressed because it is too large

@@ -5,9 +5,9 @@
*/
var ParameterType = {
checkbox: "checkbox",
select: "select",
select_multiple: "select_multiple",
custom: "custom",
select: "select",
select_multiple: "select_multiple",
custom: "custom",
};
/**
@@ -23,166 +23,189 @@
/** @type {Array.<Parameter>} */
var PARAMETERS = [
{
id: "theme",
type: ParameterType.select,
label: "Theme",
default: "theme-default",
note: "customize the look and feel of the ui",
options: [ // Note: options expanded dynamically
{
value: "theme-default",
label: "Default"
}
],
icon: "fa-palette"
},
{
id: "save_to_disk",
type: ParameterType.checkbox,
label: "Auto-Save Images",
note: "automatically saves images to the specified location",
icon: "fa-download",
default: false,
},
{
id: "diskPath",
type: ParameterType.custom,
label: "Save Location",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="30" disabled>`
}
},
{
id: "sound_toggle",
type: ParameterType.checkbox,
label: "Enable Sound",
note: "plays a sound on task completion",
icon: "fa-volume-low",
default: true,
},
{
id: "ui_open_browser_on_start",
type: ParameterType.checkbox,
label: "Open browser on startup",
note: "starts the default browser on startup",
icon: "fa-window-restore",
default: true,
},
{
id: "turbo",
type: ParameterType.checkbox,
label: "Turbo Mode",
note: "generates images faster, but uses an additional 1 GB of GPU memory",
icon: "fa-forward",
default: true,
},
{
id: "use_cpu",
type: ParameterType.checkbox,
label: "Use CPU (not GPU)",
note: "warning: this will be *very* slow",
icon: "fa-microchip",
default: false,
},
{
id: "auto_pick_gpus",
type: ParameterType.checkbox,
label: "Automatically pick the GPUs (experimental)",
default: false,
},
{
id: "use_gpus",
type: ParameterType.select_multiple,
label: "GPUs to use (experimental)",
note: "to process in parallel",
default: false,
},
{
id: "use_full_precision",
type: ParameterType.checkbox,
label: "Use Full Precision",
note: "for GPU-only. warning: this will consume more VRAM",
icon: "fa-crosshairs",
default: false,
},
{
id: "auto_save_settings",
type: ParameterType.checkbox,
label: "Auto-Save Settings",
note: "restores settings on browser load",
icon: "fa-gear",
default: true,
},
{
id: "listen_to_network",
type: ParameterType.checkbox,
label: "Make Stable Diffusion available on your network",
note: "Other devices on your network can access this web page",
icon: "fa-network-wired",
default: true,
},
{
id: "listen_port",
type: ParameterType.custom,
label: "Network port",
note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'",
icon: "fa-anchor",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">`
}
},
{
id: "use_beta_channel",
type: ParameterType.checkbox,
label: "Beta channel",
note: "Get the latest features immediately (but could be less stable). Please restart the program after changing this.",
icon: "fa-fire",
default: false,
},
{
id: "theme",
type: ParameterType.select,
label: "Theme",
default: "theme-default",
note: "customize the look and feel of the ui",
options: [ // Note: options expanded dynamically
{
value: "theme-default",
label: "Default"
}
],
icon: "fa-palette"
},
{
id: "save_to_disk",
type: ParameterType.checkbox,
label: "Auto-Save Images",
note: "automatically saves images to the specified location",
icon: "fa-download",
default: false,
},
{
id: "diskPath",
type: ParameterType.custom,
label: "Save Location",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="30" disabled>`
}
},
{
id: "sound_toggle",
type: ParameterType.checkbox,
label: "Enable Sound",
note: "plays a sound on task completion",
icon: "fa-volume-low",
default: true,
},
{
id: "process_order_toggle",
type: ParameterType.checkbox,
label: "Process newest jobs first",
note: "reverse the normal processing order",
default: false,
},
{
id: "ui_open_browser_on_start",
type: ParameterType.checkbox,
label: "Open browser on startup",
note: "starts the default browser on startup",
icon: "fa-window-restore",
default: true,
},
{
id: "turbo",
type: ParameterType.checkbox,
label: "Turbo Mode",
note: "generates images faster, but uses an additional 1 GB of GPU memory",
icon: "fa-forward",
default: true,
},
{
id: "use_cpu",
type: ParameterType.checkbox,
label: "Use CPU (not GPU)",
note: "warning: this will be *very* slow",
icon: "fa-microchip",
default: false,
},
{
id: "auto_pick_gpus",
type: ParameterType.checkbox,
label: "Automatically pick the GPUs (experimental)",
default: false,
},
{
id: "use_gpus",
type: ParameterType.select_multiple,
label: "GPUs to use (experimental)",
note: "to process in parallel",
default: false,
},
{
id: "use_full_precision",
type: ParameterType.checkbox,
label: "Use Full Precision",
note: "for GPU-only. warning: this will consume more VRAM",
icon: "fa-crosshairs",
default: false,
},
{
id: "auto_save_settings",
type: ParameterType.checkbox,
label: "Auto-Save Settings",
note: "restores settings on browser load",
icon: "fa-gear",
default: true,
},
{
id: "confirm_dangerous_actions",
type: ParameterType.checkbox,
label: "Confirm dangerous actions",
note: "Actions that might lead to data loss must either be clicked with the shift key pressed, or confirmed in an 'Are you sure?' dialog",
icon: "fa-check-double",
default: true,
},
{
id: "listen_to_network",
type: ParameterType.checkbox,
label: "Make Stable Diffusion available on your network",
note: "Other devices on your network can access this web page",
icon: "fa-network-wired",
default: true,
},
{
id: "listen_port",
type: ParameterType.custom,
label: "Network port",
note: "Port that this server listens to. The '9000' part in 'http://localhost:9000'",
icon: "fa-anchor",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="6" value="9000" onkeypress="preventNonNumericalInput(event)">`
}
},
{
id: "test_sd2",
type: ParameterType.checkbox,
label: "Test SD 2.0",
note: "Experimental! High memory usage! GPU-only! Not the final version! Please restart the program after changing this.",
icon: "fa-fire",
default: false,
},
{
id: "use_beta_channel",
type: ParameterType.checkbox,
label: "Beta channel",
note: "Get the latest features immediately (but could be less stable). Please restart the program after changing this.",
icon: "fa-fire",
default: false,
},
];
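// Illustrative sketch (not part of the original file): the shape of a hypothetical extra PARAMETERS
// entry. The id, label, note and icon below are made up; pushing it into PARAMETERS before
// initParameters() runs would add one more row to the system settings table.
const EXAMPLE_PARAMETER = {
    id: "example_toggle",
    type: ParameterType.checkbox,
    label: "Example toggle",
    note: "a made-up setting, shown only to illustrate the Parameter shape",
    icon: "fa-toggle-on",
    default: false,
}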
function getParameterSettingsEntry(id) {
let parameter = PARAMETERS.filter(p => p.id === id)
if (parameter.length === 0) {
return
}
return parameter[0].settingsEntry
let parameter = PARAMETERS.filter(p => p.id === id)
if (parameter.length === 0) {
return
}
return parameter[0].settingsEntry
}
function getParameterElement(parameter) {
switch (parameter.type) {
case ParameterType.checkbox:
var is_checked = parameter.default ? " checked" : "";
return `<input id="${parameter.id}" name="${parameter.id}"${is_checked} type="checkbox">`
case ParameterType.select:
case ParameterType.select_multiple:
var options = (parameter.options || []).map(option => `<option value="${option.value}">${option.label}</option>`).join("")
var multiple = (parameter.type == ParameterType.select_multiple ? 'multiple' : '')
return `<select id="${parameter.id}" name="${parameter.id}" ${multiple}>${options}</select>`
case ParameterType.custom:
return parameter.render(parameter)
default:
console.error(`Invalid type for parameter ${parameter.id}`);
return "ERROR: Invalid Type"
}
switch (parameter.type) {
case ParameterType.checkbox:
var is_checked = parameter.default ? " checked" : "";
return `<input id="${parameter.id}" name="${parameter.id}"${is_checked} type="checkbox">`
case ParameterType.select:
case ParameterType.select_multiple:
var options = (parameter.options || []).map(option => `<option value="${option.value}">${option.label}</option>`).join("")
var multiple = (parameter.type == ParameterType.select_multiple ? 'multiple' : '')
return `<select id="${parameter.id}" name="${parameter.id}" ${multiple}>${options}</select>`
case ParameterType.custom:
return parameter.render(parameter)
default:
console.error(`Invalid type for parameter ${parameter.id}`);
return "ERROR: Invalid Type"
}
}
let parametersTable = document.querySelector("#system-settings .parameters-table")
/* fill in the system settings popup table */
function initParameters() {
PARAMETERS.forEach(parameter => {
var element = getParameterElement(parameter)
var note = parameter.note ? `<small>${parameter.note}</small>` : "";
var icon = parameter.icon ? `<i class="fa ${parameter.icon}"></i>` : "";
var newrow = document.createElement('div')
newrow.innerHTML = `
<div>${icon}</div>
<div><label for="${parameter.id}">${parameter.label}</label>${note}</div>
<div>${element}</div>`
parametersTable.appendChild(newrow)
parameter.settingsEntry = newrow
})
PARAMETERS.forEach(parameter => {
var element = getParameterElement(parameter)
var note = parameter.note ? `<small>${parameter.note}</small>` : "";
var icon = parameter.icon ? `<i class="fa ${parameter.icon}"></i>` : "";
var newrow = document.createElement('div')
newrow.innerHTML = `
<div>${icon}</div>
<div><label for="${parameter.id}">${parameter.label}</label>${note}</div>
<div>${element}</div>`
parametersTable.appendChild(newrow)
parameter.settingsEntry = newrow
})
}
initParameters()
@@ -196,11 +219,14 @@ let saveToDiskField = document.querySelector('#save_to_disk')
let diskPathField = document.querySelector('#diskPath')
let listenToNetworkField = document.querySelector("#listen_to_network")
let listenPortField = document.querySelector("#listen_port")
let testSD2Field = document.querySelector("#test_sd2")
let useBetaChannelField = document.querySelector("#use_beta_channel")
let uiOpenBrowserOnStartField = document.querySelector("#ui_open_browser_on_start")
let confirmDangerousActionsField = document.querySelector("#confirm_dangerous_actions")
let saveSettingsBtn = document.querySelector('#save-system-settings-btn')
async function changeAppConfig(configDelta) {
try {
let res = await fetch('/app_config', {
@@ -230,12 +256,18 @@ async function getAppConfig() {
if (config.ui && config.ui.open_browser_on_start === false) {
uiOpenBrowserOnStartField.checked = false
}
if (config.net && config.net.listen_to_network === false) {
listenToNetworkField.checked = false
}
if (config.net && config.net.listen_port !== undefined) {
listenPortField.value = config.net.listen_port
}
if ('test_sd2' in config) {
testSD2Field.checked = config['test_sd2']
}
let testSD2SettingEntry = getParameterSettingsEntry('test_sd2')
testSD2SettingEntry.style.display = (config.update_branch === 'beta' ? '' : 'none')
if (config.net && config.net.listen_to_network === false) {
listenToNetworkField.checked = false
}
if (config.net && config.net.listen_port !== undefined) {
listenPortField.value = config.net.listen_port
}
console.log('get config status response', config)
} catch (e) {
@@ -263,7 +295,6 @@ function getCurrentRenderDeviceSelection() {
useCPUField.addEventListener('click', function() {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
console.log("hello", this.checked);
if (this.checked) {
gpuSettingEntry.style.display = 'none'
autoPickGPUSettingEntry.style.display = 'none'
@@ -313,69 +344,100 @@ async function getDiskPath() {
}
}
async function getDevices() {
try {
let res = await fetch('/get/devices')
if (res.status === 200) {
res = await res.json()
function setDeviceInfo(devices) {
let cpu = devices.all.cpu.name
let allGPUs = Object.keys(devices.all).filter(d => d != 'cpu')
let activeGPUs = Object.keys(devices.active)
let allDeviceIds = Object.keys(res['all']).filter(d => d !== 'cpu')
let activeDeviceIds = Object.keys(res['active']).filter(d => d !== 'cpu')
if (activeDeviceIds.length === 0) {
useCPUField.checked = true
}
if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION || useCPUField.checked) {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = 'none'
let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
autoPickGPUSettingEntry.style.display = 'none'
}
if (allDeviceIds.length === 0) {
useCPUField.checked = true
useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
}
autoPickGPUsField.checked = (res['config'] === 'auto')
useGPUsField.innerHTML = ''
allDeviceIds.forEach(device => {
let deviceName = res['all'][device]['name']
let deviceOption = `<option value="${device}">${deviceName} (${device})</option>`
useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
})
if (autoPickGPUsField.checked) {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = 'none'
} else {
$('#use_gpus').val(activeDeviceIds)
}
function ID_TO_TEXT(d) {
let info = devices.all[d]
if ("mem_free" in info && "mem_total" in info) {
return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(1)} Gb total)</small>`
} else {
return `${info.name} <small>(${d}) (no memory info)</small>`
}
}
allGPUs = allGPUs.map(ID_TO_TEXT)
activeGPUs = activeGPUs.map(ID_TO_TEXT)
let systemInfoEl = document.querySelector('#system-info')
systemInfoEl.querySelector('#system-info-cpu').innerText = cpu
systemInfoEl.querySelector('#system-info-gpus-all').innerHTML = allGPUs.join('</br>')
systemInfoEl.querySelector('#system-info-rendering-devices').innerHTML = activeGPUs.join('</br>')
}
function setHostInfo(hosts) {
let port = listenPortField.value
hosts = hosts.map(addr => `http://${addr}:${port}/`).map(url => `<div><a href="${url}">${url}</a></div>`)
document.querySelector('#system-info-server-hosts').innerHTML = hosts.join('')
}
async function getSystemInfo() {
try {
const res = await SD.getSystemInfo()
let devices = res['devices']
let allDeviceIds = Object.keys(devices['all']).filter(d => d !== 'cpu')
let activeDeviceIds = Object.keys(devices['active']).filter(d => d !== 'cpu')
if (activeDeviceIds.length === 0) {
useCPUField.checked = true
}
if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION || useCPUField.checked) {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = 'none'
let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
autoPickGPUSettingEntry.style.display = 'none'
}
if (allDeviceIds.length === 0) {
useCPUField.checked = true
useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
}
autoPickGPUsField.checked = (devices['config'] === 'auto')
useGPUsField.innerHTML = ''
allDeviceIds.forEach(device => {
let deviceName = devices['all'][device]['name']
let deviceOption = `<option value="${device}">${deviceName} (${device})</option>`
useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
})
if (autoPickGPUsField.checked) {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = 'none'
} else {
$('#use_gpus').val(activeDeviceIds)
}
setDeviceInfo(devices)
setHostInfo(res['hosts'])
} catch (e) {
console.log('error fetching devices', e)
}
}
saveSettingsBtn.addEventListener('click', function() {
let updateBranch = (useBetaChannelField.checked ? 'beta' : 'main')
if (listenPortField.value == '') {
alert('The network port field must not be empty.')
} else if (listenPortField.value<1 || listenPortField.value>65535) {
alert('The network port must be a number from 1 to 65535')
} else {
changeAppConfig({
'render_devices': getCurrentRenderDeviceSelection(),
'update_branch': updateBranch,
'ui_open_browser_on_start': uiOpenBrowserOnStartField.checked,
'listen_to_network': listenToNetworkField.checked,
'listen_port': listenPortField.value
})
}
saveSettingsBtn.classList.add('active')
asyncDelay(300).then(() => saveSettingsBtn.classList.remove('active'))
if (listenPortField.value == '') {
alert('The network port field must not be empty.')
return
}
if (listenPortField.value < 1 || listenPortField.value > 65535) {
alert('The network port must be a number from 1 to 65535')
return
}
let updateBranch = (useBetaChannelField.checked ? 'beta' : 'main')
changeAppConfig({
'render_devices': getCurrentRenderDeviceSelection(),
'update_branch': updateBranch,
'ui_open_browser_on_start': uiOpenBrowserOnStartField.checked,
'listen_to_network': listenToNetworkField.checked,
'listen_port': listenPortField.value,
'test_sd2': testSD2Field.checked
})
saveSettingsBtn.classList.add('active')
asyncDelay(300).then(() => saveSettingsBtn.classList.remove('active'))
})

@@ -24,23 +24,48 @@ const PLUGINS = {
* }
* })
*/
IMAGE_INFO_BUTTONS: []
IMAGE_INFO_BUTTONS: [],
MODIFIERS_LOAD: [],
TASK_CREATE: [],
OUTPUTS_FORMATS: new ServiceContainer(
function png() { return (reqBody) => new SD.RenderTask(reqBody) }
, function jpeg() { return (reqBody) => new SD.RenderTask(reqBody) }
),
}
PLUGINS.OUTPUTS_FORMATS.register = function(...args) {
const service = ServiceContainer.prototype.register.apply(this, args)
if (typeof outputFormatField !== 'undefined') {
const newOption = document.createElement("option")
newOption.setAttribute("value", service.name)
newOption.innerText = service.name
outputFormatField.appendChild(newOption)
}
return service
}
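// Illustrative usage sketch (not part of the original file): registering one more output format.
// The "webp" name is hypothetical, and assumes the server-side renderer accepts that format.
PLUGINS.OUTPUTS_FORMATS.register(function webp() {
    return (reqBody) => new SD.RenderTask(reqBody)
})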
function loadScript(url) {
const script = document.createElement('script')
const promiseSrc = new PromiseSource()
script.addEventListener('error', () => promiseSrc.reject(new Error(`Script "${url}" couldn't be loaded.`)))
script.addEventListener('load', () => promiseSrc.resolve(url))
script.src = url + '?t=' + Date.now()
console.log('loading script', url)
document.head.appendChild(script)
return promiseSrc.promise
}
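// Illustrative usage sketch (not part of the original file); the plugin path is hypothetical.
loadScript("/plugins/ui/example.plugin.js")
    .then(url => console.log("plugin loaded", url))
    .catch(err => console.error(err))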
async function loadUIPlugins() {
try {
let res = await fetch('/get/ui_plugins')
if (res.status === 200) {
res = await res.json()
res.forEach(pluginPath => {
let script = document.createElement('script')
script.src = pluginPath + '?t=' + Date.now()
console.log('loading plugin', pluginPath)
document.head.appendChild(script)
})
const res = await fetch('/get/ui_plugins')
if (!res.ok) {
console.error(`Error HTTP${res.status} while loading plugins list. - ${res.statusText}`)
return
}
const plugins = await res.json()
const loadingPromises = plugins.map(loadScript)
return await Promise.allSettled(loadingPromises)
} catch (e) {
console.log('error fetching plugin paths', e)
}

@@ -60,6 +60,7 @@ function themeFieldChanged() {
body.style = "";
var theme = THEMES.find(t => t.key == theme_key);
let borderColor = undefined
if (theme) {
// refresh variables in case they are back-referencing
Array.from(DEFAULT_THEME.rule.style)
@@ -67,7 +68,14 @@ function themeFieldChanged() {
.forEach(cssVariable => {
body.style.setProperty(cssVariable, DEFAULT_THEME.rule.style.getPropertyValue(cssVariable));
});
borderColor = theme.rule.style.getPropertyValue('--input-border-color').trim()
if (!borderColor.startsWith('#')) {
borderColor = theme.rule.style.getPropertyValue('--theme-color-fallback')
}
} else {
borderColor = DEFAULT_THEME.rule.style.getPropertyValue('--theme-color-fallback')
}
document.querySelector('meta[name="theme-color"]').setAttribute("content", borderColor)
}
themeField.addEventListener('change', themeFieldChanged);

@@ -1,32 +1,37 @@
"use strict";
// https://gomakethings.com/finding-the-next-and-previous-sibling-elements-that-match-a-selector-with-vanilla-js/
function getNextSibling(elem, selector) {
// Get the next sibling element
var sibling = elem.nextElementSibling
// Get the next sibling element
let sibling = elem.nextElementSibling
// If there's no selector, return the first sibling
if (!selector) return sibling
// If there's no selector, return the first sibling
if (!selector) {
return sibling
}
// If the sibling matches our selector, use it
// If not, jump to the next sibling and continue the loop
while (sibling) {
if (sibling.matches(selector)) return sibling
sibling = sibling.nextElementSibling
}
// If the sibling matches our selector, use it
// If not, jump to the next sibling and continue the loop
while (sibling) {
if (sibling.matches(selector)) {
return sibling
}
sibling = sibling.nextElementSibling
}
}
/* Panel Stuff */
// true = open
var COLLAPSIBLES_INITIALIZED = false;
let COLLAPSIBLES_INITIALIZED = false;
const COLLAPSIBLES_KEY = "collapsibles";
const COLLAPSIBLE_PANELS = []; // filled in by createCollapsibles with all the elements matching .collapsible
// on-init call this for any panels that are marked open
function toggleCollapsible(element) {
var collapsibleHeader = element.querySelector(".collapsible");
var handle = element.querySelector(".collapsible-handle");
const collapsibleHeader = element.querySelector(".collapsible");
const handle = element.querySelector(".collapsible-handle");
collapsibleHeader.classList.toggle("active")
let content = getNextSibling(collapsibleHeader, '.collapsible-content')
if (!collapsibleHeader.classList.contains("active")) {
@@ -40,6 +45,7 @@ function toggleCollapsible(element) {
handle.innerHTML = '&#x2796;' // minus
}
}
document.dispatchEvent(new CustomEvent('collapsibleClick', { detail: collapsibleHeader }))
if (COLLAPSIBLES_INITIALIZED && COLLAPSIBLE_PANELS.includes(element)) {
saveCollapsibles()
@@ -47,16 +53,16 @@ function toggleCollapsible(element) {
}
function saveCollapsibles() {
var values = {}
let values = {}
COLLAPSIBLE_PANELS.forEach(element => {
var value = element.querySelector(".collapsible").className.indexOf("active") !== -1
let value = element.querySelector(".collapsible").className.indexOf("active") !== -1
values[element.id] = value
})
localStorage.setItem(COLLAPSIBLES_KEY, JSON.stringify(values))
}
function createCollapsibles(node) {
var save = false
let save = false
if (!node) {
node = document
save = true
@@ -81,7 +87,7 @@ function createCollapsibles(node) {
})
})
if (save) {
var saved = localStorage.getItem(COLLAPSIBLES_KEY)
let saved = localStorage.getItem(COLLAPSIBLES_KEY)
if (!saved) {
saved = tryLoadOldCollapsibles();
}
@@ -89,9 +95,9 @@ function createCollapsibles(node) {
saveCollapsibles()
saved = localStorage.getItem(COLLAPSIBLES_KEY)
}
var values = JSON.parse(saved)
let values = JSON.parse(saved)
COLLAPSIBLE_PANELS.forEach(element => {
var value = element.querySelector(".collapsible").className.indexOf("active") !== -1
let value = element.querySelector(".collapsible").className.indexOf("active") !== -1
if (values[element.id] != value) {
toggleCollapsible(element)
}
@@ -101,17 +107,17 @@ function createCollapsibles(node) {
}
function tryLoadOldCollapsibles() {
var old_map = {
const old_map = {
"advancedPanelOpen": "editor-settings",
"modifiersPanelOpen": "editor-modifiers",
"negativePromptPanelOpen": "editor-inputs-prompt"
};
if (localStorage.getItem(Object.keys(old_map)[0])) {
var result = {};
let result = {};
Object.keys(old_map).forEach(key => {
var value = localStorage.getItem(key);
const value = localStorage.getItem(key);
if (value !== null) {
result[old_map[key]] = value == true || value == "true"
result[old_map[key]] = (value == true || value == "true")
localStorage.removeItem(key)
}
});
@@ -150,17 +156,17 @@ function millisecondsToStr(milliseconds) {
return (number > 1) ? 's' : ''
}
var temp = Math.floor(milliseconds / 1000)
var hours = Math.floor((temp %= 86400) / 3600)
var s = ''
let temp = Math.floor(milliseconds / 1000)
let hours = Math.floor((temp %= 86400) / 3600)
let s = ''
if (hours) {
s += hours + ' hour' + numberEnding(hours) + ' '
}
var minutes = Math.floor((temp %= 3600) / 60)
let minutes = Math.floor((temp %= 3600) / 60)
if (minutes) {
s += minutes + ' minute' + numberEnding(minutes) + ' '
}
var seconds = temp % 60
let seconds = temp % 60
if (!hours && minutes < 4 && seconds) {
s += seconds + ' second' + numberEnding(seconds)
}
@@ -178,7 +184,7 @@ function BraceExpander() {
function bracePair(tkns, iPosn, iNest, lstCommas) {
if (iPosn >= tkns.length || iPosn < 0) return null;
var t = tkns[iPosn],
let t = tkns[iPosn],
n = (t === '{') ? (
iNest + 1
) : (t === '}' ? (
@@ -198,7 +204,7 @@ function BraceExpander() {
function andTree(dctSofar, tkns) {
if (!tkns.length) return [dctSofar, []];
var dctParse = dctSofar ? dctSofar : {
let dctParse = dctSofar ? dctSofar : {
fn: and,
args: []
},
@@ -231,14 +237,14 @@ function BraceExpander() {
// Parse of a PARADIGM subtree
function orTree(dctSofar, tkns, lstCommas) {
if (!tkns.length) return [dctSofar, []];
var iLast = lstCommas.length;
let iLast = lstCommas.length;
return {
fn: or,
args: splitsAt(
lstCommas, tkns
).map(function (x, i) {
var ts = x.slice(
let ts = x.slice(
1, i === iLast ? (
-1
) : void 0
@@ -256,7 +262,7 @@ function BraceExpander() {
// List of unescaped braces and commas, and remaining strings
function tokens(str) {
// Filter function excludes empty splitting artefacts
var toS = function (x) {
let toS = function (x) {
return x.toString();
};
@@ -270,7 +276,7 @@ function BraceExpander() {
// PARSE TREE OPERATOR (1 of 2)
// Each possible head * each possible tail
function and(args) {
var lng = args.length,
let lng = args.length,
head = lng ? args[0] : null,
lstHead = "string" === typeof head ? (
[head]
@@ -330,7 +336,7 @@ function BraceExpander() {
// s -> [s]
this.expand = function(s) {
// BRACE EXPRESSION PARSED
var dctParse = andTree(null, tokens(s))[0];
let dctParse = andTree(null, tokens(s))[0];
// ABSTRACT SYNTAX TREE LOGGED
// console.log(pp(dctParse));
@@ -341,12 +347,76 @@ function BraceExpander() {
}
/** Pause the execution of an async function until the timer elapses.
* @Returns a promise that will resolve after the specified timeout.
*/
function asyncDelay(timeout) {
return new Promise(function(resolve, reject) {
setTimeout(resolve, timeout, true)
})
}
function PromiseSource() {
const srcPromise = new Promise((resolve, reject) => {
Object.defineProperties(this, {
resolve: { value: resolve, writable: false }
, reject: { value: reject, writable: false }
})
})
Object.defineProperties(this, {
promise: {value: makeQuerablePromise(srcPromise), writable: false}
})
}
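// Illustrative usage sketch (not part of the original file): a PromiseSource exposes a promise
// together with the resolve/reject functions that settle it, so other code can settle it later.
const exampleSource = new PromiseSource()
exampleSource.promise.then(value => console.log("settled with", value))
setTimeout(() => exampleSource.resolve(42), 100)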
/** Debounce returns a wrapped function that postpones calls to `func`: `func` only runs once
* the wrapper has not been invoked for `wait` milliseconds.
* If `immediate` is passed, `func` is triggered on the leading edge instead of the trailing edge.
* @Returns a promise that will resolve to `func`'s return value.
*/
function debounce (func, wait, immediate) {
if (typeof wait === "undefined") {
wait = 40
}
if (typeof wait !== "number") {
throw new Error("wait is not an number.")
}
let timeout = null
let lastPromiseSrc = new PromiseSource()
const applyFn = function(context, args) {
let result = undefined
try {
result = func.apply(context, args)
} catch (err) {
lastPromiseSrc.reject(err)
}
if (result instanceof Promise) {
result.then(lastPromiseSrc.resolve, lastPromiseSrc.reject)
} else {
lastPromiseSrc.resolve(result)
}
}
return function(...args) {
const callNow = Boolean(immediate && !timeout)
const context = this;
if (timeout) {
clearTimeout(timeout)
}
timeout = setTimeout(function () {
if (!immediate) {
applyFn(context, args)
}
lastPromiseSrc = new PromiseSource()
timeout = null
}, wait)
if (callNow) {
applyFn(context, args)
}
return lastPromiseSrc.promise
}
}
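// Illustrative usage sketch (not part of the original file): the wrapped function only runs once
// calls stop arriving for 250ms, and every call returns a promise for that eventual result.
const exampleDebounced = debounce(term => term.trim().toLowerCase(), 250)
exampleDebounced("  Cat ").then(result => console.log(result)) // logs "cat" after the 250ms pause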
function preventNonNumericalInput(e) {
e = e || window.event;
let charCode = (typeof e.which == "undefined") ? e.keyCode : e.which;
@@ -359,6 +429,83 @@ function preventNonNumericalInput(e) {
}
}
/** Returns the global object for the current execution environment.
* @Returns window in a browser, global in node and self in a ServiceWorker.
* @Notes Allows unit testing and use of the engine outside of a browser.
*/
function getGlobal() {
if (typeof globalThis === 'object') {
return globalThis
} else if (typeof global === 'object') {
return global
} else if (typeof self === 'object') {
return self
}
try {
return Function('return this')()
} catch {
// If the Function constructor fails, we're in a browser with eval disabled by CSP headers.
return window
} // Returns undefined if global can't be found.
}
/** Check if x is an Array or a TypedArray.
* @Returns true if x is an Array or a TypedArray, false otherwise.
*/
function isArrayOrTypedArray(x) {
return Boolean(typeof x === 'object' && (Array.isArray(x) || (ArrayBuffer.isView(x) && !(x instanceof DataView))))
}
function makeQuerablePromise(promise) {
if (typeof promise !== 'object') {
throw new Error('promise is not an object.')
}
if (!(promise instanceof Promise)) {
throw new Error('Argument is not a promise.')
}
// Don't modify a promise that's been already modified.
if ('isResolved' in promise || 'isRejected' in promise || 'isPending' in promise) {
return promise
}
let isPending = true
let isRejected = false
let rejectReason = undefined
let isResolved = false
let resolvedValue = undefined
const qurPro = promise.then(
function(val){
isResolved = true
isPending = false
resolvedValue = val
return val
}
, function(reason) {
rejectReason = reason
isRejected = true
isPending = false
throw reason
}
)
Object.defineProperties(qurPro, {
'isResolved': {
get: () => isResolved
}
, 'resolvedValue': {
get: () => resolvedValue
}
, 'isPending': {
get: () => isPending
}
, 'isRejected': {
get: () => isRejected
}
, 'rejectReason': {
get: () => rejectReason
}
})
return qurPro
}
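// Illustrative usage sketch (not part of the original file): the wrapped promise can be polled
// synchronously instead of awaited.
const exampleTracked = makeQuerablePromise(asyncDelay(100))
console.log(exampleTracked.isPending)   // true right away
exampleTracked.then(() => console.log(exampleTracked.isResolved, exampleTracked.resolvedValue)) // true true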
/* inserts custom html to allow prettifying of inputs */
function prettifyInputs(root_element) {
root_element.querySelectorAll(`input[type="checkbox"]`).forEach(element => {
@@ -374,3 +521,156 @@ function prettifyInputs(root_element) {
}
})
}
class GenericEventSource {
#events = {};
#types = []
constructor(...eventsTypes) {
if (Array.isArray(eventsTypes) && eventsTypes.length === 1 && Array.isArray(eventsTypes[0])) {
eventsTypes = eventsTypes[0]
}
this.#types.push(...eventsTypes)
}
get eventTypes() {
return this.#types
}
/** Add a new event listener
*/
addEventListener(name, handler) {
if (!this.#types.includes(name)) {
throw new Error('Invalid event name.')
}
if (this.#events.hasOwnProperty(name)) {
this.#events[name].push(handler)
} else {
this.#events[name] = [handler]
}
}
/** Remove the event listener
*/
removeEventListener(name, handler) {
if (!this.#events.hasOwnProperty(name)) {
return
}
const index = this.#events[name].indexOf(handler)
if (index != -1) {
this.#events[name].splice(index, 1)
}
}
fireEvent(name, ...args) {
if (!this.#types.includes(name)) {
throw new Error(`Event ${String(name)} missing from Events.types`)
}
if (!this.#events.hasOwnProperty(name)) {
return Promise.resolve()
}
if (!args || !args.length) {
args = []
}
const evs = this.#events[name]
if (evs.length <= 0) {
return Promise.resolve()
}
return Promise.allSettled(evs.map((callback) => {
try {
return Promise.resolve(callback.apply(SD, args))
} catch (ex) {
return Promise.reject(ex)
}
}))
}
}
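// Illustrative usage sketch (not part of the original file); the event names are made up.
// Note that fireEvent invokes handlers with the global SD object (from engine.js) as `this`,
// so the engine script is assumed to be loaded.
const exampleEvents = new GenericEventSource("taskStart", "taskDone")
exampleEvents.addEventListener("taskDone", info => console.log("done:", info))
exampleEvents.fireEvent("taskDone", { seed: 42 })     // resolves once all handlers have settled
// exampleEvents.addEventListener("oops", () => {})   // would throw: "Invalid event name."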
class ServiceContainer {
#services = new Map()
#singletons = new Map()
constructor(...servicesParams) {
servicesParams.forEach(this.register.bind(this))
}
get services () {
return this.#services
}
get singletons() {
return this.#singletons
}
register(params) {
if (ServiceContainer.isConstructor(params)) {
if (typeof params.name !== 'string') {
throw new Error('params.name is not a string.')
}
params = {name:params.name, definition:params}
}
if (typeof params !== 'object') {
throw new Error('params is not an object.')
}
[ 'name',
'definition',
].forEach((key) => {
if (!(key in params)) {
console.error('Invalid service %o registration.', params)
throw new Error(`params.${key} is not defined.`)
}
})
const opts = {definition: params.definition}
if ('dependencies' in params) {
if (Array.isArray(params.dependencies)) {
params.dependencies.forEach((dep) => {
if (typeof dep !== 'string') {
throw new Error('dependency name is not a string.')
}
})
opts.dependencies = params.dependencies
} else {
throw new Error('params.dependencies is not an array.')
}
}
if (params.singleton) {
opts.singleton = true
}
this.#services.set(params.name, opts)
return Object.assign({name: params.name}, opts)
}
get(name) {
const ctorInfos = this.#services.get(name)
if (!ctorInfos) {
return
}
if(!ServiceContainer.isConstructor(ctorInfos.definition)) {
return ctorInfos.definition
}
if(!ctorInfos.singleton) {
return this._createInstance(ctorInfos)
}
const singletonInstance = this.#singletons.get(name)
if(singletonInstance) {
return singletonInstance
}
const newSingletonInstance = this._createInstance(ctorInfos)
this.#singletons.set(name, newSingletonInstance)
return newSingletonInstance
}
_getResolvedDependencies(service) {
let classDependencies = []
if(service.dependencies) {
classDependencies = service.dependencies.map(this.get.bind(this))
}
return classDependencies
}
_createInstance(service) {
if (!ServiceContainer.isClass(service.definition)) {
// Call as normal function.
return service.definition(...this._getResolvedDependencies(service))
}
// Use new
return new service.definition(...this._getResolvedDependencies(service))
}
static isClass(definition) {
return typeof definition === 'function' && Boolean(definition.prototype) && definition.prototype.constructor === definition
}
static isConstructor(definition) {
return typeof definition === 'function'
}
}
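// Illustrative usage sketch (not part of the original file); the service names are made up.
const exampleContainer = new ServiceContainer(
    function greeter() { return (name) => `hello ${name}` },        // factory registered under its own function name
    { name: "config", definition: { retries: 3 } },                 // a plain value, returned as-is by get()
    { name: "logger", definition: class Logger { log(m) { console.log(m) } }, singleton: true }
)
console.log(exampleContainer.get("greeter")("world"))               // "hello world"
console.log(exampleContainer.get("config").retries)                 // 3
console.assert(exampleContainer.get("logger") === exampleContainer.get("logger")) // singletons are cached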

@@ -0,0 +1,8 @@
{
"name": "Stable Diffusion UI",
"display": "standalone",
"display_override": [
"window-controls-overlay"
],
"theme_color": "#000000"
}

@@ -0,0 +1,45 @@
(function () {
"use strict"
var styleSheet = document.createElement("style");
styleSheet.textContent = `
.auto-scroll {
float: right;
}
`;
document.head.appendChild(styleSheet);
const autoScrollControl = document.createElement('div');
autoScrollControl.innerHTML = `<input id="auto_scroll" name="auto_scroll" type="checkbox">
<label for="auto_scroll">Scroll to generated image</label>`
autoScrollControl.className = "auto-scroll"
clearAllPreviewsBtn.parentNode.insertBefore(autoScrollControl, clearAllPreviewsBtn.nextSibling)
prettifyInputs(document);
let autoScroll = document.querySelector("#auto_scroll")
// save/restore the toggle state
autoScroll.addEventListener('click', (e) => {
localStorage.setItem('auto_scroll', autoScroll.checked)
})
autoScroll.checked = localStorage.getItem('auto_scroll') == "true"
// observe for changes in the preview pane
var observer = new MutationObserver(function (mutations) {
mutations.forEach(function (mutation) {
if (mutation.target.className == 'img-batch') {
Autoscroll(mutation.target)
}
})
})
observer.observe(document.getElementById('preview'), {
childList: true,
subtree: true
})
function Autoscroll(target) {
if (autoScroll.checked && target !== null) {
target.parentElement.parentElement.parentElement.scrollIntoView();
}
}
})()

View File

@ -1,7 +1,10 @@
(function () {
"use strict"
(function () { "use strict"
if (typeof editorModifierTagsList !== 'object') {
console.error('editorModifierTagsList missing...')
return
}
var styleSheet = document.createElement("style");
const styleSheet = document.createElement("style");
styleSheet.textContent = `
.modifier-card-tiny.drag-sort-active {
background: transparent;
@ -12,7 +15,7 @@
document.head.appendChild(styleSheet);
// observe for changes in tag list
var observer = new MutationObserver(function (mutations) {
const observer = new MutationObserver(function (mutations) {
// mutations.forEach(function (mutation) {
if (editorModifierTagsList.childNodes.length > 0) {
ModifierDragAndDrop(editorModifierTagsList)

View File

@ -1,8 +1,11 @@
(function () {
"use strict"
(function () { "use strict"
if (typeof editorModifierTagsList !== 'object') {
console.error('editorModifierTagsList missing...')
return
}
// observe for changes in tag list
var observer = new MutationObserver(function (mutations) {
const observer = new MutationObserver(function (mutations) {
// mutations.forEach(function (mutation) {
if (editorModifierTagsList.childNodes.length > 0) {
ModifierMouseWheel(editorModifierTagsList)
@ -18,40 +21,42 @@
let overlays = document.querySelector('#editor-inputs-tags-list').querySelectorAll('.modifier-card-overlay')
overlays.forEach (i => {
i.onwheel = (e) => {
e.preventDefault()
const delta = Math.sign(event.deltaY)
let s = i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText
if (delta < 0) {
// wheel scrolling up
if (s.substring(0, 1) == '[' && s.substring(s.length-1) == ']') {
s = s.substring(1, s.length - 1)
}
else
{
if (s.substring(0, 10) !== '('.repeat(10) && s.substring(s.length-10) !== ')'.repeat(10)) {
s = '(' + s + ')'
if (e.ctrlKey == true) {
e.preventDefault()
const delta = Math.sign(event.deltaY)
let s = i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText
if (delta < 0) {
// wheel scrolling up
if (s.substring(0, 1) == '[' && s.substring(s.length-1) == ']') {
s = s.substring(1, s.length - 1)
}
else
{
if (s.substring(0, 10) !== '('.repeat(10) && s.substring(s.length-10) !== ')'.repeat(10)) {
s = '(' + s + ')'
}
}
}
}
else{
// wheel scrolling down
if (s.substring(0, 1) == '(' && s.substring(s.length-1) == ')') {
s = s.substring(1, s.length - 1)
}
else
{
if (s.substring(0, 10) !== '['.repeat(10) && s.substring(s.length-10) !== ']'.repeat(10)) {
s = '[' + s + ']'
else{
// wheel scrolling down
if (s.substring(0, 1) == '(' && s.substring(s.length-1) == ')') {
s = s.substring(1, s.length - 1)
}
else
{
if (s.substring(0, 10) !== '['.repeat(10) && s.substring(s.length-10) !== ']'.repeat(10)) {
s = '[' + s + ']'
}
}
}
}
i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText = s
// update activeTags
for (let it = 0; it < overlays.length; it++) {
if (i == overlays[it]) {
activeTags[it].name = s
break
i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText = s
// update activeTags
for (let it = 0; it < overlays.length; it++) {
if (i == overlays[it]) {
activeTags[it].name = s
break
}
}
}
}

View File

@ -0,0 +1,29 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Jasmine Spec Runner v4.5.0</title>
<link rel="shortcut icon" type="image/png" href="./jasmine/jasmine_favicon.png">
<link rel="stylesheet" href="./jasmine/jasmine.css">
<script src="./jasmine/jasmine.js"></script>
<script src="./jasmine/jasmine-html.js"></script>
<script src="./jasmine/boot0.js"></script>
<!-- optional: include a file here that configures the Jasmine env -->
<script src="./jasmine/boot1.js"></script>
<!-- include source files here... -->
<script src="/media/js/utils.js?v=4"></script>
<script src="/media/js/engine.js?v=1"></script>
<!-- <script src="./engine.js?v=1"></script> -->
<script src="/media/js/plugins.js?v=1"></script>
<!-- include spec files here... -->
<script src="./jasmineSpec.js"></script>
</head>
<body>
</body>
</html>

View File

@ -0,0 +1,31 @@
(function() {
PLUGINS['MODIFIERS_LOAD'].push({
loader: function() {
let customModifiers = localStorage.getItem(CUSTOM_MODIFIERS_KEY, '')
customModifiersTextBox.value = customModifiers
if (customModifiersGroupElement !== undefined) {
customModifiersGroupElement.remove()
}
if (customModifiers && customModifiers.trim() !== '') {
customModifiers = customModifiers.split('\n')
customModifiers = customModifiers.filter(m => m.trim() !== '')
customModifiers = customModifiers.map(function(m) {
return {
"modifier": m
}
})
let customGroup = {
'category': 'Custom Modifiers',
'modifiers': customModifiers
}
customModifiersGroupElement = createModifierGroup(customGroup, true)
createCollapsibles(customModifiersGroupElement)
}
}
})
})()
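For reference, a sketch of how another UI plugin could hook the same MODIFIERS_LOAD extension point used above; the loader body below is hypothetical and only logs, while the PLUGINS global and loader contract are assumed from the code above:

(function() {
    // Hypothetical second loader: runs every time the modifiers list is (re)loaded.
    PLUGINS['MODIFIERS_LOAD'].push({
        loader: function() {
            console.log('example MODIFIERS_LOAD loader ran')
        }
    })
})()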

View File

@ -0,0 +1,64 @@
/*
Copyright (c) 2008-2022 Pivotal Labs
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/**
This file starts the process of "booting" Jasmine. It initializes Jasmine,
makes its globals available, and creates the env. This file should be loaded
after `jasmine.js` and `jasmine_html.js`, but before `boot1.js` or any project
source files or spec files are loaded.
*/
(function() {
const jasmineRequire = window.jasmineRequire || require('./jasmine.js');
/**
* ## Require &amp; Instantiate
*
* Require Jasmine's core files. Specifically, this requires and attaches all of Jasmine's code to the `jasmine` reference.
*/
const jasmine = jasmineRequire.core(jasmineRequire),
global = jasmine.getGlobal();
global.jasmine = jasmine;
/**
* Since this is being run in a browser and the results should populate to an HTML page, require the HTML-specific Jasmine code, injecting the same reference.
*/
jasmineRequire.html(jasmine);
/**
* Create the Jasmine environment. This is used to run all specs in a project.
*/
const env = jasmine.getEnv();
/**
* ## The Global Interface
*
* Build up the functions that will be exposed as the Jasmine public interface. A project can customize, rename or alias any of these functions as desired, provided the implementation remains unchanged.
*/
const jasmineInterface = jasmineRequire.interface(jasmine, env);
/**
* Add all of the Jasmine global/public interface to the global scope, so a project can use the public interface directly. For example, calling `describe` in specs instead of `jasmine.getEnv().describe`.
*/
for (const property in jasmineInterface) {
global[property] = jasmineInterface[property];
}
})();
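Once boot0.js has run, the Jasmine interface (describe, it, expect, beforeEach, ...) is available on the global scope, so a spec file loaded after it can be as small as this illustrative example:

describe('example suite', function() {
    it('adds numbers', function() {
        expect(1 + 1).toBe(2)
    })
})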

View File

@ -0,0 +1,132 @@
/*
Copyright (c) 2008-2022 Pivotal Labs
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/**
This file finishes 'booting' Jasmine, performing all of the necessary
initialization before executing the loaded environment and all of a project's
specs. This file should be loaded after `boot0.js` but before any project
source files or spec files are loaded. Thus this file can also be used to
customize Jasmine for a project.
If a project is using Jasmine via the standalone distribution, this file can
be customized directly. If you only wish to configure the Jasmine env, you
can load another file that calls `jasmine.getEnv().configure({...})`
after `boot0.js` is loaded and before this file is loaded.
*/
(function() {
const env = jasmine.getEnv();
/**
* ## Runner Parameters
*
* More browser specific code - wrap the query string in an object and to allow for getting/setting parameters from the runner user interface.
*/
const queryString = new jasmine.QueryString({
getWindowLocation: function() {
return window.location;
}
});
const filterSpecs = !!queryString.getParam('spec');
const config = {
stopOnSpecFailure: queryString.getParam('stopOnSpecFailure'),
stopSpecOnExpectationFailure: queryString.getParam(
'stopSpecOnExpectationFailure'
),
hideDisabled: queryString.getParam('hideDisabled')
};
const random = queryString.getParam('random');
if (random !== undefined && random !== '') {
config.random = random;
}
const seed = queryString.getParam('seed');
if (seed) {
config.seed = seed;
}
/**
* ## Reporters
* The `HtmlReporter` builds all of the HTML UI for the runner page. This reporter paints the dots, stars, and x's for specs, as well as all spec names and all failures (if any).
*/
const htmlReporter = new jasmine.HtmlReporter({
env: env,
navigateWithNewParam: function(key, value) {
return queryString.navigateWithNewParam(key, value);
},
addToExistingQueryString: function(key, value) {
return queryString.fullStringWithNewParam(key, value);
},
getContainer: function() {
return document.body;
},
createElement: function() {
return document.createElement.apply(document, arguments);
},
createTextNode: function() {
return document.createTextNode.apply(document, arguments);
},
timer: new jasmine.Timer(),
filterSpecs: filterSpecs
});
/**
* The `jsApiReporter` also receives spec results, and is used by any environment that needs to extract the results from JavaScript.
*/
env.addReporter(jsApiReporter);
env.addReporter(htmlReporter);
/**
* Filter which specs will be run by matching the start of the full name against the `spec` query param.
*/
const specFilter = new jasmine.HtmlSpecFilter({
filterString: function() {
return queryString.getParam('spec');
}
});
config.specFilter = function(spec) {
return specFilter.matches(spec.getFullName());
};
env.configure(config);
/**
* ## Execution
*
* Replace the browser window's `onload`, ensure it's called, and then run all of the loaded specs. This includes initializing the `HtmlReporter` instance and then executing the loaded Jasmine environment. All of this will happen after all of the specs are loaded.
*/
const currentWindowOnload = window.onload;
window.onload = function() {
if (currentWindowOnload) {
currentWindowOnload();
}
htmlReporter.initialize();
env.execute();
};
})();
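The runner configuration built above is driven entirely by the page's query string; an illustrative URL and the config fields it maps to, per the code above:

// SpecRunner.html?spec=stable-diffusion-ui&random=true&seed=1234&stopOnSpecFailure=false
const params = new URLSearchParams({
    spec: 'stable-diffusion-ui',      // enables filterSpecs and the HtmlSpecFilter
    random: 'true',                   // config.random
    seed: '1234',                     // config.seed
    stopOnSpecFailure: 'false'        // config.stopOnSpecFailure
})
console.log('SpecRunner.html?' + params.toString())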

View File

@ -0,0 +1,964 @@
/*
Copyright (c) 2008-2022 Pivotal Labs
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
// eslint-disable-next-line no-var
var jasmineRequire = window.jasmineRequire || require('./jasmine.js');
jasmineRequire.html = function(j$) {
j$.ResultsNode = jasmineRequire.ResultsNode();
j$.HtmlReporter = jasmineRequire.HtmlReporter(j$);
j$.QueryString = jasmineRequire.QueryString();
j$.HtmlSpecFilter = jasmineRequire.HtmlSpecFilter();
};
jasmineRequire.HtmlReporter = function(j$) {
function ResultsStateBuilder() {
this.topResults = new j$.ResultsNode({}, '', null);
this.currentParent = this.topResults;
this.specsExecuted = 0;
this.failureCount = 0;
this.pendingSpecCount = 0;
}
ResultsStateBuilder.prototype.suiteStarted = function(result) {
this.currentParent.addChild(result, 'suite');
this.currentParent = this.currentParent.last();
};
ResultsStateBuilder.prototype.suiteDone = function(result) {
this.currentParent.updateResult(result);
if (this.currentParent !== this.topResults) {
this.currentParent = this.currentParent.parent;
}
if (result.status === 'failed') {
this.failureCount++;
}
};
ResultsStateBuilder.prototype.specStarted = function(result) {};
ResultsStateBuilder.prototype.specDone = function(result) {
this.currentParent.addChild(result, 'spec');
if (result.status !== 'excluded') {
this.specsExecuted++;
}
if (result.status === 'failed') {
this.failureCount++;
}
if (result.status == 'pending') {
this.pendingSpecCount++;
}
};
ResultsStateBuilder.prototype.jasmineDone = function(result) {
if (result.failedExpectations) {
this.failureCount += result.failedExpectations.length;
}
};
function HtmlReporter(options) {
function config() {
return (options.env && options.env.configuration()) || {};
}
const getContainer = options.getContainer;
const createElement = options.createElement;
const createTextNode = options.createTextNode;
const navigateWithNewParam = options.navigateWithNewParam || function() {};
const addToExistingQueryString =
options.addToExistingQueryString || defaultQueryString;
const filterSpecs = options.filterSpecs;
let htmlReporterMain;
let symbols;
const deprecationWarnings = [];
const failures = [];
this.initialize = function() {
clearPrior();
htmlReporterMain = createDom(
'div',
{ className: 'jasmine_html-reporter' },
createDom(
'div',
{ className: 'jasmine-banner' },
createDom('a', {
className: 'jasmine-title',
href: 'http://jasmine.github.io/',
target: '_blank'
}),
createDom('span', { className: 'jasmine-version' }, j$.version)
),
createDom('ul', { className: 'jasmine-symbol-summary' }),
createDom('div', { className: 'jasmine-alert' }),
createDom(
'div',
{ className: 'jasmine-results' },
createDom('div', { className: 'jasmine-failures' })
)
);
getContainer().appendChild(htmlReporterMain);
};
let totalSpecsDefined;
this.jasmineStarted = function(options) {
totalSpecsDefined = options.totalSpecsDefined || 0;
};
const summary = createDom('div', { className: 'jasmine-summary' });
const stateBuilder = new ResultsStateBuilder();
this.suiteStarted = function(result) {
stateBuilder.suiteStarted(result);
};
this.suiteDone = function(result) {
stateBuilder.suiteDone(result);
if (result.status === 'failed') {
failures.push(failureDom(result));
}
addDeprecationWarnings(result, 'suite');
};
this.specStarted = function(result) {
stateBuilder.specStarted(result);
};
this.specDone = function(result) {
stateBuilder.specDone(result);
if (noExpectations(result)) {
const noSpecMsg = "Spec '" + result.fullName + "' has no expectations.";
if (result.status === 'failed') {
console.error(noSpecMsg);
} else {
console.warn(noSpecMsg);
}
}
if (!symbols) {
symbols = find('.jasmine-symbol-summary');
}
symbols.appendChild(
createDom('li', {
className: this.displaySpecInCorrectFormat(result),
id: 'spec_' + result.id,
title: result.fullName
})
);
if (result.status === 'failed') {
failures.push(failureDom(result));
}
addDeprecationWarnings(result, 'spec');
};
this.displaySpecInCorrectFormat = function(result) {
return noExpectations(result) && result.status === 'passed'
? 'jasmine-empty'
: this.resultStatus(result.status);
};
this.resultStatus = function(status) {
if (status === 'excluded') {
return config().hideDisabled
? 'jasmine-excluded-no-display'
: 'jasmine-excluded';
}
return 'jasmine-' + status;
};
this.jasmineDone = function(doneResult) {
stateBuilder.jasmineDone(doneResult);
const banner = find('.jasmine-banner');
const alert = find('.jasmine-alert');
const order = doneResult && doneResult.order;
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-duration' },
'finished in ' + doneResult.totalTime / 1000 + 's'
)
);
banner.appendChild(optionsMenu(config()));
if (stateBuilder.specsExecuted < totalSpecsDefined) {
const skippedMessage =
'Ran ' +
stateBuilder.specsExecuted +
' of ' +
totalSpecsDefined +
' specs - run all';
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
const skippedLink =
(window.location.pathname || '') +
addToExistingQueryString('spec', '');
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-bar jasmine-skipped' },
createDom(
'a',
{ href: skippedLink, title: 'Run all specs' },
skippedMessage
)
)
);
}
let statusBarMessage = '';
let statusBarClassName = 'jasmine-overall-result jasmine-bar ';
const globalFailures =
(doneResult && doneResult.failedExpectations) || [];
const failed = stateBuilder.failureCount + globalFailures.length > 0;
if (totalSpecsDefined > 0 || failed) {
statusBarMessage +=
pluralize('spec', stateBuilder.specsExecuted) +
', ' +
pluralize('failure', stateBuilder.failureCount);
if (stateBuilder.pendingSpecCount) {
statusBarMessage +=
', ' + pluralize('pending spec', stateBuilder.pendingSpecCount);
}
}
if (doneResult.overallStatus === 'passed') {
statusBarClassName += ' jasmine-passed ';
} else if (doneResult.overallStatus === 'incomplete') {
statusBarClassName += ' jasmine-incomplete ';
statusBarMessage =
'Incomplete: ' +
doneResult.incompleteReason +
', ' +
statusBarMessage;
} else {
statusBarClassName += ' jasmine-failed ';
}
let seedBar;
if (order && order.random) {
seedBar = createDom(
'span',
{ className: 'jasmine-seed-bar' },
', randomized with seed ',
createDom(
'a',
{
title: 'randomized with seed ' + order.seed,
href: seedHref(order.seed)
},
order.seed
)
);
}
alert.appendChild(
createDom(
'span',
{ className: statusBarClassName },
statusBarMessage,
seedBar
)
);
const errorBarClassName = 'jasmine-bar jasmine-errored';
const afterAllMessagePrefix = 'AfterAll ';
for (let i = 0; i < globalFailures.length; i++) {
alert.appendChild(
createDom(
'span',
{ className: errorBarClassName },
globalFailureMessage(globalFailures[i])
)
);
}
function globalFailureMessage(failure) {
if (failure.globalErrorType === 'load') {
const prefix = 'Error during loading: ' + failure.message;
if (failure.filename) {
return (
prefix + ' in ' + failure.filename + ' line ' + failure.lineno
);
} else {
return prefix;
}
} else if (failure.globalErrorType === 'afterAll') {
return afterAllMessagePrefix + failure.message;
} else {
return failure.message;
}
}
addDeprecationWarnings(doneResult);
for (let i = 0; i < deprecationWarnings.length; i++) {
const children = [];
let context;
switch (deprecationWarnings[i].runnableType) {
case 'spec':
context = '(in spec: ' + deprecationWarnings[i].runnableName + ')';
break;
case 'suite':
context = '(in suite: ' + deprecationWarnings[i].runnableName + ')';
break;
default:
context = '';
}
deprecationWarnings[i].message.split('\n').forEach(function(line) {
children.push(line);
children.push(createDom('br'));
});
children[0] = 'DEPRECATION: ' + children[0];
children.push(context);
if (deprecationWarnings[i].stack) {
children.push(createExpander(deprecationWarnings[i].stack));
}
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-bar jasmine-warning' },
children
)
);
}
const results = find('.jasmine-results');
results.appendChild(summary);
summaryList(stateBuilder.topResults, summary);
if (failures.length) {
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-menu jasmine-bar jasmine-spec-list' },
createDom('span', {}, 'Spec List | '),
createDom(
'a',
{ className: 'jasmine-failures-menu', href: '#' },
'Failures'
)
)
);
alert.appendChild(
createDom(
'span',
{ className: 'jasmine-menu jasmine-bar jasmine-failure-list' },
createDom(
'a',
{ className: 'jasmine-spec-list-menu', href: '#' },
'Spec List'
),
createDom('span', {}, ' | Failures ')
)
);
find('.jasmine-failures-menu').onclick = function() {
setMenuModeTo('jasmine-failure-list');
return false;
};
find('.jasmine-spec-list-menu').onclick = function() {
setMenuModeTo('jasmine-spec-list');
return false;
};
setMenuModeTo('jasmine-failure-list');
const failureNode = find('.jasmine-failures');
for (let i = 0; i < failures.length; i++) {
failureNode.appendChild(failures[i]);
}
}
};
return this;
function failureDom(result) {
const failure = createDom(
'div',
{ className: 'jasmine-spec-detail jasmine-failed' },
failureDescription(result, stateBuilder.currentParent),
createDom('div', { className: 'jasmine-messages' })
);
const messages = failure.childNodes[1];
for (let i = 0; i < result.failedExpectations.length; i++) {
const expectation = result.failedExpectations[i];
messages.appendChild(
createDom(
'div',
{ className: 'jasmine-result-message' },
expectation.message
)
);
messages.appendChild(
createDom(
'div',
{ className: 'jasmine-stack-trace' },
expectation.stack
)
);
}
if (result.failedExpectations.length === 0) {
messages.appendChild(
createDom(
'div',
{ className: 'jasmine-result-message' },
'Spec has no expectations'
)
);
}
if (result.debugLogs) {
messages.appendChild(debugLogTable(result.debugLogs));
}
return failure;
}
function debugLogTable(debugLogs) {
const tbody = createDom('tbody');
debugLogs.forEach(function(entry) {
tbody.appendChild(
createDom(
'tr',
{},
createDom('td', {}, entry.timestamp.toString()),
createDom('td', {}, entry.message)
)
);
});
return createDom(
'div',
{ className: 'jasmine-debug-log' },
createDom(
'div',
{ className: 'jasmine-debug-log-header' },
'Debug logs'
),
createDom(
'table',
{},
createDom(
'thead',
{},
createDom(
'tr',
{},
createDom('th', {}, 'Time (ms)'),
createDom('th', {}, 'Message')
)
),
tbody
)
);
}
function summaryList(resultsTree, domParent) {
let specListNode;
for (let i = 0; i < resultsTree.children.length; i++) {
const resultNode = resultsTree.children[i];
if (filterSpecs && !hasActiveSpec(resultNode)) {
continue;
}
if (resultNode.type === 'suite') {
const suiteListNode = createDom(
'ul',
{ className: 'jasmine-suite', id: 'suite-' + resultNode.result.id },
createDom(
'li',
{
className:
'jasmine-suite-detail jasmine-' + resultNode.result.status
},
createDom(
'a',
{ href: specHref(resultNode.result) },
resultNode.result.description
)
)
);
summaryList(resultNode, suiteListNode);
domParent.appendChild(suiteListNode);
}
if (resultNode.type === 'spec') {
if (domParent.getAttribute('class') !== 'jasmine-specs') {
specListNode = createDom('ul', { className: 'jasmine-specs' });
domParent.appendChild(specListNode);
}
let specDescription = resultNode.result.description;
if (noExpectations(resultNode.result)) {
specDescription = 'SPEC HAS NO EXPECTATIONS ' + specDescription;
}
if (
resultNode.result.status === 'pending' &&
resultNode.result.pendingReason !== ''
) {
specDescription =
specDescription +
' PENDING WITH MESSAGE: ' +
resultNode.result.pendingReason;
}
specListNode.appendChild(
createDom(
'li',
{
className: 'jasmine-' + resultNode.result.status,
id: 'spec-' + resultNode.result.id
},
createDom(
'a',
{ href: specHref(resultNode.result) },
specDescription
)
)
);
}
}
}
function optionsMenu(config) {
const optionsMenuDom = createDom(
'div',
{ className: 'jasmine-run-options' },
createDom('span', { className: 'jasmine-trigger' }, 'Options'),
createDom(
'div',
{ className: 'jasmine-payload' },
createDom(
'div',
{ className: 'jasmine-stop-on-failure' },
createDom('input', {
className: 'jasmine-fail-fast',
id: 'jasmine-fail-fast',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-fail-fast' },
'stop execution on spec failure'
)
),
createDom(
'div',
{ className: 'jasmine-throw-failures' },
createDom('input', {
className: 'jasmine-throw',
id: 'jasmine-throw-failures',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-throw-failures' },
'stop spec on expectation failure'
)
),
createDom(
'div',
{ className: 'jasmine-random-order' },
createDom('input', {
className: 'jasmine-random',
id: 'jasmine-random-order',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-random-order' },
'run tests in random order'
)
),
createDom(
'div',
{ className: 'jasmine-hide-disabled' },
createDom('input', {
className: 'jasmine-disabled',
id: 'jasmine-hide-disabled',
type: 'checkbox'
}),
createDom(
'label',
{ className: 'jasmine-label', for: 'jasmine-hide-disabled' },
'hide disabled tests'
)
)
)
);
const failFastCheckbox = optionsMenuDom.querySelector(
'#jasmine-fail-fast'
);
failFastCheckbox.checked = config.stopOnSpecFailure;
failFastCheckbox.onclick = function() {
navigateWithNewParam('stopOnSpecFailure', !config.stopOnSpecFailure);
};
const throwCheckbox = optionsMenuDom.querySelector(
'#jasmine-throw-failures'
);
throwCheckbox.checked = config.stopSpecOnExpectationFailure;
throwCheckbox.onclick = function() {
navigateWithNewParam(
'stopSpecOnExpectationFailure',
!config.stopSpecOnExpectationFailure
);
};
const randomCheckbox = optionsMenuDom.querySelector(
'#jasmine-random-order'
);
randomCheckbox.checked = config.random;
randomCheckbox.onclick = function() {
navigateWithNewParam('random', !config.random);
};
const hideDisabled = optionsMenuDom.querySelector(
'#jasmine-hide-disabled'
);
hideDisabled.checked = config.hideDisabled;
hideDisabled.onclick = function() {
navigateWithNewParam('hideDisabled', !config.hideDisabled);
};
const optionsTrigger = optionsMenuDom.querySelector('.jasmine-trigger'),
optionsPayload = optionsMenuDom.querySelector('.jasmine-payload'),
isOpen = /\bjasmine-open\b/;
optionsTrigger.onclick = function() {
if (isOpen.test(optionsPayload.className)) {
optionsPayload.className = optionsPayload.className.replace(
isOpen,
''
);
} else {
optionsPayload.className += ' jasmine-open';
}
};
return optionsMenuDom;
}
function failureDescription(result, suite) {
const wrapper = createDom(
'div',
{ className: 'jasmine-description' },
createDom(
'a',
{ title: result.description, href: specHref(result) },
result.description
)
);
let suiteLink;
while (suite && suite.parent) {
wrapper.insertBefore(createTextNode(' > '), wrapper.firstChild);
suiteLink = createDom(
'a',
{ href: suiteHref(suite) },
suite.result.description
);
wrapper.insertBefore(suiteLink, wrapper.firstChild);
suite = suite.parent;
}
return wrapper;
}
function suiteHref(suite) {
const els = [];
while (suite && suite.parent) {
els.unshift(suite.result.description);
suite = suite.parent;
}
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
return (
(window.location.pathname || '') +
addToExistingQueryString('spec', els.join(' '))
);
}
function addDeprecationWarnings(result, runnableType) {
if (result && result.deprecationWarnings) {
for (let i = 0; i < result.deprecationWarnings.length; i++) {
const warning = result.deprecationWarnings[i].message;
deprecationWarnings.push({
message: warning,
stack: result.deprecationWarnings[i].stack,
runnableName: result.fullName,
runnableType: runnableType
});
}
}
}
function createExpander(stackTrace) {
const expandLink = createDom('a', { href: '#' }, 'Show stack trace');
const root = createDom(
'div',
{ className: 'jasmine-expander' },
expandLink,
createDom(
'div',
{ className: 'jasmine-expander-contents jasmine-stack-trace' },
stackTrace
)
);
expandLink.addEventListener('click', function(e) {
e.preventDefault();
if (root.classList.contains('jasmine-expanded')) {
root.classList.remove('jasmine-expanded');
expandLink.textContent = 'Show stack trace';
} else {
root.classList.add('jasmine-expanded');
expandLink.textContent = 'Hide stack trace';
}
});
return root;
}
function find(selector) {
return getContainer().querySelector('.jasmine_html-reporter ' + selector);
}
function clearPrior() {
const oldReporter = find('');
if (oldReporter) {
getContainer().removeChild(oldReporter);
}
}
function createDom(type, attrs, childrenArrayOrVarArgs) {
const el = createElement(type);
let children;
if (j$.isArray_(childrenArrayOrVarArgs)) {
children = childrenArrayOrVarArgs;
} else {
children = [];
for (let i = 2; i < arguments.length; i++) {
children.push(arguments[i]);
}
}
for (let i = 0; i < children.length; i++) {
const child = children[i];
if (typeof child === 'string') {
el.appendChild(createTextNode(child));
} else {
if (child) {
el.appendChild(child);
}
}
}
for (const attr in attrs) {
if (attr == 'className') {
el[attr] = attrs[attr];
} else {
el.setAttribute(attr, attrs[attr]);
}
}
return el;
}
function pluralize(singular, count) {
const word = count == 1 ? singular : singular + 's';
return '' + count + ' ' + word;
}
function specHref(result) {
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
return (
(window.location.pathname || '') +
addToExistingQueryString('spec', result.fullName)
);
}
function seedHref(seed) {
// include window.location.pathname to fix issue with karma-jasmine-html-reporter in angular: see https://github.com/jasmine/jasmine/issues/1906
return (
(window.location.pathname || '') +
addToExistingQueryString('seed', seed)
);
}
function defaultQueryString(key, value) {
return '?' + key + '=' + value;
}
function setMenuModeTo(mode) {
htmlReporterMain.setAttribute('class', 'jasmine_html-reporter ' + mode);
}
function noExpectations(result) {
const allExpectations =
result.failedExpectations.length + result.passedExpectations.length;
return (
allExpectations === 0 &&
(result.status === 'passed' || result.status === 'failed')
);
}
function hasActiveSpec(resultNode) {
if (resultNode.type == 'spec' && resultNode.result.status != 'excluded') {
return true;
}
if (resultNode.type == 'suite') {
for (let i = 0, j = resultNode.children.length; i < j; i++) {
if (hasActiveSpec(resultNode.children[i])) {
return true;
}
}
}
}
}
return HtmlReporter;
};
jasmineRequire.HtmlSpecFilter = function() {
function HtmlSpecFilter(options) {
const filterString =
options &&
options.filterString() &&
options.filterString().replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&');
const filterPattern = new RegExp(filterString);
this.matches = function(specName) {
return filterPattern.test(specName);
};
}
return HtmlSpecFilter;
};
jasmineRequire.ResultsNode = function() {
function ResultsNode(result, type, parent) {
this.result = result;
this.type = type;
this.parent = parent;
this.children = [];
this.addChild = function(result, type) {
this.children.push(new ResultsNode(result, type, this));
};
this.last = function() {
return this.children[this.children.length - 1];
};
this.updateResult = function(result) {
this.result = result;
};
}
return ResultsNode;
};
jasmineRequire.QueryString = function() {
function QueryString(options) {
this.navigateWithNewParam = function(key, value) {
options.getWindowLocation().search = this.fullStringWithNewParam(
key,
value
);
};
this.fullStringWithNewParam = function(key, value) {
const paramMap = queryStringToParamMap();
paramMap[key] = value;
return toQueryString(paramMap);
};
this.getParam = function(key) {
return queryStringToParamMap()[key];
};
return this;
function toQueryString(paramMap) {
const qStrPairs = [];
for (const prop in paramMap) {
qStrPairs.push(
encodeURIComponent(prop) + '=' + encodeURIComponent(paramMap[prop])
);
}
return '?' + qStrPairs.join('&');
}
function queryStringToParamMap() {
const paramStr = options.getWindowLocation().search.substring(1);
let params = [];
const paramMap = {};
if (paramStr.length > 0) {
params = paramStr.split('&');
for (let i = 0; i < params.length; i++) {
const p = params[i].split('=');
let value = decodeURIComponent(p[1]);
if (value === 'true' || value === 'false') {
value = JSON.parse(value);
}
paramMap[decodeURIComponent(p[0])] = value;
}
}
return paramMap;
}
}
return QueryString;
};

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

Binary file not shown.

Image added: 1.5 KiB.

View File

@ -0,0 +1,412 @@
"use strict"
const JASMINE_SESSION_ID = `jasmine-${String(Date.now()).slice(8)}`
beforeEach(function () {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 15 * 60 * 1000 // Test timeout after 15 minutes
jasmine.addMatchers({
toBeOneOf: function () {
return {
compare: function (actual, expected) {
return {
pass: expected.includes(actual)
}
}
}
}
})
})
describe('stable-diffusion-ui', function() {
beforeEach(function() {
expect(typeof SD).toBe('object')
expect(typeof SD.serverState).toBe('object')
expect(typeof SD.serverState.status).toBe('string')
})
it('should be able to reach the backend', async function() {
expect(SD.serverState.status).toBe(SD.ServerStates.unavailable)
SD.sessionId = JASMINE_SESSION_ID
await SD.init()
expect(SD.isServerAvailable()).toBeTrue()
})
it('enforces the current task state', function() {
const task = new SD.Task()
expect(task.status).toBe(SD.TaskStatus.init)
expect(task.isPending).toBeTrue()
task._setStatus(SD.TaskStatus.pending)
expect(task.status).toBe(SD.TaskStatus.pending)
expect(task.isPending).toBeTrue()
expect(function() {
task._setStatus(SD.TaskStatus.init)
}).toThrowError()
task._setStatus(SD.TaskStatus.waiting)
expect(task.status).toBe(SD.TaskStatus.waiting)
expect(task.isPending).toBeTrue()
expect(function() {
task._setStatus(SD.TaskStatus.pending)
}).toThrowError()
task._setStatus(SD.TaskStatus.processing)
expect(task.status).toBe(SD.TaskStatus.processing)
expect(task.isPending).toBeTrue()
expect(function() {
task._setStatus(SD.TaskStatus.pending)
}).toThrowError()
task._setStatus(SD.TaskStatus.failed)
expect(task.status).toBe(SD.TaskStatus.failed)
expect(task.isPending).toBeFalse()
expect(function() {
task._setStatus(SD.TaskStatus.processing)
}).toThrowError()
expect(function() {
task._setStatus(SD.TaskStatus.completed)
}).toThrowError()
})
it('should be able to run tasks', async function() {
expect(typeof SD.Task.run).toBe('function')
const promiseGenerator = (function*(val) {
expect(val).toBe('start')
expect(yield 1 + 1).toBe(4)
expect(yield 2 + 2).toBe(8)
yield asyncDelay(500)
expect(yield 3 + 3).toBe(12)
expect(yield 4 + 4).toBe(16)
return 8 + 8
})('start')
const callback = function({value, done}) {
return {value: 2 * value, done}
}
expect(await SD.Task.run(promiseGenerator, {callback})).toBe(32)
})
it('should be able to queue tasks', async function() {
expect(typeof SD.Task.enqueue).toBe('function')
const promiseGenerator = (function*(val) {
expect(val).toBe('start')
expect(yield 1 + 1).toBe(4)
expect(yield 2 + 2).toBe(8)
yield asyncDelay(500)
expect(yield 3 + 3).toBe(12)
expect(yield 4 + 4).toBe(16)
return 8 + 8
})('start')
const callback = function({value, done}) {
return {value: 2 * value, done}
}
const gen = SD.Task.asGenerator({generator: promiseGenerator, callback})
expect(await SD.Task.enqueue(gen)).toBe(32)
})
it('should be able to chain handlers', async function() {
expect(typeof SD.Task.enqueue).toBe('function')
const promiseGenerator = (function*(val) {
expect(val).toBe('start')
expect(yield {test: '1'}).toEqual({test: '1', foo: 'bar'})
expect(yield 2 + 2).toEqual(8)
yield asyncDelay(500)
expect(yield 3 + 3).toEqual(12)
expect(yield {test: 4}).toEqual({test: 8, foo: 'bar'})
return {test: 8}
})('start')
const gen1 = SD.Task.asGenerator({generator: promiseGenerator, callback: function({value, done}) {
if (typeof value === "object") {
value['foo'] = 'bar'
}
return {value, done}
}})
const gen2 = SD.Task.asGenerator({generator: gen1, callback: function({value, done}) {
if (typeof value === 'number') {
value = 2 * value
}
if (typeof value === 'object' && typeof value.test === 'number') {
value.test = 2 * value.test
}
return {value, done}
}})
expect(await SD.Task.enqueue(gen2)).toEqual({test:32, foo: 'bar'})
})
describe('ServiceContainer', function() {
it('should be able to register providers', function() {
const cont = new ServiceContainer(
function foo() {
this.bar = ''
},
function bar() {
return () => 0
},
{ name: 'zero', definition: 0 },
{ name: 'ctx', definition: () => Object.create(null), singleton: true },
{ name: 'test',
definition: (ctx, missing, one, foo) => {
expect(ctx).toEqual({ran: true})
expect(one).toBe(1)
expect(typeof foo).toBe('object')
expect(foo.bar).toBeDefined()
expect(typeof missing).toBe('undefined')
return {foo: 'bar'}
}, dependencies: ['ctx', 'missing', 'one', 'foo']
}
)
const fooObj = cont.get('foo')
expect(typeof fooObj).toBe('object')
fooObj.ran = true
const ctx = cont.get('ctx')
expect(ctx).toEqual({})
ctx.ran = true
const bar = cont.get('bar')
expect(typeof bar).toBe('function')
expect(bar()).toBe(0)
cont.register({name: 'one', definition: 1})
const test = cont.get('test')
expect(typeof test).toBe('object')
expect(test.foo).toBe('bar')
})
})
it('should be able to stream data in chunks', async function() {
expect(SD.isServerAvailable()).toBeTrue()
const nbr_steps = 15
let res = await fetch('/render', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"prompt": "a photograph of an astronaut riding a horse",
"negative_prompt": "",
"width": 128,
"height": 128,
"seed": Math.floor(Math.random() * 10000000),
"sampler": "plms",
"use_stable_diffusion_model": "sd-v1-4",
"num_inference_steps": nbr_steps,
"guidance_scale": 7.5,
"numOutputsParallel": 1,
"stream_image_progress": true,
"show_only_filtered_image": true,
"output_format": "jpeg",
"session_id": JASMINE_SESSION_ID,
}),
})
expect(res.ok).toBeTruthy()
const renderRequest = await res.json()
expect(typeof renderRequest.stream).toBe('string')
expect(renderRequest.task).toBeDefined()
// Wait for server status to update.
await SD.waitUntil(() => {
console.log('Waiting for %s to be received...', renderRequest.task)
return (!SD.serverState.tasks || SD.serverState.tasks[String(renderRequest.task)])
}, 250, 10 * 60 * 1000)
// Wait for task to start on server.
await SD.waitUntil(() => {
console.log('Waiting for %s to start...', renderRequest.task)
return !SD.serverState.tasks || SD.serverState.tasks[String(renderRequest.task)] !== 'pending'
}, 250)
const reader = new SD.ChunkedStreamReader(renderRequest.stream)
const parseToString = reader.parse
reader.parse = function(value) {
value = parseToString.call(this, value)
if (!value || value.length <= 0) {
return
}
return reader.readStreamAsJSON(value.join(''))
}
reader.onNext = function({done, value}) {
console.log(value)
if (typeof value === 'object' && 'status' in value) {
done = true
}
return {done, value}
}
let lastUpdate = undefined
let stepCount = 0
let complete = false
//for await (const stepUpdate of reader) {
for await (const stepUpdate of reader.open()) {
console.log('ChunkedStreamReader received ', stepUpdate)
lastUpdate = stepUpdate
if (complete) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(1)
} else {
expect(stepUpdate.total_steps).toBe(nbr_steps)
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
for(let i=1; i <= 5; ++i) {
res = await fetch(renderRequest.stream)
expect(res.ok).toBeTruthy()
const cachedResponse = await res.json()
console.log('Cache test %s received %o', i, cachedResponse)
expect(lastUpdate).toEqual(cachedResponse)
}
})
describe('should be able to make renders', function() {
beforeEach(function() {
expect(SD.isServerAvailable()).toBeTrue()
})
it('basic inline request', async function() {
let stepCount = 0
let complete = false
const result = await SD.render({
"prompt": "a photograph of an astronaut riding a horse",
"width": 128,
"height": 128,
"num_inference_steps": 10,
"show_only_filtered_image": false,
//"use_face_correction": 'GFPGANv1.3',
"use_upscale": "RealESRGAN_x4plus",
"session_id": JASMINE_SESSION_ID,
}, function(event) {
console.log(this, event)
if ('update' in event) {
const stepUpdate = event.update
if (complete || (stepUpdate.status && stepUpdate.step === stepUpdate.total_steps)) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(2)
} else {
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
})
console.log(result)
expect(result.status).toBe('succeeded')
expect(result.output).toHaveSize(2)
})
it('post and reader request', async function() {
const renderTask = new SD.RenderTask({
"prompt": "a photograph of an astronaut riding a horse",
"width": 128,
"height": 128,
"seed": SD.MAX_SEED_VALUE,
"num_inference_steps": 10,
"session_id": JASMINE_SESSION_ID,
})
expect(renderTask.status).toBe(SD.TaskStatus.init)
const timeout = -1
const renderRequest = await renderTask.post(timeout)
expect(typeof renderRequest.stream).toBe('string')
expect(renderTask.status).toBe(SD.TaskStatus.waiting)
expect(renderTask.streamUrl).toBe(renderRequest.stream)
await renderTask.waitUntil({state: SD.TaskStatus.processing, callback: () => console.log('Waiting for render task to start...') })
expect(renderTask.status).toBe(SD.TaskStatus.processing)
let stepCount = 0
let complete = false
//for await (const stepUpdate of renderTask.reader) {
for await (const stepUpdate of renderTask.reader.open()) {
console.log(stepUpdate)
if (complete || (stepUpdate.status && stepUpdate.step === stepUpdate.total_steps)) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(1)
} else {
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
expect(renderTask.status).toBe(SD.TaskStatus.completed)
expect(renderTask.result.status).toBe('succeeded')
expect(renderTask.result.output).toHaveSize(1)
})
it('queued request', async function() {
let stepCount = 0
let complete = false
const renderTask = new SD.RenderTask({
"prompt": "a photograph of an astronaut riding a horse",
"width": 128,
"height": 128,
"num_inference_steps": 10,
"show_only_filtered_image": false,
//"use_face_correction": 'GFPGANv1.3',
"use_upscale": "RealESRGAN_x4plus",
"session_id": JASMINE_SESSION_ID,
})
await renderTask.enqueue(function(event) {
console.log(this, event)
if ('update' in event) {
const stepUpdate = event.update
if (complete || (stepUpdate.status && stepUpdate.step === stepUpdate.total_steps)) {
expect(stepUpdate.status).toBe('succeeded')
expect(stepUpdate.output).toHaveSize(2)
} else {
expect(stepUpdate.step).toBe(stepCount)
if (stepUpdate.step === stepUpdate.total_steps) {
complete = true
} else {
stepCount++
}
}
}
})
console.log(renderTask.result)
expect(renderTask.result.status).toBe('succeeded')
expect(renderTask.result.output).toHaveSize(2)
})
})
describe('# Special cases', function() {
it('should throw an exception on set for invalid sessionId', function() {
expect(function() {
SD.sessionId = undefined
}).toThrowError("Can't set sessionId to undefined.")
})
})
})
const loadCompleted = window.onload
let loadEvent = undefined
window.onload = function(evt) {
loadEvent = evt
}
if (!PLUGINS.SELFTEST) {
PLUGINS.SELFTEST = {}
}
loadUIPlugins().then(function() {
console.log('loadCompleted', loadEvent)
describe('@Plugins', function() {
it('exposes hooks to override', function() {
expect(typeof PLUGINS.IMAGE_INFO_BUTTONS).toBe('object')
expect(typeof PLUGINS.TASK_CREATE).toBe('object')
})
describe('supports selftests', function() { // Hook to allow plugins to define tests.
const pluginsTests = Object.keys(PLUGINS.SELFTEST).filter((key) => PLUGINS.SELFTEST.hasOwnProperty(key))
if (!pluginsTests || pluginsTests.length <= 0) {
it('but nothing loaded...', function() {
expect(true).toBeTruthy()
})
return
}
for (const pTest of pluginsTests) {
describe(pTest, function() {
const testFn = PLUGINS.SELFTEST[pTest]
return Promise.resolve(testFn.call(jasmine, pTest))
})
}
})
})
loadCompleted.call(window, loadEvent)
})
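A plugin can contribute its own specs through the PLUGINS.SELFTEST hook consumed above; a minimal sketch in which the plugin key and the assertion are illustrative, mirroring the release-notes plugin:

// Inside a UI plugin, only relevant when loaded by the spec runner:
if (typeof PLUGINS?.SELFTEST === 'object') {
    PLUGINS.SELFTEST['my-plugin'] = function() {
        it('exposes its hook table', function() {
            expect(typeof PLUGINS.IMAGE_INFO_BUTTONS).toBe('object')
        })
    }
}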

View File

@ -0,0 +1,53 @@
(function () {
"use strict"
var styleSheet = document.createElement("style");
styleSheet.textContent = `
.modifier-card-tiny.modifier-toggle-inactive {
background: transparent;
border: 2px dashed red;
opacity:0.2;
}
`;
document.head.appendChild(styleSheet);
// observe for changes in tag list
var observer = new MutationObserver(function (mutations) {
// mutations.forEach(function (mutation) {
if (editorModifierTagsList.childNodes.length > 0) {
ModifierToggle()
}
// })
})
observer.observe(editorModifierTagsList, {
childList: true
})
function ModifierToggle() {
let overlays = document.querySelector('#editor-inputs-tags-list').querySelectorAll('.modifier-card-overlay')
overlays.forEach (i => {
i.oncontextmenu = (e) => {
e.preventDefault()
if (i.parentElement.classList.contains('modifier-toggle-inactive')) {
i.parentElement.classList.remove('modifier-toggle-inactive')
}
else
{
i.parentElement.classList.add('modifier-toggle-inactive')
}
// refresh activeTags
let modifierName = i.parentElement.getElementsByClassName('modifier-card-label')[0].getElementsByTagName("p")[0].innerText
activeTags = activeTags.map(obj => {
if (obj.name === modifierName) {
return {...obj, inactive: (obj.element.classList.contains('modifier-toggle-inactive'))};
}
return obj;
});
console.log(activeTags)
}
})
}
})()

View File

@ -1,11 +1,21 @@
(function() {
document.querySelector('#tab-container').insertAdjacentHTML('beforeend', `
// Register selftests when loaded by jasmine.
if (typeof PLUGINS?.SELFTEST === 'object') {
PLUGINS.SELFTEST["release-notes"] = function() {
it('should be able to fetch CHANGES.md', async function() {
let releaseNotes = await fetch(`https://raw.githubusercontent.com/cmdr2/stable-diffusion-ui/main/CHANGES.md`)
expect(releaseNotes.status).toBe(200)
})
}
}
document.querySelector('#tab-container')?.insertAdjacentHTML('beforeend', `
<span id="tab-news" class="tab">
<span><i class="fa fa-bolt icon"></i> What's new?</span>
</span>
`)
document.querySelector('#tab-content-wrapper').insertAdjacentHTML('beforeend', `
document.querySelector('#tab-content-wrapper')?.insertAdjacentHTML('beforeend', `
<div id="tab-content-news" class="tab-content">
<div id="news" class="tab-content-inner">
Loading..
@ -13,6 +23,16 @@
</div>
`)
const tabNews = document.querySelector('#tab-news')
if (tabNews) {
linkTabContents(tabNews)
}
const news = document.querySelector('#news')
if (!news) {
// news tab not found, don't run the plugin code.
return
}
document.querySelector('body').insertAdjacentHTML('beforeend', `
<style>
#tab-content-news .tab-content-inner {
@ -23,25 +43,22 @@
</style>
`)
linkTabContents(document.querySelector('#tab-news'))
let markedScript = document.createElement('script')
markedScript.src = '/media/js/marked.min.js'
markedScript.onload = async function() {
loadScript('/media/js/marked.min.js').then(async function() {
let appConfig = await fetch('/get/app_config')
if (!appConfig.ok) {
console.error('[release-notes] Failed to get app_config.')
return
}
appConfig = await appConfig.json()
let updateBranch = appConfig.update_branch || 'main'
const updateBranch = appConfig.update_branch || 'main'
let news = document.querySelector('#news')
let releaseNotes = await fetch(`https://raw.githubusercontent.com/cmdr2/stable-diffusion-ui/${updateBranch}/CHANGES.md`)
if (releaseNotes.status != 200) {
if (!releaseNotes.ok) {
console.error('[release-notes] Failed to get CHANGES.md.')
return
}
releaseNotes = await releaseNotes.text()
news.innerHTML = marked.parse(releaseNotes)
}
document.querySelector('body').appendChild(markedScript)
})
})()
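The loadScript() helper adopted in this change is assumed, per the diff above, to return a Promise that resolves once the script has loaded, so other plugins can reuse the same pattern; an illustrative sketch:

// Illustrative reuse of the loadScript() pattern shown above.
loadScript('/media/js/marked.min.js').then(function() {
    // marked is available as a global after the script loads, as assumed above.
    console.log(marked.parse('**release notes**'))
})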

View File

@ -0,0 +1,25 @@
/* SD-UI Selftest Plugin.js
*/
(function() { "use strict"
const ID_PREFIX = "selftest-plugin"
const links = document.getElementById("community-links")
if (!links) {
console.error('%s: the element with ID "community-links" cannot be found.', ID_PREFIX)
return
}
// Add link to Jasmine SpecRunner
const pluginLink = document.createElement('li')
const options = {
'stopSpecOnExpectationFailure': "true",
'stopOnSpecFailure': 'false',
'random': 'false',
'hideDisabled': 'false'
}
const optStr = Object.entries(options).map(([key, val]) => `${key}=${val}`).join('&')
pluginLink.innerHTML = `<a id="${ID_PREFIX}-starttest" href="${location.protocol}/plugins/core/SpecRunner.html?${optStr}" target="_blank"><i class="fa-solid fa-vial-circle-check"></i> Start SelfTest</a>`
links.appendChild(pluginLink)
console.log('%s loaded!', ID_PREFIX)
})()

View File

@ -1,6 +1,7 @@
import json
class Request:
request_id: str = None
session_id: str = "session"
prompt: str = ""
negative_prompt: str = ""
@ -23,8 +24,11 @@ class Request:
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
use_vae_model: str = None
use_hypernetwork_model: str = None
hypernetwork_strength: float = 1
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
output_quality: int = 75
stream_progress_updates: bool = False
stream_image_progress: bool = False
@ -37,6 +41,7 @@ class Request:
"num_outputs": self.num_outputs,
"num_inference_steps": self.num_inference_steps,
"guidance_scale": self.guidance_scale,
"hypernetwork_strengtgh": self.guidance_scale,
"width": self.width,
"height": self.height,
"seed": self.seed,
@ -46,7 +51,10 @@ class Request:
"use_upscale": self.use_upscale,
"use_stable_diffusion_model": self.use_stable_diffusion_model,
"use_vae_model": self.use_vae_model,
"use_hypernetwork_model": self.use_hypernetwork_model,
"hypernetwork_strength": self.hypernetwork_strength,
"output_format": self.output_format,
"output_quality": self.output_quality,
}
def __str__(self):
@ -68,8 +76,11 @@ class Request:
use_upscale: {self.use_upscale}
use_stable_diffusion_model: {self.use_stable_diffusion_model}
use_vae_model: {self.use_vae_model}
use_hypernetwork_model: {self.use_hypernetwork_model}
hypernetwork_strength: {self.hypernetwork_strength}
show_only_filtered_image: {self.show_only_filtered_image}
output_format: {self.output_format}
output_quality: {self.output_quality}
stream_progress_updates: {self.stream_progress_updates}
stream_image_progress: {self.stream_image_progress}'''
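The fields added in this change travel in the JSON body of a /render request; an illustrative browser-side sketch, with field names taken from this diff and from jasmineSpec.js above and placeholder values:

// Illustrative request body exercising the new hypernetwork and output_quality fields.
fetch('/render', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        prompt: 'a photograph of an astronaut riding a horse',
        width: 128,
        height: 128,
        num_inference_steps: 10,
        use_hypernetwork_model: null,   // or the name of a hypernetwork model
        hypernetwork_strength: 1,
        output_format: 'jpeg',
        output_quality: 75,
        session_id: 'example-session'
    })
})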

View File

@ -1,72 +1,13 @@
diff --git a/optimizedSD/ddpm.py b/optimizedSD/ddpm.py
index b967b55..35ef520 100644
index 79058bc..a473411 100644
--- a/optimizedSD/ddpm.py
+++ b/optimizedSD/ddpm.py
@@ -22,7 +22,7 @@ from ldm.util import exists, default, instantiate_from_config
from ldm.modules.diffusionmodules.util import make_beta_schedule
from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like
from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from samplers import CompVisDenoiser, get_ancestral_step, to_d, append_dims,linear_multistep_coeff
+from .samplers import CompVisDenoiser, get_ancestral_step, to_d, append_dims,linear_multistep_coeff
@@ -564,12 +564,12 @@ class UNet(DDPM):
unconditional_guidance_scale=unconditional_guidance_scale,
callback=callback, img_callback=img_callback)
def disabled_train(self):
"""Overwrite model.train with this function to make sure train/eval mode
@@ -506,6 +506,8 @@ class UNet(DDPM):
x_latent = noise if x0 is None else x0
# sampling
+ if sampler in ('ddim', 'dpm2', 'heun', 'dpm2_a', 'lms') and not hasattr(self, 'ddim_timesteps'):
+ self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
if sampler == "plms":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
@@ -528,39 +530,46 @@ class UNet(DDPM):
elif sampler == "ddim":
samples = self.ddim_sampling(x_latent, conditioning, S, unconditional_guidance_scale=unconditional_guidance_scale,
unconditional_conditioning=unconditional_conditioning,
- mask = mask,init_latent=x_T,use_original_steps=False)
+ mask = mask,init_latent=x_T,use_original_steps=False,
+ callback=callback, img_callback=img_callback)
elif sampler == "euler":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
samples = self.euler_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "euler_a":
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=False)
samples = self.euler_ancestral_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "dpm2":
samples = self.dpm_2_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "heun":
samples = self.heun_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "dpm2_a":
samples = self.dpm_2_ancestral_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
elif sampler == "lms":
samples = self.lms_sampling(self.alphas_cumprod,x_latent, S, conditioning, unconditional_conditioning=unconditional_conditioning,
- unconditional_guidance_scale=unconditional_guidance_scale)
+ unconditional_guidance_scale=unconditional_guidance_scale,
+ img_callback=img_callback)
+
+ yield from samples
+
if(self.turbo):
self.model1.to("cpu")
self.model2.to("cpu")
@ -76,7 +17,7 @@ index b967b55..35ef520 100644
@torch.no_grad()
def plms_sampling(self, cond,b, img,
ddim_use_original_steps=False,
@@ -599,10 +608,10 @@ class UNet(DDPM):
@@ -608,10 +608,10 @@ class UNet(DDPM):
old_eps.append(e_t)
if len(old_eps) >= 4:
old_eps.pop(0)
@ -90,23 +31,15 @@ index b967b55..35ef520 100644
@torch.no_grad()
def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
@@ -706,7 +715,8 @@ class UNet(DDPM):
@torch.no_grad()
def ddim_sampling(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- mask = None,init_latent=None,use_original_steps=False):
+ mask = None,init_latent=None,use_original_steps=False,
+ callback=None, img_callback=None):
timesteps = self.ddim_timesteps
timesteps = timesteps[:t_start]
@@ -730,10 +740,13 @@ class UNet(DDPM):
@@ -740,13 +740,13 @@ class UNet(DDPM):
unconditional_guidance_scale=unconditional_guidance_scale,
unconditional_conditioning=unconditional_conditioning)
- if callback: callback(i)
- if img_callback: img_callback(x_dec, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x_dec, i)
+
if mask is not None:
- return x0 * mask + (1. - mask) * x_dec
+ x_dec = x0 * mask + (1. - mask) * x_dec
@ -116,217 +49,114 @@ index b967b55..35ef520 100644
@torch.no_grad()
@@ -779,13 +792,16 @@ class UNet(DDPM):
@@ -820,12 +820,12 @@ class UNet(DDPM):
@torch.no_grad()
- def euler_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None,callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def euler_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None,callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""Implements Algorithm 2 (Euler steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
cvd = CompVisDenoiser(ac)
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Euler Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
@@ -807,13 +823,18 @@ class UNet(DDPM):
d = to_d(x, sigma_hat, denoised)
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
- if callback: callback(i)
- if img_callback: img_callback(x, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x, i)
+
dt = sigmas[i + 1] - sigma_hat
# Euler method
x = x + d * dt
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None):
+ def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None,
+ img_callback=None):
"""Ancestral sampling with Euler method steps."""
extra_args = {} if extra_args is None else extra_args
def euler_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, img_callback=None):
@@ -852,14 +852,14 @@ class UNet(DDPM):
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
@@ -822,6 +843,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Euler Ancestral Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -837,17 +860,22 @@ class UNet(DDPM):
sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
- if callback: callback(i)
- if img_callback: img_callback(x, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x, i)
+
d = to_d(x, sigmas[i], denoised)
# Euler method
dt = sigma_down - sigmas[i]
x = x + d * dt
x = x + torch.randn_like(x) * sigma_up
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def heun_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def heun_sampling(self, ac, x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""Implements Algorithm 2 (Heun steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
@@ -892,8 +892,8 @@ class UNet(DDPM):
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
@@ -855,6 +883,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running Heun Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -876,6 +906,9 @@ class UNet(DDPM):
d = to_d(x, sigma_hat, denoised)
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
- if callback: callback(i)
- if img_callback: img_callback(x, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x, i)
+
dt = sigmas[i + 1] - sigma_hat
if sigmas[i + 1] == 0:
# Euler method
@@ -895,11 +928,13 @@ class UNet(DDPM):
@@ -913,7 +913,7 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigmas[i + 1], denoised_2)
d_prime = (d + d_2) / 2
x = x + d_prime * dt
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def dpm_2_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+ def dpm_2_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1,extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.,
+ img_callback=None):
"""A sampler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
@@ -907,6 +942,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running DPM2 Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
@@ -924,7 +961,7 @@ class UNet(DDPM):
@@ -944,8 +944,8 @@ class UNet(DDPM):
e_t_uncond, e_t = (x_in + eps * c_out).chunk(2)
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if callback: callback(i)
- if img_callback: img_callback(x, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x, i)
d = to_d(x, sigma_hat, denoised)
# Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule
@@ -945,11 +982,13 @@ class UNet(DDPM):
@@ -966,7 +966,7 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigma_mid, denoised_2)
x = x + d_2 * dt_2
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def dpm_2_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None):
+ def dpm_2_ancestral_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None,
+ img_callback=None):
"""Ancestral sampling with DPM-Solver inspired second-order steps."""
extra_args = {} if extra_args is None else extra_args
@@ -994,8 +994,8 @@ class UNet(DDPM):
@@ -957,6 +996,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running DPM2 Ancestral Sampling with {len(sigmas) - 1} timesteps")
+
s_in = x.new_ones([x.shape[0]]).half()
for i in trange(len(sigmas) - 1, disable=disable):
@@ -973,6 +1014,9 @@ class UNet(DDPM):
sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
if callback is not None:
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
- if callback: callback(i)
- if img_callback: img_callback(x, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x, i)
+
d = to_d(x, sigmas[i], denoised)
# Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule
sigma_mid = ((sigmas[i] ** (1 / 3) + sigma_down ** (1 / 3)) / 2) ** 3
@@ -993,11 +1037,13 @@ class UNet(DDPM):
@@ -1016,7 +1016,7 @@ class UNet(DDPM):
d_2 = to_d(x_2, sigma_mid, denoised_2)
x = x + d_2 * dt_2
x = x + torch.randn_like(x) * sigma_up
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
@torch.no_grad()
- def lms_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, order=4):
+ def lms_sampling(self,ac,x, S, cond, unconditional_conditioning = None, unconditional_guidance_scale = 1, extra_args=None, callback=None, disable=None, order=4,
+ img_callback=None):
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
@@ -1005,6 +1051,8 @@ class UNet(DDPM):
sigmas = cvd.get_sigmas(S)
x = x*sigmas[0]
+ print(f"Running LMS Sampling with {len(sigmas) - 1} timesteps")
+
ds = []
for i in trange(len(sigmas) - 1, disable=disable):
@@ -1017,6 +1065,7 @@ class UNet(DDPM):
@@ -1042,8 +1042,8 @@ class UNet(DDPM):
e_t_uncond, e_t = (x_in + eps * c_out).chunk(2)
denoised = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
- if callback: callback(i)
- if img_callback: img_callback(x, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(x, i)
d = to_d(x, sigmas[i], denoised)
ds.append(d)
@@ -1027,4 +1076,5 @@ class UNet(DDPM):
@@ -1054,4 +1054,4 @@ class UNet(DDPM):
cur_order = min(i + 1, order)
coeffs = [linear_multistep_coeff(cur_order, sigmas.cpu(), i, j) for j in range(cur_order)]
x = x + sum(coeff * d for coeff, d in zip(coeffs, reversed(ds)))
- return x
+
+ yield from img_callback(x, len(sigmas)-1)
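# Sketch (not part of the patch): these k-diffusion style samplers are rewritten as generators, so
# intermediate and final latents are emitted via `yield from img_callback(x, i)` and
# `yield from img_callback(x, len(sigmas)-1)` instead of `return x`. A consumer would drain the
# sampler roughly like this (hypothetical caller):
#     for update in model.sample(S=steps, conditioning=c, unconditional_conditioning=uc,
#                                unconditional_guidance_scale=scale,
#                                img_callback=img_callback, sampler='euler'):
#         handle_progress(update)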
diff --git a/optimizedSD/openaimodelSplit.py b/optimizedSD/openaimodelSplit.py
index abc3098..7a32ffe 100644
--- a/optimizedSD/openaimodelSplit.py
+++ b/optimizedSD/openaimodelSplit.py
@@ -13,7 +13,7 @@ from ldm.modules.diffusionmodules.util import (
normalization,
timestep_embedding,
)
-from splitAttention import SpatialTransformer
+from .splitAttention import SpatialTransformer
class AttentionPool2d(nn.Module):

View File

@ -0,0 +1,84 @@
diff --git a/ldm/models/diffusion/ddim.py b/ldm/models/diffusion/ddim.py
index 27ead0e..6215939 100644
--- a/ldm/models/diffusion/ddim.py
+++ b/ldm/models/diffusion/ddim.py
@@ -100,7 +100,7 @@ class DDIMSampler(object):
size = (batch_size, C, H, W)
print(f'Data shape for DDIM sampling is {size}, eta {eta}')
- samples, intermediates = self.ddim_sampling(conditioning, size,
+ samples = self.ddim_sampling(conditioning, size,
callback=callback,
img_callback=img_callback,
quantize_denoised=quantize_x0,
@@ -117,7 +117,8 @@ class DDIMSampler(object):
dynamic_threshold=dynamic_threshold,
ucg_schedule=ucg_schedule
)
- return samples, intermediates
+ # return samples, intermediates
+ yield from samples
@torch.no_grad()
def ddim_sampling(self, cond, shape,
@@ -168,14 +169,15 @@ class DDIMSampler(object):
unconditional_conditioning=unconditional_conditioning,
dynamic_threshold=dynamic_threshold)
img, pred_x0 = outs
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(pred_x0, i)
if index % log_every_t == 0 or index == total_steps - 1:
intermediates['x_inter'].append(img)
intermediates['pred_x0'].append(pred_x0)
- return img, intermediates
+ # return img, intermediates
+ yield from img_callback(pred_x0, len(iterator)-1)
@torch.no_grad()
def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
diff --git a/ldm/models/diffusion/plms.py b/ldm/models/diffusion/plms.py
index 7002a36..0951f39 100644
--- a/ldm/models/diffusion/plms.py
+++ b/ldm/models/diffusion/plms.py
@@ -96,7 +96,7 @@ class PLMSSampler(object):
size = (batch_size, C, H, W)
print(f'Data shape for PLMS sampling is {size}')
- samples, intermediates = self.plms_sampling(conditioning, size,
+ samples = self.plms_sampling(conditioning, size,
callback=callback,
img_callback=img_callback,
quantize_denoised=quantize_x0,
@@ -112,7 +112,8 @@ class PLMSSampler(object):
unconditional_conditioning=unconditional_conditioning,
dynamic_threshold=dynamic_threshold,
)
- return samples, intermediates
+ #return samples, intermediates
+ yield from samples
@torch.no_grad()
def plms_sampling(self, cond, shape,
@@ -165,14 +166,15 @@ class PLMSSampler(object):
old_eps.append(e_t)
if len(old_eps) >= 4:
old_eps.pop(0)
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
+ if callback: yield from callback(i)
+ if img_callback: yield from img_callback(pred_x0, i)
if index % log_every_t == 0 or index == total_steps - 1:
intermediates['x_inter'].append(img)
intermediates['pred_x0'].append(pred_x0)
- return img, intermediates
+ # return img, intermediates
+ yield from img_callback(pred_x0, len(iterator)-1)
@torch.no_grad()
def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,

View File

@ -101,7 +101,7 @@ def device_init(thread_data, device):
# Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
device_name = thread_data.device_name.lower()
thread_data.force_full_precision = ('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)
thread_data.force_full_precision = (('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)) or ('quadro t2000' in device_name)  # device_name is lower-cased above, so compare in lower case
if thread_data.force_full_precision:
print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', thread_data.device_name)
# Apply force_full_precision now before models are loaded.

View File

@ -1,13 +0,0 @@
diff --git a/environment.yaml b/environment.yaml
index 7f25da8..306750f 100644
--- a/environment.yaml
+++ b/environment.yaml
@@ -23,6 +23,8 @@ dependencies:
- torch-fidelity==0.3.0
- transformers==4.19.2
- torchmetrics==0.6.0
+ - pywavelets==1.3.0
+ - pandas==1.4.4
- kornia==0.6
- -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
- -e git+https://github.com/openai/CLIP.git@main#egg=clip

View File

@ -0,0 +1,198 @@
# this is basically a cut down version of https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/c9a2cfdf2a53d37c2de1908423e4f548088667ef/modules/hypernetworks/hypernetwork.py, mostly for feature parity
# I, c0bra5, don't really understand how deep learning works. I just know how to port stuff.
import inspect
import torch
import optimizedSD.splitAttention
from . import runtime
from einops import rearrange
optimizer_dict = {optim_name : cls_obj for optim_name, cls_obj in inspect.getmembers(torch.optim, inspect.isclass) if optim_name != "Optimizer"}
loaded_hypernetwork = None
class HypernetworkModule(torch.nn.Module):
multiplier = 0.5
activation_dict = {
"linear": torch.nn.Identity,
"relu": torch.nn.ReLU,
"leakyrelu": torch.nn.LeakyReLU,
"elu": torch.nn.ELU,
"swish": torch.nn.Hardswish,
"tanh": torch.nn.Tanh,
"sigmoid": torch.nn.Sigmoid,
}
activation_dict.update({cls_name.lower(): cls_obj for cls_name, cls_obj in inspect.getmembers(torch.nn.modules.activation) if inspect.isclass(cls_obj) and cls_obj.__module__ == 'torch.nn.modules.activation'})
def __init__(self, dim, state_dict=None, layer_structure=None, activation_func=None, weight_init='Normal',
add_layer_norm=False, use_dropout=False, activate_output=False, last_layer_dropout=False):
super().__init__()
assert layer_structure is not None, "layer_structure must not be None"
assert layer_structure[0] == 1, "Multiplier Sequence should start with size 1!"
assert layer_structure[-1] == 1, "Multiplier Sequence should end with size 1!"
linears = []
for i in range(len(layer_structure) - 1):
# Add a fully-connected layer
linears.append(torch.nn.Linear(int(dim * layer_structure[i]), int(dim * layer_structure[i+1])))
# Add an activation func except last layer
if activation_func == "linear" or activation_func is None or (i >= len(layer_structure) - 2 and not activate_output):
pass
elif activation_func in self.activation_dict:
linears.append(self.activation_dict[activation_func]())
else:
raise RuntimeError(f'hypernetwork uses an unsupported activation function: {activation_func}')
# Add layer normalization
if add_layer_norm:
linears.append(torch.nn.LayerNorm(int(dim * layer_structure[i+1])))
# Add dropout except last layer
if use_dropout and (i < len(layer_structure) - 3 or last_layer_dropout and i < len(layer_structure) - 2):
linears.append(torch.nn.Dropout(p=0.3))
self.linear = torch.nn.Sequential(*linears)
self.fix_old_state_dict(state_dict)
self.load_state_dict(state_dict)
self.to(runtime.thread_data.device)
def fix_old_state_dict(self, state_dict):
changes = {
'linear1.bias': 'linear.0.bias',
'linear1.weight': 'linear.0.weight',
'linear2.bias': 'linear.1.bias',
'linear2.weight': 'linear.1.weight',
}
for fr, to in changes.items():
x = state_dict.get(fr, None)
if x is None:
continue
del state_dict[fr]
state_dict[to] = x
def forward(self, x: torch.Tensor):
return x + self.linear(x) * runtime.thread_data.hypernetwork_strength
def apply_hypernetwork(hypernetwork, context, layer=None):
hypernetwork_layers = hypernetwork.get(context.shape[2], None)
if hypernetwork_layers is None:
return context, context
if layer is not None:
layer.hyper_k = hypernetwork_layers[0]
layer.hyper_v = hypernetwork_layers[1]
context_k = hypernetwork_layers[0](context)
context_v = hypernetwork_layers[1](context)
return context_k, context_v
def get_kv(context, hypernetwork):
if hypernetwork is None:
return context, context
else:
return apply_hypernetwork(runtime.thread_data.hypernetwork, context)
# This might need updating as the optimizedSD code changes
# I think yall have a system for this (patch files in sd_internal) but idk how it works and no amount of searching gave me any clue
# just in case for attribution https://github.com/easydiffusion/diffusion-kit/blob/e8ea0cadd543056059cd951e76d4744de76327d2/optimizedSD/splitAttention.py#L171
def new_cross_attention_forward(self, x, context=None, mask=None):
h = self.heads
q = self.to_q(x)
# default context
context = context if context is not None else x() if inspect.isfunction(x) else x
# hypernetwork!
context_k, context_v = get_kv(context, runtime.thread_data.hypernetwork)
k = self.to_k(context_k)
v = self.to_v(context_v)
del context, x
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
limit = k.shape[0]
att_step = self.att_step
q_chunks = list(torch.tensor_split(q, limit//att_step, dim=0))
k_chunks = list(torch.tensor_split(k, limit//att_step, dim=0))
v_chunks = list(torch.tensor_split(v, limit//att_step, dim=0))
q_chunks.reverse()
k_chunks.reverse()
v_chunks.reverse()
sim = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device)
del k, q, v
for i in range (0, limit, att_step):
q_buffer = q_chunks.pop()
k_buffer = k_chunks.pop()
v_buffer = v_chunks.pop()
sim_buffer = torch.einsum('b i d, b j d -> b i j', q_buffer, k_buffer) * self.scale
del k_buffer, q_buffer
# attention, what we cannot get enough of, by chunks
sim_buffer = sim_buffer.softmax(dim=-1)
sim_buffer = torch.einsum('b i j, b j d -> b i d', sim_buffer, v_buffer)
del v_buffer
sim[i:i+att_step,:,:] = sim_buffer
del sim_buffer
sim = rearrange(sim, '(b h) n d -> b n (h d)', h=h)
return self.to_out(sim)
def load_hypernetwork(path: str):
state_dict = torch.load(path, map_location='cpu')
layer_structure = state_dict.get('layer_structure', [1, 2, 1])
activation_func = state_dict.get('activation_func', None)
weight_init = state_dict.get('weight_initialization', 'Normal')
add_layer_norm = state_dict.get('is_layer_norm', False)
use_dropout = state_dict.get('use_dropout', False)
activate_output = state_dict.get('activate_output', True)
last_layer_dropout = state_dict.get('last_layer_dropout', False)
# this is a bit verbose so leaving it commented out for the poor soul who ever has to debug this
# print(f"layer_structure: {layer_structure}")
# print(f"activation_func: {activation_func}")
# print(f"weight_init: {weight_init}")
# print(f"add_layer_norm: {add_layer_norm}")
# print(f"use_dropout: {use_dropout}")
# print(f"activate_output: {activate_output}")
# print(f"last_layer_dropout: {last_layer_dropout}")
layers = {}
for size, sd in state_dict.items():
if type(size) == int:
layers[size] = (
HypernetworkModule(size, sd[0], layer_structure, activation_func, weight_init, add_layer_norm,
use_dropout, activate_output, last_layer_dropout=last_layer_dropout),
HypernetworkModule(size, sd[1], layer_structure, activation_func, weight_init, add_layer_norm,
use_dropout, activate_output, last_layer_dropout=last_layer_dropout),
)
print(f"hypernetwork loaded")
return layers
# overriding of original function
old_cross_attention_forward = optimizedSD.splitAttention.CrossAttention.forward
# hijacks the cross attention forward function to add hyper network support
def hijack_cross_attention():
print("hypernetwork functionality added to cross attention")
optimizedSD.splitAttention.CrossAttention.forward = new_cross_attention_forward
# there was a cop on board
def unhijack_cross_attention_forward():
print("hypernetwork functionality removed from cross attention")
optimizedSD.splitAttention.CrossAttention.forward = old_cross_attention_forward
hijack_cross_attention()
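# Usage sketch (hypothetical path; mirrors how runtime.py wires this module up):
#     layers = load_hypernetwork('/models/hypernetworks/example.pt')  # {dim: (k_module, v_module)}
#     runtime.thread_data.hypernetwork = layers
#     runtime.thread_data.hypernetwork_strength = 0.5
#     # hijack_cross_attention() is already invoked at import (above); CrossAttention.forward
#     # then routes the context through get_kv()/apply_hypernetwork for the loaded layer sizes.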

View File

@ -7,6 +7,7 @@ Notes:
import json
import os, re
import traceback
import queue
import torch
import numpy as np
from gc import collect as gc_collect
@ -21,13 +22,17 @@ from torch import autocast
from contextlib import nullcontext
from einops import rearrange, repeat
from ldm.util import instantiate_from_config
from optimizedSD.optimUtils import split_weighted_subprompts
from transformers import logging
from gfpgan import GFPGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
from server import HYPERNETWORK_MODEL_EXTENSIONS# , STABLE_DIFFUSION_MODEL_EXTENSIONS, VAE_MODEL_EXTENSIONS
from threading import Lock
from safetensors.torch import load_file
import uuid
logging.set_verbosity_error()
@ -35,7 +40,7 @@ logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
filename_regex = re.compile('[^a-zA-Z0-9]')
force_gfpgan_to_cuda0 = True # workaround: gfpgan currently works only on cuda:0
gfpgan_temp_device_lock = Lock() # workaround: gfpgan currently can only start on one device at a time.
# api stuff
from sd_internal import device_manager
@ -54,12 +59,15 @@ def thread_init(device):
thread_data.ckpt_file = None
thread_data.vae_file = None
thread_data.hypernetwork_file = None
thread_data.gfpgan_file = None
thread_data.real_esrgan_file = None
thread_data.model = None
thread_data.modelCS = None
thread_data.modelFS = None
thread_data.hypernetwork = None
thread_data.hypernetwork_strength = 1
thread_data.model_gfpgan = None
thread_data.model_real_esrgan = None
@ -76,11 +84,32 @@ def thread_init(device):
thread_data.force_full_precision = False
thread_data.reduced_memory = True
thread_data.test_sd2 = isSD2()
device_manager.device_init(thread_data, device)
# temp hack, will remove soon
def isSD2():
try:
SD_UI_DIR = os.getenv('SD_UI_PATH', None)
CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, '..', 'scripts'))
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
if not os.path.exists(config_json_path):
return False
with open(config_json_path, 'r', encoding='utf-8') as f:
config = json.load(f)
return config.get('test_sd2', False)
except Exception as e:
return False
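# The flag read above lives in scripts/config.json; a minimal example of its assumed shape:
#     {"test_sd2": true}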
def load_model_ckpt():
if not thread_data.ckpt_file: raise ValueError(f'Thread ckpt_file is undefined.')
if not os.path.exists(thread_data.ckpt_file + '.ckpt'): raise FileNotFoundError(f'Cannot find {thread_data.ckpt_file}.ckpt')
if os.path.exists(thread_data.ckpt_file + '.ckpt'):
thread_data.ckpt_file += '.ckpt'
elif os.path.exists(thread_data.ckpt_file + '.safetensors'):
thread_data.ckpt_file += '.safetensors'
elif not os.path.exists(thread_data.ckpt_file):
raise FileNotFoundError(f'Cannot find {thread_data.ckpt_file}.ckpt or .safetensors')
if not thread_data.precision:
thread_data.precision = 'full' if thread_data.force_full_precision else 'autocast'
@ -91,8 +120,15 @@ def load_model_ckpt():
if thread_data.device == 'cpu':
thread_data.precision = 'full'
print('loading', thread_data.ckpt_file + '.ckpt', 'to device', thread_data.device, 'using precision', thread_data.precision)
sd = load_model_from_config(thread_data.ckpt_file + '.ckpt')
print('loading', thread_data.ckpt_file, 'to device', thread_data.device, 'using precision', thread_data.precision)
if thread_data.test_sd2:
load_model_ckpt_sd2()
else:
load_model_ckpt_sd1()
def load_model_ckpt_sd1():
sd, model_ver = load_model_from_config(thread_data.ckpt_file)
li, lo = [], []
for key, value in sd.items():
sp = key.split(".")
@ -179,12 +215,46 @@ def load_model_ckpt():
thread_data.model_fs_is_half = False
print(f'''loaded model
model file: {thread_data.ckpt_file}.ckpt
model file: {thread_data.ckpt_file}
model.device: {model.device}
modelCS.device: {modelCS.cond_stage_model.device}
modelFS.device: {thread_data.modelFS.device}
using precision: {thread_data.precision}''')
def load_model_ckpt_sd2():
sd, model_ver = load_model_from_config(thread_data.ckpt_file)
config_file = 'configs/stable-diffusion/v2-inference-v.yaml' if model_ver == 'sd2' else "configs/stable-diffusion/v1-inference.yaml"
config = OmegaConf.load(config_file)
verbose = False
thread_data.model = instantiate_from_config(config.model)
m, u = thread_data.model.load_state_dict(sd, strict=False)
if len(m) > 0 and verbose:
print("missing keys:")
print(m)
if len(u) > 0 and verbose:
print("unexpected keys:")
print(u)
thread_data.model.to(thread_data.device)
thread_data.model.eval()
del sd
thread_data.model.cond_stage_model.device = torch.device(thread_data.device)
if thread_data.device != "cpu" and thread_data.precision == "autocast":
thread_data.model.half()
thread_data.model_is_half = True
thread_data.model_fs_is_half = True
else:
thread_data.model_is_half = False
thread_data.model_fs_is_half = False
print(f'''loaded model
model file: {thread_data.ckpt_file}
using precision: {thread_data.precision}''')
def unload_filters():
if thread_data.model_gfpgan is not None:
if thread_data.device != 'cpu': thread_data.model_gfpgan.gfpgan.to('cpu')
@ -204,10 +274,11 @@ def unload_models():
if thread_data.model is not None:
print('Unloading models...')
if thread_data.device != 'cpu':
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
if not thread_data.test_sd2:
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
del thread_data.model
del thread_data.modelCS
@ -253,12 +324,6 @@ def move_to_cpu(model):
def load_model_gfpgan():
if thread_data.gfpgan_file is None: raise ValueError(f'Thread gfpgan_file is undefined.')
# hack for a bug in facexlib: https://github.com/xinntao/facexlib/pull/19/files
from facexlib.detection import retinaface
retinaface.device = torch.device(thread_data.device)
print('forced retinaface.device to', thread_data.device)
model_path = thread_data.gfpgan_file + ".pth"
thread_data.model_gfpgan = GFPGANer(device=torch.device(thread_data.device), model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
print('loaded', thread_data.gfpgan_file, 'to', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
@ -314,15 +379,23 @@ def apply_filters(filter_name, image_data, model_path=None):
image_data.to(thread_data.device)
if filter_name == 'gfpgan':
if model_path is not None and model_path != thread_data.gfpgan_file:
thread_data.gfpgan_file = model_path
load_model_gfpgan()
elif not thread_data.model_gfpgan:
load_model_gfpgan()
if thread_data.model_gfpgan is None: raise Exception('Model "gfpgan" not loaded.')
print('enhance with', thread_data.gfpgan_file, 'on', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
_, _, output = thread_data.model_gfpgan.enhance(image_data[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
image_data = output[:,:,::-1]
# This lock is only ever used here. No need to use timeout for the request. Should never deadlock.
with gfpgan_temp_device_lock: # Wait for any other devices to complete before starting.
# hack for a bug in facexlib: https://github.com/xinntao/facexlib/pull/19/files
from facexlib.detection import retinaface
retinaface.device = torch.device(thread_data.device)
print('forced retinaface.device to', thread_data.device)
if model_path is not None and model_path != thread_data.gfpgan_file:
thread_data.gfpgan_file = model_path
load_model_gfpgan()
elif not thread_data.model_gfpgan:
load_model_gfpgan()
if thread_data.model_gfpgan is None: raise Exception('Model "gfpgan" not loaded.')
print('enhance with', thread_data.gfpgan_file, 'on', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
_, _, output = thread_data.model_gfpgan.enhance(image_data[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
image_data = output[:,:,::-1]
if filter_name == 'real_esrgan':
if model_path is not None and model_path != thread_data.real_esrgan_file:
@ -337,47 +410,129 @@ def apply_filters(filter_name, image_data, model_path=None):
return image_data
def mk_img(req: Request):
def is_model_reload_necessary(req: Request):
# custom model support:
# the req.use_stable_diffusion_model needs to be a valid path
# to the ckpt file (without the extension).
if os.path.exists(req.use_stable_diffusion_model + '.ckpt'):
req.use_stable_diffusion_model += '.ckpt'
elif os.path.exists(req.use_stable_diffusion_model + '.safetensors'):
req.use_stable_diffusion_model += '.safetensors'
elif not os.path.exists(req.use_stable_diffusion_model):
raise FileNotFoundError(f'Cannot find {req.use_stable_diffusion_model}.ckpt or .safetensors')
needs_model_reload = False
if not thread_data.model or thread_data.ckpt_file != req.use_stable_diffusion_model or thread_data.vae_file != req.use_vae_model:
thread_data.ckpt_file = req.use_stable_diffusion_model
thread_data.vae_file = req.use_vae_model
needs_model_reload = True
if thread_data.device != 'cpu':
if (thread_data.precision == 'autocast' and (req.use_full_precision or not thread_data.model_is_half)) or \
(thread_data.precision == 'full' and not req.use_full_precision and not thread_data.force_full_precision):
thread_data.precision = 'full' if req.use_full_precision else 'autocast'
needs_model_reload = True
return needs_model_reload
def reload_model():
unload_models()
unload_filters()
load_model_ckpt()
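# Call pattern (as used by the render thread in task_manager.py further below):
#     if is_model_reload_necessary(task.request):
#         reload_model()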
def is_hypernetwork_reload_necessary(req: Request):
needs_model_reload = False
if thread_data.hypernetwork_file != req.use_hypernetwork_model:
thread_data.hypernetwork_file = req.use_hypernetwork_model
needs_model_reload = True
return needs_model_reload
def load_hypernetwork():
if thread_data.test_sd2:
# Not yet supported in SD2
return
from . import hypernetwork
if thread_data.hypernetwork_file is not None:
try:
loaded = False
for model_extension in HYPERNETWORK_MODEL_EXTENSIONS:
if os.path.exists(thread_data.hypernetwork_file + model_extension):
print(f"Loading hypernetwork weights from: {thread_data.hypernetwork_file}{model_extension}")
thread_data.hypernetwork = hypernetwork.load_hypernetwork(thread_data.hypernetwork_file + model_extension)
loaded = True
break
if not loaded:
print(f'Cannot find hypernetwork: {thread_data.hypernetwork_file}')
thread_data.hypernetwork_file = None
except:
print(traceback.format_exc())
print(f'Could not load hypernetwork: {thread_data.hypernetwork_file}')
thread_data.hypernetwork_file = None
def unload_hypernetwork():
if thread_data.hypernetwork is not None:
print('Unloading hypernetwork...')
if thread_data.device != 'cpu':
for i in thread_data.hypernetwork:
thread_data.hypernetwork[i][0].to('cpu')
thread_data.hypernetwork[i][1].to('cpu')
del thread_data.hypernetwork
thread_data.hypernetwork = None
gc()
def reload_hypernetwork():
unload_hypernetwork()
load_hypernetwork()
def mk_img(req: Request, data_queue: queue.Queue, task_temp_images: list, step_callback):
try:
yield from do_mk_img(req)
return do_mk_img(req, data_queue, task_temp_images, step_callback)
except Exception as e:
print(traceback.format_exc())
if thread_data.device != 'cpu':
if thread_data.device != 'cpu' and not thread_data.test_sd2:
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
gc() # Release from memory.
yield json.dumps({
data_queue.put(json.dumps({
"status": 'failed',
"detail": str(e)
})
}))
raise e
def update_temp_img(req, x_samples):
def update_temp_img(req, x_samples, task_temp_images: list):
partial_images = []
for i in range(req.num_outputs):
x_sample_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
if thread_data.test_sd2:
x_sample_ddim = thread_data.model.decode_first_stage(x_samples[i].unsqueeze(0))
else:
x_sample_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_sample_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
buf = BytesIO()
img.save(buf, format='JPEG')
buf.seek(0)
buf = img_to_buffer(img, output_format='JPEG')
del img, x_sample, x_sample_ddim
# don't delete x_samples, it is used in the code that called this callback
thread_data.temp_images[str(req.session_id) + '/' + str(i)] = buf
partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
thread_data.temp_images[f'{req.request_id}/{i}'] = buf
task_temp_images[i] = buf
partial_images.append({'path': f'/image/tmp/{req.request_id}/{i}'})
return partial_images
# Build and return the appropriate generator for do_mk_img
def get_image_progress_generator(req, extra_props=None):
def get_image_progress_generator(req, data_queue: queue.Queue, task_temp_images: list, step_callback, extra_props=None):
if not req.stream_progress_updates:
def empty_callback(x_samples, i): return x_samples
def empty_callback(x_samples, i):
step_callback()
return empty_callback
thread_data.partial_x_samples = None
@ -394,46 +549,27 @@ def get_image_progress_generator(req, extra_props=None):
progress.update(extra_props)
if req.stream_image_progress and i % 5 == 0:
progress['output'] = update_temp_img(req, x_samples)
progress['output'] = update_temp_img(req, x_samples, task_temp_images)
yield json.dumps(progress)
data_queue.put(json.dumps(progress))
step_callback()
if thread_data.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return img_callback
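# Note: the returned img_callback pushes a JSON progress blob onto data_queue (every 5th step when
# req.stream_image_progress is set), invokes step_callback after each step, and raises
# UserInitiatedStop once thread_data.stop_processing has been set by the task manager.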
def do_mk_img(req: Request):
def do_mk_img(req: Request, data_queue: queue.Queue, task_temp_images: list, step_callback):
thread_data.stop_processing = False
res = Response()
res.request = req
res.images = []
thread_data.hypernetwork_strength = req.hypernetwork_strength
thread_data.temp_images.clear()
# custom model support:
# the req.use_stable_diffusion_model needs to be a valid path
# to the ckpt file (without the extension).
if not os.path.exists(req.use_stable_diffusion_model + '.ckpt'): raise FileNotFoundError(f'Cannot find {req.use_stable_diffusion_model}.ckpt')
needs_model_reload = False
if not thread_data.model or thread_data.ckpt_file != req.use_stable_diffusion_model or thread_data.vae_file != req.use_vae_model:
thread_data.ckpt_file = req.use_stable_diffusion_model
thread_data.vae_file = req.use_vae_model
needs_model_reload = True
if thread_data.device != 'cpu':
if (thread_data.precision == 'autocast' and (req.use_full_precision or not thread_data.model_is_half)) or \
(thread_data.precision == 'full' and not req.use_full_precision and not thread_data.force_full_precision):
thread_data.precision = 'full' if req.use_full_precision else 'autocast'
needs_model_reload = True
if needs_model_reload:
unload_models()
unload_filters()
load_model_ckpt()
if thread_data.turbo != req.turbo:
if thread_data.turbo != req.turbo and not thread_data.test_sd2:
thread_data.turbo = req.turbo
thread_data.model.turbo = req.turbo
@ -478,10 +614,14 @@ def do_mk_img(req: Request):
if thread_data.device != "cpu" and thread_data.precision == "autocast":
init_image = init_image.half()
thread_data.modelFS.to(thread_data.device)
if not thread_data.test_sd2:
thread_data.modelFS.to(thread_data.device)
init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
init_latent = thread_data.modelFS.get_first_stage_encoding(thread_data.modelFS.encode_first_stage(init_image)) # move to latent space
if thread_data.test_sd2:
init_latent = thread_data.model.get_first_stage_encoding(thread_data.model.encode_first_stage(init_image)) # move to latent space
else:
init_latent = thread_data.modelFS.get_first_stage_encoding(thread_data.modelFS.encode_first_stage(init_image)) # move to latent space
if req.mask is not None:
mask = load_mask(req.mask, req.width, req.height, init_latent.shape[2], init_latent.shape[3], True).to(thread_data.device)
@ -493,27 +633,26 @@ def do_mk_img(req: Request):
# Send to CPU and wait until complete.
# wait_model_move_to(thread_data.modelFS, 'cpu')
move_to_cpu(thread_data.modelFS)
if not thread_data.test_sd2:
move_to_cpu(thread_data.modelFS)
assert 0. <= req.prompt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
t_enc = int(req.prompt_strength * req.num_inference_steps)
print(f"target t_enc is {t_enc} steps")
if req.save_to_disk_path is not None:
session_out_path = get_session_out_path(req.save_to_disk_path, req.session_id)
else:
session_out_path = None
with torch.no_grad():
for n in trange(opt_n_iter, desc="Sampling"):
for prompts in tqdm(data, desc="data"):
with precision_scope("cuda"):
if thread_data.reduced_memory:
if thread_data.reduced_memory and not thread_data.test_sd2:
thread_data.modelCS.to(thread_data.device)
uc = None
if req.guidance_scale != 1.0:
uc = thread_data.modelCS.get_learned_conditioning(batch_size * [req.negative_prompt])
if thread_data.test_sd2:
uc = thread_data.model.get_learned_conditioning(batch_size * [req.negative_prompt])
else:
uc = thread_data.modelCS.get_learned_conditioning(batch_size * [req.negative_prompt])
if isinstance(prompts, tuple):
prompts = list(prompts)
@ -526,15 +665,21 @@ def do_mk_img(req: Request):
weight = weights[i]
# if not skip_normalize:
weight = weight / totalWeight
c = torch.add(c, thread_data.modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
if thread_data.test_sd2:
c = torch.add(c, thread_data.model.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = torch.add(c, thread_data.modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = thread_data.modelCS.get_learned_conditioning(prompts)
if thread_data.test_sd2:
c = thread_data.model.get_learned_conditioning(prompts)
else:
c = thread_data.modelCS.get_learned_conditioning(prompts)
if thread_data.reduced_memory:
if thread_data.reduced_memory and not thread_data.test_sd2:
thread_data.modelFS.to(thread_data.device)
n_steps = req.num_inference_steps if req.init_image is None else t_enc
img_callback = get_image_progress_generator(req, {"total_steps": n_steps})
img_callback = get_image_progress_generator(req, data_queue, task_temp_images, step_callback, {"total_steps": n_steps})
# run the handler
try:
@ -542,14 +687,7 @@ def do_mk_img(req: Request):
if handler == _txt2img:
x_samples = _txt2img(req.width, req.height, req.num_outputs, req.num_inference_steps, req.guidance_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, req.sampler)
else:
x_samples = _img2img(init_latent, t_enc, batch_size, req.guidance_scale, c, uc, req.num_inference_steps, opt_ddim_eta, opt_seed, img_callback, mask)
if req.stream_progress_updates:
yield from x_samples
if hasattr(thread_data, 'partial_x_samples'):
if thread_data.partial_x_samples is not None:
x_samples = thread_data.partial_x_samples
del thread_data.partial_x_samples
x_samples = _img2img(init_latent, t_enc, batch_size, req.guidance_scale, c, uc, req.num_inference_steps, opt_ddim_eta, opt_seed, img_callback, mask, opt_C, req.height, req.width, opt_f)
except UserInitiatedStop:
if not hasattr(thread_data, 'partial_x_samples'):
continue
@ -562,7 +700,10 @@ def do_mk_img(req: Request):
print("decoding images")
img_data = [None] * batch_size
for i in range(batch_size):
x_samples_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
if thread_data.test_sd2:
x_samples_ddim = thread_data.model.decode_first_stage(x_samples[i].unsqueeze(0))
else:
x_samples_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
@ -586,14 +727,16 @@ def do_mk_img(req: Request):
if req.save_to_disk_path is not None:
if return_orig_img:
img_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, req.output_format)
save_image(img, img_out_path)
save_image(img, img_out_path, req.output_format, req.output_quality)
meta_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, 'txt')
save_metadata(meta_out_path, req, prompts[0], opt_seed)
if return_orig_img:
img_str = img_to_base64_str(img, req.output_format)
img_buffer = img_to_buffer(img, req.output_format, req.output_quality)
img_str = buffer_to_base64_str(img_buffer, req.output_format)
res_image_orig = ResponseImage(data=img_str, seed=opt_seed)
res.images.append(res_image_orig)
task_temp_images[i] = img_buffer
if req.save_to_disk_path is not None:
res_image_orig.path_abs = img_out_path
@ -609,12 +752,14 @@ def do_mk_img(req: Request):
filters_applied.append(req.use_upscale)
if (len(filters_applied) > 0):
filtered_image = Image.fromarray(img_data[i])
filtered_img_data = img_to_base64_str(filtered_image, req.output_format)
filtered_buffer = img_to_buffer(filtered_image, req.output_format, req.output_quality)
filtered_img_data = buffer_to_base64_str(filtered_buffer, req.output_format)
response_image = ResponseImage(data=filtered_img_data, seed=opt_seed)
res.images.append(response_image)
task_temp_images[i] = filtered_buffer
if req.save_to_disk_path is not None:
filtered_img_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, req.output_format, "_".join(filters_applied))
save_image(filtered_image, filtered_img_out_path)
save_image(filtered_image, filtered_img_out_path, req.output_format, req.output_quality)
response_image.path_abs = filtered_img_out_path
del filtered_image
# Filter Applied, move to next seed
@ -622,18 +767,25 @@ def do_mk_img(req: Request):
# if thread_data.reduced_memory:
# unload_filters()
move_to_cpu(thread_data.modelFS)
if not thread_data.test_sd2:
move_to_cpu(thread_data.modelFS)
del img_data
gc()
if thread_data.device != 'cpu':
print(f'memory_final = {round(torch.cuda.memory_allocated(thread_data.device) / 1e6, 2)}Mb')
print('Task completed')
yield json.dumps(res.json())
res = res.json()
data_queue.put(json.dumps(res))
def save_image(img, img_out_path):
return res
def save_image(img, img_out_path, output_format="", output_quality=75):
try:
img.save(img_out_path)
if output_format.upper() == "JPEG":
img.save(img_out_path, quality=output_quality)
else:
img.save(img_out_path)
except:
print('could not save the file', traceback.format_exc())
@ -651,6 +803,8 @@ Sampler: {req.sampler}
Negative Prompt: {req.negative_prompt}
Stable Diffusion model: {req.use_stable_diffusion_model + '.ckpt'}
VAE model: {req.use_vae_model}
Hypernetwork Model: {req.use_hypernetwork_model}
Hypernetwork Strength: {req.hypernetwork_strength}
'''
try:
with open(meta_out_path, 'w', encoding='utf-8') as f:
@ -664,51 +818,109 @@ def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code,
# Send to CPU and wait until complete.
# wait_model_move_to(thread_data.modelCS, 'cpu')
move_to_cpu(thread_data.modelCS)
if not thread_data.test_sd2:
move_to_cpu(thread_data.modelCS)
if sampler_name == 'ddim':
thread_data.model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
if thread_data.test_sd2 and sampler_name not in ('plms', 'ddim', 'dpm2'):
raise Exception('Only plms, ddim and dpm2 samplers are supported right now, in SD 2.0')
samples_ddim = thread_data.model.sample(
S=opt_ddim_steps,
conditioning=c,
seed=opt_seed,
shape=shape,
verbose=False,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
eta=opt_ddim_eta,
x_T=start_code,
img_callback=img_callback,
mask=mask,
sampler = sampler_name,
)
yield from samples_ddim
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask):
# samples, _ = sampler.sample(S=opt.steps,
# conditioning=c,
# batch_size=opt.n_samples,
# shape=shape,
# verbose=False,
# unconditional_guidance_scale=opt.scale,
# unconditional_conditioning=uc,
# eta=opt.ddim_eta,
# x_T=start_code)
if thread_data.test_sd2:
if sampler_name == 'plms':
from ldm.models.diffusion.plms import PLMSSampler
sampler = PLMSSampler(thread_data.model)
elif sampler_name == 'ddim':
from ldm.models.diffusion.ddim import DDIMSampler
sampler = DDIMSampler(thread_data.model)
sampler.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
elif sampler_name == 'dpm2':
from ldm.models.diffusion.dpm_solver import DPMSolverSampler
sampler = DPMSolverSampler(thread_data.model)
shape = [opt_C, opt_H // opt_f, opt_W // opt_f]
samples_ddim, intermediates = sampler.sample(
S=opt_ddim_steps,
conditioning=c,
batch_size=opt_n_samples,
seed=opt_seed,
shape=shape,
verbose=False,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
eta=opt_ddim_eta,
x_T=start_code,
img_callback=img_callback,
mask=mask,
sampler = sampler_name,
)
else:
if sampler_name == 'ddim':
thread_data.model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
samples_ddim = thread_data.model.sample(
S=opt_ddim_steps,
conditioning=c,
seed=opt_seed,
shape=shape,
verbose=False,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
eta=opt_ddim_eta,
x_T=start_code,
img_callback=img_callback,
mask=mask,
sampler = sampler_name,
)
return samples_ddim
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask, opt_C=1, opt_H=1, opt_W=1, opt_f=1):
# encode (scaled latent)
z_enc = thread_data.model.stochastic_encode(
init_latent,
torch.tensor([t_enc] * batch_size).to(thread_data.device),
opt_seed,
opt_ddim_eta,
opt_ddim_steps,
)
x_T = None if mask is None else init_latent
# decode it
samples_ddim = thread_data.model.sample(
t_enc,
c,
z_enc,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
img_callback=img_callback,
mask=mask,
x_T=x_T,
sampler = 'ddim'
)
yield from samples_ddim
if thread_data.test_sd2:
from ldm.models.diffusion.ddim import DDIMSampler
sampler = DDIMSampler(thread_data.model)
sampler.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
z_enc = sampler.stochastic_encode(init_latent, torch.tensor([t_enc] * batch_size).to(thread_data.device))
samples_ddim = sampler.decode(z_enc, c, t_enc, unconditional_guidance_scale=opt_scale,unconditional_conditioning=uc, img_callback=img_callback)
else:
z_enc = thread_data.model.stochastic_encode(
init_latent,
torch.tensor([t_enc] * batch_size).to(thread_data.device),
opt_seed,
opt_ddim_eta,
opt_ddim_steps,
)
# decode it
samples_ddim = thread_data.model.sample(
t_enc,
c,
z_enc,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
img_callback=img_callback,
mask=mask,
x_T=x_T,
sampler = 'ddim'
)
return samples_ddim
def gc():
gc_collect()
@ -725,13 +937,26 @@ def chunk(it, size):
def load_model_from_config(ckpt, verbose=False):
print(f"Loading model from {ckpt}")
pl_sd = torch.load(ckpt, map_location="cpu")
model_ver = 'sd1'
if ckpt.endswith(".safetensors"):
print("Loading from safetensors")
pl_sd = load_file(ckpt, device="cpu")
else:
pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
return sd
# utils
if "state_dict" in pl_sd:
# check for a key that only seems to be present in SD2 models
if 'cond_stage_model.model.ln_final.bias' in pl_sd['state_dict'].keys():
model_ver = 'sd2'
return pl_sd["state_dict"], model_ver
else:
return pl_sd, model_ver
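# Sketch of how the returned version tag is consumed (mirrors load_model_ckpt_sd2 above):
#     sd, model_ver = load_model_from_config(thread_data.ckpt_file)
#     config_file = 'configs/stable-diffusion/v2-inference-v.yaml' if model_ver == 'sd2' \
#         else 'configs/stable-diffusion/v1-inference.yaml'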
class UserInitiatedStop(Exception):
pass
@ -775,9 +1000,20 @@ def load_mask(mask_str, h0, w0, newH, newW, invert=False):
return image
# https://stackoverflow.com/a/61114178
def img_to_base64_str(img, output_format="PNG"):
def img_to_base64_str(img, output_format="PNG", output_quality=75):
buffered = img_to_buffer(img, output_format, quality=output_quality)
return buffer_to_base64_str(buffered, output_format)
def img_to_buffer(img, output_format="PNG", output_quality=75):
buffered = BytesIO()
img.save(buffered, format=output_format)
if ( output_format.upper() == "JPEG" ):
img.save(buffered, format=output_format, quality=output_quality)
else:
img.save(buffered, format=output_format)
buffered.seek(0)
return buffered
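# Usage sketch: JPEG output honours output_quality, PNG ignores it.
#     buf = img_to_buffer(img, output_format='JPEG', output_quality=60)
#     img_str = buffer_to_base64_str(buf, 'JPEG')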
def buffer_to_base64_str(buffered, output_format="PNG"):
buffered.seek(0)
img_byte = buffered.getvalue()
mime_type = "image/png" if output_format.lower() == "png" else "image/jpeg"
@ -795,3 +1031,48 @@ def base64_str_to_img(img_str):
buffered = base64_str_to_buffer(img_str)
img = Image.open(buffered)
return img
def split_weighted_subprompts(text):
"""
grabs all text up to the first occurrence of ':'
uses the grabbed text as a sub-prompt, and takes the value following ':' as weight
if ':' has no value defined, defaults to 1.0
repeats until no text remaining
"""
remaining = len(text)
prompts = []
weights = []
while remaining > 0:
if ":" in text:
idx = text.index(":") # first occurrence from start
# grab up to index as sub-prompt
prompt = text[:idx]
remaining -= idx
# remove from main text
text = text[idx+1:]
# find value for weight
if " " in text:
idx = text.index(" ") # first occurence
else: # no space, read to end
idx = len(text)
if idx != 0:
try:
weight = float(text[:idx])
except: # couldn't treat as float
print(f"Warning: '{text[:idx]}' is not a value, are you missing a space?")
weight = 1.0
else: # no value found
weight = 1.0
# remove from main text
remaining -= idx
text = text[idx+1:]
# append the sub-prompt and its weight
prompts.append(prompt)
weights.append(weight)
else: # no : found
if len(text) > 0: # there is still text though
# take remainder as weight 1
prompts.append(text)
weights.append(1.0)
remaining = 0
return prompts, weights
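# Worked example (sketch):
#     split_weighted_subprompts("a cat:2 a dog")
#     # -> (['a cat', 'a dog'], [2.0, 1.0])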

View File

@ -37,7 +37,8 @@ class ServerStates:
class RenderTask(): # Task with output queue and completion lock.
def __init__(self, req: Request):
self.request: Request = req # Initial Request
req.request_id = id(self)
self.request: Request = req # Initial Request
self.response: Any = None # Copy of the last reponse
self.render_device = None # Select the task affinity. (Not used to change active devices).
self.temp_images:list = [None] * req.num_outputs * (1 if req.show_only_filtered_image else 2)
@ -51,6 +52,22 @@ class RenderTask(): # Task with output queue and completion lock.
self.buffer_queue.task_done()
yield res
except queue.Empty as e: yield
@property
def status(self):
if self.lock.locked():
return 'running'
if isinstance(self.error, StopAsyncIteration):
return 'stopped'
if self.error:
return 'error'
if not self.buffer_queue.empty():
return 'buffer'
if self.response:
return 'completed'
return 'pending'
@property
def is_pending(self):
return bool(not self.response and not self.error)
# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
@ -77,8 +94,11 @@ class ImageRequest(BaseModel):
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
use_vae_model: str = None
use_hypernetwork_model: str = None
hypernetwork_strength: float = None
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
output_quality: int = 75
stream_progress_updates: bool = False
stream_image_progress: bool = False
@ -95,9 +115,10 @@ class FilterRequest(BaseModel):
render_device: str = None
use_full_precision: bool = False
output_format: str = "jpeg" # or "png"
output_quality: int = 75
# Temporary cache to allow to query tasks results for a short time after they are completed.
class TaskCache():
class DataCache():
def __init__(self):
self._base = dict()
self._lock: threading.Lock = threading.Lock()
@ -106,7 +127,7 @@ class TaskCache():
def _is_expired(self, timestamp: int) -> bool:
return int(time.time()) >= timestamp
def clean(self) -> None:
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.clean' + ERR_LOCK_FAILED)
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.clean' + ERR_LOCK_FAILED)
try:
# Create a list of expired keys to delete
to_delete = []
@ -116,16 +137,22 @@ class TaskCache():
to_delete.append(key)
# Remove Items
for key in to_delete:
(_, val) = self._base[key]
if isinstance(val, RenderTask):
print(f'RenderTask {key} expired. Data removed.')
elif isinstance(val, SessionState):
print(f'Session {key} expired. Data removed.')
else:
print(f'Key {key} expired. Data removed.')
del self._base[key]
print(f'Session {key} expired. Data removed.')
finally:
self._lock.release()
def clear(self) -> None:
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.clear' + ERR_LOCK_FAILED)
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.clear' + ERR_LOCK_FAILED)
try: self._base.clear()
finally: self._lock.release()
def delete(self, key: Hashable) -> bool:
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.delete' + ERR_LOCK_FAILED)
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.delete' + ERR_LOCK_FAILED)
try:
if key not in self._base:
return False
@ -134,7 +161,7 @@ class TaskCache():
finally:
self._lock.release()
def keep(self, key: Hashable, ttl: int) -> bool:
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.keep' + ERR_LOCK_FAILED)
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.keep' + ERR_LOCK_FAILED)
try:
if key in self._base:
_, value = self._base.get(key)
@ -144,7 +171,7 @@ class TaskCache():
finally:
self._lock.release()
def put(self, key: Hashable, value: Any, ttl: int) -> bool:
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.put' + ERR_LOCK_FAILED)
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.put' + ERR_LOCK_FAILED)
try:
self._base[key] = (
self._get_ttl_time(ttl), value
@ -158,7 +185,7 @@ class TaskCache():
finally:
self._lock.release()
def tryGet(self, key: Hashable) -> Any:
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.tryGet' + ERR_LOCK_FAILED)
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('DataCache.tryGet' + ERR_LOCK_FAILED)
try:
ttl, value = self._base.get(key, (None, None))
if ttl is not None and self._is_expired(ttl):
@ -175,28 +202,61 @@ current_state = ServerStates.Init
current_state_error:Exception = None
current_model_path = None
current_vae_path = None
current_hypernetwork_path = None
tasks_queue = []
task_cache = TaskCache()
session_cache = DataCache()
task_cache = DataCache()
default_model_to_load = None
default_vae_to_load = None
default_hypernetwork_to_load = None
weak_thread_data = weakref.WeakKeyDictionary()
idle_event: threading.Event = threading.Event()
def preload_model(ckpt_file_path=None, vae_file_path=None):
global current_state, current_state_error, current_model_path, current_vae_path
class SessionState():
def __init__(self, id: str):
self._id = id
self._tasks_ids = []
@property
def id(self):
return self._id
@property
def tasks(self):
tasks = []
for task_id in self._tasks_ids:
task = task_cache.tryGet(task_id)
if task:
tasks.append(task)
return tasks
def put(self, task, ttl=TASK_TTL):
task_id = id(task)
self._tasks_ids.append(task_id)
if not task_cache.put(task_id, task, ttl):
return False
while len(self._tasks_ids) > len(render_threads) * 2:
self._tasks_ids.pop(0)
return True
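# Sketch of the intended flow (session creation itself happens in the API layer, not shown here):
#     session = SessionState(request.session_id)
#     session_cache.put(session.id, session, TASK_TTL)
#     task = RenderTask(request)
#     session.put(task, TASK_TTL)  # stores the task in task_cache keyed by id(task),
#                                  # keeping at most 2 * len(render_threads) recent task ids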
def preload_model(ckpt_file_path=None, vae_file_path=None, hypernetwork_file_path=None):
global current_state, current_state_error, current_model_path, current_vae_path, current_hypernetwork_path
if ckpt_file_path == None:
ckpt_file_path = default_model_to_load
if vae_file_path == None:
vae_file_path = default_vae_to_load
if hypernetwork_file_path == None:
hypernetwork_file_path = default_hypernetwork_to_load
if ckpt_file_path == current_model_path and vae_file_path == current_vae_path:
return
current_state = ServerStates.LoadingModel
try:
from . import runtime
runtime.thread_data.hypernetwork_file = hypernetwork_file_path
runtime.thread_data.ckpt_file = ckpt_file_path
runtime.thread_data.vae_file = vae_file_path
runtime.load_model_ckpt()
runtime.load_hypernetwork()
current_model_path = ckpt_file_path
current_vae_path = vae_file_path
current_hypernetwork_path = hypernetwork_file_path
current_state_error = None
current_state = ServerStates.Online
except Exception as e:
@ -238,7 +298,7 @@ def thread_get_next_task():
manager_lock.release()
def thread_render(device):
global current_state, current_state_error, current_model_path, current_vae_path
global current_state, current_state_error, current_model_path, current_vae_path, current_hypernetwork_path
from . import runtime
try:
runtime.thread_init(device)
@@ -257,6 +317,7 @@ def thread_render(device):
preload_model()
current_state = ServerStates.Online
while True:
session_cache.clean()
task_cache.clean()
if not weak_thread_data[threading.current_thread()]['alive']:
print(f'Shutting down thread for device {runtime.thread_data.device}')
@@ -268,7 +329,8 @@ def thread_render(device):
return
task = thread_get_next_task()
if task is None:
time.sleep(0.05)
idle_event.clear()
idle_event.wait(timeout=1)
continue
if task.error is not None:
print(task.error)
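A small sketch of the wake-up pattern introduced here, replacing the fixed 50 ms polling sleep: the render loop parks on a threading.Event with a timeout, and the request path sets the event right after queueing a task. In this sketch a queue.Queue stands in for the module's list-plus-lock queue, and the task values are invented.

import queue
import threading

idle_event = threading.Event()
tasks_queue = queue.Queue()

def render_loop():
    while True:
        try:
            task = tasks_queue.get_nowait()
        except queue.Empty:
            idle_event.clear()
            idle_event.wait(timeout=1)   # sleep until woken or 1 s passes
            continue
        if task is None:                 # illustrative shutdown signal
            return
        print('rendering', task)

worker = threading.Thread(target=render_loop, daemon=True)
worker.start()
tasks_queue.put('task-1')                # enqueue, then wake the loop at once
idle_event.set()
tasks_queue.put(None)
idle_event.set()
worker.join(timeout=2)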
@@ -283,53 +345,42 @@ def thread_render(device):
print(f'Session {task.request.session_id} starting task {id(task)} on {runtime.thread_data.device_name}')
if not task.lock.acquire(blocking=False): raise Exception('Got locked task from queue.')
try:
if runtime.thread_data.device == 'cpu' and is_alive() > 1:
# CPU is not the only device. Keep track of active time to unload resources later.
runtime.thread_data.lastActive = time.time()
# Open data generator.
res = runtime.mk_img(task.request)
if current_model_path == task.request.use_stable_diffusion_model:
current_state = ServerStates.Rendering
else:
if runtime.is_hypernetwork_reload_necessary(task.request):
runtime.reload_hypernetwork()
current_hypernetwork_path = task.request.use_hypernetwork_model
if runtime.is_model_reload_necessary(task.request):
current_state = ServerStates.LoadingModel
# Start reading from generator.
dataQueue = None
if task.request.stream_progress_updates:
dataQueue = task.buffer_queue
for result in res:
if current_state == ServerStates.LoadingModel:
current_state = ServerStates.Rendering
current_model_path = task.request.use_stable_diffusion_model
current_vae_path = task.request.use_vae_model
runtime.reload_model()
current_model_path = task.request.use_stable_diffusion_model
current_vae_path = task.request.use_vae_model
def step_callback():
global current_state_error
if isinstance(current_state_error, SystemExit) or isinstance(current_state_error, StopAsyncIteration) or isinstance(task.error, StopAsyncIteration):
runtime.thread_data.stop_processing = True
if isinstance(current_state_error, StopAsyncIteration):
task.error = current_state_error
current_state_error = None
print(f'Session {task.request.session_id} sent cancel signal for task {id(task)}')
if dataQueue:
dataQueue.put(result)
if isinstance(result, str):
result = json.loads(result)
task.response = result
if 'output' in result:
for out_obj in result['output']:
if 'path' in out_obj:
img_id = out_obj['path'][out_obj['path'].rindex('/') + 1:]
task.temp_images[int(img_id)] = runtime.thread_data.temp_images[out_obj['path'][11:]]
elif 'data' in out_obj:
buf = runtime.base64_str_to_buffer(out_obj['data'])
task.temp_images[result['output'].index(out_obj)] = buf
# Before looping back to the generator, mark cache as still alive.
task_cache.keep(task.request.session_id, TASK_TTL)
current_state = ServerStates.Rendering
task.response = runtime.mk_img(task.request, task.buffer_queue, task.temp_images, step_callback)
# Before looping back to the generator, mark cache as still alive.
task_cache.keep(id(task), TASK_TTL)
session_cache.keep(task.request.session_id, TASK_TTL)
except Exception as e:
task.error = e
task.response = {"status": 'failed', "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
print(traceback.format_exc())
continue
finally:
# Task completed
task.lock.release()
task_cache.keep(task.request.session_id, TASK_TTL)
task_cache.keep(id(task), TASK_TTL)
session_cache.keep(task.request.session_id, TASK_TTL)
if isinstance(task.error, StopAsyncIteration):
print(f'Session {task.request.session_id} task {id(task)} cancelled!')
elif task.error is not None:
@@ -338,12 +389,21 @@ def thread_render(device):
print(f'Session {task.request.session_id} task {id(task)} completed by {runtime.thread_data.device_name}.')
current_state = ServerStates.Online
def get_cached_task(session_id:str, update_ttl:bool=False):
def get_cached_task(task_id:str, update_ttl:bool=False):
# By calling keep before tryGet, wont discard if was expired.
if update_ttl and not task_cache.keep(session_id, TASK_TTL):
if update_ttl and not task_cache.keep(task_id, TASK_TTL):
# Failed to keep task, already gone.
return None
return task_cache.tryGet(session_id)
return task_cache.tryGet(task_id)
def get_cached_session(session_id:str, update_ttl:bool=False):
if update_ttl:
session_cache.keep(session_id, TASK_TTL)
session = session_cache.tryGet(session_id)
if not session:
session = SessionState(session_id)
session_cache.put(session_id, session, TASK_TTL)
return session
def get_devices():
devices = {
@@ -490,14 +550,16 @@ def shutdown_event(): # Signal render thread to close on shutdown
current_state_error = SystemExit('Application shutting down.')
def render(req : ImageRequest):
if is_alive() <= 0: # Render thread is dead
current_thread_count = is_alive()
if current_thread_count <= 0: # Render thread is dead
raise ChildProcessError('Rendering thread has died.')
# Alive, check if task in cache
task = task_cache.tryGet(req.session_id)
if task and not task.response and not task.error and not task.lock.locked():
# Unstarted task pending, deny queueing more than one.
raise ConnectionRefusedError(f'Session {req.session_id} has an already pending task.')
#
session = get_cached_session(req.session_id, update_ttl=True)
pending_tasks = list(filter(lambda t: t.is_pending, session.tasks))
if current_thread_count < len(pending_tasks):
raise ConnectionRefusedError(f'Session {req.session_id} already has {len(pending_tasks)} pending tasks out of {current_thread_count}.')
from . import runtime
r = Request()
r.session_id = req.session_id
@@ -521,8 +583,11 @@ def render(req : ImageRequest):
r.use_face_correction = req.use_face_correction
r.use_stable_diffusion_model = req.use_stable_diffusion_model
r.use_vae_model = req.use_vae_model
r.use_hypernetwork_model = req.use_hypernetwork_model
r.hypernetwork_strength = req.hypernetwork_strength
r.show_only_filtered_image = req.show_only_filtered_image
r.output_format = req.output_format
r.output_quality = req.output_quality
r.stream_progress_updates = True # the underlying implementation only supports streaming
r.stream_image_progress = req.stream_image_progress
@@ -531,13 +596,13 @@ def render(req : ImageRequest):
r.stream_image_progress = False
new_task = RenderTask(r)
if task_cache.put(r.session_id, new_task, TASK_TTL):
if session.put(new_task, TASK_TTL):
# Use twice the normal timeout for adding user requests.
# Tries to force task_cache.put to fail before tasks_queue.put would.
# Tries to force session.put to fail before tasks_queue.put would.
if manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT * 2):
try:
tasks_queue.append(new_task)
idle_event.set()
return new_task
finally:
manager_lock.release()


@@ -7,6 +7,7 @@ import traceback
import sys
import os
import socket
import picklescan.scanner
import rich
@@ -23,8 +24,9 @@ USER_UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'plugins', 'ui'
CORE_UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_UI_DIR, 'plugins', 'ui'))
UI_PLUGINS_SOURCES = ((CORE_UI_PLUGINS_DIR, 'core'), (USER_UI_PLUGINS_DIR, 'user'))
STABLE_DIFFUSION_MODEL_EXTENSIONS = ['.ckpt']
STABLE_DIFFUSION_MODEL_EXTENSIONS = ['.ckpt', '.safetensors']
VAE_MODEL_EXTENSIONS = ['.vae.pt', '.ckpt']
HYPERNETWORK_MODEL_EXTENSIONS = ['.pt']
OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
TASK_TTL = 15 * 60 # Discard last session's task timeout
@@ -47,14 +49,12 @@ from fastapi.staticfiles import StaticFiles
from starlette.responses import FileResponse, JSONResponse, StreamingResponse
from pydantic import BaseModel
import logging
#import queue, threading, time
from typing import Any, Generator, Hashable, List, Optional, Union
from sd_internal import Request, Response, task_manager
app = FastAPI()
modifiers_cache = None
outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)
os.makedirs(USER_UI_PLUGINS_DIR, exist_ok=True)
@@ -116,6 +116,8 @@ def setConfig(config):
bind_ip = '0.0.0.0' if config['net']['listen_to_network'] else '127.0.0.1'
config_bat.append(f"@set SD_UI_BIND_IP={bind_ip}")
config_bat.append(f"@set test_sd2={'Y' if config.get('test_sd2', False) else 'N'}")
if len(config_bat) > 0:
with open(config_bat_path, 'w', encoding='utf-8') as f:
f.write('\r\n'.join(config_bat))
@@ -133,6 +135,8 @@ def setConfig(config):
bind_ip = '0.0.0.0' if config['net']['listen_to_network'] else '127.0.0.1'
config_sh.append(f"export SD_UI_BIND_IP={bind_ip}")
config_sh.append(f"export test_sd2=\"{'Y' if config.get('test_sd2', False) else 'N'}\"")
if len(config_sh) > 1:
with open(config_sh_path, 'w', encoding='utf-8') as f:
f.write('\n'.join(config_sh))
@@ -140,12 +144,19 @@ def setConfig(config):
print(traceback.format_exc())
def resolve_model_to_use(model_name:str, model_type:str, model_dir:str, model_extensions:list, default_models=[]):
config = getConfig()
model_dirs = [os.path.join(MODELS_DIR, model_dir), SD_DIR]
if not model_name: # When None try user configured model.
config = getConfig()
# config = getConfig()
if 'model' in config and model_type in config['model']:
model_name = config['model'][model_type]
if model_name:
is_sd2 = config.get('test_sd2', False)
if model_name.startswith('sd2_') and not is_sd2: # temp hack, until SD2 is unified with 1.4
print('ERROR: Cannot use SD 2.0 models with SD 1.0 code. Using the sd-v1-4 model instead!')
model_name = 'sd-v1-4'
# Check models directory
models_dir_path = os.path.join(MODELS_DIR, model_dir, model_name)
for model_extension in model_extensions:
@@ -181,6 +192,12 @@ def resolve_vae_to_use(model_name:str=None):
except:
return None
def resolve_hypernetwork_to_use(model_name:str=None):
try:
return resolve_model_to_use(model_name, model_type='hypernetwork', model_dir='hypernetwork', model_extensions=HYPERNETWORK_MODEL_EXTENSIONS, default_models=[])
except:
return None
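resolve_hypernetwork_to_use reuses the same lookup pattern as the checkpoint and VAE resolvers: try each supported extension inside the model-type directory and fall back to None if nothing matches. A simplified sketch of that search, with an invented helper name and example paths:

import os

def find_model_file(models_dir, model_name, extensions):
    # Try '<models_dir>/<model_name><ext>' for each supported extension.
    for ext in extensions:
        candidate = os.path.join(models_dir, model_name + ext)
        if os.path.exists(candidate):
            return candidate
    return None

# e.g. find_model_file('models/hypernetwork', 'anime-style', ['.pt'])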
class SetAppConfigRequest(BaseModel):
update_branch: str = None
render_devices: Union[List[str], List[int], str, int] = None
@@ -188,6 +205,7 @@ class SetAppConfigRequest(BaseModel):
ui_open_browser_on_start: bool = None
listen_to_network: bool = None
listen_port: int = None
test_sd2: bool = None
@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
@@ -208,6 +226,8 @@ async def setAppConfig(req : SetAppConfigRequest):
if 'net' not in config:
config['net'] = {}
config['net']['listen_port'] = int(req.listen_port)
if req.test_sd2 is not None:
config['test_sd2'] = req.test_sd2
try:
setConfig(config)
@@ -230,18 +250,20 @@ def is_malicious_model(file_path):
return False
except Exception as e:
print('error while scanning', file_path, 'error:', e)
return False
known_models = {}
def getModels():
models = {
'active': {
'stable-diffusion': 'sd-v1-4',
'vae': '',
'hypernetwork': '',
},
'options': {
'stable-diffusion': ['sd-v1-4'],
'vae': [],
'hypernetwork': [],
},
}
@@ -255,9 +277,14 @@ def getModels():
if not file.endswith(model_extension):
continue
if is_malicious_model(os.path.join(models_dir, file)):
models['scan-error'] = file
return
model_path = os.path.join(models_dir, file)
mtime = os.path.getmtime(model_path)
mod_time = known_models[model_path] if model_path in known_models else -1
if mod_time != mtime:
if is_malicious_model(model_path):
models['scan-error'] = file
return
known_models[model_path] = mtime
model_name = file[:-len(model_extension)]
models['options'][model_type].append(model_name)
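The known_models dict above memoises the malware scan by file modification time, so an unchanged model file is not re-scanned on every model listing. A condensed sketch of that check, where scan_for_malicious stands in for the picklescan-based is_malicious_model call:

import os

known_models = {}

def scan_for_malicious(path):
    # Stand-in for is_malicious_model() / picklescan.
    return False

def list_if_safe(path):
    mtime = os.path.getmtime(path)
    if known_models.get(path, -1) != mtime:      # new or changed file
        if scan_for_malicious(path):
            return False                         # real code records a scan-error and aborts
        known_models[path] = mtime               # remember the clean scan
    return True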
@@ -268,7 +295,7 @@ def getModels():
# custom models
listModels(models_dirname='stable-diffusion', model_type='stable-diffusion', model_extensions=STABLE_DIFFUSION_MODEL_EXTENSIONS)
listModels(models_dirname='vae', model_type='vae', model_extensions=VAE_MODEL_EXTENSIONS)
listModels(models_dirname='hypernetwork', model_type='hypernetwork', model_extensions=HYPERNETWORK_MODEL_EXTENSIONS)
# legacy
custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt')
if os.path.exists(custom_weight_path):
@@ -286,6 +313,17 @@ def getUIPlugins():
return plugins
def getIPConfig():
try:
ips = socket.gethostbyname_ex(socket.gethostname())
ips[2].append(ips[0])
return ips[2]
except Exception as e:
print(e)
print(traceback.format_exc())
return []
@app.get('/get/{key:path}')
def read_web_data(key:str=None):
if not key: # /get without parameters, stable-diffusion easter egg.
@@ -295,11 +333,14 @@ def read_web_data(key:str=None):
if config is None:
config = APP_CONFIG_DEFAULTS
return JSONResponse(config, headers=NOCACHE_HEADERS)
elif key == 'devices':
elif key == 'system_info':
config = getConfig()
devices = task_manager.get_devices()
devices['config'] = config.get('render_devices', "auto")
return JSONResponse(devices, headers=NOCACHE_HEADERS)
system_info = {
'devices': task_manager.get_devices(),
'hosts': getIPConfig(),
}
system_info['devices']['config'] = config.get('render_devices', "auto")
return JSONResponse(system_info, headers=NOCACHE_HEADERS)
elif key == 'models':
return JSONResponse(getModels(), headers=NOCACHE_HEADERS)
elif key == 'modifiers': return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=NOCACHE_HEADERS)
@@ -317,34 +358,24 @@ def ping(session_id:str=None):
# Alive
response = {'status': str(task_manager.current_state)}
if session_id:
task = task_manager.get_cached_task(session_id, update_ttl=True)
if task:
response['task'] = id(task)
if task.lock.locked():
response['session'] = 'running'
elif isinstance(task.error, StopAsyncIteration):
response['session'] = 'stopped'
elif task.error:
response['session'] = 'error'
elif not task.buffer_queue.empty():
response['session'] = 'buffer'
elif task.response:
response['session'] = 'completed'
else:
response['session'] = 'pending'
session = task_manager.get_cached_session(session_id, update_ttl=True)
response['tasks'] = {id(t): t.status for t in session.tasks}
response['devices'] = task_manager.get_devices()
return JSONResponse(response, headers=NOCACHE_HEADERS)
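With sessions tracking several tasks, the ping response now reports one status per task id instead of a single per-session status. A purely illustrative payload shape (the ids and status strings are invented, not taken from this diff):

# Illustrative shape of the new /ping response:
{
    'status': 'Online',
    'tasks': {140234891234: 'pending', 140234895678: 'completed'},
    'devices': {'config': 'auto'},   # plus whatever task_manager.get_devices() reports
}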
def save_model_to_config(ckpt_model_name, vae_model_name):
def save_model_to_config(ckpt_model_name, vae_model_name, hypernetwork_model_name):
config = getConfig()
if 'model' not in config:
config['model'] = {}
config['model']['stable-diffusion'] = ckpt_model_name
config['model']['vae'] = vae_model_name
config['model']['hypernetwork'] = hypernetwork_model_name
if vae_model_name is None or vae_model_name == "":
del config['model']['vae']
if hypernetwork_model_name is None or hypernetwork_model_name == "":
del config['model']['hypernetwork']
setConfig(config)
@@ -360,30 +391,33 @@ def update_render_devices_in_config(config, render_devices):
@app.post('/render')
def render(req : task_manager.ImageRequest):
try:
save_model_to_config(req.use_stable_diffusion_model, req.use_vae_model)
save_model_to_config(req.use_stable_diffusion_model, req.use_vae_model, req.use_hypernetwork_model)
req.use_stable_diffusion_model = resolve_ckpt_to_use(req.use_stable_diffusion_model)
req.use_vae_model = resolve_vae_to_use(req.use_vae_model)
req.use_hypernetwork_model = resolve_hypernetwork_to_use(req.use_hypernetwork_model)
new_task = task_manager.render(req)
response = {
'status': str(task_manager.current_state),
'queue': len(task_manager.tasks_queue),
'stream': f'/image/stream/{req.session_id}/{id(new_task)}',
'stream': f'/image/stream/{id(new_task)}',
'task': id(new_task)
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
raise HTTPException(status_code=500, detail=f'Rendering thread has died.') # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending, deny queueing more than one.
raise HTTPException(status_code=503, detail=f'Session {req.session_id} has an already pending task.') # HTTP503 Service Unavailable
except ConnectionRefusedError as e: # Unstarted task pending limit reached, deny queueing too many.
raise HTTPException(status_code=503, detail=str(e)) # HTTP503 Service Unavailable
except Exception as e:
print(e)
print(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
@app.get('/image/stream/{session_id:str}/{task_id:int}')
def stream(session_id:str, task_id:int):
@app.get('/image/stream/{task_id:int}')
def stream(task_id:int):
#TODO Move to WebSockets ??
task = task_manager.get_cached_task(session_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail='No request received.') # HTTP410 Gone
if (id(task) != task_id): raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected:{id(task)}, Received:{task_id}') # HTTP409 Conflict
task = task_manager.get_cached_task(task_id, update_ttl=True)
if not task: raise HTTPException(status_code=404, detail=f'Request {task_id} not found.') # HTTP404 NotFound
#if (id(task) != task_id): raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected:{id(task)}, Received:{task_id}') # HTTP409 Conflict
if task.buffer_queue.empty() and not task.lock.locked():
if task.response:
#print(f'Session {session_id} sending cached response')
@@ -393,22 +427,23 @@ def stream(session_id:str, task_id:int):
return StreamingResponse(task.read_buffer_generator(), media_type='application/json')
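Since the stream endpoint is now keyed by task id alone, a client can follow the URL returned by /render directly. A hypothetical client-side sketch; the host, port and request fields below are assumptions rather than values from this diff:

import requests

BASE_URL = 'http://localhost:9000'           # assumed local server address

# Minimal request body; the real ImageRequest carries many more fields.
payload = {'session_id': 'sess-1', 'prompt': 'an astronaut riding a horse'}
info = requests.post(f'{BASE_URL}/render', json=payload).json()

stream_url = BASE_URL + info['stream']       # '/image/stream/<task_id>'
with requests.get(stream_url, stream=True) as resp:
    for chunk in resp.iter_content(chunk_size=None):
        print(chunk[:80])                    # progress updates, then the final JSON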
@app.get('/image/stop')
def stop(session_id:str=None):
if not session_id:
def stop(task: int):
if not task:
if task_manager.current_state == task_manager.ServerStates.Online or task_manager.current_state == task_manager.ServerStates.Unavailable:
raise HTTPException(status_code=409, detail='Not currently running any tasks.') # HTTP409 Conflict
task_manager.current_state_error = StopAsyncIteration('')
return {'OK'}
task = task_manager.get_cached_task(session_id, update_ttl=False)
if not task: raise HTTPException(status_code=404, detail=f'Session {session_id} has no active task.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Session {session_id} task is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration('')
task_id = task
task = task_manager.get_cached_task(task_id, update_ttl=False)
if not task: raise HTTPException(status_code=404, detail=f'Task {task_id} was not found.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Task {task_id} is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration(f'Task {task_id} stop requested.')
return {'OK'}
@app.get('/image/tmp/{session_id}/{img_id:int}')
def get_image(session_id, img_id):
task = task_manager.get_cached_task(session_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail=f'Session {session_id} has not submitted a task.') # HTTP410 Gone
@app.get('/image/tmp/{task_id:int}/{img_id:int}')
def get_image(task_id: int, img_id: int):
task = task_manager.get_cached_task(task_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail=f'Task {task_id} could not be found.') # HTTP404 NotFound
if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
try:
img_data = task.temp_images[img_id]
@@ -435,9 +470,13 @@ class LogSuppressFilter(logging.Filter):
return True
logging.getLogger('uvicorn.access').addFilter(LogSuppressFilter())
# Check models and prepare cache for UI open
getModels()
# Start the task_manager
task_manager.default_model_to_load = resolve_ckpt_to_use()
task_manager.default_vae_to_load = resolve_vae_to_use()
task_manager.default_hypernetwork_to_load = resolve_hypernetwork_to_use()
def update_render_threads():
config = getConfig()