Compare commits

...

282 Commits

Author SHA1 Message Date
06b41aee58 Fix - reduce the amount of VRAM occupied when the program starts up; the increase had caused a regression and failures on GPUs with 4 GB or less of VRAM 2022-11-16 19:29:04 +05:30
2c861c65d4 Merge pull request #485 from cmdr2/beta
UI setting for preventing browser autostart
2022-11-16 12:45:51 +05:30
a59bac4b40 UI setting for preventing browser autostart 2022-11-16 12:43:46 +05:30
cf214bf367 Merge pull request #484 from cmdr2/main
Merge main
2022-11-16 12:29:09 +05:30
75724797f7 Don't show a 500 error when the config json file doesn't exist 2022-11-16 12:20:25 +05:30
d04aeb55ad Fix default render device 2022-11-16 12:16:46 +05:30
47bd6dc6b8 Fix render devices auto 2022-11-16 12:14:06 +05:30
5e0f525932 Merge pull request #483 from cmdr2/beta
v2.4.4
2022-11-16 12:10:01 +05:30
1f66daf2f3 Write the config script files only if necessary 2022-11-16 11:40:51 +05:30
ded9cb0358 Check if config contains update_branch before trying to write it to a script file 2022-11-16 11:36:04 +05:30
04f201933b space apart the stop button 2022-11-16 11:33:05 +05:30
f5ec1cb3a4 Don't show the list of files that have been copied on startup 2022-11-16 11:31:16 +05:30
6c23e3f534 Bump version 2022-11-16 11:19:42 +05:30
e99d54d1f6 Merge main 2022-11-16 11:19:10 +05:30
3c71200eb4 Update index.html 2022-11-15 16:06:50 +05:30
f124cf8318 Make the task config summary labels bold 2022-11-15 16:06:35 +05:30
9d2b944063 Remove unused variable 2022-11-15 13:18:00 +05:30
8e1ec5903b Don't throw an exception when an invalid device is being checked for compatibility. Report and return false 2022-11-15 12:41:10 +05:30
5cf763d51f Add a 'Save' button in settings, to avoid starting/stopping threads while a user is still modifying their GPU settings 2022-11-15 12:22:55 +05:30
3546859fe5 Bump version 2022-11-15 11:05:39 +05:30
6530e45178 Merge pull request #478 from madrang/beta
Changed update_render_threads to use SetAppConfigRequest.
2022-11-15 11:04:52 +05:30
07f0036b2b Merge pull request #476 from JeLuF/patch-2
Incr. Server State Validity to 90s
2022-11-15 10:19:45 +05:30
5237f55a71 Removed extra line, use only save_render_devices_to_config 2022-11-14 22:29:55 -05:00
a108e5067d Typos in comments. 2022-11-14 22:20:21 -05:00
a4a24b1a1a Fixed calling get_device_delta with a single cuda device inside config.json at boot. 2022-11-14 22:14:03 -05:00
ffe0eb1544 Changed update_render_threads to use SetAppConfigRequest to set which devices are active.
Keep ImageRequest.render_device for affinity only. (Send a task to an already active device.)
2022-11-14 21:54:24 -05:00
288e8a65f3 Incr. Server State Validity to 90s
By default, healthCheck() is run every 5s. On background tabs this interval may get extended - my tests have shown pings every 60s. The last ping was then older than 10s, so the condition in line 490 evaluated to `false` and the client tried to access the stream before the server was ready. Increasing the validity avoids this - at least until the browser runs the health check even less often.

See https://discord.com/channels/1014774730907209781/1041811939380178964/1041812021018120262 for the analysis.
2022-11-14 23:18:03 +01:00
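The check itself lives in the browser code (ui/media/js/main.js), but the timing logic in the analysis above is easy to illustrate. A minimal Python sketch with hypothetical names; the constants come from the commit message (5s ping interval, the old 10s validity window, the new 90s one):

```python
import time

HEALTH_PING_INTERVAL = 5      # seconds between healthCheck() calls, by default
SERVER_STATE_VALIDITY = 90    # raised from 10s by this commit

last_ping_time = time.time()  # updated whenever a health ping succeeds

def server_state_is_valid(now: float) -> bool:
    # The client trusts its cached server state only while the last ping is
    # recent. With a 10s window, a throttled background tab (pinging every
    # ~60s) always fails this check and falls through to accessing the
    # stream before the server is ready; 90s tolerates the throttling.
    return (now - last_ping_time) < SERVER_STATE_VALIDITY
```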
0ebfbca93e Merge pull request #475 from JeLuF/beta
🔥Fix system info for CPU mode
2022-11-14 22:41:38 +05:30
f22f57495e Fix system info for CPU mode 2022-11-14 17:55:36 +01:00
8786a9d21d Fix border color of the image task container 2022-11-14 21:25:57 +05:30
f06a97d30b Move system info into settings 2022-11-14 21:21:48 +05:30
2329c47faf Bump version 2022-11-14 21:13:38 +05:30
2967261acb Ensure that we only pick GPUs better than the current one during subsequent tasks 2022-11-14 21:13:24 +05:30
64ff1ecbb6 Formatting for mem free 2022-11-14 21:02:17 +05:30
8707f88c07 Show mem free info 2022-11-14 20:35:47 +05:30
36846618ec Allow configuring whether the browser is opened by default 2022-11-14 20:15:54 +05:30
0cb2f19e29 Mark multi GPU as experimental in the UI 2022-11-14 20:06:20 +05:30
125a50ae87 Include the gpu id in the gpu list and system info 2022-11-14 20:01:57 +05:30
9d37ea23f8 Bump version 2022-11-14 19:53:55 +05:30
31617ae340 Show a system info tab, which shows the active GPUs 2022-11-14 19:53:40 +05:30
950614fb81 Bump version 2022-11-14 19:42:57 +05:30
14bbd7b7ae Merge pull request #474 from JeLuF/beta
Add paste button next to copy button
2022-11-14 19:06:52 +05:30
257cd34101 Merge branch 'beta' into beta 2022-11-14 19:06:35 +05:30
39814a89b6 Fix - setting can be null sometimes (autosave) 2022-11-14 18:09:25 +05:30
24fbbf8aa8 Remove unused variables 2022-11-14 16:26:16 +05:30
338ceffa6d Use 'auto' as the default render_device 2022-11-14 15:14:58 +05:30
371e104b00 Pick the device id 2022-11-14 13:43:37 +05:30
d5aba8eaf1 Show free/total mem while starting up 2022-11-14 13:40:55 +05:30
1d2b3a4ed8 Hide/show the GPUs list depending on whether auto is selected 2022-11-14 13:14:33 +05:30
f904945d40 Disable the GPU list if auto is enabled 2022-11-14 13:02:36 +05:30
027b2e1b88 Use the 65th percentile of free_mem for GPU selection, instead of the 75th percentile 2022-11-14 12:26:21 +05:30
d79eb5e1a6 Typo 2022-11-14 11:51:56 +05:30
f6651b03b5 Workaround to run gfpgan on cuda:0 even if it's not enabled in the multi-gpu setup 2022-11-14 11:51:18 +05:30
5f880a179c Remove idle CPU unloading (when GPUs are active), because now a CPU can never be used along with GPUs 2022-11-14 11:24:30 +05:30
ea03fd22db Start on multiple GPUs by default (top 75th percentile by free_mem); UI selection for 'cpu' or 'auto' or a list of specific GPUs, which is now linked to the backend; dynamically start/stop render threads for the devices, without requiring a full program restart 2022-11-14 11:23:22 +05:30
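One plausible reading of that auto-selection rule, as a Python sketch (hypothetical names; the real logic lives in the backend's device-selection code, and the threshold is lowered from 75 to 65 a few commits later):

```python
import torch

def auto_pick_devices(threshold: float = 0.75) -> list[str]:
    """Pick every GPU whose free VRAM is in the top band, read here as
    'at least `threshold` of the freest device's free memory'."""
    if not torch.cuda.is_available():
        return ['cpu']
    free_mem = {}
    for i in range(torch.cuda.device_count()):
        free, _total = torch.cuda.mem_get_info(i)  # bytes
        free_mem[f'cuda:{i}'] = free
    best = max(free_mem.values())
    return [dev for dev, free in free_mem.items() if free >= best * threshold]
```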
e561e4de0b Visual feedback for the copy and paste icons 2022-11-14 01:58:24 +01:00
1c3d5cd851 Add paste button next to copy button 2022-11-14 01:23:04 +01:00
a19ba40672 Typo 2022-11-12 13:31:59 +05:30
3983cb001f Save the VAE model to the metadata text file 2022-11-12 13:29:24 +05:30
78b464b404 Merge pull request #464 from madrang/beta
Always return a byte buffer. Sending the picture as URL text fails in some browsers.
2022-11-12 11:51:52 +05:30
aa21115e26 Always return a byte buffer. Sending the picture as URL text fails in some browsers. 2022-11-11 20:44:39 -05:00
a39f845835 current_vae_path needs to be global 2022-11-11 19:30:33 +05:30
3fdd8d91e2 Handle device init failures and record them as an error, if the GPU has less than 3 GB of VRAM 2022-11-11 16:13:27 +05:30
c13bccc7ae Fix the error where a device named 'None' would get assigned for incompatible GPUs 2022-11-11 15:43:20 +05:30
b4f7d6bf25 Bump js version 2022-11-11 15:12:04 +05:30
fa0c2f7138 Temp change to get beta working and use a single GPU until the rest of the changes come through 2022-11-11 15:09:25 +05:30
453cc2a951 Bump version 2022-11-11 14:46:27 +05:30
bd56795c62 Switch to using cuda:N instead of N (integer device ids) 2022-11-11 14:46:05 +05:30
2c54b7f289 Remove the WIP line for render devices 2022-11-11 14:43:14 +05:30
cd5f847b55 Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-11 12:03:14 +05:30
a25544baea Fix the editor width on Chrome 2022-11-11 12:02:58 +05:30
f954542dda Merge pull request #461 from JeLuF/dontleave
Add event listener beforeunload
2022-11-11 10:58:15 +05:30
9fec7d236c Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-11 10:48:47 +05:30
67656accf8 Bump css version. This is annoying 2022-11-11 10:48:30 +05:30
64952a536c Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-11 10:47:52 +05:30
65e0d5f511 Attempt to fix horizontal resizing of the prompt textbox, thanks @Bilbo 2022-11-11 10:44:52 +05:30
5a06946469 Add event listener beforeunload
When closing the window, a warning is shown if there are any render results.
2022-11-10 23:23:20 +01:00
baef31b2c7 Send 'auto' as the render_device from the UI
(if no GPU is selected and CPU is unchecked)
2022-11-10 22:23:15 +05:30
b9a12d1562 Restrict device selection id to 'cpu' or integers (and 'auto' in the initial device selection functions) 2022-11-10 20:03:11 +05:30
3f26d03166 Show GPU list in the UI only if the PC has more than 1 GPU 2022-11-10 16:34:01 +05:30
1fed3ad532 Don't propagate events in the Stop Task button 2022-11-10 15:33:39 +05:30
929b245f5f Merge branch 'beta' of github.com:cmdr2/stable-diffusion-ui into beta 2022-11-10 14:59:11 +05:30
0da6354825 Press Ctrl+Enter to start a task 2022-11-10 14:59:01 +05:30
716a28891d Merge pull request #460 from JeLuF/helppage
Add Wiki TOC to the Help&Community tab
2022-11-10 13:17:26 +05:30
93a2e91694 Use theme variable for bottom border design 2022-11-10 00:44:42 +01:00
4913dc1aad Replace hr by border-bottom 2022-11-09 23:57:48 +01:00
087df18fea Add Wiki links to help&community page 2022-11-09 23:42:56 +01:00
058ce6fe82 UI-side changes for selecting multiple GPUs, and keeping the Use CPU option synchronized with the backend. This change isn't ready to be shipped, it still needs python-side changes to support the req.render_device config 2022-11-09 19:17:44 +05:30
087c10d52d Sort models by name 2022-11-09 17:35:55 +05:30
18292e447c Make the models dir if required 2022-11-09 16:28:58 +05:30
6c1dda47c0 Don't change the page when something other than an image or text file is dropped into the page (or an image is dropped outside the init image box) 2022-11-09 15:21:41 +05:30
ad1fc8f3d8 Bump version 2022-11-09 13:47:21 +05:30
bca98269bb Fix a bug where the custom image modifiers button would close the modifiers panel 2022-11-09 13:46:50 +05:30
1bebaf933d Bring back the old style panels for image settings and modifiers 2022-11-09 13:43:43 +05:30
166eb996a9 Bump versions 2022-11-09 12:26:27 +05:30
10fae34754 Bump js/css versions 2022-11-09 12:25:16 +05:30
aa4d97e8df Merge pull request #458 from mdiller/mdiller_ui_reorganize
UI Reorganization & Adding Tabs
2022-11-09 12:22:38 +05:30
dbbb9d7877 Temporarily remove the default-on behavior for GFPGAN, until the CPU version is fixed 2022-11-09 11:25:50 +05:30
3ff213b3e8 removed status on mobile 2022-11-08 21:51:12 -08:00
69c7f22053 Merge branch 'beta' into mdiller_ui_reorganize 2022-11-08 21:22:22 -08:00
75a964167a hid the now-shown collapsible handle 2022-11-08 21:00:56 -08:00
c5768c81e1 Merge pull request #453 from patriceac/beta
Fix modifier and system settings popup position
2022-11-09 10:23:12 +05:30
4eb2b818e7 shrank system settings a bit so it fits on mobile 2022-11-08 20:47:55 -08:00
f742aad810 Merge pull request #446 from JeLuF/sd-ui-bind
Protect SD_UI_BIND_PORT and SD_UI_BIND_IP in config files
2022-11-09 10:09:28 +05:30
d061eb2c64 updated to work decently for mobile 2022-11-08 20:37:49 -08:00
69aa115178 updated the about tab to be help and community, and fixed footer to act nicely 2022-11-08 20:19:31 -08:00
e175b87384 updated so tabs work now, and we have a settings tab and an about tab 2022-11-08 19:54:41 -08:00
f216ee739a updated with latest updates for this support 2022-11-08 19:22:14 -08:00
16d6644573 Merge pull request #455 from JeLuF/util-js
Fix: Uncaught TypeError: Cannot set properties of null
2022-11-09 08:22:46 +05:30
38afc6e6f8 Fix: Uncaught TypeError: Cannot set properties of null 2022-11-08 19:05:28 +01:00
8f1d214b12 Bring back CPU unloading of models when idle for a while (applies only when GPUs are present) 2022-11-08 19:23:35 +05:30
51fb1a43de Temporarily disable the idle CPU unloading behavior, since it's not clear whether it'll reload the model if a future request for the CPU is received after it has unloaded the model 2022-11-08 19:02:21 +05:30
a86b6bfbd6 Fix a bug with drag-and-drop where the upscale dropdown would not get enabled/disabled based on the setting 2022-11-08 18:46:55 +05:30
1176ddcc85 Fix a bug in drag-and-drop where an empty Negative Prompt line would result in the next line getting assigned to negative prompts; simplify the drag-and-drop text file parsing logic to use a single algorithm - the files are small enough that we don't need over-optimization that confuses new developers 2022-11-08 18:44:11 +05:30
fa080e380c Fix a bug where images could no longer be dragged and dropped onto the initial image box 2022-11-08 18:14:26 +05:30
57c3acd9d8 Single line comment for Live Preview 2022-11-08 17:51:51 +05:30
302cf5b10b Show a tooltip over the ? help buttons 2022-11-08 17:49:46 +05:30
e2a9e81dbc Show tooltips on 'Copy Image Settings' 2022-11-08 17:40:47 +05:30
b1cf7391ce Add links to help docs for certain UI elements 2022-11-08 17:19:20 +05:30
9bc7521de0 Make custom VAE an Image Setting, rather than a System Setting; Don't load a VAE into memory by default 2022-11-08 16:54:15 +05:30
a68ebd2b76 Fixing the popup position on larger screens
Fixing the popup position on larger screens; Smaller screens still get the current rendering experience.
2022-11-08 02:17:26 -08:00
47f7c938ae Update main.css 2022-11-07 23:15:59 -08:00
67cca3bc00 Print the devices for which rendering threads have started; Prettier print of the model data 2022-11-07 18:26:10 +05:30
90b1609d4e device_selection is already a string, since we've used string functions before this line 2022-11-07 18:08:43 +05:30
abbfae2fc0 Simplify the logic used for displaying the GFPGAN warning 2022-11-07 17:55:27 +05:30
b52b854270 Merge pull request #444 from madrang/beta
Remove prompt_strength and init_image when not using in-painting
2022-11-07 14:43:48 +05:30
58b759f652 Fix tabs to spaces 2022-11-06 16:50:23 +01:00
74ca756a53 Protect SD_UI_BIND_PORT and SD_UI_BIND_IP in config files 2022-11-06 00:27:11 +01:00
3576214920 Remove prompt_strength and init_image when not using in-painting 2022-11-05 13:39:19 -04:00
f964fe3750 Add on/off support for parsing boolean. 2022-11-05 13:33:38 -04:00
749c72e6a6 Fix https://github.com/cmdr2/stable-diffusion-ui/issues/441 - numerical validation 2022-11-04 19:48:34 +05:30
c3129a40f1 Merge pull request #440 from madrang/dragNdrop
Drag&Drop Fixes
2022-11-04 09:13:14 +05:30
d04aa89812 Fix 'Use Upscaling' dropdown getting blank on False. 2022-11-03 20:34:51 -04:00
d5f854d376 Fix use_face_correction not disabling on false 2022-11-03 20:34:12 -04:00
6c57fa078b Merge pull request #437 from madrang/dragNdrop
Requested fixes for Drag&Drop
2022-11-03 13:13:41 +05:30
c3cc75feff Adds a list of properties to not export by default. 2022-11-03 03:16:20 -04:00
d2e6011089 Windows paths... 2022-11-03 03:12:11 -04:00
5a18144366 Enable/disable seedField when updating randomSeedField.checked 2022-11-03 03:11:58 -04:00
8a0a22bfb0 Merge pull request #427 from madrang/dragNdrop
Add support for drag&drop for the text files made by the backend
2022-11-03 11:41:21 +05:30
950b226374 Moved copy icon css to main.css 2022-11-03 02:09:42 -04:00
74e64a4387 Merge pull request #435 from JeLuF/sanitize
Sanitize session IDs before using them as path components
2022-11-03 10:57:12 +05:30
59e4c1cf79 Sanitize session IDs before using them as path components 2022-11-03 00:43:44 +01:00
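A session ID arrives from the client and ends up as a folder name on disk, so it has to be cleaned before use. A hypothetical sketch of the kind of sanitization this fix implies (not the repo's exact code):

```python
import re
from pathlib import Path

def sanitized_session_path(base_dir: str, session_id: str) -> Path:
    # Replace path separators and other unsafe characters with '_'.
    safe_id = re.sub(r'[^a-zA-Z0-9._-]', '_', session_id)
    path = (Path(base_dir) / safe_id).resolve()
    # Catch anything that still escapes base_dir (e.g. a bare '..').
    if Path(base_dir).resolve() not in path.parents:
        raise ValueError(f'invalid session id: {session_id!r}')
    return path
```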
045ad78bb9 Added calls to update sliders. 2022-11-02 10:53:48 -04:00
c0350e5be7 Moved file ext to a var. 2022-11-02 10:45:51 -04:00
2b3e38f77e Merge pull request #421 from madrang/beta
Fix plugins needing to specify many params or they would be missing in the render request.
2022-11-02 12:34:45 +05:30
d04fe5d582 Increase CSS version 2022-11-02 12:23:36 +05:30
17ab4caa5e Merge pull request #426 from ayunami2000/beta
Improve UI on mobile devices
2022-11-02 12:22:18 +05:30
976bc727dd Merge pull request #422 from madrang/device-select
Implement complete device selection in the backend.
2022-11-02 12:05:59 +05:30
484e53cc08 made first large swathe of changes for ui reorganization 2022-11-01 23:03:05 -07:00
b09b80933d Print the device name on task start and completion, to remove any doubt about which device picked up the task. 2022-11-01 22:28:10 -04:00
93b3419737 Better human-formatted JSON 2022-11-01 04:54:38 -04:00
19290fe467 Merge pull request #431 from JeLuF/patch-8
Copy CUDA_VISIBLE_DEVICES to config.*, if it has been set
2022-11-01 13:40:11 +05:30
d2f679030b Don't put CUDA_VISIBLE_DEVICES hints if it's already set 2022-11-01 01:16:29 +01:00
268d7495cc Naming... 2022-10-31 01:13:04 -04:00
ce16e61e63 Adds a copy as JSON button. 2022-10-31 01:02:23 -04:00
f92bca58fa Line endings... 2022-10-31 01:01:56 -04:00
83d541b60d Fixed model parsing... 2022-10-30 23:41:26 -04:00
965efc3a13 Restore old values if invalid values for the dropdown were used. 2022-10-30 23:35:42 -04:00
d656c34bd4 Add support for drag&drop for the text files made by the backend and also supports JSON. 2022-10-30 23:21:39 -04:00
7f151cbeba Copy CUDA_VISIBLE_DEVICES to config.*, if it has been set
Don't delete CUDA_VISIBLE_DEVICES settings when generating a new config file
2022-10-31 00:48:18 +01:00
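The intent of the fix is that regenerating the config scripts must carry an existing CUDA_VISIBLE_DEVICES setting forward instead of silently dropping it. A sketch under assumed file names, tying in the "only write update_branch if the config contains it" commit near the top of this list:

```python
import os

def write_config_scripts(update_branch=None):
    bat_lines, sh_lines = [], []
    if update_branch:  # only written when the config actually contains it
        bat_lines.append(f'@set update_branch={update_branch}')
        sh_lines.append(f'export update_branch={update_branch}')
    visible = os.environ.get('CUDA_VISIBLE_DEVICES')
    if visible:  # copy the user's setting through instead of deleting it
        bat_lines.append(f'@set CUDA_VISIBLE_DEVICES={visible}')
        sh_lines.append(f'export CUDA_VISIBLE_DEVICES={visible}')
    with open('config.bat', 'w') as f:
        f.writelines(line + '\n' for line in bat_lines)
    with open('config.sh', 'w') as f:
        f.writelines(line + '\n' for line in sh_lines)
```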
bc2f9204e9 Improve UI on mobile devices 2022-10-30 18:16:31 -04:00
a922a93016 Can work with one or more params; doesn't need a minimum of two.
Still works just the same.
2022-10-30 14:09:12 -04:00
eb596ba866 Allow start_render_thread to proceed faster in case of failure. 2022-10-30 06:04:06 -04:00
2208545612 Don't display this warning if on CPU. 2022-10-30 05:39:45 -04:00
c687091ce9 Only return valid data for alive threads. 2022-10-30 01:38:32 -04:00
eb994716e6 Indentation... 2022-10-30 01:33:17 -04:00
70acc8a7c0 Syntax... 2022-10-29 19:02:07 -04:00
bf97781232 Don't let users register the same device twice. 2022-10-29 18:57:31 -04:00
099727d671 Added auto unload to CPU if GPUs are active. 2022-10-29 18:57:10 -04:00
6229cdb1ba Added a missing device_name 2022-10-29 17:47:45 -04:00
b7a663ed20 Implement complete device selection in the backend. 2022-10-29 17:34:53 -04:00
3bd97352ba Don't reset reqBody; only replace it using req, since we use a new task object created from UI inputs.
Fix plugins needing to specify many params or they would be missing in the render request.
2022-10-29 14:47:58 -04:00
5e22360cb1 Change the JS/CSS version 2022-10-29 18:10:23 +05:30
840348b4eb fix: change to the correct working directory
changes to the directory containing `start.sh` prior to activating the conda environment

this allows you to run the program without first changing to the correct directory, eg: `$ ~/bin/stable-diffusion-ui/start.sh`
2022-10-29 15:09:04 +05:30
cf04738594 Merge pull request #420 from madrang/beta
Missing .lower() caused CUDA:0 to fail the check where cuda:0 works.
2022-10-29 15:01:13 +05:30
03757632cf Missing .lower() caused CUDA:0 to fail the check where cuda:0 works. 2022-10-29 04:33:14 -04:00
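Taken together with the earlier switch from integer IDs to 'cuda:N' strings, the device-ID handling sketched in Python (hypothetical helper, not the repo's exact code):

```python
def normalize_device_id(device) -> str:
    """Make user-supplied device ids comparable: lower-case them (so
    'CUDA:0' passes the same check as 'cuda:0' - the missing .lower()
    above) and expand bare integers to the 'cuda:N' form."""
    if device is None:
        return 'auto'
    device = str(device).lower()
    if device.isdigit():
        return f'cuda:{device}'
    return device
```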
e818f5a93f Bump version 2022-10-29 12:29:28 +05:30
ab9b08770a Merge pull request #417 from mdiller/mdiller_parameters
Moved System Settings & Reworked into "Parameters"
2022-10-29 12:04:21 +05:30
40df8b68ad Merge pull request #418 from madrang/beta
Changed failure in start_render_thread to return false instead of throwing exception
2022-10-29 11:33:36 +05:30
9f5202fee3 Improved readability and comments. 2022-10-29 00:43:02 -04:00
902ccbd203 Don't try to start cuda:0 if auto used cpu mode. 2022-10-29 00:36:26 -04:00
4675da4d16 Display warning on start failure.
Remove the exception spam and continue starting the other devices.
2022-10-28 22:53:55 -04:00
86da27a7a1 Moved wait outside lock and now returns false on failure. 2022-10-28 22:52:00 -04:00
fc2a6567da Moved import before use of runtime.thread_data.device 2022-10-28 22:51:04 -04:00
7c611d9b62 added some shadow and animation to popups 2022-10-28 18:41:41 -07:00
784c7465d1 updated settings labels 2022-10-28 18:31:46 -07:00
301af7bd7a added parameters 2022-10-28 18:25:54 -07:00
09c11a385d normalized popups 2022-10-28 16:48:32 -07:00
ef6f491d94 Write lines, please 2022-10-28 22:42:11 +05:30
9dcef00fbb New lines for config.sh 2022-10-28 22:35:04 +05:30
e781e5dd43 Need to wrap the filter() output in a list 2022-10-28 22:30:05 +05:30
d3e672d811 Replace os-specific newlines with writelines() 2022-10-28 22:23:52 +05:30
dad1554ec2 Fix a bug where config.bat would not get written properly 2022-10-28 21:07:18 +05:30
30bf96c6cd Fix a bug where beta wouldn't switch properly because the config.bat/sh files weren't being written 2022-10-28 21:00:25 +05:30
a8c16e39b8 Support custom VAE files; Use vae-ft-mse-840000-ema-pruned as the default VAE, which can be overridden by putting a .vae.pt file inside models/stable-diffusion with the same name as the ckpt model file. The UI / System Settings allows setting the default VAE model to use 2022-10-28 20:06:44 +05:30
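The override rule described here is simple enough to sketch (hypothetical helper; the default VAE path matches the installer changes further down this page):

```python
import os

DEFAULT_VAE = os.path.join('models', 'vae', 'vae-ft-mse-840000-ema-pruned.ckpt')

def resolve_vae_path(ckpt_path: str) -> str:
    # A '<model>.vae.pt' file sitting next to the .ckpt overrides the
    # default VAE; otherwise fall back to the bundled one.
    override = os.path.splitext(ckpt_path)[0] + '.vae.pt'
    return override if os.path.exists(override) else DEFAULT_VAE
```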
79a7cd2938 Merge pull request #414 from madrang/beta
Set online after preload. Move indent to include it in the if check.
2022-10-28 13:54:44 +05:30
26562e445f Set online after preload. Move indent to include it in the if check. 2022-10-28 04:09:34 -04:00
2432491bfc Merge pull request #411 from madrang/beta
Apply force_full_precision if it was set on device_select.
2022-10-28 11:50:16 +05:30
a09ce3e026 Merge pull request #412 from mdiller/mdiller_progressbar
Quick Fix for Progressbar Behavior
2022-10-28 11:49:42 +05:30
c52fc843f6 Comment... 2022-10-28 02:09:11 -04:00
02240bda25 Moved up to not duplicate if statement. 2022-10-28 02:05:48 -04:00
0185ef7c83 Apply force_full_precision if it was set on device_select. 2022-10-28 02:02:09 -04:00
7d29b9901c updated progressbar to end more consistently 2022-10-27 22:47:08 -07:00
ae553dfed3 Merge pull request #410 from madrang/beta
Only default to cpu on auto or current when cuda not available.
2022-10-28 10:43:28 +05:30
71c6beadb4 Only default to cpu on auto or current.
Not when a specific device was requested.
2022-10-28 01:09:38 -04:00
d939629c09 Bump version 2022-10-28 10:39:23 +05:30
0a569146a8 Merge pull request #406 from mdiller/mdiller_progressbar
Fixed/Implemented Progressbar
2022-10-28 10:34:03 +05:30
d5a012d49f Merge pull request #407 from madrang/beta
Wait until device is fully ready before proceeding.
2022-10-28 10:28:16 +05:30
22a11769fa Enable preload on cpu when no other devices are alive. 2022-10-27 21:57:50 -04:00
7dc7ba9977 Removed old comments. 2022-10-27 21:47:44 -04:00
fa4059a4b9 Removed all async code, since start_render_thread now waits for init to complete, making it unnecessary. 2022-10-27 21:40:16 -04:00
7f4786f9dd Wait until device is fully ready before proceeding. 2022-10-27 20:27:21 -04:00
5a6e7a46d1 added progressbar 2022-10-27 17:03:09 -07:00
0dfaf9159d Put back the check to only preload on GPU 2022-10-28 00:04:33 +05:30
5d8bda1178 Merge pull request #404 from cmdr2/multi-gpu
Support for multiple GPUs, and improvements to RAM and VRAM usage
2022-10-27 23:52:11 +05:30
9ad1e0d529 Allow the user to specify any disk path to the model, in the config 2022-10-27 23:39:29 +05:30
389e3397ec Preload the model even in the CPU mode 2022-10-27 23:17:41 +05:30
284b95213e Fix a bug where the device wouldn't get set if no cuda-compatible hardware was found 2022-10-27 22:59:55 +05:30
952854f64e Revert 554650c18d 2022-10-27 22:59:17 +05:30
554650c18d Fix a bug where the device wouldn't get set if no cuda-compatible hardware was found 2022-10-27 22:51:45 +05:30
01a2fa7c2d Fix a bug where the default model would not load if the user hadn't already configured a custom model (e.g. in a fresh installation); Check for the model in the models/stable-diffusion folder first, before checking in the direct folder 2022-10-27 22:34:23 +05:30
7d03719816 Merge pull request #403 from cmdr2/main
Merge main
2022-10-27 20:50:09 +05:30
7c5bbca2fa Bump version 2022-10-27 20:49:05 +05:30
3fb5d886dc Merge pull request #398 from madrang/mGpu-crashHandling
mGpu crash handling
2022-10-27 13:44:26 +05:30
b57cd8d5c2 Merge pull request #397 from madrang/mGpu-fixtype
Fix TypeError: string indices must be integers
2022-10-27 13:44:19 +05:30
d3df113fb0 When reduced_memory is True, on a crash only move the model back to the CPU. 2022-10-26 16:52:31 -04:00
06c2ab045a Fix TypeError: string indices must be integers 2022-10-26 16:14:29 -04:00
ec14429238 Merge pull request #348 from madrang/multi-gpu
Multi gpu
2022-10-26 19:33:44 +05:30
ae40b6ba8c Missed an is_alive check in the conversion. 2022-10-25 03:00:50 -04:00
d482427e0d Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-25 02:51:31 -04:00
c41baf3aeb Moved img_id creation inside save image loop. 2022-10-25 02:10:52 -04:00
189d31cc29 Specify update_ttl on all get_cached_task calls. 2022-10-24 05:12:08 -04:00
fc875651d3 Removed unused vars 2022-10-23 05:00:21 -04:00
0d62123a0b Replaced missing gpu_name by device_name 2022-10-22 21:28:12 -04:00
28fed6281f Merge branch 'beta' into multi-gpu 2022-10-22 21:20:02 -04:00
8adf965d0b Formatting changes. 2022-10-22 19:02:02 -04:00
026dd38480 Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-22 18:07:22 -04:00
364e364429 Added get_cached_task to replace task_cache.tryGet in server.py
Now updates the cache TTL on the /stream and temp-image endpoints.
Keeps images alive longer while the browser keeps reading them.
2022-10-22 13:52:13 -04:00
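A sketch of what a TTL-refreshing cache lookup looks like (names and TTL value are illustrative, not the repo's exact code):

```python
import time

TASK_TTL = 15 * 60              # seconds; illustrative value
_task_cache = {}                # task_id -> (task, expires_at)

def get_cached_task(task_id, update_ttl: bool = False):
    entry = _task_cache.get(task_id)
    if entry is None:
        return None
    task, expires_at = entry
    if time.time() > expires_at:        # expired: evict and miss
        del _task_cache[task_id]
        return None
    if update_ttl:                      # reader is still interested:
        _task_cache[task_id] = (task, time.time() + TASK_TTL)
    return task                         # keep the task (and its images) alive
```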
46a46877ed Missing model_path replaced by model_name 2022-10-22 13:49:23 -04:00
2c1a897c4e Missing newline. 2022-10-22 12:50:53 -04:00
344dd92c85 Improved checks on '/render' requests 2022-10-22 12:29:01 -04:00
4167c65acf Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-22 12:17:12 -04:00
cd6d49860f Missing an 'r' in progress 2022-10-22 01:23:39 -04:00
8a10fcf7ea updated print statement. 2022-10-22 00:34:33 -04:00
3b5f96a133 Fixed stopping tasks and more cleaning. 2022-10-21 22:45:19 -04:00
ce2b711b1f Newlines... 2022-10-21 21:44:15 -04:00
667fb438cb Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu
# Conflicts:
#	ui/media/js/main.js
2022-10-21 21:10:02 -04:00
7befa94e6d More comments and cleanup. 2022-10-21 20:56:24 -04:00
88ef1a3c5b Moved time before model.to 2022-10-21 20:22:34 -04:00
ccb7a553c2 Memory improvements 2022-10-21 19:34:29 -04:00
1442748f58 When starting with the profiler, cuda devices are slower to init. 2022-10-21 03:53:26 -04:00
56ed4fe6f2 Fix VisualStudio Type Warning. 2022-10-21 01:30:49 -04:00
849d1d7ebd Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu
# Conflicts:
#	ui/media/js/main.js
#	ui/sd_internal/runtime.py
#	ui/server.py
2022-10-20 20:08:23 -04:00
fc8660df78 Faster response on invalid settings when CPU was specified with GFPGANer. 2022-10-19 05:19:16 -04:00
4e5ddca3bd Display the failure detail when there is one at that step.
Was checking the json object, not the server response.
2022-10-19 05:10:37 -04:00
3bdc90451a Don't preload on cpu. 2022-10-19 04:34:54 -04:00
a036b2981a Removed forgotten mention of CPU in message to user. 2022-10-19 04:31:57 -04:00
8fae83dab7 Print value to console for better debug from logs. 2022-10-19 04:26:09 -04:00
ef68e5b13d Added warning about validating config. 2022-10-19 04:16:46 -04:00
21afe077d7 Removed Cpu from the devices allowed to run GFPGANer.
Added clear error for the user.
2022-10-19 03:02:26 -04:00
3fc66ec525 Removed empty lines left over from merge. 2022-10-19 00:27:51 -04:00
0da0c6bd77 Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-19 00:26:09 -04:00
6098b196dc Text header, comments and better validations. 2022-10-18 23:58:55 -04:00
53cdeeff03 More fixes to devices changing names. 2022-10-18 21:08:04 -04:00
fcdb086daf Fixed is_alive to work with devices that can change name after init. 2022-10-18 20:33:37 -04:00
cfd6751777 Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-18 13:21:26 -04:00
5e461e9b6b Fixed is_alive with render_threads that can update the device name after starting. 2022-10-18 13:21:15 -04:00
940236b4a4 Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-18 03:23:42 -04:00
3c8692d06c Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu 2022-10-18 02:52:50 -04:00
04fe81e001 Merge branch 'HTTPException-fix' into multi-gpu
# Conflicts:
#	ui/server.py
2022-10-18 02:36:57 -04:00
578b3ba4f4 Force encoding to utf-8 on text file operations. Fixes #332
# Conflicts:
#	ui/server.py
2022-10-17 23:15:36 -04:00
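The fix itself is a one-argument change wherever text files are opened; a minimal sketch:

```python
def save_text_file(path: str, text: str) -> None:
    # Force utf-8 rather than the platform default (often cp1252 on
    # Windows), which mangled non-ASCII text - the cause of #332.
    with open(path, 'w', encoding='utf-8') as f:
        f.write(text)
```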
e24be913e5 Merge branch 'beta' into multi-gpu 2022-10-17 21:35:24 -04:00
4d3358ba66 Fixed file path bugs introduced by mistake and made img_id sequential based on time for better sorting of renders. 2022-10-17 21:29:14 -04:00
87f93b34a3 Fixed a typo when adding a comment. 2022-10-17 14:44:53 -04:00
c92129ac63 Improved detection of a missing cuda:0 and added a console warning about how to fix it. 2022-10-17 03:32:23 -04:00
554b67a2f0 Fixing bug in is_alive. 2022-10-17 01:05:51 -04:00
012243a880 Process GPU tasks on CPU when there are no cuda devices at all. 2022-10-17 01:05:27 -04:00
d4a348a2b2 Process GFPGANer on cuda:0 when possible, otherwise use cpu. 2022-10-16 23:12:46 -04:00
1d4c5cc96f Added a clear error response when submitting tasks that require GFPGANer while cuda:0 and cpu rendering are disabled. 2022-10-16 23:07:55 -04:00
41bfb96b6b Fixed bug in task_manager.is_alive and added way to check for first device. 2022-10-16 23:06:41 -04:00
994d62ac65 Added a clear error message when targeting CPU if not enabled in config. 2022-10-16 22:26:05 -04:00
7c72608e1c First draft for Multi-GPU support 2022-10-16 21:41:39 -04:00
19 changed files with 2703 additions and 919 deletions

View File

@@ -50,7 +50,7 @@ if "%update_branch%"=="" (
)
)
@xcopy sd-ui-files\ui ui /s /i /Y
@xcopy sd-ui-files\ui ui /s /i /Y /q
@copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
@copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y
@copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y

View File

@@ -199,7 +199,9 @@ call WHERE uvicorn > .tmp
if not exist "..\models\stable-diffusion" mkdir "..\models\stable-diffusion"
if not exist "..\models\vae" mkdir "..\models\vae"
echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
echo. > "..\models\vae\Put your VAE files here.txt"
@if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" (
@@ -329,6 +331,36 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@if exist "..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt" (
for %%I in ("..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt") do if "%%~zI" EQU "334695179" (
echo "Data files (weights) necessary for the default VAE (sd-vae-ft-mse-original) were already downloaded"
) else (
echo. & echo "The default VAE (sd-vae-ft-mse-original) file present at models\vae\vae-ft-mse-840000-ema-pruned.ckpt is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo.
del "..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt"
)
)
@if not exist "..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt" (
@echo. & echo "Downloading data files (weights) for the default VAE (sd-vae-ft-mse-original).." & echo.
@call curl -L -k https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt > ..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt
@if exist "..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt" (
for %%I in ("..\models\vae\vae-ft-mse-840000-ema-pruned.ckpt") do if "%%~zI" NEQ "334695179" (
echo. & echo "Error: The downloaded default VAE (sd-vae-ft-mse-original) file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for the default VAE (sd-vae-ft-mse-original). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for the default VAE (sd-vae-ft-mse-original). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
)
@>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo sd_weights_downloaded >> ..\scripts\install_status.txt

View File

@@ -159,7 +159,9 @@ fi
mkdir -p "../models/stable-diffusion"
mkdir -p "../models/vae"
echo "" > "../models/stable-diffusion/Put your custom ckpt files here.txt"
echo "" > "../models/vae/Put your VAE files here.txt"
if [ -f "sd-v1-4.ckpt" ]; then
model_size=`find "sd-v1-4.ckpt" -printf "%s"`
@@ -269,6 +271,38 @@ if [ ! -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
fi
if [ -f "../models/vae/vae-ft-mse-840000-ema-pruned.ckpt" ]; then
model_size=`find ../models/vae/vae-ft-mse-840000-ema-pruned.ckpt -printf "%s"`
if [ "$model_size" -eq "334695179" ]; then
echo "Data files (weights) necessary for the default VAE (sd-vae-ft-mse-original) were already downloaded"
else
printf "\n\nThe model file present at models/vae/vae-ft-mse-840000-ema-pruned.ckpt is invalid. It is only $model_size bytes in size. Re-downloading.."
rm ../models/vae/vae-ft-mse-840000-ema-pruned.ckpt
fi
fi
if [ ! -f "../models/vae/vae-ft-mse-840000-ema-pruned.ckpt" ]; then
echo "Downloading data files (weights) for the default VAE (sd-vae-ft-mse-original).."
curl -L -k https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt > ../models/vae/vae-ft-mse-840000-ema-pruned.ckpt
if [ -f "../models/vae/vae-ft-mse-840000-ema-pruned.ckpt" ]; then
model_size=`find ../models/vae/vae-ft-mse-840000-ema-pruned.ckpt -printf "%s"`
if [ ! "$model_size" -eq "334695179" ]; then
printf "\n\nError: The downloaded default VAE (sd-vae-ft-mse-original) file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for the default VAE (sd-vae-ft-mse-original). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for the default VAE (sd-vae-ft-mse-original). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
fi
if [ `grep -c sd_install_complete ../scripts/install_status.txt` -gt "0" ]; then
echo sd_weights_downloaded >> ../scripts/install_status.txt
echo sd_install_complete >> ../scripts/install_status.txt

View File

@@ -1,14 +1,15 @@
<!DOCTYPE html>
<html>
<head>
<title>Stable Diffusion UI</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" type="image/png" href="/media/images/favicon-16x16.png" sizes="16x16">
<link rel="icon" type="image/png" href="/media/images/favicon-32x32.png" sizes="32x32">
<link rel="stylesheet" href="/media/css/fonts.css?v=1">
<link rel="stylesheet" href="/media/css/themes.css?v=1">
<link rel="stylesheet" href="/media/css/main.css?v=3">
<link rel="stylesheet" href="/media/css/auto-save.css?v=2">
<link rel="stylesheet" href="/media/css/modifier-thumbnails.css?v=2">
<link rel="stylesheet" href="/media/css/themes.css?v=3">
<link rel="stylesheet" href="/media/css/main.css?v=17">
<link rel="stylesheet" href="/media/css/auto-save.css?v=5">
<link rel="stylesheet" href="/media/css/modifier-thumbnails.css?v=4">
<link rel="stylesheet" href="/media/css/fontawesome-all.min.css?v=1">
<link rel="stylesheet" href="/media/css/drawingboard.min.css">
<script src="/media/js/jquery-3.6.1.min.js"></script>
@@ -18,59 +19,39 @@
<div id="container">
<div id="top-nav">
<div id="logo">
<h1>Stable Diffusion UI <small>v2.3.5 <span id="updateBranchLabel"></span></small></h1>
<h1>Stable Diffusion UI <small>v2.4.6 <span id="updateBranchLabel"></span></small></h1>
</div>
<ul id="top-nav-items">
<li class="dropdown">
<span><i class="fa fa-comments icon"></i> Help & Community</span>
<ul id="community-links" class="dropdown-content">
<li><a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/" target="_blank"><i class="fa-solid fa-book fa-fw"></i> User guide</a></li>
<li><a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" target="_blank"><i class="fa-solid fa-circle-question fa-fw"></i> Usual problems and solutions</a></li>
<li><a href="https://discord.com/invite/u9yhsFmEkB" target="_blank"><i class="fa-brands fa-discord fa-fw"></i> Discord user community</a></li>
<li><a href="https://www.reddit.com/r/StableDiffusionUI/" target="_blank"><i class="fa-brands fa-reddit fa-fw"></i> Reddit community</a></li>
<li><a href="https://github.com/cmdr2/stable-diffusion-ui" target="_blank"><i class="fa-brands fa-github fa-fw"></i> Source code on GitHub</a></li>
</ul>
</li>
<li class="dropdown">
<div id="server-status">
<div id="server-status-color"></div>
<span id="server-status-msg">Stable Diffusion is starting..</span>
</div>
<div id="tab-container">
<span id="tab-main" class="tab active">
<span><i class="fa fa-image icon"></i> Generate</span>
</span>
<span id="tab-settings" class="tab">
<span><i class="fa fa-gear icon"></i> Settings</span>
<div id="system-settings" class="panel-box settings-box dropdown-content">
<ul id="system-settings-entries">
<li><b class="settings-subheader">System Settings</b></li>
<br/>
<li><label for="theme">Theme: </label><select id="theme" name="theme"><option value="theme-default">Default</option></select></li>
<li><input id="save_to_disk" name="save_to_disk" type="checkbox"> <label for="save_to_disk">Automatically save to <input id="diskPath" name="diskPath" size="40" disabled></label></li>
<li><input id="sound_toggle" name="sound_toggle" type="checkbox" checked> <label for="sound_toggle">Play sound on task completion</label></li>
<li><input id="turbo" name="turbo" type="checkbox" checked> <label for="turbo">Turbo mode <small>(generates images faster, but uses an additional 1 GB of GPU memory)</small></label></li>
<li><input id="use_cpu" name="use_cpu" type="checkbox"> <label for="use_cpu">Use CPU instead of GPU <small>(warning: this will be *very* slow)</small></label></li>
<li><input id="use_full_precision" name="use_full_precision" type="checkbox"> <label for="use_full_precision">Use full precision <small>(for GPU-only. warning: this will consume more VRAM)</small></label></li>
<li>
<input id="auto_save_settings" name="auto_save_settings" checked type="checkbox">
<label for="auto_save_settings">Automatically save settings <small>(settings restored on browser load)</small></label>
<br/>
<button id="configureSettingsSaveBtn">Configure</button>
</li>
<!-- <li><input id="allow_nsfw" name="allow_nsfw" type="checkbox"> <label for="allow_nsfw">Allow NSFW Content (You confirm you are above 18 years of age)</label></li> -->
<br/>
<li><input id="use_beta_channel" name="use_beta_channel" type="checkbox"> <label for="use_beta_channel">🔥Beta channel. Get the latest features immediately (but could be less stable). Please restart the program after changing this.</label></li>
</ul>
</div>
</li>
</ul>
</span>
<span id="tab-about" class="tab">
<span><i class="fa fa-comments icon"></i> Help & Community</span>
</span>
</div>
</div>
<div class="flex-container">
<div id="editor" class="col-fixed-10">
<div id="server-status">
<div id="server-status-color"></div>
<span id="server-status-msg">Stable Diffusion is starting..</span>
</div>
<div id="tab-content-wrapper">
<div id="tab-content-main" class="tab-content active flex-container">
<div id="editor">
<div id="editor-inputs">
<div id="editor-inputs-prompt" class="row">
<label for="prompt"><b>Enter Prompt</b></label> <small>or</small> <button id="promptsFromFileBtn">Load from a file</button>
<textarea id="prompt" class="col-free">a photograph of an astronaut riding a horse</textarea>
<input id="prompt_from_file" name="prompt_from_file" type="file" /> <!-- hidden -->
<label for="negative_prompt" class="collapsible" id="negative_prompt_handle">Negative Prompt <small>(optional)</small></label>
<label for="negative_prompt" class="collapsible" id="negative_prompt_handle">
Negative Prompt
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Writing-prompts#negative-prompts" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about Negative Prompts</span></i></a>
<small>(optional)</small>
</label>
<div class="collapsible-content">
<input id="negative_prompt" name="negative_prompt" placeholder="list the things to remove from the image (e.g. fog, green)">
</div>
@@ -87,7 +68,12 @@
</div>
<br/>
<input id="enable_mask" name="enable_mask" type="checkbox"> <label for="enable_mask">In-Painting (beta) <small>(select the area which the AI will paint into)</small></label>
<input id="enable_mask" name="enable_mask" type="checkbox">
<label for="enable_mask">
In-Painting (beta)
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Inpainting" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about InPainting</span></i></a>
<small>(select the area which the AI will paint into)</small>
</label>
<div id="inpaintingEditor"></div>
</div>
</div>
@@ -101,19 +87,19 @@
<button id="stopImage" class="secondaryButton">Stop All</button>
</div>
<div class="line-separator">&nbsp;</div>
<span class="line-separator"></span>
<div id="editor-settings" class="panel-box settings-box">
<div id="editor-settings" class="settings-box panel-box">
<h4 class="collapsible">
Image Settings
<i id="reset-image-settings" class="fa-solid fa-arrow-rotate-left">
<span class="simple-tooltip right">
<i id="reset-image-settings" class="fa-solid fa-arrow-rotate-left section-button">
<span class="simple-tooltip left">
Reset Image Settings
</span>
</i>
</h4>
<ul id="editor-settings-entries" class="collapsible-content">
<li><table>
<div id="editor-settings-entries" class="collapsible-content">
<div><table>
<tr><b class="settings-subheader">Image Settings</b></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="30000" onkeypress="preventNonNumericalInput(event)"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td><td><input id="num_outputs_total" name="num_outputs_total" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label><small>(total)</small></label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1" onkeypress="preventNonNumericalInput(event)"> <label for="num_outputs_parallel"><small>(in parallel)</small></label></td></tr>
@@ -121,6 +107,13 @@
<select id="stable_diffusion_model" name="stable_diffusion_model">
<!-- <option value="sd-v1-4" selected>sd-v1-4</option> -->
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about custom models</span></i></a>
</td></tr>
<tr class="pl-5"><td><label for="vae_model">Custom VAE:</i></label></td><td>
<select id="vae_model" name="vae_model">
<!-- <option value="" selected>None</option> -->
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about VAEs</span></i></a>
</td></tr>
<tr id="samplerSelection" class="pl-5"><td><label for="sampler">Sampler:</label></td><td>
<select id="sampler" name="sampler">
@@ -133,6 +126,7 @@
<option value="dpm2_a">dpm2_a</option>
<option value="lms">lms</option>
</select>
<a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip right">Click to learn more about samplers</span></i></a>
</td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td>
<select id="width" name="width" value="512">
@@ -189,12 +183,11 @@
<option value="png">png</option>
</select>
</td></tr>
</li></table>
<br/>
</table></div>
<div><ul>
<li><b class="settings-subheader">Render Settings</b></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, slightly slower image creation)</small></label></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, and slower image creation)</small></label></li>
<li class="pl-5"><input id="use_face_correction" name="use_face_correction" type="checkbox"> <label for="use_face_correction">Fix incorrect faces and eyes <small>(uses GFPGAN)</small></label></li>
<li class="pl-5">
<input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale image by 4x with </label>
@@ -204,12 +197,19 @@
</select>
</li>
<li class="pl-5"><input id="show_only_filtered_image" name="show_only_filtered_image" type="checkbox" checked> <label for="show_only_filtered_image">Show only the corrected/upscaled image</label></li>
</ul>
</ul></div>
</div>
</div>
<div id="editor-modifiers" class="panel-box">
<button id="modifier-settings-btn" title="Add custom modifiers"><i class="fa fa-gear"></i></button>
<h4 class="collapsible">Image Modifiers (art styles, tags etc)</h4>
<h4 class="collapsible">
Image Modifiers (art styles, tags etc)
<i id="modifier-settings-btn" class="fa-solid fa-gear section-button">
<span class="simple-tooltip left">
Add Custom Modifiers
</span>
</i>
</h4>
<div id="editor-modifiers-entries" class="collapsible-content">
<div id="editor-modifiers-entries-toolbar">
<label for="preview-image">Image Style:</label>
@@ -236,21 +236,76 @@
<button id="clear-all-previews" class="secondaryButton"><i class="fa-solid fa-trash-can"></i> Clear All</button>
</div>
</div>
</div>
</div>
<div id="save-settings-config" style="display:none">
<div id="tab-content-settings" class="tab-content">
<div id="system-settings" class="tab-content-inner">
<h1>System Settings</h1>
<table class="form-table"></table>
<br/>
<button id="save-system-settings-btn" class="primaryButton">Save</button>
<br/><br/>
<div>
<h3><i class="fa fa-microchip icon"></i> System Info</h3>
<div id="system-info"></div>
</div>
</div>
</div>
<div id="tab-content-about" class="tab-content">
<div class="tab-content-inner">
<div class="float-container">
<div class="float-child">
<h1>Help</h1>
<ul id="help-links">
<li><span class="help-section">Using the software</span>
<ul>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/How-To-Use" target="_blank"><i class="fa-solid fa-book fa-fw"></i> How to use</a>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Overview" target="_blank"><i class="fa-solid fa-list fa-fw"></i> UI Overview</a>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Writing-Prompts" target="_blank"><i class="fa-solid fa-pen-to-square fa-fw"></i> Writing prompts</a>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Inpainting" target="_blank"><i class="fa-solid fa-paintbrush fa-fw"></i> Inpainting</a>
</ul>
<li><span class="help-section">Installation</span>
<ul>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" target="_blank"><i class="fa-solid fa-circle-question fa-fw"></i> Troubleshooting</a>
</ul>
<li><span class="help-section">Downloadable Content</span>
<ul>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Custom-Models" target="_blank"><i class="fa-solid fa-images fa-fw"></i> Custom Models</a>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/UI-Plugins" target="_blank"><i class="fa-solid fa-puzzle-piece fa-fw"></i> UI Plugins</a>
<li> <a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-hand-sparkles fa-fw"></i> VAE Variational Auto Encoder</a>
</ul>
</ul>
</div>
<div class="float-child">
<h1>Community</h1>
<ul id="community-links">
<li><a href="https://discord.com/invite/u9yhsFmEkB" target="_blank"><i class="fa-brands fa-discord fa-fw"></i> Discord user community</a></li>
<li><a href="https://www.reddit.com/r/StableDiffusionUI/" target="_blank"><i class="fa-brands fa-reddit fa-fw"></i> Reddit community</a></li>
<li><a href="https://github.com/cmdr2/stable-diffusion-ui" target="_blank"><i class="fa-brands fa-github fa-fw"></i> Source code on GitHub</a></li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div id="save-settings-config" class="popup">
<div>
<span id="save-settings-config-close-btn">X</span>
<i class="close-button fa-solid fa-xmark"></i>
<h1>Save Settings Configuration</h1>
<p>Select which settings should be remembered when restarting the browser</p>
<table id="save-settings-config-table">
<table id="save-settings-config-table" class="form-table">
</table>
</div>
</div>
<div id="modifier-settings-config" style="display:none">
<div id="modifier-settings-config" class="popup">
<div>
<span id="modifier-settings-config-close-btn">X</span>
<i class="close-button fa-solid fa-xmark"></i>
<h1>Modifier Settings</h1>
<p>Set your custom modifiers (one per line)</p>
<textarea id="custom-modifiers-input" placeholder="Enter your custom modifiers, one-per-line"></textarea>
@@ -258,9 +313,9 @@
</div>
</div>
<div class="line-separator">&nbsp;</div>
<div id="footer" class="panel-box">
<div id="footer-spacer"></div>
<div id="footer">
<div class="line-separator">&nbsp;</div>
<p>If you found this project useful and want to help keep it alive, please <a href="https://ko-fi.com/cmdr2_stablediffusion_ui" target="_blank"><img src="/media/images/kofi.png" id="coffeeButton"></a> to help cover the cost of development and maintenance! Thank you for your support!</p>
<p>Please feel free to join the <a href="https://discord.com/invite/u9yhsFmEkB" target="_blank">discord community</a> or <a href="https://github.com/cmdr2/stable-diffusion-ui/issues" target="_blank">file an issue</a> if you have any problems or suggestions in using this interface.</p>
<div id="footer-legal">
@@ -272,13 +327,15 @@
</div>
</body>
<script src="media/js/parameters.js?v=9"></script>
<script src="media/js/plugins.js?v=1"></script>
<script src="media/js/utils.js?v=5"></script>
<script src="media/js/utils.js?v=6"></script>
<script src="media/js/inpainting-editor.js?v=1"></script>
<script src="media/js/image-modifiers.js?v=3"></script>
<script src="media/js/auto-save.js?v=2.3"></script>
<script src="media/js/main.js?v=5"></script>
<script src="media/js/themes.js?v=2"></script>
<script src="media/js/image-modifiers.js?v=6"></script>
<script src="media/js/auto-save.js?v=8"></script>
<script src="media/js/main.js?v=22.1"></script>
<script src="media/js/themes.js?v=4"></script>
<script src="media/js/dnd.js?v=9"></script>
<script>
async function init() {
await initSettings()
@@ -287,6 +344,7 @@ async function init() {
await getAppConfig()
await loadModifiers()
await loadUIPlugins()
await getDevices()
setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000)
healthCheck()

View File

@@ -6,69 +6,43 @@
display: none;
}
#save-settings-config {
position: absolute;
background: rgba(32, 33, 36, 50%);
top: 0px;
left: 0px;
right: 0px;
bottom: 0px;
z-index: 1000;
}
#save-settings-config > div {
background: var(--background-color3);
max-width: 600px;
margin: auto;
margin-top: 50px;
border-radius: 6px;
padding: 30px;
text-align: center;
}
#save-settings-config-table {
.form-table {
margin: auto;
}
#save-settings-config-table th {
.form-table th {
padding-top: 15px;
padding-bottom: 5px;
}
#save-settings-config-table td:first-child,
#save-settings-config-table th:first-child {
.form-table td:first-child > *,
.form-table th:first-child > * {
float: right;
white-space: nowrap;
}
#save-settings-config-table td:last-child,
#save-settings-config-table th:last-child {
.form-table td:last-child > *,
.form-table th:last-child > * {
float: left;
}
#save-settings-config-table td small {
.form-table small {
color: rgb(153, 153, 153);
}
#save-settings-config-close-btn {
float: right;
cursor: pointer;
padding: 10px;
transform: translate(50%, -50%) scaleX(130%);
#system-settings .form-table td {
height: 24px;
}
#reset-image-settings {
cursor: pointer;
float: right;
padding: 8px;
opacity: 1;
transition: opacity 0.5;
#system-settings .form-table td:last-child div {
display: flex;
align-items: center;
}
#system-settings .form-table td:last-child div > :not([type="checkbox"]):first-child {
margin-left: 3px;
}
.collapsible:not(.active) #reset-image-settings {
display: none;
}
#reset-image-settings.hidden {
opacity: 0;
pointer-events: none;
}
#system-settings .form-table td:last-child div small {
padding-left: 5px;
text-align: left;
}

View File

@@ -8,6 +8,7 @@ html {
}
body {
margin: 0;
font-size: 11pt;
background-color: var(--background-color1);
color: var(--text-color);
@@ -26,9 +27,10 @@ label {
height: 65pt;
font-size: 13px;
margin-bottom: 6px;
margin-top: 5px;
display: block;
}
.image_preview_container {
/* display: none; */
margin-top: 10pt;
}
.image_clear_btn {
@@ -64,17 +66,17 @@ label {
font-size: small;
padding-bottom: 3pt;
}
#progressBar {
font-size: small;
}
#footer {
font-size: small;
padding-left: 10pt;
padding: 10pt;
background: none;
}
#footer-legal {
font-size: 8pt;
}
#footer-spacer {
flex: 0.7
}
.imgSeedLabel {
font-size: 0.8em;
background-color: var(--background-color2);
@@ -107,33 +109,42 @@ label {
margin-bottom: 7px;
}
#container {
width: 95%;
margin-left: auto;
margin-right: auto;
}
@media screen and (max-width: 1800px) {
#container {
width: 100%;
}
min-height: 100vh;
width: 100%;
margin: 0px;
display: flex;
flex-direction: column;
}
#logo small {
font-size: 11pt;
}
#editor {
padding: 5px;
background: var(--background-color1);
padding: 16px;
display: flex;
flex-direction: column;
flex: 0 0 370pt;
}
#editor label {
font-weight: normal;
}
#editor h4 {
margin: 0px;
white-space: nowrap;
}
#editor .collapsible-content {
width: 100%;
}
.settings-box label small {
color: rgb(153, 153, 153);
margin-right: 10px;
}
#preview {
padding: 5px;
padding: 8px;
background: var(--background-color1);
}
#editor-inputs {
margin-bottom: 20px;
#preview .collapsible-content {
padding: 0px 15px;
}
#editor-inputs-prompt {
flex: 1;
@@ -151,7 +162,7 @@ label {
#makeImage {
flex: 0 0 70px;
background: var(--accent-color);
border: var(--make-image-border);
border: var(--primary-button-border);
color: rgb(255, 221, 255);
width: 100%;
height: 30pt;
@@ -168,6 +179,7 @@ label {
height: 30pt;
border-radius: 6px;
display: none;
margin-top: 2pt;
}
#stopImage:hover {
background: rgb(177, 27, 0);
@@ -176,12 +188,6 @@ label {
display: flex;
width: 100%;
}
.col-50 {
flex: 50%;
}
.col-fixed-10 {
flex: 0 0 350pt;
}
.col-free {
flex: 1;
}
@@ -189,7 +195,7 @@ label {
cursor: pointer;
}
.collapsible-content {
display: none;
display: block;
padding-left: 15px;
}
.collapsible-content h5 {
@@ -201,50 +207,40 @@ label {
color: white;
padding-right: 5px;
}
.panel-box {
background: var(--background-color2);
border: 1px solid var(--background-color3);
border-radius: 7px;
padding: 5px;
margin-bottom: 15px;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.15), 0 6px 20px 0 rgba(0, 0, 0, 0.15);
.collapsible:not(.active) ~ .collapsible-content {
display: none !important;
}
.panel-box h4 {
margin: 0;
padding: 2px 0;
#editor-modifiers {
max-width: 600px;
overflow-y: auto;
overflow-x: hidden;
}
#editor-modifiers .editor-modifiers-leaf {
padding-top: 10pt;
padding-bottom: 10pt;
}
#preview {
margin-left: 10pt;
}
img {
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.15), 0 6px 20px 0 rgba(0, 0, 0, 0.15);
}
.line-separator {
background: rgb(56, 56, 56);
background: var(--background-color3);
height: 1pt;
margin: 15pt 0;
margin: 16px 0px;
}
#editor-inputs-tags-container {
margin-top: 5pt;
display: none;
}
#server-status {
display: inline;
float: right;
transform: translateY(-5pt);
position: absolute;
right: 16px;
top: 50%;
transform: translateY(-50%);
text-align: right;
}
#server-status-color {
/* width: 8pt;
height: 8pt;
border-radius: 4pt; */
font-size: 14pt;
color: rgb(200, 139, 0);
/* background-color: rgb(197, 1, 1); */
/* transform: translateY(15%); */
display: inline;
}
#server-status-msg {
@@ -288,16 +284,19 @@ img {
}
#top-nav {
padding-top: 3pt;
padding-bottom: 15pt;
position: relative;
background: var(--background-color4);
display: flex;
}
#top-nav .icon {
.tab .icon {
padding-right: 4pt;
font-size: 14pt;
transform: translateY(1pt);
}
#logo {
display: inline;
padding: 12px;
white-space: nowrap;
}
#logo h1 {
display: inline;
@@ -311,6 +310,8 @@ img {
float: left;
display: inline;
padding-left: 20pt;
}
#top-nav-items > li:first-child {
cursor: default;
}
#initial-text {
@@ -324,26 +325,12 @@ img {
.pl-5 {
padding-left: 5pt;
}
#system-settings {
width: 360pt;
transform: translateX(-100%) translateX(70pt);
padding-top: 10pt;
padding-bottom: 10pt;
}
#system-settings ul {
margin: 0;
padding: 0;
}
#system-settings li {
padding-left: 5pt;
}
#community-links {
display: inline-block;
list-style-type: none;
margin: 0;
padding: 12pt;
padding-bottom: 0pt;
transform: translateX(-15%);
text-align: left;
margin: auto;
padding: 0px;
}
#community-links li {
padding-bottom: 12pt;
@@ -357,6 +344,35 @@ img {
color: var(--text-color);
text-decoration: none;
}
.float-child h1 {
border-bottom: var(--button-border);
}
#help-links {
display: inline-block;
list-style-type: none;
text-align: left;
margin: auto;
padding: 0px;
}
#help-links li {
padding-bottom: 12pt;
display: block;
font-size: 10pt;
}
#help-links li .fa-fw {
padding-right: 2pt;
}
#help-links li a {
color: var(--text-color);
text-decoration: none;
}
#help-links li ul {
padding-inline-start: 10px;
margin-top: 8px;
}
.help-section {
font-size: 130%;
}
.dropdown {
overflow: hidden;
}
@@ -383,6 +399,9 @@ img {
border-radius: 5pt;
box-shadow: 0 20px 28px 0 rgba(0, 0, 0, 0.15), 0 6px 20px 0 rgba(0, 0, 0, 0.15);
}
.imageTaskContainer > div > .collapsible-handle {
display: none;
}
.taskStatusLabel {
float: left;
font-size: 8pt;
@@ -402,6 +421,12 @@ img {
border: 1px solid rgb(107, 75, 0);
color:rgb(255, 242, 211)
}
.primaryButton {
flex: 0 0 70px;
background: var(--accent-color);
border: var(--primary-button-border);
color: rgb(255, 221, 255);
}
.secondaryButton {
background: rgb(132, 8, 0);
border: 1px solid rgb(122, 29, 0);
@@ -433,6 +458,8 @@ img {
#init_image_preview {
max-width: 150px;
max-height: 150px;
height: 100%;
width: 100%;
object-fit: contain;
border-radius: 6px;
transition: all 1s ease-in-out;
@@ -441,6 +468,7 @@ img {
#init_image_preview:hover {
max-width: 500px;
max-height: 1000px;
transition: all 1s 0.5s ease-in-out;
}
@@ -462,6 +490,20 @@ img {
border-radius: 6px 0px;
}
#editor-settings-entries {
display: flex;
flex-direction: column;
}
#editor-settings-entries > div {
margin-top: 15px;
}
#editor-settings-entries ul {
margin: 0px;
padding: 0px;
}
#editor-settings-entries table td {
padding: 0px;
line-height: 28px;
@@ -477,6 +519,7 @@ img {
width: 100%;
}
/* INPUTS STYLING */
button,
input[type="file"],
input[type="checkbox"],
@@ -536,13 +579,16 @@ input::file-selector-button {
height: 19px;
}
/* MOBILE SUPPORT */
@media screen and (max-width: 700px) {
#top-nav {
flex-direction: column;
}
body {
margin: 0px;
}
#container {
margin: 0px;
padding: 10px
}
.flex-container {
flex-direction: column;
@@ -571,21 +617,98 @@ input::file-selector-button {
left: 0px;
right: 0px;
}
#editor {
padding: 16px 8px;
}
.tab-content-inner {
margin: 0px;
}
.tab {
font-size: 0;
}
.tab .icon {
padding-right: 0px;
}
#server-status {
display: none;
}
.popup > div {
padding-left: 5px !important;
padding-right: 5px !important;
}
.popup > div input, .popup > div select {
max-width: 40vw;
}
.popup .close-button {
padding: 0px !important;
margin: 24px !important;
}
.simple-tooltip.right {
right: initial;
left: 0px;
top: 50%;
transform: translate(calc(-100% + 15%), -50%);
}
:hover > .simple-tooltip.right {
transform: translate(100%, -50%);
}
}
@media (min-width: 700px) {
/* #editor {
max-width: 480px;
} */
.float-container {
padding: 20px;
}
.float-child {
width: 50%;
float: left;
padding: 20px;
}
}
.help-btn {
position: relative;
}
#promptsFromFileBtn {
font-size: 9pt;
}
#reset-image-settings {
.section-button {
position: relative;
transform: translateY(-13%);
}
.collapsible:not(.active) #copy-image-settings {
display: none;
}
.section-button {
cursor: pointer;
float: right;
padding: 8px;
opacity: 1;
transition: opacity 0.5s;
}
.collapsible:not(.active) .section-button {
display: none;
}
/* SIMPLE TOOLTIP */
.simple-tooltip {
border-radius: 3px;
font-weight: bold;
font-size: 16px;
font-size: 12px;
background-color: var(--background-color3);
visibility: hidden;
@@ -604,8 +727,6 @@ input::file-selector-button {
visibility: visible;
}
}
/* position specific */
.simple-tooltip.right {
right: 0px;
top: 50%;
@@ -641,3 +762,154 @@ input::file-selector-button {
:hover > .simple-tooltip.bottom {
transform: translate(-50%, 100%);
}
/* PROGRESS BAR */
.progress-bar {
background: var(--background-color3);
border-radius: 4px;
border: 2px solid var(--background-color3);
height: 16px;
position: relative;
transition: 0.25s 1s border, 0.25s 1s height;
}
.progress-bar > div {
background: var(--accent-color);
border-radius: 4px;
position: absolute;
left: 0;
top: 0;
bottom: 0;
width: 0%;
transition: width 1s ease-in-out;
}
.progress-bar.active {
background: repeating-linear-gradient(-65deg,
var(--background-color2),
var(--background-color2) 4px,
var(--background-color3) 5px,
var(--background-color3) 9px,
var(--background-color2) 10px);
background-size: 200% auto;
background-position: 0 100%;
animation: progress-anim 2s infinite;
animation-fill-mode: forwards;
animation-timing-function: linear;
}
@keyframes progress-anim {
0% { background-position: -55px 0; }
100% { background-position: 0 0; }
}
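/* How the bar is driven (sketch, based on main.js elsewhere in this diff):
   the markup is <div class="progress-bar active"><div></div></div>; the script
   sets an inline width on the inner div (progressBarInner.style.width =
   `${percent}%`) and removes the 'active' class when the task finishes, which
   stops the stripe animation. The stripes themselves are the repeating
   gradient above, slid sideways by the keyframes (background-position -55px
   to 0, looping every 2s) for a barber-pole effect. */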
/* POPUPS */
.popup:not(.active) {
visibility: hidden;
opacity: 0;
}
.popup {
position: absolute;
background: rgba(32, 33, 36, 50%);
top: 0px;
left: 0px;
right: 0px;
bottom: 0px;
z-index: 1000;
opacity: 1;
transition: 0s visibility, 0.3s opacity;
}
@media only screen and (min-height: 1050px) {
.popup {
position: fixed;
}
}
.popup > div {
position: relative;
background: var(--background-color2);
border: solid 1px var(--background-color3);
max-width: 700px;
margin: auto;
margin-top: 50px;
border-radius: 6px;
padding: 30px;
text-align: center;
box-shadow: 0px 0px 30px black;
}
.popup .close-button {
position: absolute;
right: 0px;
top: 0px;
transform: scale(150%);
cursor: pointer;
padding: 24px;
}
/* TABS */
#tab-container {
display: flex;
align-items: flex-end;
}
.tab {
padding: 8px 16px;
border-radius: 4px 4px 0px 0px;
margin-left: 8px;
cursor: pointer;
background: var(--background-color1);
opacity: 50%;
transition: opacity 0.25s;
}
.tab:hover {
opacity: 75%;
}
.tab.active {
opacity: 100%;
}
.tab-content:not(.active) {
display: none;
}
#tab-content-wrapper {
border-top: 8px solid var(--background-color1);
}
.tab-content-inner {
margin: auto;
max-width: 600px;
text-align: center;
padding: 20px 10px;
}
.panel-box {
background: var(--background-color2);
border: 1px solid var(--background-color3);
border-radius: 7px;
padding: 7px;
margin-bottom: 15px;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.15), 0 6px 20px 0 rgba(0, 0, 0, 0.15);
}
i.active {
background: var(--accent-color);
}
#system-info {
max-width: 800px;
font-size: 10pt;
}
#system-info .value {
text-align: left;
padding-left: 10pt;
}
#system-info label {
float: right;
font-weight: bold;
}
#save-system-settings-btn {
padding: 4pt 8pt;
}


@@ -217,32 +217,6 @@
#modifier-settings-btn {
float: right;
}
#modifier-settings-config {
position: fixed;
background: rgba(32, 33, 36, 50%);
top: 0px;
left: 0px;
width: 100vw;
height: 100vh;
z-index: 1000;
}
#modifier-settings-config > div {
background: var(--background-color2);
max-width: 600px;
margin: auto;
margin-top: 100px;
border-radius: 6px;
padding: 30px;
text-align: center;
}
#modifier-settings-config-close-btn {
float: right;
cursor: pointer;
padding: 10px;
transform: translate(50%, -50%) scaleX(130%);
}
#modifier-settings-config textarea {
width: 90%;
height: 150px;


@@ -23,12 +23,12 @@
--input-border-size: 1px;
--accent-color: hsl(var(--accent-hue), 100%, var(--accent-lightness));
--accent-color-hover: hsl(var(--accent-hue), 100%, var(--accent-lightness-hover));
--make-image-border: 2px solid hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) - 21%));
--primary-button-border: none;
}
.theme-light {
--background-color1: white;
--background-color2: #dddddd;
--background-color2: #ececec;
--background-color3: #e7e9eb;
--background-color4: #cccccc;
@@ -47,7 +47,7 @@
--accent-hue: 235;
--accent-lightness: 65%;
--make-image-border: none;
--primary-button-border: none;
--button-color: var(--accent-color);
--button-border: none;
@@ -61,7 +61,7 @@
.theme-cool-blue {
--main-hue: 222;
--main-saturation: 18%;
--value-base: 19%;
--value-base: 18%;
--value-step: 3%;
--background-color1: hsl(var(--main-hue), var(--main-saturation), var(--value-base));
--background-color2: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (1 * var(--value-step))));
@@ -69,7 +69,7 @@
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--accent-hue: 212;
--make-image-border: none;
--primary-button-border: none;
--button-color: var(--accent-color);
--button-border: none;
@@ -91,7 +91,7 @@
--background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--make-image-border: none;
--primary-button-border: none;
--button-color: var(--accent-color);
--button-border: none;
@@ -110,9 +110,9 @@
--background-color1: hsl(var(--main-hue), var(--main-saturation), var(--value-base));
--background-color2: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (1 * var(--value-step))));
--background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (2 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (3 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (1.4 * var(--value-step))));
--make-image-border: none;
--primary-button-border: none;
--button-color: var(--accent-color);
--button-border: none;
@@ -134,7 +134,7 @@
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--accent-hue: 212;
--make-image-border: none;
--primary-button-border: none;
--button-color: var(--accent-color);
--button-border: none;


@@ -13,6 +13,7 @@ const SETTINGS_IDS_LIST = [
"num_outputs_total",
"num_outputs_parallel",
"stable_diffusion_model",
"vae_model",
"sampler",
"width",
"height",
@@ -33,7 +34,6 @@ const SETTINGS_IDS_LIST = [
"diskPath",
"sound_toggle",
"turbo",
"use_cpu",
"use_full_precision",
"auto_save_settings"
]
@@ -52,7 +52,9 @@ const SETTINGS_SECTIONS = [ // gets the "keys" property filled in with an ordere
async function initSettings() {
SETTINGS_IDS_LIST.forEach(id => {
var element = document.getElementById(id)
var label = document.querySelector(`label[for='${element.id}']`)
if (!element) {
console.error(`Missing settings element ${id}`)
}
SETTINGS[id] = {
key: id,
element: element,
@@ -68,7 +70,8 @@ async function initSettings() {
SETTINGS_SECTIONS.forEach(section => {
var name = section.name
var element = document.getElementById(section.id)
var children = Array.from(element.querySelectorAll(unsorted_settings_ids.map(id => `#${id}`).join(",")))
var unsorted_ids = unsorted_settings_ids.map(id => `#${id}`).join(",")
var children = unsorted_ids == "" ? [] : Array.from(element.querySelectorAll(unsorted_ids));
section.keys = []
children.forEach(e => {
section.keys.push(e.id)
@@ -126,10 +129,11 @@ function loadSettings() {
return
}
CURRENTLY_LOADING_SETTINGS = true
saved_settings.map(saved_setting => {
saved_settings.forEach(saved_setting => {
var setting = SETTINGS[saved_setting.key]
if (setting === undefined) {
return
if (!setting) {
console.warn(`Attempted to load setting ${saved_setting.key}, but no setting found`);
return null;
}
setting.ignore = saved_setting.ignore
if (!setting.ignore) {
@@ -211,20 +215,22 @@ function fillSaveSettingsConfigTable() {
})
}
document.getElementById("save-settings-config-close-btn").addEventListener('click', () => {
saveSettingsConfigOverlay.style.display = 'none'
// configureSettingsSaveBtn
var autoSaveSettings = document.getElementById("auto_save_settings")
var configSettingsButton = document.createElement("button")
configSettingsButton.textContent = "Configure"
configSettingsButton.style.margin = "0px 5px"
autoSaveSettings.insertAdjacentElement("afterend", configSettingsButton)
autoSaveSettings.addEventListener("change", () => {
configSettingsButton.style.display = autoSaveSettings.checked ? "block" : "none"
})
document.getElementById("configureSettingsSaveBtn").addEventListener('click', () => {
configSettingsButton.addEventListener('click', () => {
fillSaveSettingsConfigTable()
saveSettingsConfigOverlay.style.display = 'block'
})
saveSettingsConfigOverlay.addEventListener('click', (event) => {
if (event.target.id == saveSettingsConfigOverlay.id) {
saveSettingsConfigOverlay.style.display = 'none'
}
})
document.getElementById("save-settings-config-close-btn").addEventListener('click', () => {
saveSettingsConfigOverlay.style.display = 'none'
saveSettingsConfigOverlay.classList.add("active")
})
resetImageSettingsButton.addEventListener('click', event => {
loadDefaultSettingsSection("editor-settings");
@@ -276,9 +282,11 @@ function tryLoadOldSettings() {
Object.keys(individual_settings_map).forEach(localStorageKey => {
var localStorageValue = localStorage.getItem(localStorageKey);
if (localStorageValue !== null) {
var setting = SETTINGS[individual_settings_map[localStorageKey]]
if (setting == null || setting == undefined) {
return
let key = individual_settings_map[localStorageKey]
var setting = SETTINGS[key]
if (!setting) {
console.warn(`Attempted to map old setting ${key}, but no setting found`);
return null;
}
if (setting.element.type == "checkbox" && (typeof localStorageValue === "string" || localStorageValue instanceof String)) {
localStorageValue = localStorageValue == "true"

ui/media/js/dnd.js (new file, 469 lines)

@@ -0,0 +1,469 @@
"use strict" // Opt in to a restricted variant of JavaScript
const EXT_REGEX = /(?:\.([^.]+))?$/
const TEXT_EXTENSIONS = ['txt', 'json']
const IMAGE_EXTENSIONS = ['jpg', 'jpeg', 'png', 'bmp', 'tiff', 'tif', 'tga']
function parseBoolean(stringValue) {
if (typeof stringValue === 'boolean') {
return stringValue
}
if (typeof stringValue === 'number') {
return stringValue !== 0
}
if (typeof stringValue !== 'string') {
return false
}
switch(stringValue?.toLowerCase()?.trim()) {
case "true":
case "yes":
case "on":
case "1":
return true;
case "false":
case "no":
case "off":
case "0":
case null:
case undefined:
return false;
}
try {
return Boolean(JSON.parse(stringValue));
} catch {
return Boolean(stringValue)
}
}
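// Usage sketch: parseBoolean() normalizes the loosely-typed values found in
// pasted or dropped task metadata, e.g.
//   parseBoolean('Yes')   // true  (keyword match, case-insensitive)
//   parseBoolean('0')     // false (keyword match)
//   parseBoolean(2)       // true  (any non-zero number)
//   parseBoolean('[1]')   // true  (falls through to JSON.parse, then Boolean)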
const TASK_MAPPING = {
prompt: { name: 'Prompt',
setUI: (prompt) => {
promptField.value = prompt
},
readUI: () => promptField.value,
parse: (val) => val
},
negative_prompt: { name: 'Negative Prompt',
setUI: (negative_prompt) => {
negativePromptField.value = negative_prompt
},
readUI: () => negativePromptField.value,
parse: (val) => val
},
width: { name: 'Width',
setUI: (width) => {
const oldVal = widthField.value
widthField.value = width
if (!widthField.value) {
widthField.value = oldVal
}
},
readUI: () => parseInt(widthField.value),
parse: (val) => parseInt(val)
},
height: { name: 'Height',
setUI: (height) => {
const oldVal = heightField.value
heightField.value = height
if (!heightField.value) {
heightField.value = oldVal
}
},
readUI: () => parseInt(heightField.value),
parse: (val) => parseInt(val)
},
seed: { name: 'Seed',
setUI: (seed) => {
if (!seed) {
randomSeedField.checked = true
seedField.disabled = true
return
}
randomSeedField.checked = false
seedField.disabled = false
seedField.value = seed
},
readUI: () => (randomSeedField.checked ? Math.floor(Math.random() * 10000000) : parseInt(seedField.value)),
parse: (val) => parseInt(val)
},
num_inference_steps: { name: 'Steps',
setUI: (num_inference_steps) => {
numInferenceStepsField.value = num_inference_steps
},
readUI: () => parseInt(numInferenceStepsField.value),
parse: (val) => parseInt(val)
},
guidance_scale: { name: 'Guidance Scale',
setUI: (guidance_scale) => {
guidanceScaleField.value = guidance_scale
updateGuidanceScaleSlider()
},
readUI: () => parseFloat(guidanceScaleField.value),
parse: (val) => parseFloat(val)
},
prompt_strength: { name: 'Prompt Strength',
setUI: (prompt_strength) => {
promptStrengthField.value = prompt_strength
updatePromptStrengthSlider()
},
readUI: () => parseFloat(promptStrengthField.value),
parse: (val) => parseFloat(val)
},
init_image: { name: 'Initial Image',
setUI: (init_image) => {
initImagePreview.src = init_image
},
readUI: () => initImagePreview.src,
parse: (val) => val
},
mask: { name: 'Mask',
setUI: (mask) => {
inpaintingEditor.setImg(mask)
maskSetting.checked = Boolean(mask)
},
readUI: () => (maskSetting.checked ? inpaintingEditor.getImg() : undefined),
parse: (val) => val
},
use_face_correction: { name: 'Use Face Correction',
setUI: (use_face_correction) => {
useFaceCorrectionField.checked = parseBoolean(use_face_correction)
},
readUI: () => useFaceCorrectionField.checked,
parse: (val) => parseBoolean(val)
},
use_upscale: { name: 'Use Upscaling',
setUI: (use_upscale) => {
const oldVal = upscaleModelField.value
upscaleModelField.value = use_upscale
if (upscaleModelField.value) { // Is a valid value for the field.
useUpscalingField.checked = true
upscaleModelField.disabled = false
} else { // Not a valid value, restore the old value and disable the filter.
upscaleModelField.disabled = true
upscaleModelField.value = oldVal
useUpscalingField.checked = false
}
},
readUI: () => (useUpscalingField.checked ? upscaleModelField.value : undefined),
parse: (val) => val
},
sampler: { name: 'Sampler',
setUI: (sampler) => {
samplerField.value = sampler
},
readUI: () => samplerField.value,
parse: (val) => val
},
use_stable_diffusion_model: { name: 'Stable Diffusion model',
setUI: (use_stable_diffusion_model) => {
const oldVal = stableDiffusionModelField.value
let pathIdx = use_stable_diffusion_model.lastIndexOf('/') // Linux, Mac paths
if (pathIdx < 0) {
pathIdx = use_stable_diffusion_model.lastIndexOf('\\') // Windows paths.
}
if (pathIdx >= 0) {
use_stable_diffusion_model = use_stable_diffusion_model.slice(pathIdx + 1)
}
const modelExt = '.ckpt'
if (use_stable_diffusion_model.endsWith(modelExt)) {
use_stable_diffusion_model = use_stable_diffusion_model.slice(0, use_stable_diffusion_model.length - modelExt.length)
}
stableDiffusionModelField.value = use_stable_diffusion_model
if (!stableDiffusionModelField.value) {
stableDiffusionModelField.value = oldVal
}
},
readUI: () => stableDiffusionModelField.value,
parse: (val) => val
},
numOutputsParallel: { name: 'Parallel Images',
setUI: (numOutputsParallel) => {
numOutputsParallelField.value = numOutputsParallel
},
readUI: () => parseInt(numOutputsParallelField.value),
parse: (val) => val
},
use_cpu: { name: 'Use CPU',
setUI: (use_cpu) => {
useCPUField.checked = use_cpu
},
readUI: () => useCPUField.checked,
parse: (val) => val
},
turbo: { name: 'Turbo',
setUI: (turbo) => {
turboField.checked = turbo
},
readUI: () => turboField.checked,
parse: (val) => Boolean(val)
},
use_full_precision: { name: 'Use Full Precision',
setUI: (use_full_precision) => {
useFullPrecisionField.checked = use_full_precision
},
readUI: () => useFullPrecisionField.checked,
parse: (val) => Boolean(val)
},
stream_image_progress: { name: 'Stream Image Progress',
setUI: (stream_image_progress) => {
streamImageProgressField.checked = (parseInt(numOutputsTotalField.value) > 50 ? false : stream_image_progress)
},
readUI: () => streamImageProgressField.checked,
parse: (val) => Boolean(val)
},
show_only_filtered_image: { name: 'Show only the corrected/upscaled image',
setUI: (show_only_filtered_image) => {
showOnlyFilteredImageField.checked = show_only_filtered_image
},
readUI: () => showOnlyFilteredImageField.checked,
parse: (val) => Boolean(val)
},
output_format: { name: 'Output Format',
setUI: (output_format) => {
outputFormatField.value = output_format
},
readUI: () => outputFormatField.value,
parse: (val) => val
},
save_to_disk_path: { name: 'Save to disk path',
setUI: (save_to_disk_path) => {
saveToDiskField.checked = Boolean(save_to_disk_path)
diskPathField.value = save_to_disk_path
},
readUI: () => diskPathField.value,
parse: (val) => val
}
}
function restoreTaskToUI(task) {
if ('numOutputsTotal' in task) {
numOutputsTotalField.value = task.numOutputsTotal
}
if ('seed' in task) {
randomSeedField.checked = false
seedField.value = task.seed
}
if (!('reqBody' in task)) {
return
}
for (const key in TASK_MAPPING) {
if (key in task.reqBody) {
TASK_MAPPING[key].setUI(task.reqBody[key])
}
}
}
function readUI() {
const reqBody = {}
for (const key in TASK_MAPPING) {
reqBody[key] = TASK_MAPPING[key].readUI()
}
return {
'numOutputsTotal': parseInt(numOutputsTotalField.value),
'seed': TASK_MAPPING['seed'].readUI(),
'reqBody': reqBody
}
}
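// Round-trip sketch: over the TASK_MAPPING keys, readUI() and
// restoreTaskToUI() act as inverses, so the current form state can be
// snapshotted and pushed back later:
//   const snapshot = readUI()    // { numOutputsTotal, seed, reqBody }
//   restoreTaskToUI(snapshot)    // re-applies every mapped field
// (a falsy reqBody.seed re-enables the 'random seed' checkbox via its setUI)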
const TASK_TEXT_MAPPING = {
width: 'Width',
height: 'Height',
seed: 'Seed',
num_inference_steps: 'Steps',
guidance_scale: 'Guidance Scale',
prompt_strength: 'Prompt Strength',
use_face_correction: 'Use Face Correction',
use_upscale: 'Use Upscaling',
sampler: 'Sampler',
negative_prompt: 'Negative Prompt',
use_stable_diffusion_model: 'Stable Diffusion model'
}
const afterPromptRe = /^\s*Width\s*:\s*\d+\s*(?:\r\n|\r|\n)+\s*Height\s*:\s*\d+\s*(\r\n|\r|\n)+Seed\s*:\s*\d+\s*$/igm
function parseTaskFromText(str) {
const taskReqBody = {}
// Prompt
afterPromptRe.lastIndex = 0
const match = afterPromptRe.exec(str)
if (match) {
let prompt = str.slice(0, match.index)
str = str.slice(prompt.length)
taskReqBody.prompt = prompt.trim()
console.log('Prompt:', taskReqBody.prompt)
}
for (const key in TASK_TEXT_MAPPING) {
const name = TASK_TEXT_MAPPING[key];
let val = undefined
const reName = new RegExp(`${name}\\ *:\\ *(.*)(?:\\r\\n|\\r|\\n)*`, 'igm')
const match = reName.exec(str);
if (match) {
str = str.slice(0, match.index) + str.slice(match.index + match[0].length)
val = match[1]
}
if (val !== undefined) {
taskReqBody[key] = TASK_MAPPING[key].parse(val.trim())
console.log(TASK_MAPPING[key].name + ':', taskReqBody[key])
if (!str) {
break
}
}
}
if (Object.keys(taskReqBody).length <= 0) {
return undefined
}
const task = { reqBody: taskReqBody }
if ('seed' in taskReqBody) {
task.seed = taskReqBody.seed
}
return task
}
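// Example of the plain-text form this parser accepts (field names come from
// TASK_TEXT_MAPPING above; everything before the Width/Height/Seed block is
// taken as the prompt):
//
//   a photograph of an astronaut riding a horse
//   Width: 512
//   Height: 512
//   Seed: 42
//   Steps: 25
//
// which parseTaskFromText() turns into
//   { seed: 42, reqBody: { prompt: 'a photograph of an astronaut riding a horse',
//                          width: 512, height: 512, seed: 42, num_inference_steps: 25 } }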
async function readFile(file, i) {
const fileContent = (await file.text()).trim()
// JSON File.
if (fileContent.startsWith('{') && fileContent.endsWith('}')) {
try {
const task = JSON.parse(fileContent)
restoreTaskToUI(task)
} catch (e) {
console.warn(`file[${i}]:${file.name} - File couldn't be parsed.`, e)
}
return
}
// Normal txt file.
const task = parseTaskFromText(fileContent)
if (task) {
restoreTaskToUI(task)
} else {
console.warn(`file[${i}]:${file.name} - File couldn't be parsed.`)
}
}
function dropHandler(ev) {
console.log('Content dropped...')
let items = []
if (ev?.dataTransfer?.items) { // Use DataTransferItemList interface
items = Array.from(ev.dataTransfer.items)
items = items.filter(item => item.kind === 'file')
items = items.map(item => item.getAsFile())
} else if (ev?.dataTransfer?.files) { // Use DataTransfer interface
items = Array.from(ev.dataTransfer.files)
}
items.forEach(item => {item.file_ext = EXT_REGEX.exec(item.name.toLowerCase())[1]})
let text_items = items.filter(item => TEXT_EXTENSIONS.includes(item.file_ext))
let image_items = items.filter(item => IMAGE_EXTENSIONS.includes(item.file_ext))
if (image_items.length > 0 && ev.target == initImageSelector) {
return // let the event bubble up, so that the Init Image filepicker can receive this
}
ev.preventDefault() // Prevent default behavior (Prevent file/content from being opened)
text_items.forEach(readFile)
}
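// e.g. dropping an exported JSON/txt settings file anywhere on the page routes
// it through readFile(), while an image dropped onto the init-image picker is
// deliberately left to bubble so the native file input still receives it.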
function dragOverHandler(ev) {
console.log('Content in drop zone')
// Prevent default behavior (Prevent file/content from being opened)
ev.preventDefault()
ev.dataTransfer.dropEffect = "copy"
let img = new Image()
img.src = location.host + '/media/images/favicon-32x32.png'
ev.dataTransfer.setDragImage(img, 16, 16)
}
document.addEventListener("drop", dropHandler)
document.addEventListener("dragover", dragOverHandler)
const TASK_REQ_NO_EXPORT = [
"use_cpu",
"turbo",
"use_full_precision",
"save_to_disk_path"
]
// Retrieve clipboard content and try to parse it
async function pasteFromClipboard() {
//const text = await navigator.clipboard.readText()
let text = await navigator.clipboard.readText();
text=text.trim();
if (text.startsWith('{') && text.endsWith('}')) {
try {
const task = JSON.parse(text)
restoreTaskToUI(task)
} catch (e) {
console.warn(`Clipboard JSON couldn't be parsed.`, e)
}
return
}
// Normal txt file.
const task = parseTaskFromText(text)
if (task) {
restoreTaskToUI(task)
} else {
console.warn(`Clipboard content - File couldn't be parsed.`)
}
}
// Adds a copy and a paste icon if the browser grants permission to write to clipboard.
function checkWriteToClipboardPermission (result) {
if (result.state == "granted" || result.state == "prompt") {
const resetSettings = document.getElementById('reset-image-settings')
// COPY ICON
const copyIcon = document.createElement('i')
copyIcon.className = 'fa-solid fa-clipboard section-button'
copyIcon.innerHTML = `<span class="simple-tooltip right">Copy Image Settings</span>`
copyIcon.addEventListener('click', (event) => {
event.stopPropagation()
// Add css class 'active'
copyIcon.classList.add('active')
// In 1000 ms remove the 'active' class
asyncDelay(1000).then(() => copyIcon.classList.remove('active'))
const uiState = readUI()
TASK_REQ_NO_EXPORT.forEach((key) => delete uiState.reqBody[key])
if (uiState.reqBody.init_image && !IMAGE_REGEX.test(uiState.reqBody.init_image)) {
delete uiState.reqBody.init_image
delete uiState.reqBody.prompt_strength
}
navigator.clipboard.writeText(JSON.stringify(uiState, undefined, 4))
})
resetSettings.parentNode.insertBefore(copyIcon, resetSettings)
// PASTE ICON
const pasteIcon = document.createElement('i')
pasteIcon.className = 'fa-solid fa-paste section-button'
pasteIcon.innerHTML = `<span class="simple-tooltip right">Paste Image Settings</span>`
pasteIcon.addEventListener('click', (event) => {
event.stopPropagation()
// Add css class 'active'
pasteIcon.classList.add('active')
// In 1000 ms remove the 'active' class
asyncDelay(1000).then(() => pasteIcon.classList.remove('active'))
pasteFromClipboard()
})
resetSettings.parentNode.insertBefore(pasteIcon, resetSettings)
}
}
// Determine which access we have to the clipboard. Clipboard access is only available on localhost or via TLS.
navigator.permissions.query({ name: "clipboard-write" }).then(checkWriteToClipboardPermission, (e) => {
if (e instanceof TypeError && typeof navigator?.clipboard?.writeText === 'function') {
// Fix for firefox https://bugzilla.mozilla.org/show_bug.cgi?id=1560373
checkWriteToClipboardPermission({state:"granted"})
}
})
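// Note: Firefox doesn't recognize the "clipboard-write" permission name, so
// permissions.query() rejects with a TypeError there; the rejection handler
// above treats a usable navigator.clipboard.writeText as an implicit grant.
// Minimal sketch of the same pattern (onResult is a placeholder name):
//   navigator.permissions.query({ name: "clipboard-write" })
//       .then(onResult, (e) => {
//           if (e instanceof TypeError && navigator.clipboard?.writeText) {
//               onResult({ state: "granted" })   // assume granted on Firefox
//           }
//       })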


@@ -75,7 +75,6 @@ function createModifierGroup(modifierGroup, initiallyExpanded) {
if (initiallyExpanded === true) {
titleEl.className += ' active'
modifiersEl.style.display = 'block'
}
modifiers.forEach(modObj => {
@@ -245,16 +244,9 @@ function resizeModifierCards(val) {
modifierCardSizeSlider.onchange = () => resizeModifierCards(modifierCardSizeSlider.value)
previewImageField.onchange = () => changePreviewImages(previewImageField.value)
modifierSettingsBtn.addEventListener('click', function() {
modifierSettingsOverlay.style.display = 'block'
})
document.getElementById("modifier-settings-config-close-btn").addEventListener('click', () => {
modifierSettingsOverlay.style.display = 'none'
})
modifierSettingsOverlay.addEventListener('click', (event) => {
if (event.target.id == modifierSettingsOverlay.id) {
modifierSettingsOverlay.style.display = 'none'
}
modifierSettingsBtn.addEventListener('click', function(e) {
modifierSettingsOverlay.classList.add("active")
e.stopPropagation()
})
function saveCustomModifiers() {


@@ -1,6 +1,7 @@
"use strict" // Opt in to a restricted variant of JavaScript
const HEALTH_PING_INTERVAL = 5 // seconds
const MAX_INIT_IMAGE_DIMENSION = 768
const MIN_GPUS_TO_SHOW_SELECTION = 2
const IMAGE_REGEX = new RegExp('data:image/[A-Za-z]+;base64')
@@ -24,13 +25,6 @@ let initImagePreview = document.querySelector("#init_image_preview")
let initImageSizeBox = document.querySelector("#init_image_size_box")
let maskImageSelector = document.querySelector("#mask")
let maskImagePreview = document.querySelector("#mask_preview")
let turboField = document.querySelector('#turbo')
let useCPUField = document.querySelector('#use_cpu')
let useFullPrecisionField = document.querySelector('#use_full_precision')
let saveToDiskField = document.querySelector('#save_to_disk')
let diskPathField = document.querySelector('#diskPath')
// let allowNSFWField = document.querySelector("#allow_nsfw")
let useBetaChannelField = document.querySelector("#use_beta_channel")
let promptStrengthSlider = document.querySelector('#prompt_strength_slider')
let promptStrengthField = document.querySelector('#prompt_strength')
let samplerField = document.querySelector('#sampler')
@@ -39,6 +33,7 @@ let useFaceCorrectionField = document.querySelector("#use_face_correction")
let useUpscalingField = document.querySelector("#use_upscale")
let upscaleModelField = document.querySelector("#upscale_model")
let stableDiffusionModelField = document.querySelector('#stable_diffusion_model')
let vaeModelField = document.querySelector('#vae_model')
let outputFormatField = document.querySelector('#output_format')
let showOnlyFilteredImageField = document.querySelector("#show_only_filtered_image")
let updateBranchLabel = document.querySelector("#updateBranchLabel")
@@ -56,22 +51,10 @@ let initialText = document.querySelector("#initial-text")
let previewTools = document.querySelector("#preview-tools")
let clearAllPreviewsBtn = document.querySelector("#clear-all-previews")
// let maskSetting = document.querySelector('#editor-inputs-mask_setting')
// let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
// let maskImageClearBtn = document.querySelector('#mask_clear')
let maskSetting = document.querySelector('#enable_mask')
let imagePreview = document.querySelector("#preview")
// let previewPrompt = document.querySelector('#preview-prompt')
let showConfigToggle = document.querySelector('#configToggleBtn')
// let configBox = document.querySelector('#config')
// let outputMsg = document.querySelector('#outputMsg')
// let progressBar = document.querySelector("#progressBar")
let soundToggle = document.querySelector('#sound_toggle')
let serverStatusColor = document.querySelector('#server-status-color')
let serverStatusMsg = document.querySelector('#server-status-msg')
@@ -85,7 +68,6 @@ maskResetButton.style.fontWeight = 'normal'
maskResetButton.style.fontSize = '10pt'
let serverState = {'status': 'Offline', 'time': Date.now()}
let lastPromptUsed = ''
let bellPending = false
let taskQueue = []
@@ -187,6 +169,34 @@ function playSound() {
})
}
}
function setSystemInfo(devices) {
let cpu = devices.all.cpu.name
let allGPUs = Object.keys(devices.all).filter(d => d != 'cpu')
let activeGPUs = Object.keys(devices.active)
function ID_TO_TEXT(d) {
let info = devices.all[d]
if ("mem_free" in info && "mem_total" in info) {
return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(1)} Gb total)</small>`
} else {
return `${info.name} <small>(${d}) (no memory info)</small>`
}
}
allGPUs = allGPUs.map(ID_TO_TEXT)
activeGPUs = activeGPUs.map(ID_TO_TEXT)
let systemInfo = `
<table>
<tr><td><label>Processor:</label></td><td class="value">${cpu}</td></tr>
<tr><td><label>Compatible Graphics Cards (all):</label></td><td class="value">${allGPUs.join('</br>')}</td></tr>
<tr><td></td><td>&nbsp;</td></tr>
<tr><td><label>Used for rendering 🔥:</label></td><td class="value">${activeGPUs.join('</br>')}</td></tr>
</table>`
let systemInfoEl = document.querySelector('#system-info')
systemInfoEl.innerHTML = systemInfo
}
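// Shape of the `devices` payload this expects (sketch, inferred from the
// lookups above):
//   { all:    { cpu: { name }, 'cuda:0': { name, mem_free, mem_total }, ... },
//     active: { 'cuda:0': { ... }, ... } }
// mem_free/mem_total are optional per device; entries without them render as
// "(no memory info)".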
async function healthCheck() {
try {
@@ -220,8 +230,12 @@ async function healthCheck() {
setServerStatus('error', serverState.status.toLowerCase())
break
}
if (serverState.devices) {
setSystemInfo(serverState.devices)
}
serverState.time = Date.now()
} catch (e) {
console.log(e)
serverState = {'status': 'Offline', 'time': Date.now()}
setServerStatus('error', 'offline')
}
@@ -340,10 +354,10 @@ function onDownloadImageClick(req, img) {
imgDownload.click()
}
function modifyCurrentRequest(req, ...reqDiff) {
function modifyCurrentRequest(...reqDiff) {
const newTaskRequest = getCurrentUserRequest()
newTaskRequest.reqBody = Object.assign({}, req, ...reqDiff, {
newTaskRequest.reqBody = Object.assign(newTaskRequest.reqBody, ...reqDiff, {
use_cpu: useCPUField.checked
})
newTaskRequest.seed = newTaskRequest.reqBody.seed
@@ -410,7 +424,7 @@ async function doMakeImage(task) {
const RETRY_DELAY_IF_BUFFER_IS_EMPTY = 1000 // ms
const RETRY_DELAY_IF_SERVER_IS_BUSY = 30 * 1000 // ms, status_code 503, already a task running
const TASK_START_DELAY_ON_SERVER = 1500 // ms
const SERVER_STATE_VALIDITY_DURATION = 10 * 1000 // ms
const SERVER_STATE_VALIDITY_DURATION = 90 * 1000 // ms
const reqBody = task.reqBody
const batchCount = task.batchCount
@@ -422,10 +436,10 @@ async function doMakeImage(task) {
const outputMsg = task['outputMsg']
const previewPrompt = task['previewPrompt']
const progressBar = task['progressBar']
const progressBarInner = progressBar.querySelector("div")
let res = undefined
try {
const lastTask = serverState.task
let renderRequest = undefined
do {
res = await fetch('/render', {
@@ -561,6 +575,13 @@ async function doMakeImage(task) {
outputMsg.innerHTML += `. Time remaining (approx): ${timeRemaining}`
outputMsg.style.display = 'block'
progressBarInner.style.width = `${percent}%`
if (percent == 100) {
task.progressBar.style.height = "0px"
task.progressBar.style.border = "0px solid var(--background-color3)"
task.progressBar.classList.remove("active")
}
if (stepUpdate.output !== undefined) {
showImages(reqBody, stepUpdate, outputContainer, true)
}
@@ -620,17 +641,14 @@ async function doMakeImage(task) {
let msg = `Unexpected Read Error:<br/><pre>Response: ${res}<br/>StepUpdate: ${typeof stepUpdate === 'object' ? JSON.stringify(stepUpdate, undefined, 4) : stepUpdate}</pre>`
logError(msg, res, outputMsg)
}
progressBar.style.display = 'none'
return false
}
lastPromptUsed = reqBody['prompt']
showImages(reqBody, stepUpdate, outputContainer, false)
} catch (e) {
console.log('request error', e)
logError('Stable Diffusion had an error. Please check the logs in the command-line window. <br/><br/>' + e + '<br/><pre>' + e.stack + '</pre>', res, outputMsg)
setStatus('request', 'error', 'error')
progressBar.style.display = 'none'
return false
}
return true
@@ -713,6 +731,9 @@ async function checkTasks() {
if (successCount === task.batchCount) {
task.outputMsg.innerText = 'Processed ' + task.numOutputsTotal + ' images in ' + time + ' seconds'
task.progressBar.style.height = "0px"
task.progressBar.style.border = "0px solid var(--background-color3)"
task.progressBar.classList.remove("active")
// setStatus('request', 'done', 'success')
} else {
if (task.outputMsg.innerText.toLowerCase().indexOf('error') === -1) {
@@ -762,9 +783,9 @@ function getCurrentUserRequest() {
height: heightField.value,
// allow_nsfw: allowNSFWField.checked,
turbo: turboField.checked,
use_cpu: useCPUField.checked,
use_full_precision: useFullPrecisionField.checked,
use_stable_diffusion_model: stableDiffusionModelField.value,
use_vae_model: vaeModelField.value,
stream_progress_updates: true,
stream_image_progress: (numOutputsTotal > 50 ? false : streamImageProgressField.checked),
show_only_filtered_image: showOnlyFilteredImageField.checked,
@@ -813,29 +834,34 @@ function makeImage() {
}
function createTask(task) {
let taskConfig = `Seed: ${task.seed}, Sampler: ${task.reqBody.sampler}, Inference Steps: ${task.reqBody.num_inference_steps}, Guidance Scale: ${task.reqBody.guidance_scale}, Model: ${task.reqBody.use_stable_diffusion_model}`
if (negativePromptField.value.trim() !== '') {
taskConfig += `, Negative Prompt: ${task.reqBody.negative_prompt}`
let taskConfig = `<b>Seed:</b> ${task.seed}, <b>Sampler:</b> ${task.reqBody.sampler}, <b>Inference Steps:</b> ${task.reqBody.num_inference_steps}, <b>Guidance Scale:</b> ${task.reqBody.guidance_scale}, <b>Model:</b> ${task.reqBody.use_stable_diffusion_model}`
if (task.reqBody.use_vae_model.trim() !== '') {
taskConfig += `, <b>VAE:</b> ${task.reqBody.use_vae_model}`
}
if (task.reqBody.negative_prompt.trim() !== '') {
taskConfig += `, <b>Negative Prompt:</b> ${task.reqBody.negative_prompt}`
}
if (task.reqBody.init_image !== undefined) {
taskConfig += `, Prompt Strength: ${task.reqBody.prompt_strength}`
taskConfig += `, <b>Prompt Strength:</b> ${task.reqBody.prompt_strength}`
}
if (task.reqBody.use_face_correction) {
taskConfig += `, Fix Faces: ${task.reqBody.use_face_correction}`
taskConfig += `, <b>Fix Faces:</b> ${task.reqBody.use_face_correction}`
}
if (task.reqBody.use_upscale) {
taskConfig += `, Upscale: ${task.reqBody.use_upscale}`
taskConfig += `, <b>Upscale:</b> ${task.reqBody.use_upscale}`
}
let taskEntry = document.createElement('div')
taskEntry.className = 'imageTaskContainer'
taskEntry.innerHTML = ` <div class="taskStatusLabel">Enqueued</div>
<button class="secondaryButton stopTask"><i class="fa-solid fa-trash-can"></i> Remove</button>
<div class="preview-prompt collapsible active"></div>
<div class="taskConfig">${taskConfig}</div>
<div class="collapsible-content" style="display: block">
taskEntry.innerHTML = ` <div class="header-content panel collapsible active">
<div class="taskStatusLabel">Enqueued</div>
<button class="secondaryButton stopTask"><i class="fa-solid fa-trash-can"></i> Remove</button>
<div class="preview-prompt collapsible active"></div>
<div class="taskConfig">${taskConfig}</div>
<div class="outputMsg"></div>
<div class="progressBar"></div>
<div class="progress-bar active"><div></div></div>
</div>
<div class="collapsible-content">
<div class="img-preview">
</div>`
@@ -845,12 +871,14 @@ function createTask(task) {
task['outputContainer'] = taskEntry.querySelector('.img-preview')
task['outputMsg'] = taskEntry.querySelector('.outputMsg')
task['previewPrompt'] = taskEntry.querySelector('.preview-prompt')
task['progressBar'] = taskEntry.querySelector('.progressBar')
task['progressBar'] = taskEntry.querySelector('.progress-bar')
task['stopTask'] = taskEntry.querySelector('.stopTask')
task['stopTask'].addEventListener('click', async function() {
task['stopTask'].addEventListener('click', async function(e) {
e.stopPropagation()
if (task['isProcessing']) {
task.isProcessing = false
task.progressBar.classList.remove("active")
try {
let res = await fetch('/image/stop?session_id=' + sessionId)
} catch (e) {
@@ -1044,16 +1072,25 @@ function onDimensionChange() {
resizeInpaintingEditor(widthValue, heightValue)
}
saveToDiskField.addEventListener('click', function(e) {
diskPathField.disabled = !this.checked
})
diskPathField.disabled = !saveToDiskField.checked
useUpscalingField.addEventListener('click', function(e) {
upscaleModelField.disabled = !useUpscalingField.checked
useUpscalingField.addEventListener('change', function(e) {
upscaleModelField.disabled = !this.checked
})
if (useBetaChannelField.checked) {
updateBranchLabel.innerText = "(beta)"
}
makeImageBtn.addEventListener('click', makeImage)
document.onkeydown = function(e) {
if (e.ctrlKey && e.code === 'Enter') {
makeImage()
e.preventDefault()
}
}
function updateGuidanceScale() {
guidanceScaleField.value = guidanceScaleSlider.value / 10
@@ -1095,80 +1132,44 @@ promptStrengthSlider.addEventListener('input', updatePromptStrength)
promptStrengthField.addEventListener('input', updatePromptStrengthSlider)
updatePromptStrength()
useBetaChannelField.addEventListener('click', async function(e) {
if (!isServerAvailable()) {
// logError('The server is still starting up..')
alert('The server is still starting up..')
e.preventDefault()
return false
}
let updateBranch = (this.checked ? 'beta' : 'main')
try {
let res = await fetch('/app_config', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
'update_branch': updateBranch
})
})
res = await res.json()
console.log('set config status response', res)
} catch (e) {
console.log('set config status error', e)
}
})
async function getAppConfig() {
try {
let res = await fetch('/get/app_config')
const config = await res.json()
if (config.update_branch === 'beta') {
useBetaChannelField.checked = true
updateBranchLabel.innerText = "(beta)"
}
console.log('get config status response', config)
} catch (e) {
console.log('get config status error', e)
}
}
async function getModels() {
try {
var model_setting_key = "stable_diffusion_model"
var selectedModel = SETTINGS[model_setting_key].value
var sd_model_setting_key = "stable_diffusion_model"
var vae_model_setting_key = "vae_model"
var selectedSDModel = SETTINGS[sd_model_setting_key].value
var selectedVaeModel = SETTINGS[vae_model_setting_key].value
let res = await fetch('/get/models')
const models = await res.json()
// let activeModel = models['active']
console.log('get models response', models)
let modelOptions = models['options']
let stableDiffusionOptions = modelOptions['stable-diffusion']
let vaeOptions = modelOptions['vae']
vaeOptions.unshift('') // add a None option
stableDiffusionOptions.forEach(modelName => {
let modelOption = document.createElement('option')
modelOption.value = modelName
modelOption.innerText = modelName
function createModelOptions(modelField, selectedModel) {
return function(modelName) {
let modelOption = document.createElement('option')
modelOption.value = modelName
modelOption.innerText = modelName !== '' ? modelName : 'None'
if (modelName === selectedModel) {
modelOption.selected = true
if (modelName === selectedModel) {
modelOption.selected = true
}
modelField.appendChild(modelOption)
}
stableDiffusionModelField.appendChild(modelOption)
})
// TODO: set default for model here too
SETTINGS[model_setting_key].default = stableDiffusionOptions[0]
if (getSetting(model_setting_key) == '' || SETTINGS[model_setting_key].value == '') {
setSetting(model_setting_key, stableDiffusionOptions[0])
}
console.log('get models response', models)
stableDiffusionOptions.forEach(createModelOptions(stableDiffusionModelField, selectedSDModel))
vaeOptions.forEach(createModelOptions(vaeModelField, selectedVaeModel))
// TODO: set default for model here too
SETTINGS[sd_model_setting_key].default = stableDiffusionOptions[0]
if (getSetting(sd_model_setting_key) == '' || SETTINGS[sd_model_setting_key].value == '') {
setSetting(sd_model_setting_key, stableDiffusionOptions[0])
}
} catch (e) {
console.log('get models error', e)
}
@@ -1267,21 +1268,56 @@ promptsFromFileSelector.addEventListener('change', function() {
}
})
async function getDiskPath() {
try {
var diskPath = getSetting("diskPath")
if (diskPath == '' || diskPath == undefined || diskPath == "undefined") {
let res = await fetch('/get/output_dir')
if (res.status === 200) {
res = await res.json()
res = res.output_dir
setSetting("diskPath", res)
}
/* setup popup handlers */
document.querySelectorAll('.popup').forEach(popup => {
popup.addEventListener('click', event => {
if (event.target == popup) {
popup.classList.remove("active")
}
} catch (e) {
console.log('error fetching output dir path', e)
})
var closeButton = popup.querySelector(".close-button")
if (closeButton) {
closeButton.addEventListener('click', () => {
popup.classList.remove("active")
})
}
}
})
var tabElements = [];
document.querySelectorAll(".tab").forEach(tab => {
var name = tab.id.replace("tab-", "");
var content = document.getElementById(`tab-content-${name}`)
tabElements.push({
name: name,
tab: tab,
content: content
})
tab.addEventListener("click", event => {
if (!tab.classList.contains("active")) {
tabElements.forEach(tabInfo => {
if (tabInfo.tab.classList.contains("active")) {
tabInfo.tab.classList.toggle("active")
tabInfo.content.classList.toggle("active")
}
})
tab.classList.toggle("active")
content.classList.toggle("active")
}
})
})
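// Tab wiring relies on paired ids: clicking #tab-foo activates
// #tab-content-foo and deactivates whichever tab/content pair was active
// before ("foo" being any tab name, e.g. "settings").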
window.addEventListener("beforeunload", function(e) {
const msg = "Unsaved pictures will be lost!";
let elementList = document.getElementsByClassName("imageTaskContainer");
if (elementList.length != 0) {
e.preventDefault();
(e || window.event).returnValue = msg;
return msg;
} else {
return true;
}
});
createCollapsibles()

ui/media/js/parameters.js (new file, 330 lines)

@@ -0,0 +1,330 @@
/**
* Enum of parameter types
* @readonly
* @enum {string}
*/
var ParameterType = {
checkbox: "checkbox",
select: "select",
select_multiple: "select_multiple",
custom: "custom",
};
/**
* JSDoc style
* @typedef {object} Parameter
* @property {string} id
* @property {ParameterType} type
* @property {string} label
* @property {?string} note
* @property {number|boolean|string} default
*/
/** @type {Array.<Parameter>} */
var PARAMETERS = [
{
id: "theme",
type: ParameterType.select,
label: "Theme",
default: "theme-default",
options: [ // Note: options expanded dynamically
{
value: "theme-default",
label: "Default"
}
]
},
{
id: "save_to_disk",
type: ParameterType.checkbox,
label: "Auto-Save Images",
note: "automatically saves images to the specified location",
default: false,
},
{
id: "diskPath",
type: ParameterType.custom,
label: "Save Location",
render: (parameter) => {
return `<input id="${parameter.id}" name="${parameter.id}" size="30" disabled>`
}
},
{
id: "sound_toggle",
type: ParameterType.checkbox,
label: "Enable Sound",
note: "plays a sound on task completion",
default: true,
},
{
id: "ui_open_browser_on_start",
type: ParameterType.checkbox,
label: "Open browser on startup",
note: "starts the default browser on startup",
default: true,
},
{
id: "turbo",
type: ParameterType.checkbox,
label: "Turbo Mode",
default: true,
note: "generates images faster, but uses an additional 1 GB of GPU memory",
},
{
id: "use_cpu",
type: ParameterType.checkbox,
label: "Use CPU (not GPU)",
note: "warning: this will be *very* slow",
default: false,
},
{
id: "auto_pick_gpus",
type: ParameterType.checkbox,
label: "Automatically pick the GPUs (experimental)",
default: false,
},
{
id: "use_gpus",
type: ParameterType.select_multiple,
label: "GPUs to use (experimental)",
note: "to process in parallel",
default: false,
},
{
id: "use_full_precision",
type: ParameterType.checkbox,
label: "Use Full Precision",
note: "for GPU-only. warning: this will consume more VRAM",
default: false,
},
{
id: "auto_save_settings",
type: ParameterType.checkbox,
label: "Auto-Save Settings",
note: "restores settings on browser load",
default: true,
},
{
id: "use_beta_channel",
type: ParameterType.checkbox,
label: "🔥Beta channel",
note: "Get the latest features immediately (but could be less stable). Please restart the program after changing this.",
default: false,
},
];
function getParameterSettingsEntry(id) {
let parameter = PARAMETERS.filter(p => p.id === id)
if (parameter.length === 0) {
return
}
return parameter[0].settingsEntry
}
function getParameterElement(parameter) {
switch (parameter.type) {
case ParameterType.checkbox:
var is_checked = parameter.default ? " checked" : "";
return `<input id="${parameter.id}" name="${parameter.id}"${is_checked} type="checkbox">`
case ParameterType.select:
case ParameterType.select_multiple:
var options = (parameter.options || []).map(option => `<option value="${option.value}">${option.label}</option>`).join("")
var multiple = (parameter.type == ParameterType.select_multiple ? 'multiple' : '')
return `<select id="${parameter.id}" name="${parameter.id}" ${multiple}>${options}</select>`
case ParameterType.custom:
return parameter.render(parameter)
default:
console.error(`Invalid type for parameter ${parameter.id}`);
return "ERROR: Invalid Type"
}
}
let parametersTable = document.querySelector("#system-settings table")
/* fill in the system settings popup table */
function initParameters() {
PARAMETERS.forEach(parameter => {
var element = getParameterElement(parameter)
var note = parameter.note ? `<small>${parameter.note}</small>` : "";
var newrow = document.createElement('tr')
newrow.innerHTML = `
<td><label for="${parameter.id}">${parameter.label}</label></td>
<td><div>${element}${note}<div></td>`
parametersTable.appendChild(newrow)
parameter.settingsEntry = newrow
})
}
initParameters()
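// Sketch of the generated markup for one entry: the "turbo" parameter above
// becomes a table row roughly like
//   <tr><td><label for="turbo">Turbo Mode</label></td>
//       <td><div><input id="turbo" name="turbo" checked type="checkbox">
//           <small>generates images faster, but uses an additional 1 GB of GPU memory</small></div></td></tr>
// and the <tr> is kept on parameter.settingsEntry so later code can show or
// hide an entire setting row (see getParameterSettingsEntry above).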
let turboField = document.querySelector('#turbo')
let useCPUField = document.querySelector('#use_cpu')
let autoPickGPUsField = document.querySelector('#auto_pick_gpus')
let useGPUsField = document.querySelector('#use_gpus')
let useFullPrecisionField = document.querySelector('#use_full_precision')
let saveToDiskField = document.querySelector('#save_to_disk')
let diskPathField = document.querySelector('#diskPath')
let useBetaChannelField = document.querySelector("#use_beta_channel")
let uiOpenBrowserOnStartField = document.querySelector("#ui_open_browser_on_start")
let saveSettingsBtn = document.querySelector('#save-system-settings-btn')
async function changeAppConfig(configDelta) {
try {
let res = await fetch('/app_config', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(configDelta)
})
res = await res.json()
console.log('set config status response', res)
} catch (e) {
console.log('set config status error', e)
}
}
async function getAppConfig() {
try {
let res = await fetch('/get/app_config')
const config = await res.json()
if (config.update_branch === 'beta') {
useBetaChannelField.checked = true
}
if (config.ui && config.ui.open_browser_on_start === false) {
uiOpenBrowserOnStartField.checked = false
}
console.log('get config status response', config)
} catch (e) {
console.log('get config status error', e)
}
}
saveToDiskField.addEventListener('change', function(e) {
diskPathField.disabled = !this.checked
})
function getCurrentRenderDeviceSelection() {
let selectedGPUs = $('#use_gpus').val()
if (useCPUField.checked && !autoPickGPUsField.checked) {
return 'cpu'
}
if (autoPickGPUsField.checked || selectedGPUs.length == 0) {
return 'auto'
}
return selectedGPUs.join(',')
}
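// The three UI states map onto the server-side render_devices setting:
//   CPU checked, auto-pick off       -> 'cpu'
//   auto-pick on, or no GPUs chosen  -> 'auto'
//   otherwise                        -> comma-joined ids, e.g. 'cuda:0,cuda:1'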
useCPUField.addEventListener('click', function() {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
if (this.checked) {
gpuSettingEntry.style.display = 'none'
autoPickGPUSettingEntry.style.display = 'none'
autoPickGPUsField.setAttribute('data-old-value', autoPickGPUsField.checked)
autoPickGPUsField.checked = false
} else if (useGPUsField.options.length >= MIN_GPUS_TO_SHOW_SELECTION) {
gpuSettingEntry.style.display = ''
autoPickGPUSettingEntry.style.display = ''
let oldVal = autoPickGPUsField.getAttribute('data-old-value')
if (oldVal === null || oldVal === undefined) { // the UI started with CPU selected by default
autoPickGPUsField.checked = true
} else {
autoPickGPUsField.checked = (oldVal === 'true')
}
gpuSettingEntry.style.display = (autoPickGPUsField.checked ? 'none' : '')
}
})
useGPUsField.addEventListener('click', function() {
let selectedGPUs = $('#use_gpus').val()
autoPickGPUsField.checked = (selectedGPUs.length === 0)
})
autoPickGPUsField.addEventListener('click', function() {
if (this.checked) {
$('#use_gpus').val([])
}
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = (this.checked ? 'none' : '')
})
async function getDiskPath() {
try {
var diskPath = getSetting("diskPath")
if (diskPath == '' || diskPath == undefined || diskPath == "undefined") {
let res = await fetch('/get/output_dir')
if (res.status === 200) {
res = await res.json()
res = res.output_dir
setSetting("diskPath", res)
}
}
} catch (e) {
console.log('error fetching output dir path', e)
}
}
async function getDevices() {
try {
let res = await fetch('/get/devices')
if (res.status === 200) {
res = await res.json()
let allDeviceIds = Object.keys(res['all']).filter(d => d !== 'cpu')
let activeDeviceIds = Object.keys(res['active']).filter(d => d !== 'cpu')
if (activeDeviceIds.length === 0) {
useCPUField.checked = true
}
if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION || useCPUField.checked) {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = 'none'
let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
autoPickGPUSettingEntry.style.display = 'none'
}
if (allDeviceIds.length === 0) {
useCPUField.checked = true
useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
}
autoPickGPUsField.checked = (res['config'] === 'auto')
useGPUsField.innerHTML = ''
allDeviceIds.forEach(device => {
let deviceName = res['all'][device]['name']
let deviceOption = `<option value="${device}">${deviceName} (${device})</option>`
useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
})
if (autoPickGPUsField.checked) {
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
gpuSettingEntry.style.display = 'none'
} else {
$('#use_gpus').val(activeDeviceIds)
}
}
} catch (e) {
console.log('error fetching devices', e)
}
}
saveSettingsBtn.addEventListener('click', function() {
let updateBranch = (useBetaChannelField.checked ? 'beta' : 'main')
changeAppConfig({
'render_devices': getCurrentRenderDeviceSelection(),
'update_branch': updateBranch,
'ui_open_browser_on_start': uiOpenBrowserOnStartField.checked
})
})
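// Clicking Save therefore posts one consolidated config delta, e.g.
// (hypothetical values):
//   { "render_devices": "cuda:0,cuda:1", "update_branch": "beta",
//     "ui_open_browser_on_start": true }
// so the server reconfigures render devices once, on confirmation, rather
// than on every individual checkbox change.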


@@ -29,12 +29,16 @@ function toggleCollapsible(element) {
var handle = element.querySelector(".collapsible-handle");
collapsibleHeader.classList.toggle("active")
let content = getNextSibling(collapsibleHeader, '.collapsible-content')
if (content.style.display === "block") {
if (!collapsibleHeader.classList.contains("active")) {
content.style.display = "none"
handle.innerHTML = '&#x2795;' // plus
if (handle != null) { // render results don't have a handle
handle.innerHTML = '&#x2795;' // plus
}
} else {
content.style.display = "block"
handle.innerHTML = '&#x2796;' // minus
if (handle != null) { // render results don't have a handle
handle.innerHTML = '&#x2796;' // minus
}
}
if (COLLAPSIBLES_INITIALIZED && COLLAPSIBLE_PANELS.includes(element)) {
@@ -65,7 +69,7 @@ function createCollapsibles(node) {
let handle = document.createElement('span')
handle.className = 'collapsible-handle'
if (c.className.indexOf('active') !== -1) {
if (c.classList.contains("active")) {
handle.innerHTML = '&#x2796;' // minus
} else {
handle.innerHTML = '&#x2795;' // plus


@@ -18,11 +18,11 @@ class Request:
precision: str = "autocast" # or "full"
save_to_disk_path: str = None
turbo: bool = True
use_cpu: bool = False
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
use_vae_model: str = None
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
@@ -45,10 +45,11 @@ class Request:
"use_face_correction": self.use_face_correction,
"use_upscale": self.use_upscale,
"use_stable_diffusion_model": self.use_stable_diffusion_model,
"use_vae_model": self.use_vae_model,
"output_format": self.output_format,
}
def to_string(self):
def __str__(self):
return f'''
session_id: {self.session_id}
prompt: {self.prompt}
@@ -62,11 +63,11 @@ class Request:
precision: {self.precision}
save_to_disk_path: {self.save_to_disk_path}
turbo: {self.turbo}
use_cpu: {self.use_cpu}
use_full_precision: {self.use_full_precision}
use_face_correction: {self.use_face_correction}
use_upscale: {self.use_upscale}
use_stable_diffusion_model: {self.use_stable_diffusion_model}
use_vae_model: {self.use_vae_model}
show_only_filtered_image: {self.show_only_filtered_image}
output_format: {self.output_format}


@@ -0,0 +1,168 @@
import os
import torch
import traceback
import re
COMPARABLE_GPU_PERCENTILE = 0.65 # if a GPU's free_mem is within this % of the GPU with the most free_mem, it will be picked
mem_free_threshold = 0
def get_device_delta(render_devices, active_devices):
'''
render_devices: 'cpu', or 'auto' or ['cuda:N'...]
active_devices: ['cpu', 'cuda:N'...]
'''
if render_devices in ('cpu', 'auto'):
render_devices = [render_devices]
elif render_devices is not None:
if isinstance(render_devices, str):
render_devices = [render_devices]
if isinstance(render_devices, list) and len(render_devices) > 0:
render_devices = list(filter(lambda x: x.startswith('cuda:'), render_devices))
if len(render_devices) == 0:
raise Exception('Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}')
render_devices = list(filter(lambda x: is_device_compatible(x), render_devices))
if len(render_devices) == 0:
raise Exception('Sorry, none of the render_devices configured in config.json are compatible with Stable Diffusion')
else:
raise Exception('Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}')
else:
render_devices = ['auto']
if 'auto' in render_devices:
render_devices = auto_pick_devices(active_devices)
if 'cpu' in render_devices:
print('WARNING: Could not find a compatible GPU. Using the CPU, but this will be very slow!')
active_devices = set(active_devices)
render_devices = set(render_devices)
devices_to_start = render_devices - active_devices
devices_to_stop = active_devices - render_devices
return devices_to_start, devices_to_stop
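The delta itself is plain set arithmetic; a minimal, self-contained sketch of the same difference (device ids are illustrative):
# Illustrative only: the set difference get_device_delta() performs after
# render_devices has been normalised and filtered.
render_devices = {'cuda:1'}                          # what config.json now asks for
active_devices = {'cuda:0', 'cuda:1'}                # what is currently running
devices_to_start = render_devices - active_devices   # set(): cuda:1 is already active
devices_to_stop = active_devices - render_devices    # {'cuda:0'}: no longer wanted
print(devices_to_start, devices_to_stop)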
def auto_pick_devices(currently_active_devices):
global mem_free_threshold
if not torch.cuda.is_available(): return ['cpu']
device_count = torch.cuda.device_count()
if device_count == 1:
return ['cuda:0'] if is_device_compatible('cuda:0') else ['cpu']
print('Autoselecting GPU. Using most free memory.')
devices = []
for device in range(device_count):
device = f'cuda:{device}'
if not is_device_compatible(device):
continue
mem_free, mem_total = torch.cuda.mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
device_name = torch.cuda.get_device_name(device)
print(f'{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}GB / {round(mem_total, 2)}GB')
devices.append({'device': device, 'device_name': device_name, 'mem_free': mem_free})
devices.sort(key=lambda x:x['mem_free'], reverse=True)
max_mem_free = devices[0]['mem_free']
curr_mem_free_threshold = COMPARABLE_GPU_PERCENTILE * max_mem_free
mem_free_threshold = max(curr_mem_free_threshold, mem_free_threshold)
# Auto-pick algorithm:
# 1. Pick the GPUs whose free_mem is above COMPARABLE_GPU_PERCENTILE (65%) of the freest GPU's free_mem.
# 2. Also include already-running devices (GPU-only), otherwise their free_mem will
# always be very low (since their VRAM contains the model).
# These already-running devices probably aren't terrible, since they were picked in the past.
# Worst case, the user can restart the program and that'll get rid of them.
devices = list(filter((lambda x: x['mem_free'] > mem_free_threshold or x['device'] in currently_active_devices), devices))
devices = list(map(lambda x: x['device'], devices))
return devices
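With COMPARABLE_GPU_PERCENTILE = 0.65, a hedged worked example of the filter (free-memory numbers are illustrative):
# Illustrative numbers: mirrors the free-memory filter in auto_pick_devices().
COMPARABLE_GPU_PERCENTILE = 0.65
devices = [
    {'device': 'cuda:0', 'mem_free': 10.0},  # freest GPU, 10 GB free
    {'device': 'cuda:1', 'mem_free': 7.2},
    {'device': 'cuda:2', 'mem_free': 3.1},
]
threshold = COMPARABLE_GPU_PERCENTILE * devices[0]['mem_free']  # 6.5 GB
picked = [d['device'] for d in devices if d['mem_free'] > threshold]
print(picked)  # ['cuda:0', 'cuda:1'] - cuda:2 falls below the threshold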
def device_init(thread_data, device):
'''
This function assumes the 'device' has already been verified to be compatible.
`get_device_delta()` has already filtered out incompatible devices.
'''
validate_device_id(device, log_prefix='device_init')
if device == 'cpu':
thread_data.device = 'cpu'
thread_data.device_name = get_processor_name()
print('Render device CPU available as', thread_data.device_name)
return
thread_data.device_name = torch.cuda.get_device_name(device)
thread_data.device = device
# Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
device_name = thread_data.device_name.lower()
thread_data.force_full_precision = ('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)
if thread_data.force_full_precision:
print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', thread_data.device_name)
# Apply force_full_precision now before models are loaded.
thread_data.precision = 'full'
print(f'Setting {device} as active')
torch.cuda.set_device(device) # torch.cuda.device() is only a context manager; set_device() actually switches the active device
return
def validate_device_id(device, log_prefix=''):
def is_valid():
if not isinstance(device, str):
return False
if device == 'cpu':
return True
if not device.startswith('cuda:') or not device[5:].isnumeric():
return False
return True
if not is_valid():
raise EnvironmentError(f"{log_prefix}: device id should be 'cpu', or 'cuda:N' (where N is an integer index for the GPU). Got: {device}")
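A quick sketch of which ids pass validation (device ids are illustrative):
# Illustrative: 'cpu' and 'cuda:N' pass; anything else raises EnvironmentError.
for device_id in ('cpu', 'cuda:0', 'cuda:12'):
    validate_device_id(device_id, log_prefix='example')  # no exception
try:
    validate_device_id('cuda', log_prefix='example')     # missing the ':N' index
except EnvironmentError as e:
    print(e)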
def is_device_compatible(device):
'''
Returns True/False, and prints any compatibility errors
'''
try:
validate_device_id(device, log_prefix='is_device_compatible')
except EnvironmentError as e:
print(str(e))
return False
if device == 'cpu': return True
# Memory check
try:
_, mem_total = torch.cuda.mem_get_info(device)
mem_total /= float(10**9)
if mem_total < 3.0:
print(f'GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion')
return False
except RuntimeError as e:
print(str(e))
return False
return True
def get_processor_name():
try:
import platform, subprocess
if platform.system() == "Windows":
return platform.processor()
elif platform.system() == "Darwin":
os.environ['PATH'] = os.environ['PATH'] + os.pathsep + '/usr/sbin'
command = "sysctl -n machdep.cpu.brand_string"
return subprocess.check_output(command, shell=True).decode().strip()
elif platform.system() == "Linux":
command = "cat /proc/cpuinfo"
all_info = subprocess.check_output(command, shell=True).decode().strip()
for line in all_info.split("\n"):
if "model name" in line:
return re.sub(".*model name.*:", "", line, 1).strip()
except:
print(traceback.format_exc())
return "cpu"

View File

@@ -1,8 +1,15 @@
"""runtime.py: torch device owned by a thread.
Notes:
Avoid device switching: transferring all the models would get too complex.
To use a different device, signal the current render thread to exit,
and then start a new clean thread for the new device.
"""
import json
import os, re
import traceback
import torch
import numpy as np
from gc import collect as gc_collect
from omegaconf import OmegaConf
from PIL import Image, ImageOps
from tqdm import tqdm, trange
@@ -28,70 +35,64 @@ logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
filename_regex = re.compile('[^a-zA-Z0-9]')
force_gfpgan_to_cuda0 = True # workaround: gfpgan currently works only on cuda:0
# api stuff
from sd_internal import device_manager
from . import Request, Response, Image as ResponseImage
import base64
from io import BytesIO
#from colorama import Fore
# local
stop_processing = False
temp_images = {}
from threading import local as LocalThreadVars
thread_data = LocalThreadVars()
ckpt_file = None
gfpgan_file = None
real_esrgan_file = None
def thread_init(device):
# Thread bound properties
thread_data.stop_processing = False
thread_data.temp_images = {}
model = None
modelCS = None
modelFS = None
model_gfpgan = None
model_real_esrgan = None
thread_data.ckpt_file = None
thread_data.vae_file = None
thread_data.gfpgan_file = None
thread_data.real_esrgan_file = None
model_is_half = False
model_fs_is_half = False
device = None
unet_bs = 1
precision = 'autocast'
sampler_plms = None
sampler_ddim = None
thread_data.model = None
thread_data.modelCS = None
thread_data.modelFS = None
thread_data.model_gfpgan = None
thread_data.model_real_esrgan = None
has_valid_gpu = False
force_full_precision = False
try:
gpu = torch.cuda.current_device()
gpu_name = torch.cuda.get_device_name(gpu)
print('GPU detected: ', gpu_name)
thread_data.model_is_half = False
thread_data.model_fs_is_half = False
thread_data.device = None
thread_data.device_name = None
thread_data.unet_bs = 1
thread_data.precision = 'autocast'
thread_data.sampler_plms = None
thread_data.sampler_ddim = None
force_full_precision = ('nvidia' in gpu_name.lower() or 'geforce' in gpu_name.lower()) and (' 1660' in gpu_name or ' 1650' in gpu_name) # otherwise these NVIDIA cards create green images
if force_full_precision:
print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', gpu_name)
thread_data.turbo = False
thread_data.force_full_precision = False
thread_data.reduced_memory = True
mem_free, mem_total = torch.cuda.mem_get_info(gpu)
mem_total /= float(10**9)
if mem_total < 3.0:
print("GPUs with less than 3 GB of VRAM are not compatible with Stable Diffusion")
raise Exception()
device_manager.device_init(thread_data, device)
has_valid_gpu = True
except:
print('WARNING: No compatible GPU found. Using the CPU, but this will be very slow!')
pass
def load_model_ckpt():
if not thread_data.ckpt_file: raise ValueError(f'Thread ckpt_file is undefined.')
if not os.path.exists(thread_data.ckpt_file + '.ckpt'): raise FileNotFoundError(f'Cannot find {thread_data.ckpt_file}.ckpt')
def load_model_ckpt(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_use=1, precision_to_use='autocast'):
global ckpt_file, model, modelCS, modelFS, model_is_half, device, unet_bs, precision, model_fs_is_half
if not thread_data.precision:
thread_data.precision = 'full' if thread_data.force_full_precision else 'autocast'
device = device_to_use if has_valid_gpu else 'cpu'
precision = precision_to_use if not force_full_precision else 'full'
unet_bs = unet_bs_to_use
if not thread_data.unet_bs:
thread_data.unet_bs = 1
unload_model()
if thread_data.device == 'cpu':
thread_data.precision = 'full'
if device == 'cpu':
precision = 'full'
sd = load_model_from_config(f"{ckpt_to_use}.ckpt")
print('loading', thread_data.ckpt_file + '.ckpt', 'to device', thread_data.device, 'using precision', thread_data.precision)
sd = load_model_from_config(thread_data.ckpt_file + '.ckpt')
li, lo = [], []
for key, value in sd.items():
sp = key.split(".")
@@ -114,114 +115,205 @@ def load_model_ckpt(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_u
model = instantiate_from_config(config.modelUNet)
_, _ = model.load_state_dict(sd, strict=False)
model.eval()
model.cdevice = device
model.unet_bs = unet_bs
model.turbo = turbo
model.cdevice = torch.device(thread_data.device)
model.unet_bs = thread_data.unet_bs
model.turbo = thread_data.turbo
# if thread_data.device != 'cpu':
# model.to(thread_data.device)
#if thread_data.reduced_memory:
#model.model1.to("cpu")
#model.model2.to("cpu")
thread_data.model = model
modelCS = instantiate_from_config(config.modelCondStage)
_, _ = modelCS.load_state_dict(sd, strict=False)
modelCS.eval()
modelCS.cond_stage_model.device = device
modelCS.cond_stage_model.device = torch.device(thread_data.device)
# if thread_data.device != 'cpu':
# if thread_data.reduced_memory:
# modelCS.to('cpu')
# else:
# modelCS.to(thread_data.device) # Preload on device if not already there.
thread_data.modelCS = modelCS
modelFS = instantiate_from_config(config.modelFirstStage)
_, _ = modelFS.load_state_dict(sd, strict=False)
if thread_data.vae_file is not None:
for model_extension in ['.ckpt', '.vae.pt']:
if os.path.exists(thread_data.vae_file + model_extension):
print(f"Loading VAE weights from: {thread_data.vae_file}{model_extension}")
vae_ckpt = torch.load(thread_data.vae_file + model_extension, map_location="cpu")
vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
modelFS.first_stage_model.load_state_dict(vae_dict, strict=False)
break
else:
print(f'Cannot find VAE file: {thread_data.vae_file}{model_extension}')
modelFS.eval()
# if thread_data.device != 'cpu':
# if thread_data.reduced_memory:
# modelFS.to('cpu')
# else:
# modelFS.to(thread_data.device) # Preload on device if not already there.
thread_data.modelFS = modelFS
del sd
if device != "cpu" and precision == "autocast":
model.half()
modelCS.half()
modelFS.half()
model_is_half = True
model_fs_is_half = True
if thread_data.device != "cpu" and thread_data.precision == "autocast":
thread_data.model.half()
thread_data.modelCS.half()
thread_data.modelFS.half()
thread_data.model_is_half = True
thread_data.model_fs_is_half = True
else:
model_is_half = False
model_fs_is_half = False
thread_data.model_is_half = False
thread_data.model_fs_is_half = False
ckpt_file = ckpt_to_use
print(f'''loaded model
model file: {thread_data.ckpt_file}.ckpt
model.device: {model.device}
modelCS.device: {modelCS.cond_stage_model.device}
modelFS.device: {thread_data.modelFS.device}
using precision: {thread_data.precision}''')
print('loaded ', ckpt_file, 'to', device, 'precision', precision)
def unload_filters():
if thread_data.model_gfpgan is not None:
if thread_data.device != 'cpu': thread_data.model_gfpgan.gfpgan.to('cpu')
def unload_model():
global model, modelCS, modelFS
del thread_data.model_gfpgan
thread_data.model_gfpgan = None
if model is not None:
del model
del modelCS
del modelFS
if thread_data.model_real_esrgan is not None:
if thread_data.device != 'cpu': thread_data.model_real_esrgan.model.to('cpu')
model = None
modelCS = None
modelFS = None
del thread_data.model_real_esrgan
thread_data.model_real_esrgan = None
def load_model_gfpgan(gfpgan_to_use):
global gfpgan_file, model_gfpgan
gc()
if gfpgan_to_use is None:
return
def unload_models():
if thread_data.model is not None:
print('Unloading models...')
if thread_data.device != 'cpu':
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
gfpgan_file = gfpgan_to_use
model_path = gfpgan_to_use + ".pth"
del thread_data.model
del thread_data.modelCS
del thread_data.modelFS
if device == 'cpu':
model_gfpgan = GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=torch.device('cpu'))
else:
model_gfpgan = GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=torch.device('cuda'))
thread_data.model = None
thread_data.modelCS = None
thread_data.modelFS = None
print('loaded ', gfpgan_to_use, 'to', device, 'precision', precision)
gc()
def load_model_real_esrgan(real_esrgan_to_use):
global real_esrgan_file, model_real_esrgan
def wait_model_move_to(model, target_device): # Send to target_device and wait until complete.
if thread_data.device == target_device: return
start_mem = torch.cuda.memory_allocated(thread_data.device) / 1e6
if start_mem <= 0: return
model_name = model.__class__.__name__
print(f'Device {thread_data.device} - Sending model {model_name} to {target_device} | Memory transfer starting. Memory Used: {round(start_mem)}MB')
start_time = time.time()
model.to(target_device)
time_step = start_time
WARNING_TIMEOUT = 1.5 # seconds - Show activity in console after timeout.
last_mem = start_mem
is_transferring = True
while is_transferring:
time.sleep(0.5) # 500ms
mem = torch.cuda.memory_allocated(thread_data.device) / 1e6
is_transferring = bool(mem > 0 and mem < last_mem) # still stuff loaded, but less than last time.
last_mem = mem
if not is_transferring:
break
if time.time() - time_step > WARNING_TIMEOUT: # Long delay, print to console to show activity.
print(f'Device {thread_data.device} - Waiting for Memory transfer. Memory Used: {round(mem)}MB, Transferred: {round(start_mem - mem)}MB')
time_step = time.time()
print(f'Device {thread_data.device} - {model_name} Moved: {round(start_mem - last_mem)}MB in {round(time.time() - start_time, 3)} seconds to {target_device}')
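The caller treats the move as synchronous; a hedged usage sketch (assumes a CUDA render thread with modelFS loaded):
# Sketch only: free VRAM before a long-running step by pushing the first-stage
# model to system RAM and blocking until the transfer has finished.
wait_model_move_to(thread_data.modelFS, 'cpu')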
if real_esrgan_to_use is None:
return
def load_model_gfpgan():
if thread_data.gfpgan_file is None: raise ValueError(f'Thread gfpgan_file is undefined.')
model_path = thread_data.gfpgan_file + ".pth"
device = 'cuda:0' if force_gfpgan_to_cuda0 else thread_data.device
thread_data.model_gfpgan = GFPGANer(device=torch.device(device), model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
print('loaded', thread_data.gfpgan_file, 'to', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
real_esrgan_file = real_esrgan_to_use
model_path = real_esrgan_to_use + ".pth"
def load_model_real_esrgan():
if thread_data.real_esrgan_file is None: raise ValueError(f'Thread real_esrgan_file is undefined.')
model_path = thread_data.real_esrgan_file + ".pth"
RealESRGAN_models = {
'RealESRGAN_x4plus': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4),
'RealESRGAN_x4plus_anime_6B': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
}
model_to_use = RealESRGAN_models[real_esrgan_to_use]
model_to_use = RealESRGAN_models[thread_data.real_esrgan_file]
if device == 'cpu':
model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=False) # cpu does not support half
model_real_esrgan.device = torch.device('cpu')
model_real_esrgan.model.to('cpu')
if thread_data.device == 'cpu':
thread_data.model_real_esrgan = RealESRGANer(device=torch.device(thread_data.device), scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=False) # cpu does not support half
#thread_data.model_real_esrgan.device = torch.device(thread_data.device)
thread_data.model_real_esrgan.model.to('cpu')
else:
model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=model_is_half)
thread_data.model_real_esrgan = RealESRGANer(device=torch.device(thread_data.device), scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=thread_data.model_is_half)
model_real_esrgan.model.name = real_esrgan_to_use
thread_data.model_real_esrgan.model.name = thread_data.real_esrgan_file
print('loaded ', thread_data.real_esrgan_file, 'to', thread_data.model_real_esrgan.device, 'precision', thread_data.precision)
print('loaded ', real_esrgan_to_use, 'to', device, 'precision', precision)
def get_session_out_path(disk_path, session_id):
if disk_path is None: return None
if session_id is None: return None
session_out_path = os.path.join(disk_path, filename_regex.sub('_',session_id))
os.makedirs(session_out_path, exist_ok=True)
return session_out_path
def get_base_path(disk_path, session_id, prompt, img_id, ext, suffix=None):
if disk_path is None: return None
if session_id is None: return None
if ext is None: raise Exception('Missing ext')
session_out_path = os.path.join(disk_path, session_id)
os.makedirs(session_out_path, exist_ok=True)
session_out_path = get_session_out_path(disk_path, session_id)
prompt_flattened = filename_regex.sub('_', prompt)[:50]
if suffix is not None:
return os.path.join(session_out_path, f"{prompt_flattened}_{img_id}_{suffix}.{ext}")
return os.path.join(session_out_path, f"{prompt_flattened}_{img_id}.{ext}")
def apply_filters(filter_name, image_data):
def apply_filters(filter_name, image_data, model_path=None):
print(f'Applying filter {filter_name}...')
gc()
gc() # Free space before loading new data.
if filter_name == 'gfpgan':
_, _, output = model_gfpgan.enhance(image_data[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
if isinstance(image_data, torch.Tensor):
image_data = image_data.to('cuda:0' if force_gfpgan_to_cuda0 else thread_data.device) # Tensor.to() is not in-place
if model_path is not None and model_path != thread_data.gfpgan_file:
thread_data.gfpgan_file = model_path
load_model_gfpgan()
elif not thread_data.model_gfpgan:
load_model_gfpgan()
if thread_data.model_gfpgan is None: raise Exception('Model "gfpgan" not loaded.')
print('enhance with', thread_data.gfpgan_file, 'on', thread_data.model_gfpgan.device, 'precision', thread_data.precision)
_, _, output = thread_data.model_gfpgan.enhance(image_data[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
image_data = output[:,:,::-1]
if filter_name == 'real_esrgan':
output, _ = model_real_esrgan.enhance(image_data[:,:,::-1])
if isinstance(image_data, torch.Tensor):
image_data = image_data.to(thread_data.device) # Tensor.to() is not in-place
if model_path is not None and model_path != thread_data.real_esrgan_file:
thread_data.real_esrgan_file = model_path
load_model_real_esrgan()
elif not thread_data.model_real_esrgan:
load_model_real_esrgan()
if thread_data.model_real_esrgan is None: raise Exception('Model "real_esrgan" not loaded.')
print('enhance with', thread_data.real_esrgan_file, 'on', thread_data.model_real_esrgan.device, 'precision', thread_data.precision)
output, _ = thread_data.model_real_esrgan.enhance(image_data[:,:,::-1])
image_data = output[:,:,::-1]
return image_data
@@ -232,83 +324,102 @@ def mk_img(req: Request):
except Exception as e:
print(traceback.format_exc())
gc()
if device != "cpu":
modelFS.to("cpu")
modelCS.to("cpu")
model.model1.to("cpu")
model.model2.to("cpu")
gc()
if thread_data.device != 'cpu':
thread_data.modelFS.to('cpu')
thread_data.modelCS.to('cpu')
thread_data.model.model1.to("cpu")
thread_data.model.model2.to("cpu")
gc() # Release from memory.
yield json.dumps({
"status": 'failed',
"detail": str(e)
})
def do_mk_img(req: Request):
global ckpt_file
global model, modelCS, modelFS, device
global model_gfpgan, model_real_esrgan
global stop_processing
def update_temp_img(req, x_samples):
partial_images = []
for i in range(req.num_outputs):
x_sample_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_sample_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
buf = BytesIO()
img.save(buf, format='JPEG')
buf.seek(0)
stop_processing = False
del img, x_sample, x_sample_ddim
# don't delete x_samples, it is used in the code that called this callback
thread_data.temp_images[str(req.session_id) + '/' + str(i)] = buf
partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
return partial_images
# Build and return the appropriate generator for do_mk_img
def get_image_progress_generator(req, extra_props=None):
if not req.stream_progress_updates:
def empty_callback(x_samples, i): return x_samples
return empty_callback
thread_data.partial_x_samples = None
last_callback_time = -1
def img_callback(x_samples, i):
nonlocal last_callback_time
thread_data.partial_x_samples = x_samples
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "step_time": step_time}
if extra_props is not None:
progress.update(extra_props)
if req.stream_image_progress and i % 5 == 0:
progress['output'] = update_temp_img(req, x_samples)
yield json.dumps(progress)
if thread_data.stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
return img_callback
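A minimal sketch of how a sampler loop would drive the returned callback; run_sampler and sampler_steps() are hypothetical stand-ins for the real sampler plumbing:
# Illustrative only: when stream_progress_updates is set, img_callback is a
# generator that yields JSON progress strings, so the sampler re-yields from it.
def run_sampler(req):
    img_callback = get_image_progress_generator(req, {'total_steps': 25})
    for step, x_samples in enumerate(sampler_steps()):  # hypothetical sampler loop
        yield from img_callback(x_samples, step)        # streams {"step": ..., "step_time": ...}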
def do_mk_img(req: Request):
thread_data.stop_processing = False
res = Response()
res.request = req
res.images = []
temp_images.clear()
thread_data.temp_images.clear()
# custom model support:
# the req.use_stable_diffusion_model needs to be a valid path
# to the ckpt file (without the extension).
if not os.path.exists(req.use_stable_diffusion_model + '.ckpt'): raise FileNotFoundError(f'Cannot find {req.use_stable_diffusion_model}.ckpt')
needs_model_reload = False
ckpt_to_use = ckpt_file
if ckpt_to_use != req.use_stable_diffusion_model:
ckpt_to_use = req.use_stable_diffusion_model
if not thread_data.model or thread_data.ckpt_file != req.use_stable_diffusion_model or thread_data.vae_file != req.use_vae_model:
thread_data.ckpt_file = req.use_stable_diffusion_model
thread_data.vae_file = req.use_vae_model
needs_model_reload = True
model.turbo = req.turbo
if req.use_cpu:
if device != 'cpu':
device = 'cpu'
if model_is_half:
load_model_ckpt(ckpt_to_use, device)
needs_model_reload = False
load_model_gfpgan(gfpgan_file)
load_model_real_esrgan(real_esrgan_file)
else:
if has_valid_gpu:
prev_device = device
device = 'cuda'
if (precision == 'autocast' and (req.use_full_precision or not model_is_half)) or \
(precision == 'full' and not req.use_full_precision and not force_full_precision):
load_model_ckpt(ckpt_to_use, device, req.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'))
needs_model_reload = False
if prev_device != device:
load_model_gfpgan(gfpgan_file)
load_model_real_esrgan(real_esrgan_file)
if thread_data.device != 'cpu':
if (thread_data.precision == 'autocast' and (req.use_full_precision or not thread_data.model_is_half)) or \
(thread_data.precision == 'full' and not req.use_full_precision and not thread_data.force_full_precision):
thread_data.precision = 'full' if req.use_full_precision else 'autocast'
needs_model_reload = True
if needs_model_reload:
load_model_ckpt(ckpt_to_use, device, req.turbo, unet_bs, precision)
unload_models()
unload_filters()
load_model_ckpt()
if req.use_face_correction != gfpgan_file:
load_model_gfpgan(req.use_face_correction)
if thread_data.turbo != req.turbo:
thread_data.turbo = req.turbo
thread_data.model.turbo = req.turbo
if req.use_upscale != real_esrgan_file:
load_model_real_esrgan(req.use_upscale)
model.cdevice = device
modelCS.cond_stage_model.device = device
# Start by cleaning memory, loading and unloading things can leave memory allocated.
gc()
opt_prompt = req.prompt
opt_seed = req.seed
@@ -316,11 +427,9 @@ def do_mk_img(req: Request):
opt_C = 4
opt_f = 8
opt_ddim_eta = 0.0
opt_init_img = req.init_image
print(req.to_string(), '\n device', device)
print('\n\n Using precision:', precision)
print(req, '\n device', torch.device(thread_data.device), "as", thread_data.device_name)
print('\n\n Using precision:', thread_data.precision)
seed_everything(opt_seed)
@@ -329,7 +438,7 @@ def do_mk_img(req: Request):
assert prompt is not None
data = [batch_size * [prompt]]
if precision == "autocast" and device != "cpu":
if thread_data.precision == "autocast" and thread_data.device != "cpu":
precision_scope = autocast
else:
precision_scope = nullcontext
@@ -345,46 +454,46 @@ def do_mk_img(req: Request):
handler = _img2img
init_image = load_img(req.init_image, req.width, req.height)
init_image = init_image.to(device)
init_image = init_image.to(thread_data.device)
if device != "cpu" and precision == "autocast":
if thread_data.device != "cpu" and thread_data.precision == "autocast":
init_image = init_image.half()
modelFS.to(device)
thread_data.modelFS.to(thread_data.device)
init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image)) # move to latent space
init_latent = thread_data.modelFS.get_first_stage_encoding(thread_data.modelFS.encode_first_stage(init_image)) # move to latent space
if req.mask is not None:
mask = load_mask(req.mask, req.width, req.height, init_latent.shape[2], init_latent.shape[3], True).to(device)
mask = load_mask(req.mask, req.width, req.height, init_latent.shape[2], init_latent.shape[3], True).to(thread_data.device)
mask = mask[0][0].unsqueeze(0).repeat(4, 1, 1).unsqueeze(0)
mask = repeat(mask, '1 ... -> b ...', b=batch_size)
if device != "cpu" and precision == "autocast":
if thread_data.device != "cpu" and thread_data.precision == "autocast":
mask = mask.half()
move_fs_to_cpu()
# Send to CPU and wait until complete.
wait_model_move_to(thread_data.modelFS, 'cpu')
assert 0. <= req.prompt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
t_enc = int(req.prompt_strength * req.num_inference_steps)
print(f"target t_enc is {t_enc} steps")
if req.save_to_disk_path is not None:
session_out_path = os.path.join(req.save_to_disk_path, req.session_id)
os.makedirs(session_out_path, exist_ok=True)
session_out_path = get_session_out_path(req.save_to_disk_path, req.session_id)
else:
session_out_path = None
seeds = ""
with torch.no_grad():
for n in trange(opt_n_iter, desc="Sampling"):
for prompts in tqdm(data, desc="data"):
with precision_scope("cuda"):
modelCS.to(device)
if thread_data.reduced_memory:
thread_data.modelCS.to(thread_data.device)
uc = None
if req.guidance_scale != 1.0:
uc = modelCS.get_learned_conditioning(batch_size * [req.negative_prompt])
uc = thread_data.modelCS.get_learned_conditioning(batch_size * [req.negative_prompt])
if isinstance(prompts, tuple):
prompts = list(prompts)
@@ -397,85 +506,65 @@ def do_mk_img(req: Request):
weight = weights[i]
# if not skip_normalize:
weight = weight / totalWeight
c = torch.add(c, modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
c = torch.add(c, thread_data.modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = modelCS.get_learned_conditioning(prompts)
c = thread_data.modelCS.get_learned_conditioning(prompts)
modelFS.to(device)
if thread_data.reduced_memory:
thread_data.modelFS.to(thread_data.device)
partial_x_samples = None
last_callback_time = -1
def img_callback(x_samples, i):
nonlocal partial_x_samples, last_callback_time
partial_x_samples = x_samples
if req.stream_progress_updates:
n_steps = req.num_inference_steps if req.init_image is None else t_enc
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "total_steps": n_steps, "step_time": step_time}
if req.stream_image_progress and i % 5 == 0:
partial_images = []
for i in range(batch_size):
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
buf = BytesIO()
img.save(buf, format='JPEG')
buf.seek(0)
del img, x_sample, x_samples_ddim
# don't delete x_samples, it is used in the code that called this callback
temp_images[str(req.session_id) + '/' + str(i)] = buf
partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
progress['output'] = partial_images
yield json.dumps(progress)
if stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
n_steps = req.num_inference_steps if req.init_image is None else t_enc
img_callback = get_image_progress_generator(req, {"total_steps": n_steps})
# run the handler
try:
print('Running handler...')
if handler == _txt2img:
x_samples = _txt2img(req.width, req.height, req.num_outputs, req.num_inference_steps, req.guidance_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, req.sampler)
else:
x_samples = _img2img(init_latent, t_enc, batch_size, req.guidance_scale, c, uc, req.num_inference_steps, opt_ddim_eta, opt_seed, img_callback, mask)
yield from x_samples
x_samples = partial_x_samples
if req.stream_progress_updates:
yield from x_samples
if hasattr(thread_data, 'partial_x_samples'):
if thread_data.partial_x_samples is not None:
x_samples = thread_data.partial_x_samples
del thread_data.partial_x_samples
except UserInitiatedStop:
if partial_x_samples is None:
if not hasattr(thread_data, 'partial_x_samples'):
continue
if thread_data.partial_x_samples is None:
del thread_data.partial_x_samples
continue
x_samples = thread_data.partial_x_samples
del thread_data.partial_x_samples
x_samples = partial_x_samples
print("saving images")
print("decoding images")
img_data = [None] * batch_size
for i in range(batch_size):
img_id = base64.b64encode(int(time.time()+i).to_bytes(8, 'big')).decode() # Generate unique ID based on time.
img_id = img_id.translate({43:None, 47:None, 61:None})[-8:] # Remove + / = and keep last 8 chars.
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_samples_ddim = thread_data.modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
img_data[i] = x_sample
del x_samples, x_samples_ddim, x_sample
if thread_data.reduced_memory:
# Send to CPU and wait until complete.
wait_model_move_to(thread_data.modelFS, 'cpu')
print("saving images")
for i in range(batch_size):
img = Image.fromarray(img_data[i])
img_id = base64.b64encode(int(time.time()+i).to_bytes(8, 'big')).decode() # Generate unique ID based on time.
img_id = img_id.translate({43:None, 47:None, 61:None})[-8:] # Remove + / = and keep last 8 chars.
has_filters = (req.use_face_correction is not None and req.use_face_correction.startswith('GFPGAN')) or \
(req.use_upscale is not None and req.use_upscale.startswith('RealESRGAN'))
return_orig_img = not has_filters or not req.show_only_filtered_image
if stop_processing:
if thread_data.stop_processing:
return_orig_img = True
if req.save_to_disk_path is not None:
@@ -486,25 +575,24 @@ def do_mk_img(req: Request):
save_metadata(meta_out_path, req, prompts[0], opt_seed)
if return_orig_img:
img_data = img_to_base64_str(img, req.output_format)
res_image_orig = ResponseImage(data=img_data, seed=opt_seed)
img_str = img_to_base64_str(img, req.output_format)
res_image_orig = ResponseImage(data=img_str, seed=opt_seed)
res.images.append(res_image_orig)
if req.save_to_disk_path is not None:
res_image_orig.path_abs = img_out_path
del img
if has_filters and not stop_processing:
if has_filters and not thread_data.stop_processing:
filters_applied = []
if req.use_face_correction:
x_sample = apply_filters('gfpgan', x_sample)
img_data[i] = apply_filters('gfpgan', img_data[i], req.use_face_correction)
filters_applied.append(req.use_face_correction)
if req.use_upscale:
x_sample = apply_filters('real_esrgan', x_sample)
img_data[i] = apply_filters('real_esrgan', img_data[i], req.use_upscale)
filters_applied.append(req.use_upscale)
if (len(filters_applied) > 0):
filtered_image = Image.fromarray(x_sample)
filtered_image = Image.fromarray(img_data[i])
filtered_img_data = img_to_base64_str(filtered_image, req.output_format)
response_image = ResponseImage(data=filtered_img_data, seed=opt_seed)
res.images.append(response_image)
@@ -513,17 +601,17 @@ def do_mk_img(req: Request):
save_image(filtered_image, filtered_img_out_path)
response_image.path_abs = filtered_img_out_path
del filtered_image
seeds += str(opt_seed) + ","
# Filter Applied, move to next seed
opt_seed += 1
move_fs_to_cpu()
# if thread_data.reduced_memory:
# unload_filters()
del img_data
gc()
del x_samples, x_samples_ddim, x_sample
print("memory_final = ", torch.cuda.memory_allocated() / 1e6)
if thread_data.device != 'cpu':
print(f'memory_final = {round(torch.cuda.memory_allocated(thread_data.device) / 1e6, 2)}MB')
print('Task completed')
yield json.dumps(res.json())
def save_image(img, img_out_path):
@@ -533,7 +621,7 @@ def save_image(img, img_out_path):
print('could not save the file', traceback.format_exc())
def save_metadata(meta_out_path, req, prompt, opt_seed):
metadata = f"""{prompt}
metadata = f'''{prompt}
Width: {req.width}
Height: {req.height}
Seed: {opt_seed}
@@ -544,8 +632,9 @@ Use Face Correction: {req.use_face_correction}
Use Upscaling: {req.use_upscale}
Sampler: {req.sampler}
Negative Prompt: {req.negative_prompt}
Stable Diffusion Model: {req.use_stable_diffusion_model + '.ckpt'}
"""
Stable Diffusion model: {req.use_stable_diffusion_model + '.ckpt'}
VAE model: {req.use_vae_model}
'''
try:
with open(meta_out_path, 'w', encoding='utf-8') as f:
f.write(metadata)
@@ -555,16 +644,13 @@ Stable Diffusion Model: {req.use_stable_diffusion_model + '.ckpt'}
def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, sampler_name):
shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelCS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
# Send to CPU and wait until complete.
wait_model_move_to(thread_data.modelCS, 'cpu')
if sampler_name == 'ddim':
model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
thread_data.model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
samples_ddim = model.sample(
samples_ddim = thread_data.model.sample(
S=opt_ddim_steps,
conditioning=c,
seed=opt_seed,
@@ -578,14 +664,13 @@ def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code,
mask=mask,
sampler = sampler_name,
)
yield from samples_ddim
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask):
# encode (scaled latent)
z_enc = model.stochastic_encode(
z_enc = thread_data.model.stochastic_encode(
init_latent,
torch.tensor([t_enc] * batch_size).to(device),
torch.tensor([t_enc] * batch_size).to(thread_data.device),
opt_seed,
opt_ddim_eta,
opt_ddim_steps,
@@ -593,7 +678,7 @@ def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, o
x_T = None if mask is None else init_latent
# decode it
samples_ddim = model.sample(
samples_ddim = thread_data.model.sample(
t_enc,
c,
z_enc,
@@ -604,20 +689,12 @@ def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, o
x_T=x_T,
sampler = 'ddim'
)
yield from samples_ddim
def move_fs_to_cpu():
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelFS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
def gc():
if device == 'cpu':
gc_collect()
if thread_data.device == 'cpu':
return
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
@@ -627,7 +704,6 @@ def chunk(it, size):
it = iter(it)
return iter(lambda: tuple(islice(it, size)), ())
def load_model_from_config(ckpt, verbose=False):
print(f"Loading model from {ckpt}")
pl_sd = torch.load(ckpt, map_location="cpu")
@@ -689,10 +765,14 @@ def img_to_base64_str(img, output_format="PNG"):
img_str = f"data:{mime_type};base64," + base64.b64encode(img_byte).decode()
return img_str
def base64_str_to_img(img_str):
def base64_str_to_buffer(img_str):
mime_type = "image/png" if img_str.startswith("data:image/png;") else "image/jpeg"
img_str = img_str[len(f"data:{mime_type};base64,"):]
data = base64.b64decode(img_str)
buffered = BytesIO(data)
return buffered
def base64_str_to_img(img_str):
buffered = base64_str_to_buffer(img_str)
img = Image.open(buffered)
return img
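A round-trip sketch using these helpers (the generated image is illustrative):
# Illustrative round trip: encode a PIL image to a base64 data URL, decode it back.
from PIL import Image
img = Image.new('RGB', (64, 64), 'black')
img_str = img_to_base64_str(img, output_format='PNG')  # "data:image/png;base64,..."
img_again = base64_str_to_img(img_str)                 # a PIL.Image once more
assert img_again.size == (64, 64)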

View File

@@ -1,13 +1,27 @@
"""task_manager.py: manage tasks dispatching and render threads.
Notes:
render_threads should be the only hard reference held by the manager to the threads.
Use weak_thread_data to store all other data using weak keys.
This will allow for garbage collection after the thread dies.
"""
import json
import traceback
TASK_TTL = 15 * 60 # seconds - discard the last session's task after this timeout
import queue, threading, time
import torch
import queue, threading, time, weakref
from typing import Any, Generator, Hashable, Optional, Union
from pydantic import BaseModel
from sd_internal import Request, Response
from sd_internal import Request, Response, runtime, device_manager
THREAD_NAME_PREFIX = 'Runtime-Render/'
ERR_LOCK_FAILED = ' failed to acquire lock within timeout.'
LOCK_TIMEOUT = 15 # Maximum locking time in seconds before failing a task.
# It's better to get an exception than a deadlock... ALWAYS use timeout in critical paths.
DEVICE_START_TIMEOUT = 60 # seconds - Maximum time to wait for a render device to init.
class SymbolClass(type): # Print nicely formatted Symbol names.
def __repr__(self): return self.__qualname__
@@ -25,7 +39,8 @@ class RenderTask(): # Task with output queue and completion lock.
def __init__(self, req: Request):
self.request: Request = req # Initial Request
self.response: Any = None # Copy of the last response
self.temp_images:[] = [None] * req.num_outputs * (1 if req.show_only_filtered_image else 2)
self.render_device = None # Select the task affinity. (Not used to change active devices).
self.temp_images:list = [None] * req.num_outputs * (1 if req.show_only_filtered_image else 2)
self.error: Exception = None
self.lock: threading.Lock = threading.Lock() # Locks at task start and unlocks when task is completed
self.buffer_queue: queue.Queue = queue.Queue() # Queue of JSON string segments
@@ -55,28 +70,43 @@ class ImageRequest(BaseModel):
# allow_nsfw: bool = False
save_to_disk_path: str = None
turbo: bool = True
use_cpu: bool = False
use_cpu: bool = False ##TODO Remove after UI and plugins transition.
render_device: str = None # Select the task affinity. (Not used to change active devices).
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
use_vae_model: str = None
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
stream_progress_updates: bool = False
stream_image_progress: bool = False
class FilterRequest(BaseModel):
session_id: str = "session"
model: str = None
name: str = ""
init_image: str = None # base64
width: int = 512
height: int = 512
save_to_disk_path: str = None
turbo: bool = True
render_device: str = None
use_full_precision: bool = False
output_format: str = "jpeg" # or "png"
# Temporary cache to allow to query tasks results for a short time after they are completed.
class TaskCache():
def __init__(self):
self._base = dict()
self._lock: threading.Lock = threading.RLock()
self._lock: threading.Lock = threading.Lock()
def _get_ttl_time(self, ttl: int) -> int:
return int(time.time()) + ttl
def _is_expired(self, timestamp: int) -> bool:
return int(time.time()) >= timestamp
def clean(self) -> None:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.clean failed to acquire lock within timeout.')
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.clean' + ERR_LOCK_FAILED)
try:
# Create a list of expired keys to delete
to_delete = []
@@ -91,11 +121,11 @@ class TaskCache():
finally:
self._lock.release()
def clear(self) -> None:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.clear failed to acquire lock within timeout.')
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.clear' + ERR_LOCK_FAILED)
try: self._base.clear()
finally: self._lock.release()
def delete(self, key: Hashable) -> bool:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.delete failed to acquire lock within timeout.')
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.delete' + ERR_LOCK_FAILED)
try:
if key not in self._base:
return False
@@ -104,7 +134,7 @@ class TaskCache():
finally:
self._lock.release()
def keep(self, key: Hashable, ttl: int) -> bool:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.keep failed to acquire lock within timeout.')
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.keep' + ERR_LOCK_FAILED)
try:
if key in self._base:
_, value = self._base.get(key)
@@ -114,7 +144,7 @@ class TaskCache():
finally:
self._lock.release()
def put(self, key: Hashable, value: Any, ttl: int) -> bool:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.put failed to acquire lock within timeout.')
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.put' + ERR_LOCK_FAILED)
try:
self._base[key] = (
self._get_ttl_time(ttl), value
@@ -128,131 +158,343 @@ class TaskCache():
finally:
self._lock.release()
def tryGet(self, key: Hashable) -> Any:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.tryGet failed to acquire lock within timeout.')
if not self._lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('TaskCache.tryGet' + ERR_LOCK_FAILED)
try:
ttl, value = self._base.get(key, (None, None))
if ttl is not None and self._is_expired(ttl):
print(f'Session {key} expired. Discarding data.')
self.delete(key)
del self._base[key]
return None
return value
finally:
self._lock.release()
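A hedged usage sketch of the cache semantics (the session id and TTL values are illustrative):
# Illustrative only: put a value with a short TTL, refresh it with keep(),
# and read it back with tryGet() before it expires.
cache = TaskCache()
cache.put('session-1', 'some-task', ttl=2)  # expires ~2 seconds from now
cache.keep('session-1', ttl=60)             # push the expiry out to ~60 seconds
print(cache.tryGet('session-1'))            # 'some-task' while still valid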
manager_lock = threading.RLock()
render_threads = []
current_state = ServerStates.Init
current_state_error:Exception = None
current_model_path = None
tasks_queue = queue.Queue()
current_vae_path = None
tasks_queue = []
task_cache = TaskCache()
default_model_to_load = None
default_vae_to_load = None
weak_thread_data = weakref.WeakKeyDictionary()
def preload_model(file_path=None):
global current_state, current_state_error, current_model_path
if file_path == None:
file_path = default_model_to_load
if file_path == current_model_path:
def preload_model(ckpt_file_path=None, vae_file_path=None):
global current_state, current_state_error, current_model_path, current_vae_path
if ckpt_file_path == None:
ckpt_file_path = default_model_to_load
if vae_file_path == None:
vae_file_path = default_vae_to_load
if ckpt_file_path == current_model_path and vae_file_path == current_vae_path:
return
current_state = ServerStates.LoadingModel
try:
from . import runtime
runtime.load_model_ckpt(ckpt_to_use=file_path)
current_model_path = file_path
runtime.thread_data.ckpt_file = ckpt_file_path
runtime.thread_data.vae_file = vae_file_path
runtime.load_model_ckpt()
current_model_path = ckpt_file_path
current_vae_path = vae_file_path
current_state_error = None
current_state = ServerStates.Online
except Exception as e:
current_model_path = None
current_vae_path = None
current_state_error = e
current_state = ServerStates.Unavailable
print(traceback.format_exc())
def thread_render():
global current_state, current_state_error, current_model_path
def thread_get_next_task():
from . import runtime
current_state = ServerStates.Online
preload_model()
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
print('Render thread on device', runtime.thread_data.device, 'failed to acquire manager lock.')
return None
if len(tasks_queue) <= 0:
manager_lock.release()
return None
task = None
try: # Select a render task.
for queued_task in tasks_queue:
if queued_task.request.use_face_correction and runtime.thread_data.device == 'cpu' and is_alive() == 1:
queued_task.error = Exception('The CPU cannot be used to run this task currently. Please remove "Fix incorrect faces" from Image Settings and try again.')
task = queued_task
break
if queued_task.render_device and runtime.thread_data.device != queued_task.render_device:
# Is asking for a specific render device.
if is_alive(queued_task.render_device) > 0:
continue # requested device alive, skip current one.
else:
# Requested device is not active, return error to UI.
queued_task.error = Exception(queued_task.render_device + ' is not currently active.')
task = queued_task
break
if not queued_task.render_device and runtime.thread_data.device == 'cpu' and is_alive() > 1:
# Not asking for a specific device; the CPU wants to grab the task, but other render devices are alive.
continue # Skip the task. Don't run on the CPU unless there is nothing else, or the user asked for it.
task = queued_task
break
if task is not None:
del tasks_queue[tasks_queue.index(task)]
return task
finally:
manager_lock.release()
def thread_render(device):
global current_state, current_state_error, current_model_path, current_vae_path
from . import runtime
try:
runtime.thread_init(device)
except Exception as e:
print(traceback.format_exc())
weak_thread_data[threading.current_thread()] = {
'error': e
}
return
weak_thread_data[threading.current_thread()] = {
'device': runtime.thread_data.device,
'device_name': runtime.thread_data.device_name,
'alive': True
}
if runtime.thread_data.device != 'cpu' or is_alive() == 1:
preload_model()
current_state = ServerStates.Online
while True:
task_cache.clean()
if not weak_thread_data[threading.current_thread()]['alive']:
print(f'Shutting down thread for device {runtime.thread_data.device}')
runtime.unload_models()
runtime.unload_filters()
return
if isinstance(current_state_error, SystemExit):
current_state = ServerStates.Unavailable
return
task = None
try:
task = tasks_queue.get(timeout=1)
except queue.Empty as e:
if isinstance(current_state_error, SystemExit):
current_state = ServerStates.Unavailable
return
else: continue
#if current_model_path != task.request.use_stable_diffusion_model:
# preload_model(task.request.use_stable_diffusion_model)
task = thread_get_next_task()
if task is None:
time.sleep(1)
continue
if task.error is not None:
print(task.error)
task.response = {"status": 'failed', "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
continue
if current_state_error:
task.error = current_state_error
task.response = {"status": 'failed', "detail": str(task.error)}
task.buffer_queue.put(json.dumps(task.response))
continue
print(f'Session {task.request.session_id} starting task {id(task)}')
print(f'Session {task.request.session_id} starting task {id(task)} on {runtime.thread_data.device_name}')
if not task.lock.acquire(blocking=False): raise Exception('Got locked task from queue.')
try:
task.lock.acquire(blocking=False)
if runtime.thread_data.device == 'cpu' and is_alive() > 1:
# CPU is not the only device. Keep track of active time to unload resources later.
runtime.thread_data.lastActive = time.time()
# Open data generator.
res = runtime.mk_img(task.request)
if current_model_path == task.request.use_stable_diffusion_model:
current_state = ServerStates.Rendering
else:
current_state = ServerStates.LoadingModel
# Start reading from generator.
dataQueue = None
if task.request.stream_progress_updates:
dataQueue = task.buffer_queue
for result in res:
if current_state == ServerStates.LoadingModel:
current_state = ServerStates.Rendering
current_model_path = task.request.use_stable_diffusion_model
current_vae_path = task.request.use_vae_model
if isinstance(current_state_error, SystemExit) or isinstance(current_state_error, StopAsyncIteration) or isinstance(task.error, StopAsyncIteration):
runtime.thread_data.stop_processing = True
if isinstance(current_state_error, StopAsyncIteration):
task.error = current_state_error
current_state_error = None
print(f'Session {task.request.session_id} sent cancel signal for task {id(task)}')
if dataQueue:
dataQueue.put(result)
if isinstance(result, str):
result = json.loads(result)
task.response = result
if 'output' in result:
for out_obj in result['output']:
if 'path' in out_obj:
img_id = out_obj['path'][out_obj['path'].rindex('/') + 1:]
task.temp_images[int(img_id)] = runtime.thread_data.temp_images[out_obj['path'][11:]]
elif 'data' in out_obj:
buf = runtime.base64_str_to_buffer(out_obj['data'])
task.temp_images[result['output'].index(out_obj)] = buf
# Before looping back to the generator, mark cache as still alive.
task_cache.keep(task.request.session_id, TASK_TTL)
except Exception as e:
task.error = e
task.lock.release()
tasks_queue.task_done()
print(traceback.format_exc())
continue
dataQueue = None
if task.request.stream_progress_updates:
dataQueue = task.buffer_queue
for result in res:
if current_state == ServerStates.LoadingModel:
current_state = ServerStates.Rendering
current_model_path = task.request.use_stable_diffusion_model
if isinstance(current_state_error, SystemExit) or isinstance(current_state_error, StopAsyncIteration) or isinstance(task.error, StopAsyncIteration):
runtime.stop_processing = True
if isinstance(current_state_error, StopAsyncIteration):
task.error = current_state_error
current_state_error = None
print(f'Session {task.request.session_id} sent cancel signal for task {id(task)}')
if dataQueue:
dataQueue.put(result)
if isinstance(result, str):
result = json.loads(result)
task.response = result
if 'output' in result:
for out_obj in result['output']:
if 'path' in out_obj:
img_id = out_obj['path'][out_obj['path'].rindex('/') + 1:]
task.temp_images[int(img_id)] = runtime.temp_images[out_obj['path'][11:]]
elif 'data' in out_obj:
task.temp_images[result['output'].index(out_obj)] = out_obj['data']
task_cache.keep(task.request.session_id, TASK_TTL)
# Task completed
task.lock.release()
tasks_queue.task_done()
finally:
# Task completed
task.lock.release()
task_cache.keep(task.request.session_id, TASK_TTL)
if isinstance(task.error, StopAsyncIteration):
print(f'Session {task.request.session_id} task {id(task)} cancelled!')
elif task.error is not None:
print(f'Session {task.request.session_id} task {id(task)} failed!')
else:
print(f'Session {task.request.session_id} task {id(task)} completed.')
print(f'Session {task.request.session_id} task {id(task)} completed by {runtime.thread_data.device_name}.')
current_state = ServerStates.Online
render_thread = threading.Thread(target=thread_render)
def get_cached_task(session_id:str, update_ttl:bool=False):
# Calling keep before tryGet refreshes the TTL, so the task won't be discarded as expired.
if update_ttl and not task_cache.keep(session_id, TASK_TTL):
# Failed to keep task, already gone.
return None
return task_cache.tryGet(session_id)
def start_render_thread():
# Start Rendering Thread
render_thread.daemon = True
render_thread.start()
def get_devices():
devices = {
'all': {},
'active': {},
}
def get_device_info(device):
if device == 'cpu':
return {'name': device_manager.get_processor_name()}
mem_free, mem_total = torch.cuda.mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
return {
'name': torch.cuda.get_device_name(device),
'mem_free': mem_free,
'mem_total': mem_total,
}
# list the compatible devices
gpu_count = torch.cuda.device_count()
for device in range(gpu_count):
device = f'cuda:{device}'
if not device_manager.is_device_compatible(device):
continue
devices['all'].update({device: get_device_info(device)})
devices['all'].update({'cpu': get_device_info('cpu')})
# list the activated devices
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('get_devices' + ERR_LOCK_FAILED)
try:
for rthread in render_threads:
if not rthread.is_alive():
continue
weak_data = weak_thread_data.get(rthread)
if not weak_data or not 'device' in weak_data or not 'device_name' in weak_data:
continue
device = weak_data['device']
devices['active'].update({device: get_device_info(device)})
finally:
manager_lock.release()
return devices
def is_alive(device=None):
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('is_alive' + ERR_LOCK_FAILED)
nbr_alive = 0
try:
for rthread in render_threads:
if device is not None:
weak_data = weak_thread_data.get(rthread)
if weak_data is None or not 'device' in weak_data or weak_data['device'] is None:
continue
thread_device = weak_data['device']
if thread_device != device:
continue
if rthread.is_alive():
nbr_alive += 1
return nbr_alive
finally:
manager_lock.release()
def start_render_thread(device):
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('start_render_thread' + ERR_LOCK_FAILED)
print('Start new Rendering Thread on device', device)
try:
rthread = threading.Thread(target=thread_render, kwargs={'device': device})
rthread.daemon = True
rthread.name = THREAD_NAME_PREFIX + device
rthread.start()
render_threads.append(rthread)
finally:
manager_lock.release()
timeout = DEVICE_START_TIMEOUT
while not rthread.is_alive() or not rthread in weak_thread_data or not 'device' in weak_thread_data[rthread]:
if rthread in weak_thread_data and 'error' in weak_thread_data[rthread]:
print(rthread, device, 'error:', weak_thread_data[rthread]['error'])
return False
if timeout <= 0:
return False
timeout -= 1
time.sleep(1)
return True
def stop_render_thread(device):
try:
device_manager.validate_device_id(device, log_prefix='stop_render_thread')
except:
print(traceback.format_exc())
return False
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('stop_render_thread' + ERR_LOCK_FAILED)
print('Stopping Rendering Thread on device', device)
try:
thread_to_remove = None
for rthread in render_threads:
weak_data = weak_thread_data.get(rthread)
if weak_data is None or not 'device' in weak_data or weak_data['device'] is None:
continue
thread_device = weak_data['device']
if thread_device == device:
weak_data['alive'] = False
thread_to_remove = rthread
break
if thread_to_remove is not None:
render_threads.remove(rthread)
return True
finally:
manager_lock.release()
return False
def update_render_threads(render_devices, active_devices):
devices_to_start, devices_to_stop = device_manager.get_device_delta(render_devices, active_devices)
print('devices_to_start', devices_to_start)
print('devices_to_stop', devices_to_stop)
for device in devices_to_stop:
if is_alive(device) <= 0:
print(device, 'is not alive')
continue
if not stop_render_thread(device):
print(device, 'could not stop render thread')
for device in devices_to_start:
if is_alive(device) >= 1:
print(device, 'already registered.')
continue
if not start_render_thread(device):
print(device, 'failed to start.')
if is_alive() <= 0: # No running devices, probably invalid user config.
raise EnvironmentError('ERROR: No active render devices! Please verify the "render_devices" value in config.json')
print('active devices', get_devices()['active'])
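A sketch of how a settings change would drive this entry point, assuming the manager already reports the active devices:
# Illustrative: switch rendering from whatever is active to cuda:0 only.
active = list(get_devices()['active'].keys())
update_render_threads(render_devices=['cuda:0'], active_devices=active)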
def shutdown_event(): # Signal render thread to close on shutdown
global current_state_error
current_state_error = SystemExit('Application shutting down.')
def render(req : ImageRequest):
if not render_thread.is_alive(): # Render thread is dead
if is_alive() <= 0: # Render thread is dead
raise ChildProcessError('Rendering thread has died.')
# Alive, check if task in cache
task = task_cache.tryGet(req.session_id)
@@ -277,12 +519,12 @@ def render(req : ImageRequest):
r.sampler = req.sampler
# r.allow_nsfw = req.allow_nsfw
r.turbo = req.turbo
r.use_cpu = req.use_cpu
r.use_full_precision = req.use_full_precision
r.save_to_disk_path = req.save_to_disk_path
r.use_upscale: str = req.use_upscale
r.use_face_correction = req.use_face_correction
r.use_stable_diffusion_model = req.use_stable_diffusion_model
r.use_vae_model = req.use_vae_model
r.show_only_filtered_image = req.show_only_filtered_image
r.output_format = req.output_format
@@ -293,7 +535,14 @@ def render(req : ImageRequest):
r.stream_image_progress = False
new_task = RenderTask(r)
if task_cache.put(r.session_id, new_task, TASK_TTL):
tasks_queue.put(new_task, block=True, timeout=30)
return new_task
# Use twice the normal lock timeout when adding user requests, so that under
# contention task_cache.put (normal timeout) fails before tasks_queue.put would.
if manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT * 2):
try:
tasks_queue.append(new_task)
return new_task
finally:
manager_lock.release()
raise RuntimeError('Failed to add task to cache.')


@@ -1,3 +1,7 @@
"""server.py: FastAPI SD-UI Web Host.
Notes:
async endpoints always run on the main thread. Without async, they run on the thread pool.
"""
import json
import traceback
@@ -16,14 +20,27 @@ UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'plugins', 'ui'))
OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
TASK_TTL = 15 * 60 # seconds before the last session's task is discarded from the cache
APP_CONFIG_DEFAULTS = {
# auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
'render_devices': 'auto', # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
'update_branch': 'main',
'ui': {
'open_browser_on_start': True,
},
}
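# Editor's sketch (illustrative; the 'config.json' path is an assumption):
# getConfig() below returns these defaults when the config file is missing,
# equivalent to:
def _example_load_config(path='config.json'):
    import json, os
    if not os.path.exists(path):
        return APP_CONFIG_DEFAULTS
    with open(path, 'r', encoding='utf-8') as f:
        return json.load(f)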
APP_CONFIG_DEFAULT_MODELS = [
# needed to support the legacy installations
'custom-model', # Check if user has a custom model, use it first.
'sd-v1-4', # Default fallback.
]
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
from starlette.responses import FileResponse, JSONResponse, StreamingResponse
from pydantic import BaseModel
import logging
import queue, threading, time
from typing import Any, Generator, Hashable, Optional, Union
#import queue, threading, time
from typing import Any, Generator, Hashable, List, Optional, Union
from sd_internal import Request, Response, task_manager
@@ -42,171 +59,7 @@ NOCACHE_HEADERS={"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma
app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media')), name="media")
app.mount('/plugins', StaticFiles(directory=UI_PLUGINS_DIR), name="plugins")
class SetAppConfigRequest(BaseModel):
update_branch: str = "main"
# needs to support the legacy installations
def get_initial_model_to_load():
custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt')
ckpt_to_use = "sd-v1-4" if not os.path.exists(custom_weight_path) else "custom-model"
ckpt_to_use = os.path.join(SD_DIR, ckpt_to_use)
config = getConfig()
if 'model' in config and 'stable-diffusion' in config['model']:
model_name = config['model']['stable-diffusion']
model_path = resolve_model_to_use(model_name)
if os.path.exists(model_path + '.ckpt'):
ckpt_to_use = model_path
else:
print('Could not find the configured custom model at:', model_path + '.ckpt', '. Using the default one:', ckpt_to_use + '.ckpt')
return ckpt_to_use
def resolve_model_to_use(model_name):
if model_name in ('sd-v1-4', 'custom-model'):
model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name)
legacy_model_path = os.path.join(SD_DIR, model_name)
if not os.path.exists(model_path + '.ckpt') and os.path.exists(legacy_model_path + '.ckpt'):
model_path = legacy_model_path
else:
model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name)
return model_path
@app.on_event("shutdown")
def shutdown_event(): # Signal render thread to close on shutdown
task_manager.current_state_error = SystemExit('Application shutting down.')
@app.get('/')
def read_root():
return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=NOCACHE_HEADERS)
@app.get('/ping') # Get server and optionally session status.
def ping(session_id:str=None):
if not task_manager.render_thread.is_alive(): # Render thread is dead.
if task_manager.current_state_error: raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
raise HTTPException(status_code=500, detail='Render thread is dead.')
if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration): raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
# Alive
response = {'status': str(task_manager.current_state)}
if session_id:
task = task_manager.task_cache.tryGet(session_id)
if task:
response['task'] = id(task)
if task.lock.locked():
response['session'] = 'running'
elif isinstance(task.error, StopAsyncIteration):
response['session'] = 'stopped'
elif task.error:
response['session'] = 'error'
elif not task.buffer_queue.empty():
response['session'] = 'buffer'
elif task.response:
response['session'] = 'completed'
else:
response['session'] = 'pending'
return JSONResponse(response, headers=NOCACHE_HEADERS)
def save_model_to_config(model_name):
config = getConfig()
if 'model' not in config:
config['model'] = {}
config['model']['stable-diffusion'] = model_name
setConfig(config)
@app.post('/render')
def render(req : task_manager.ImageRequest):
try:
save_model_to_config(req.use_stable_diffusion_model)
req.use_stable_diffusion_model = resolve_model_to_use(req.use_stable_diffusion_model)
new_task = task_manager.render(req)
response = {
'status': str(task_manager.current_state),
'queue': task_manager.tasks_queue.qsize(),
'stream': f'/image/stream/{req.session_id}/{id(new_task)}',
'task': id(new_task)
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
raise HTTPException(status_code=500, detail=f'Rendering thread has died.') # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending, deny queueing more than one.
raise HTTPException(status_code=503, detail=f'Session {req.session_id} has an already pending task.') # HTTP503 Service Unavailable
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get('/image/stream/{session_id:str}/{task_id:int}')
def stream(session_id:str, task_id:int):
#TODO Move to WebSockets ??
task = task_manager.task_cache.tryGet(session_id)
if not task: raise HTTPException(status_code=410, detail='No request received.') # HTTP410 Gone
if (id(task) != task_id): raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected:{id(task)}, Received:{task_id}') # HTTP409 Conflict
if task.buffer_queue.empty() and not task.lock.locked():
if task.response:
#print(f'Session {session_id} sending cached response')
return JSONResponse(task.response, headers=NOCACHE_HEADERS)
raise HTTPException(status_code=425, detail='Too Early, task not started yet.') # HTTP425 Too Early
#print(f'Session {session_id} opened live render stream {id(task.buffer_queue)}')
return StreamingResponse(task.read_buffer_generator(), media_type='application/json')
@app.get('/image/stop')
def stop(session_id:str=None):
if not session_id:
if task_manager.current_state == task_manager.ServerStates.Online or task_manager.current_state == task_manager.ServerStates.Unavailable:
raise HTTPException(status_code=409, detail='Not currently running any tasks.') # HTTP409 Conflict
task_manager.current_state_error = StopAsyncIteration('')
return {'OK'}
task = task_manager.task_cache.tryGet(session_id)
if not task: raise HTTPException(status_code=404, detail=f'Session {session_id} has no active task.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Session {session_id} task is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration('')
return {'OK'}
@app.get('/image/tmp/{session_id}/{img_id:int}')
def get_image(session_id, img_id):
task = task_manager.task_cache.tryGet(session_id)
if not task: raise HTTPException(status_code=410, detail=f'Session {session_id} has not submitted a task.') # HTTP410 Gone
if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
try:
img_data = task.temp_images[img_id]
if isinstance(img_data, str):
return img_data
img_data.seek(0)
return StreamingResponse(img_data, media_type='image/jpeg')
except KeyError as e:
raise HTTPException(status_code=500, detail=str(e))
@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
try:
config = {
'update_branch': req.update_branch
}
config_json_str = json.dumps(config)
config_bat_str = f'@set update_branch={req.update_branch}'
config_sh_str = f'export update_branch={req.update_branch}'
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
with open(config_json_path, 'w', encoding='utf-8') as f:
f.write(config_json_str)
with open(config_bat_path, 'w', encoding='utf-8') as f:
f.write(config_bat_str)
with open(config_sh_path, 'w', encoding='utf-8') as f:
f.write(config_sh_str)
return {'OK'}
except Exception as e:
print(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def getConfig(default_val={}):
def getConfig(default_val=APP_CONFIG_DEFAULTS):
try:
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
if not os.path.exists(config_json_path):
@@ -219,41 +72,152 @@ def getConfig(default_val={}):
return default_val
def setConfig(config):
try:
try: # config.json
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
with open(config_json_path, 'w', encoding='utf-8') as f:
return json.dump(config, f)
except Exception as e:
print(str(e))
json.dump(config, f)
except:
print(traceback.format_exc())
try: # config.bat
config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
config_bat = []
if 'update_branch' in config:
config_bat.append(f"@set update_branch={config['update_branch']}")
if os.getenv('SD_UI_BIND_PORT') is not None:
config_bat.append(f"@set SD_UI_BIND_PORT={os.getenv('SD_UI_BIND_PORT')}")
if os.getenv('SD_UI_BIND_IP') is not None:
config_bat.append(f"@set SD_UI_BIND_IP={os.getenv('SD_UI_BIND_IP')}")
if len(config_bat) > 0:
with open(config_bat_path, 'w', encoding='utf-8') as f:
f.write('\r\n'.join(config_bat))
except:
print(traceback.format_exc())
try: # config.sh
config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
config_sh = ['#!/bin/bash']
if 'update_branch' in config:
config_sh.append(f"export update_branch={config['update_branch']}")
if os.getenv('SD_UI_BIND_PORT') is not None:
config_sh.append(f"export SD_UI_BIND_PORT={os.getenv('SD_UI_BIND_PORT')}")
if os.getenv('SD_UI_BIND_IP') is not None:
config_sh.append(f"export SD_UI_BIND_IP={os.getenv('SD_UI_BIND_IP')}")
if len(config_sh) > 1:
with open(config_sh_path, 'w', encoding='utf-8') as f:
f.write('\n'.join(config_sh))
except:
print(traceback.format_exc())
def resolve_model_to_use(model_name:str, model_type:str, model_dir:str, model_extensions:list, default_models=[]):
model_dirs = [os.path.join(MODELS_DIR, model_dir), SD_DIR]
if not model_name: # When None, try the user-configured model.
config = getConfig()
if 'model' in config and model_type in config['model']:
model_name = config['model'][model_type]
if model_name:
# Check models directory
models_dir_path = os.path.join(MODELS_DIR, model_dir, model_name)
for model_extension in model_extensions:
if os.path.exists(models_dir_path + model_extension):
return models_dir_path
if os.path.exists(model_name + model_extension):
# Direct Path to file
model_name = os.path.abspath(model_name)
return model_name
# Default locations
if model_name in default_models:
default_model_path = os.path.join(SD_DIR, model_name)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
return default_model_path
# Can't find requested model, check the default paths.
for default_model in default_models:
for models_dir in model_dirs: # renamed to avoid shadowing the model_dir parameter
default_model_path = os.path.join(models_dir, default_model)
for model_extension in model_extensions:
if os.path.exists(default_model_path + model_extension):
if model_name is not None:
print(f'Could not find the configured custom model {model_name}{model_extension}. Using the default one: {default_model_path}{model_extension}')
return default_model_path
raise Exception('No valid models found.')
def resolve_ckpt_to_use(model_name:str=None):
return resolve_model_to_use(model_name, model_type='stable-diffusion', model_dir='stable-diffusion', model_extensions=['.ckpt'], default_models=APP_CONFIG_DEFAULT_MODELS)
def resolve_vae_to_use(model_name:str=None):
try:
return resolve_model_to_use(model_name, model_type='vae', model_dir='vae', model_extensions=['.vae.pt', '.ckpt'], default_models=[])
except:
return None
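# Editor's sketch (illustrative; model names are hypothetical): both resolvers
# return a path without the file extension.
def _example_resolve_models():
    ckpt = resolve_ckpt_to_use('sd-v1-4')          # '<MODELS_DIR>/stable-diffusion/sd-v1-4'
    vae = resolve_vae_to_use('vae-ft-mse-840000')  # None unless a matching '.vae.pt'/'.ckpt' exists
    return ckpt, vae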
class SetAppConfigRequest(BaseModel):
update_branch: str = None
render_devices: Union[List[str], List[int], str, int] = None
model_vae: str = None
ui_open_browser_on_start: bool = None
@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
config = getConfig()
if req.update_branch is not None:
config['update_branch'] = req.update_branch
if req.render_devices is not None:
update_render_devices_in_config(config, req.render_devices)
if req.ui_open_browser_on_start is not None:
if 'ui' not in config:
config['ui'] = {}
config['ui']['open_browser_on_start'] = req.ui_open_browser_on_start
try:
setConfig(config)
if req.render_devices:
update_render_threads()
return JSONResponse({'status': 'OK'}, headers=NOCACHE_HEADERS)
except Exception as e:
print(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
def getModels():
models = {
'active': {
'stable-diffusion': 'sd-v1-4',
'vae': '',
},
'options': {
'stable-diffusion': ['sd-v1-4'],
'vae': [],
},
}
def listModels(models_dirname, model_type, model_extensions):
models_dir = os.path.join(MODELS_DIR, models_dirname)
if not os.path.exists(models_dir):
os.makedirs(models_dir)
for file in os.listdir(models_dir):
for model_extension in model_extensions:
if file.endswith(model_extension):
model_name = file[:-len(model_extension)]
models['options'][model_type].append(model_name)
models['options'][model_type] = [*set(models['options'][model_type])] # remove duplicates
models['options'][model_type].sort()
# custom models
sd_models_dir = os.path.join(MODELS_DIR, 'stable-diffusion')
for file in os.listdir(sd_models_dir):
if file.endswith('.ckpt'):
model_name = os.path.splitext(file)[0]
models['options']['stable-diffusion'].append(model_name)
listModels(models_dirname='stable-diffusion', model_type='stable-diffusion', model_extensions=['.ckpt'])
listModels(models_dirname='vae', model_type='vae', model_extensions=['.vae.pt', '.ckpt'])
# legacy
custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt')
if os.path.exists(custom_weight_path):
models['active']['stable-diffusion'] = 'custom-model'
models['options']['stable-diffusion'].append('custom-model')
config = getConfig()
if 'model' in config and 'stable-diffusion' in config['model']:
models['active']['stable-diffusion'] = config['model']['stable-diffusion']
return models
def getUIPlugins():
@@ -272,8 +236,13 @@ def read_web_data(key:str=None):
elif key == 'app_config':
config = getConfig(default_val=None)
if config is None:
raise HTTPException(status_code=500, detail="Config file is missing or unreadable")
config = APP_CONFIG_DEFAULTS
return JSONResponse(config, headers=NOCACHE_HEADERS)
elif key == 'devices':
config = getConfig()
devices = task_manager.get_devices()
devices['config'] = config.get('render_devices', "auto")
return JSONResponse(devices, headers=NOCACHE_HEADERS)
elif key == 'models':
return JSONResponse(getModels(), headers=NOCACHE_HEADERS)
elif key == 'modifiers': return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=NOCACHE_HEADERS)
@@ -282,6 +251,123 @@ def read_web_data(key:str=None):
else:
raise HTTPException(status_code=404, detail=f'Request for unknown {key}') # HTTP404 Not Found
@app.get('/ping') # Get server and optionally session status.
def ping(session_id:str=None):
if task_manager.is_alive() <= 0: # Check that render threads are alive.
if task_manager.current_state_error: raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
raise HTTPException(status_code=500, detail='Render thread is dead.')
if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration): raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
# Alive
response = {'status': str(task_manager.current_state)}
if session_id:
task = task_manager.get_cached_task(session_id, update_ttl=True)
if task:
response['task'] = id(task)
if task.lock.locked():
response['session'] = 'running'
elif isinstance(task.error, StopAsyncIteration):
response['session'] = 'stopped'
elif task.error:
response['session'] = 'error'
elif not task.buffer_queue.empty():
response['session'] = 'buffer'
elif task.response:
response['session'] = 'completed'
else:
response['session'] = 'pending'
response['devices'] = task_manager.get_devices()
return JSONResponse(response, headers=NOCACHE_HEADERS)
def save_model_to_config(ckpt_model_name, vae_model_name):
config = getConfig()
if 'model' not in config:
config['model'] = {}
config['model']['stable-diffusion'] = ckpt_model_name
config['model']['vae'] = vae_model_name
if vae_model_name is None or vae_model_name == "":
del config['model']['vae']
setConfig(config)
def update_render_devices_in_config(config, render_devices):
if render_devices not in ('cpu', 'auto') and not render_devices.startswith('cuda:'):
raise HTTPException(status_code=400, detail=f'Invalid render device requested: {render_devices}')
if render_devices.startswith('cuda:'):
render_devices = render_devices.split(',')
config['render_devices'] = render_devices
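# Editor's sketch (illustrative): 'auto' and 'cpu' are stored verbatim, while a
# comma-separated CUDA string is stored as a list.
def _example_device_config():
    config = {}
    update_render_devices_in_config(config, 'cuda:0,cuda:1')
    assert config['render_devices'] == ['cuda:0', 'cuda:1']
    update_render_devices_in_config(config, 'auto')
    assert config['render_devices'] == 'auto'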
@app.post('/render')
def render(req : task_manager.ImageRequest):
try:
save_model_to_config(req.use_stable_diffusion_model, req.use_vae_model)
req.use_stable_diffusion_model = resolve_ckpt_to_use(req.use_stable_diffusion_model)
req.use_vae_model = resolve_vae_to_use(req.use_vae_model)
new_task = task_manager.render(req)
response = {
'status': str(task_manager.current_state),
'queue': len(task_manager.tasks_queue),
'stream': f'/image/stream/{req.session_id}/{id(new_task)}',
'task': id(new_task)
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
raise HTTPException(status_code=500, detail='Rendering thread has died.') # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending, deny queueing more than one.
raise HTTPException(status_code=503, detail=f'Session {req.session_id} has an already pending task.') # HTTP503 Service Unavailable
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get('/image/stream/{session_id:str}/{task_id:int}')
def stream(session_id:str, task_id:int):
#TODO Move to WebSockets ??
task = task_manager.get_cached_task(session_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail='No request received.') # HTTP410 Gone
if (id(task) != task_id): raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected:{id(task)}, Received:{task_id}') # HTTP409 Conflict
if task.buffer_queue.empty() and not task.lock.locked():
if task.response:
#print(f'Session {session_id} sending cached response')
return JSONResponse(task.response, headers=NOCACHE_HEADERS)
raise HTTPException(status_code=425, detail='Too Early, task not started yet.') # HTTP425 Too Early
#print(f'Session {session_id} opened live render stream {id(task.buffer_queue)}')
return StreamingResponse(task.read_buffer_generator(), media_type='application/json')
@app.get('/image/stop')
def stop(session_id:str=None):
if not session_id:
if task_manager.current_state == task_manager.ServerStates.Online or task_manager.current_state == task_manager.ServerStates.Unavailable:
raise HTTPException(status_code=409, detail='Not currently running any tasks.') # HTTP409 Conflict
task_manager.current_state_error = StopAsyncIteration('')
return {'OK'}
task = task_manager.get_cached_task(session_id, update_ttl=False)
if not task: raise HTTPException(status_code=404, detail=f'Session {session_id} has no active task.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Session {session_id} task is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration('')
return {'OK'}
@app.get('/image/tmp/{session_id}/{img_id:int}')
def get_image(session_id, img_id):
task = task_manager.get_cached_task(session_id, update_ttl=True)
if not task: raise HTTPException(status_code=410, detail=f'Session {session_id} has not submitted a task.') # HTTP410 Gone
if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
try:
img_data = task.temp_images[img_id]
img_data.seek(0)
return StreamingResponse(img_data, media_type='image/jpeg')
except KeyError as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get('/')
def read_root():
return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=NOCACHE_HEADERS)
@app.on_event("shutdown")
def shutdown_event(): # Signal render thread to close on shutdown
task_manager.current_state_error = SystemExit('Application shutting down.')
# don't log certain requests
class LogSuppressFilter(logging.Filter):
def filter(self, record: logging.LogRecord) -> bool:
@@ -292,8 +378,25 @@ class LogSuppressFilter(logging.Filter):
return True
logging.getLogger('uvicorn.access').addFilter(LogSuppressFilter())
task_manager.default_model_to_load = get_initial_model_to_load()
task_manager.start_render_thread()
# Start the task_manager
task_manager.default_model_to_load = resolve_ckpt_to_use()
task_manager.default_vae_to_load = resolve_vae_to_use()
def update_render_threads():
config = getConfig()
render_devices = config.get('render_devices', 'auto')
active_devices = task_manager.get_devices()['active'].keys()
print('requesting render_devices', render_devices)
task_manager.update_render_threads(render_devices, active_devices)
update_render_threads()
# start the browser ui
import webbrowser; webbrowser.open('http://localhost:9000')
def open_browser():
config = getConfig()
ui = config.get('ui', {})
if ui.get('open_browser_on_start', True):
import webbrowser; webbrowser.open('http://localhost:9000')
open_browser()