cmdr2
5782966d63
Merge branch 'beta' into refactor
2022-12-08 11:58:09 +05:30
Marc-Andre Ferland
ba2c966329
First draft of multi-task in a single session. (#622)
2022-12-08 11:12:46 +05:30
cmdr2
fb6a7e04f5
Work-in-progress refactor of the backend, to move most of the logic to diffusion-kit and keep this as a UI around that engine. Does not work yet.
2022-12-07 22:15:35 +05:30
Guillaume Mercier
cbe91251ac
Hypernetwork support (#619)
* Update README.md
* Update README.md
* Make on_sd_start.sh executable
* Merge pull request #542 from patriceac/patch-1
Fix restoration of model and VAE
* Merge pull request #541 from patriceac/patch-2
Fix restoration of parallel output setting
* Hypernetwork support
Adds support for hypernetworks. Hypernetworks are stored in /models/hypernetworks.
* forgot to remove unused code
Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
2022-12-07 11:24:16 +05:30
JeLuF
e7ca8090fd
Make JPEG output quality user-controllable (#607)
Add a slider to the image options for the JPEG quality.
For PNG images, the slider is hidden.
2022-12-05 11:02:33 +05:30
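On the backend side, a change like this likely amounts to passing the slider's value through to the image encoder. A minimal sketch, assuming a PIL image; the helper and parameter names are illustrative, not the project's actual API:

```python
# Hypothetical save helper mirroring the JPEG-quality option above.
from PIL import Image

def save_image(img: Image.Image, path: str, output_format: str = "png", jpeg_quality: int = 75) -> None:
    if output_format.lower() in ("jpg", "jpeg"):
        img.save(path, format="JPEG", quality=jpeg_quality)
    else:
        img.save(path, format="PNG")  # the quality slider doesn't apply to PNG
```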
cmdr2
ac605e9352
Typos and minor fixes for SD 2
2022-11-29 13:30:08 +05:30
cmdr2
e37be0f954
Remove the need to use yield in the core loop for streaming results, so the Stable Diffusion code no longer needs to be patched (which can be fragile).
2022-11-29 13:03:57 +05:30
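A common way to stream intermediate results without turning the core loop into a generator (and therefore without patching the sampler to yield) is a per-step callback that publishes the latest preview to shared state. A generic sketch of that pattern, not necessarily this commit's exact implementation; all names are illustrative:

```python
# Callback-based streaming sketch: the loop calls a callback, another thread reads it.
import threading

class StreamBuffer:
    """Holds the most recent intermediate image, readable from another thread."""
    def __init__(self):
        self._lock = threading.Lock()
        self.latest = None
        self.step = -1

    def on_step(self, image, step: int) -> None:
        with self._lock:
            self.latest, self.step = image, step

def run_sampler(steps: int, make_preview, buffer: StreamBuffer) -> None:
    # The loop only invokes a callback; nothing has to yield, so the upstream
    # Stable Diffusion code doesn't need to be patched.
    for step in range(steps):
        buffer.on_step(make_preview(step), step)
```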
cmdr2
9499685dda
Check for enqueued tasks more frequently
2022-11-21 14:06:26 +05:30
cmdr2
025d4df774
Don't crash if a VAE file fails to load
2022-11-18 13:11:48 +05:30
cmdr2
97ee085f30
Fix a bug where Face Correction (GFPGAN) would fail on cuda:N (i.e. GPUs other than cuda:0), as well as fail on CPU if the system had an incompatible GPU.
2022-11-17 12:27:06 +05:30
cmdr2
9d2b944063
Remove unused variable
2022-11-15 13:18:00 +05:30
Marc-Andre Ferland
a108e5067d
Typos in comments.
2022-11-14 22:20:21 -05:00
Marc-Andre Ferland
ffe0eb1544
Changed update_render_threads to use SetAppConfigRequest to set which devices are active.
Keep ImageRequest.render_device for affinity only. (Send a task to an already active device.)
2022-11-14 21:54:24 -05:00
cmdr2
8707f88c07
Show mem free info
2022-11-14 20:35:47 +05:30
cmdr2
338ceffa6d
Use 'auto' as the default render_device
2022-11-14 15:14:58 +05:30
cmdr2
d79eb5e1a6
Typo
2022-11-14 11:51:56 +05:30
cmdr2
f6651b03b5
Workaround to run GFPGAN on cuda:0 even if it's not enabled in the multi-GPU setup
2022-11-14 11:51:18 +05:30
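A hypothetical sketch of the device choice described above, assuming face correction is pinned to cuda:0 whenever rendering happens on a GPU; the helper name is an assumption:

```python
def pick_gfpgan_device(render_device: str) -> str:
    # Assumption: GFPGAN runs on cuda:0 even when the render device is another GPU;
    # pure-CPU setups keep using the CPU.
    if render_device.startswith("cuda"):
        return "cuda:0"
    return render_device
```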
cmdr2
5f880a179c
Remove idle CPU unloading (when GPUs are active), because now the CPU can never be used along with GPUs
2022-11-14 11:24:30 +05:30
cmdr2
ea03fd22db
Start on multiple GPUs by default (top 75th percentile by free_mem); UI selection for 'cpu', 'auto', or a list of specific GPUs, now linked to the backend; dynamically start/stop render threads for the devices without requiring a full program restart
2022-11-14 11:23:22 +05:30
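The "75th percentile" pick can be read as: keep every GPU whose free memory is close to the freest GPU's. A minimal sketch under that assumption (the threshold, helper name, and CPU fallback are illustrative):

```python
# Illustrative device auto-pick: keep GPUs with free memory >= 75% of the
# freest GPU's free memory; fall back to 'cpu' when no GPU qualifies.
import torch

def auto_pick_devices(threshold: float = 0.75) -> list[str]:
    if not torch.cuda.is_available():
        return ["cpu"]
    free_mem = {}
    for i in range(torch.cuda.device_count()):
        free, _total = torch.cuda.mem_get_info(i)  # bytes
        free_mem[f"cuda:{i}"] = free
    best = max(free_mem.values())
    return [d for d, free in free_mem.items() if free >= threshold * best] or ["cpu"]
```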
Marc-Andre Ferland
aa21115e26
Always return a byte buffer. Sending the picture as URL text fails in some browsers.
2022-11-11 20:44:39 -05:00
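A minimal sketch of returning the image as raw bytes rather than a data URL string, assuming a FastAPI-style endpoint and a PIL image; names are illustrative:

```python
from io import BytesIO

from fastapi import Response
from PIL import Image

def image_response(img: Image.Image) -> Response:
    buf = BytesIO()
    img.save(buf, format="PNG")
    # Raw bytes with a proper content type, instead of a long data: URL string
    # that some browsers refuse to load.
    return Response(content=buf.getvalue(), media_type="image/png")
```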
cmdr2
a39f845835
current_vae_path needs to be global
2022-11-11 19:30:33 +05:30
cmdr2
3fdd8d91e2
Handle device init failures and record them as an error if the GPU has less than 3 GB of VRAM
2022-11-11 16:13:27 +05:30
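A hedged sketch of the check above, assuming "less than 3 GB of VRAM" is tested against the device's total memory; the function name and error handling are assumptions:

```python
import torch

MIN_VRAM_BYTES = 3 * 1024**3  # 3 GB

def validate_device(device: str) -> None:
    try:
        props = torch.cuda.get_device_properties(device)
    except Exception as e:  # record init failures as an error instead of crashing
        raise RuntimeError(f"Could not initialize {device}: {e}") from e
    if props.total_memory < MIN_VRAM_BYTES:
        raise RuntimeError(f"{device} has less than 3 GB of VRAM and cannot be used")
```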
cmdr2
c13bccc7ae
Fix the error where a device named 'None' would get assigned for incompatible GPUs
2022-11-11 15:43:20 +05:30
cmdr2
bd56795c62
Switch to using cuda:N instead of N (integer device ids)
2022-11-11 14:46:05 +05:30
cmdr2
b9a12d1562
Restrict device selection IDs to 'cpu' or integers (and 'auto' in the initial device selection functions)
2022-11-10 20:03:11 +05:30
cmdr2
058ce6fe82
UI-side changes for selecting multiple GPUs, and keeping the Use CPU option synchronized with the backend. This change isn't ready to be shipped; it still needs Python-side changes to support the req.render_device config
2022-11-09 19:17:44 +05:30
cmdr2
8f1d214b12
Bring back CPU unloading of models when idle for a while (applies only when GPUs are present)
2022-11-08 19:23:35 +05:30
cmdr2
51fb1a43de
Temporarily disable the idle CPU unloading behavior, since it's not clear whether it'll reload the model if a future request for the CPU is received after it has unloaded the model
2022-11-08 19:02:21 +05:30
Marc-Andre Ferland
b09b80933d
Print the device name on task start and completion, so users can see which device picked up the task.
2022-11-01 22:28:10 -04:00
Marc-Andre Ferland
eb596ba866
Allow start_render_thread to proceed faster in case of failure.
2022-10-30 06:04:06 -04:00
Marc-Andre Ferland
c687091ce9
Only return valid data for alive threads.
2022-10-30 01:38:32 -04:00
Marc-Andre Ferland
eb994716e6
Indentation...
2022-10-30 01:33:17 -04:00
Marc-Andre Ferland
099727d671
Added auto unload to CPU if GPUs are active.
2022-10-29 18:57:10 -04:00
Marc-Andre Ferland
b7a663ed20
Implement complete device selection in the backend.
2022-10-29 17:34:53 -04:00
Marc-Andre Ferland
86da27a7a1
Moved the wait outside the lock; now returns False on failure.
2022-10-28 22:52:00 -04:00
Marc-Andre Ferland
fc2a6567da
Moved import before use of runtime.thread_data.device
2022-10-28 22:51:04 -04:00
cmdr2
a8c16e39b8
Support custom VAE files. Use vae-ft-mse-840000-ema-pruned as the default VAE; it can be overridden by putting a .vae.pt file inside models/stable-diffusion with the same name as the ckpt model file. The UI / System Settings allows setting the default VAE model to use
2022-10-28 20:06:44 +05:30
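The override rule described above can be sketched as a simple path lookup; the helper name and directory layout here are assumptions based only on the commit message:

```python
from pathlib import Path

DEFAULT_VAE = "vae-ft-mse-840000-ema-pruned"

def resolve_vae_path(ckpt_path: str, models_dir: str = "models/stable-diffusion") -> Path:
    ckpt = Path(ckpt_path)
    # A <model>.vae.pt next to the ckpt file (same base name) overrides the default VAE.
    per_model_vae = ckpt.parent / f"{ckpt.stem}.vae.pt"
    if per_model_vae.exists():
        return per_model_vae
    return Path(models_dir) / f"{DEFAULT_VAE}.vae.pt"
```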
Marc-Andre Ferland
26562e445f
Set online after preload. Moved the indentation so the block is included in the if check.
2022-10-28 04:09:34 -04:00
Marc-Andre Ferland
22a11769fa
Enable preload on CPU when no other devices are alive.
2022-10-27 21:57:50 -04:00
Marc-Andre Ferland
7f4786f9dd
Wait until the device is fully ready before proceeding.
2022-10-27 20:27:21 -04:00
cmdr2
0dfaf9159d
Put back the check to only preload on GPU
2022-10-28 00:04:33 +05:30
cmdr2
389e3397ec
Preload the model even in CPU mode
2022-10-27 23:17:41 +05:30
Marc-Andre Ferland
ae40b6ba8c
Missed an is_alive check in the conversion.
2022-10-25 03:00:50 -04:00
Marc-Andre Ferland
364e364429
Added get_cached_task to replace task_cache.tryGet in server.py
Now updates the cache TTL on the /stream and temp image endpoints.
Keeps images alive longer when the browser keeps reading those endpoints.
2022-10-22 13:52:13 -04:00
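A hedged sketch of the idea above: fetching a task from the cache can also refresh its TTL, so results stay alive while the browser keeps polling /stream and the temp image endpoints. The cache structure and TTL value are assumptions, not the project's actual implementation:

```python
import time

TASK_TTL_SECONDS = 15 * 60  # illustrative TTL
_task_cache: dict[str, tuple[object, float]] = {}  # task_id -> (task, expires_at)

def get_cached_task(task_id: str, update_ttl: bool = False):
    entry = _task_cache.get(task_id)
    if entry is None:
        return None
    task, expires_at = entry
    if time.time() > expires_at:
        _task_cache.pop(task_id, None)
        return None
    if update_ttl:  # streaming endpoints bump the TTL on every read
        _task_cache[task_id] = (task, time.time() + TASK_TTL_SECONDS)
    return task
```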
Marc-Andre Ferland
3b5f96a133
Fixed stopping tasks, plus more cleanup.
2022-10-21 22:45:19 -04:00
Marc-Andre Ferland
56ed4fe6f2
Fix Visual Studio type warning.
2022-10-21 01:30:49 -04:00
Marc-Andre Ferland
849d1d7ebd
Merge branch 'beta' of https://github.com/cmdr2/stable-diffusion-ui.git into multi-gpu
# Conflicts:
# ui/media/js/main.js
# ui/sd_internal/runtime.py
# ui/server.py
2022-10-20 20:08:23 -04:00
cmdr2
090dfff730
Refactor the time delays into constants and mention the units
2022-10-20 17:22:01 +05:30
Marc-Andre Ferland
4e5ddca3bd
Display the failure detail when there is one at that step.
Was checking the JSON object, not the server response.
2022-10-19 05:10:37 -04:00
Marc-Andre Ferland
3bdc90451a
Don't preload on CPU.
2022-10-19 04:34:54 -04:00