Commit Graph

143 Commits

Author SHA1 Message Date
cmdr2
ea03fd22db Start on multiple GPUs by default (top 75 percentile by free_mem); UI selection for 'cpu' or 'auto' or a list of specific GPUs, which is now linked to the backend; Dynamically start/stop render threads for the devices, without requiring a full program restart 2022-11-14 11:23:22 +05:30
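A rough sketch of the auto-selection this commit describes. This is not the project's actual device-selection code; it assumes PyTorch, the helper name is illustrative, and "top 75 percentile by free_mem" is read here as "keep every GPU whose free VRAM is at least 75% of the freest GPU's".

```python
# Minimal sketch (assumes PyTorch; helper name and the exact 75% rule are inferred
# from the commit message, not copied from the project): default to 'cpu' when no
# CUDA device is present, otherwise keep the GPUs with the most free VRAM.
import torch

def auto_pick_render_devices():
    if not torch.cuda.is_available():
        return ['cpu']
    free_mem = {}
    for i in range(torch.cuda.device_count()):
        free_bytes, _total_bytes = torch.cuda.mem_get_info(i)
        free_mem[f'cuda:{i}'] = free_bytes
    # one reading of "top 75 percentile by free_mem": keep every GPU whose free
    # VRAM is at least 75% of the freest GPU's free VRAM
    threshold = 0.75 * max(free_mem.values())
    return [dev for dev, free in free_mem.items() if free >= threshold]
```

The 'cpu' / 'auto' / explicit-list choice from the UI would then either bypass this helper or replace its result, with render threads started or stopped per device as the selection changes.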
cmdr2
a19ba40672 Typo 2022-11-12 13:31:59 +05:30
cmdr2
3983cb001f Save the VAE model to the metadata text file 2022-11-12 13:29:24 +05:30
Marc-Andre Ferland
aa21115e26 Always return a byte buffer. Sending the picture as URL text fails in some browsers. 2022-11-11 20:44:39 -05:00
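As a hedged illustration of this change: returning the picture as a byte buffer with an image media type, rather than as a base64/URL text payload, might look like the following in a FastAPI-style endpoint. The route and names are hypothetical, not the project's actual server code.

```python
# Minimal sketch (FastAPI assumed; route and names are hypothetical): return the
# rendered image as raw bytes instead of a data-URL string.
import io
from fastapi import FastAPI
from fastapi.responses import Response
from PIL import Image

app = FastAPI()

@app.get('/image/tmp/{session_id}/{img_id}')
def get_temp_image(session_id: str, img_id: int):
    img = Image.new('RGB', (512, 512))          # stand-in for the cached render
    buf = io.BytesIO()
    img.save(buf, format='JPEG')
    return Response(content=buf.getvalue(), media_type='image/jpeg')
```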
cmdr2
a39f845835 current_vae_path needs to be global 2022-11-11 19:30:33 +05:30
cmdr2
3fdd8d91e2 Handle device init failures and record an error if the GPU has less than 3 GB of VRAM 2022-11-11 16:13:27 +05:30
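A rough sketch of the check this commit describes, assuming PyTorch and using the 3 GB figure from the commit message; the function name and return convention are illustrative.

```python
# Minimal sketch: record device-init failures and too-small GPUs as errors
# instead of crashing the render thread.
import torch

MIN_VRAM_BYTES = 3 * 1024 ** 3   # 3 GB, per the commit message

def validate_render_device(device: str):
    """Return None if the device is usable, otherwise an error string."""
    if device == 'cpu':
        return None
    try:
        _free, total = torch.cuda.mem_get_info(torch.device(device))
    except Exception as e:           # record any init failure as an error
        return f'could not initialize {device}: {e}'
    if total < MIN_VRAM_BYTES:
        return f'{device} has less than 3 GB of VRAM'
    return None
```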
cmdr2
c13bccc7ae Fix the error where a device named 'None' would get assigned for incompatible GPUs 2022-11-11 15:43:20 +05:30
cmdr2
bd56795c62 Switch to using cuda:N instead of N (integer device ids) 2022-11-11 14:46:05 +05:30
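This commit moves from the bare integer ids of the commit below to explicit 'cuda:N' strings; a small normalizer can illustrate the convention. The helper name is an assumption, not the project's actual function.

```python
# Minimal sketch (illustrative helper): accept the older forms ('cpu', 'auto',
# bare integers) and normalize everything to 'cpu' / 'auto' / 'cuda:N'.
def normalize_device_id(device):
    if device in ('cpu', 'auto'):
        return device
    if isinstance(device, int):
        return f'cuda:{device}'
    if isinstance(device, str) and device.isdigit():
        return f'cuda:{int(device)}'
    if isinstance(device, str) and device.startswith('cuda:'):
        return device
    raise ValueError(f'unrecognized device id: {device!r}')

assert normalize_device_id(1) == 'cuda:1'
assert normalize_device_id('cuda:0') == 'cuda:0'
```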
cmdr2
b9a12d1562 Restrict device selection id to 'cpu' or integers (and 'auto' in the initial device selection functions) 2022-11-10 20:03:11 +05:30
cmdr2
058ce6fe82 UI-side changes for selecting multiple GPUs, and keeping the Use CPU option synchronized with the backend. This change isn't ready to be shipped; it still needs python-side changes to support the req.render_device config 2022-11-09 19:17:44 +05:30
cmdr2
8f1d214b12 Bring back CPU unloading of models when idle for a while (applies only when GPUs are present) 2022-11-08 19:23:35 +05:30
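A minimal sketch of the idle-unload behavior this commit restores, with illustrative names and an assumed timeout: when a GPU worker has been idle for a while, park the weights in system RAM to free VRAM, and pull them back onto the GPU for the next request.

```python
import time

IDLE_UNLOAD_SECONDS = 20 * 60    # assumption: the project's actual timeout may differ

class ModelSlot:
    """Holds a loaded torch model for one render device (illustrative)."""
    def __init__(self, model, device: str):
        self.model = model
        self.device = device
        self.last_used = time.time()

    def maybe_unload_to_cpu(self):
        # Applies only when rendering on a GPU, as the commit notes.
        if self.device != 'cpu' and time.time() - self.last_used > IDLE_UNLOAD_SECONDS:
            self.model.to('cpu')          # free VRAM, keep weights in RAM

    def acquire(self):
        self.model.to(self.device)        # move back to the GPU for the new task
        self.last_used = time.time()
        return self.model
```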
cmdr2
51fb1a43de Temporarily disable the idle CPU unloading behavior, since it's not clear whether it'll reload the model if a future request for the CPU is received after it has unloaded the model 2022-11-08 19:02:21 +05:30
cmdr2
9bc7521de0 Make custom VAE an Image Setting, rather than a System Setting; Don't load a VAE into memory by default 2022-11-08 16:54:15 +05:30
cmdr2
67cca3bc00 Print the devices for which rendering threads have started; Prettier print of the model data 2022-11-07 18:26:10 +05:30
cmdr2
90b1609d4e device_selection is already a string, since we've used string functions before this line 2022-11-07 18:08:43 +05:30
cmdr2
abbfae2fc0 Simplify the logic used for displaying the GFPGAN warning 2022-11-07 17:55:27 +05:30
JeLuF
59e4c1cf79 Sanitize session id's before using them as path components 2022-11-03 00:43:44 +01:00
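The path-traversal concern behind this fix can be sketched as follows; the regex and helper are an assumed approach, not necessarily the exact patch.

```python
import re
from pathlib import Path

def sanitized_session_dir(base_dir: str, session_id: str) -> Path:
    # Replace anything that isn't a safe filename character before joining paths.
    safe_id = re.sub(r'[^a-zA-Z0-9._-]', '_', str(session_id))
    base = Path(base_dir).resolve()
    path = (base / safe_id).resolve()
    if base not in path.parents:          # e.g. session_id was '..' or empty
        raise ValueError('session id escapes the output directory')
    return path
```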
Marc-Andre Ferland
b09b80933d Print the device name on task start and completion, so users aren't left in doubt about which device was selected for the task. 2022-11-01 22:28:10 -04:00
Marc-Andre Ferland
eb596ba866 Allow start_render_thread to proceed faster in case of failure. 2022-10-30 06:04:06 -04:00
Marc-Andre Ferland
c687091ce9 Only return valid data for alive threads. 2022-10-30 01:38:32 -04:00
Marc-Andre Ferland
eb994716e6 Indentation... 2022-10-30 01:33:17 -04:00
Marc-Andre Ferland
099727d671 Added auto unload to CPU if GPUs are active. 2022-10-29 18:57:10 -04:00
Marc-Andre Ferland
6229cdb1ba Added a missing device_name 2022-10-29 17:47:45 -04:00
Marc-Andre Ferland
b7a663ed20 Implement complete device selection in the backend. 2022-10-29 17:34:53 -04:00
Marc-Andre Ferland
86da27a7a1 Moved wait outside lock and now returns false on failure. 2022-10-28 22:52:00 -04:00
Marc-Andre Ferland
fc2a6567da Moved import before use of runtime.thread_data.device 2022-10-28 22:51:04 -04:00
cmdr2
a8c16e39b8 Support custom VAE files; Use vae-ft-mse-840000-ema-pruned as the default VAE, which can be overridden by putting a .vae.pt file inside models/stable-diffusion with the same name as the ckpt model file. The UI / System Settings allows setting the default VAE model to use 2022-10-28 20:06:44 +05:30
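The lookup order described here, sketched with assumed paths and helper names: a <model>.vae.pt sitting next to <model>.ckpt overrides the default, otherwise the default VAE named in the commit is used if present.

```python
from pathlib import Path

DEFAULT_VAE = 'vae-ft-mse-840000-ema-pruned'      # default named in the commit

def resolve_vae_path(ckpt_path: str):
    ckpt = Path(ckpt_path)
    override = ckpt.with_suffix('.vae.pt')        # e.g. models/stable-diffusion/foo.vae.pt
    if override.exists():
        return override
    default = ckpt.parent / f'{DEFAULT_VAE}.vae.pt'
    return default if default.exists() else None  # None -> fall back to the model's built-in VAE
```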
Marc-Andre Ferland
26562e445f Set online after preload. Move ident to include in if check. 2022-10-28 04:09:34 -04:00
Marc-Andre Ferland
c52fc843f6 Comment... 2022-10-28 02:09:11 -04:00
Marc-Andre Ferland
02240bda25 Moved up to not duplicate if statement. 2022-10-28 02:05:48 -04:00
Marc-Andre Ferland
0185ef7c83 Apply force_full_precision if it was set on device_select. 2022-10-28 02:02:09 -04:00
Marc-Andre Ferland
71c6beadb4 Only default to cpu on auto or current, not when a specific device was requested. 2022-10-28 01:09:38 -04:00
Marc-Andre Ferland
22a11769fa Enable preload on cpu when no other devices are alive. 2022-10-27 21:57:50 -04:00
Marc-Andre Ferland
7f4786f9dd Wait until device is fully ready before proceeding. 2022-10-27 20:27:21 -04:00
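Several commits here ("Wait until device is fully ready", "Moved wait outside lock and now returns false on failure", "Allow start_render_thread to proceed faster in case of failure") revolve around the same pattern; a hedged sketch with illustrative names:

```python
import threading

def start_render_thread(device: str, init_device, timeout: float = 60.0) -> bool:
    """Start a worker for `device` and wait (bounded) until it is fully ready."""
    ready = threading.Event()

    def worker():
        try:
            init_device(device)        # load models, claim VRAM, etc.
            ready.set()                # signal readiness only after init succeeds
        except Exception as e:
            print(f'failed to start render thread for {device}: {e}')

    threading.Thread(target=worker, daemon=True).start()
    return ready.wait(timeout)         # False when init failed or timed out
```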
cmdr2
0dfaf9159d Put back the check to only preload on GPU 2022-10-28 00:04:33 +05:30
cmdr2
389e3397ec Preload the model even in the CPU mode 2022-10-27 23:17:41 +05:30
cmdr2
284b95213e Fix a bug where the device wouldn't get set if no cuda-compatible hardware was found 2022-10-27 22:59:55 +05:30
cmdr2
952854f64e Revert 554650c18d 2022-10-27 22:59:17 +05:30
cmdr2
554650c18d Fix a bug where the device wouldn't get set if no cuda-compatible hardware was found 2022-10-27 22:51:45 +05:30
cmdr2
3fb5d886dc Merge pull request #398 from madrang/mGpu-crashHandling (mGpu crash handling) 2022-10-27 13:44:26 +05:30
Marc-Andre Ferland
d3df113fb0 When reduced_memory is True, on crash only move the model back to CPU. 2022-10-26 16:52:31 -04:00
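A minimal sketch of the recovery path this commit describes, using the runtime.thread_data name that appears elsewhere in this log; the surrounding error handling is assumed.

```python
def on_render_crash(thread_data, reduced_memory: bool):
    # With reduced_memory enabled, keep the weights in system RAM so the next task
    # avoids a full reload from disk; otherwise drop the model entirely.
    if reduced_memory:
        thread_data.model.to('cpu')
    else:
        thread_data.model = None
```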
Marc-Andre Ferland
06c2ab045a Fix TypeError: string indices must be integers 2022-10-26 16:14:29 -04:00
Marc-Andre Ferland
ae40b6ba8c Missed an is_alive check in the conversion. 2022-10-25 03:00:50 -04:00
Marc-Andre Ferland
c41baf3aeb Moved img_id creation inside save image loop. 2022-10-25 02:10:52 -04:00
Marc-Andre Ferland
fc875651d3 Removed unused vars 2022-10-23 05:00:21 -04:00
Marc-Andre Ferland
0d62123a0b Replaced missing gpu_name with device_name 2022-10-22 21:28:12 -04:00
Marc-Andre Ferland
8adf965d0b Formatting changes. 2022-10-22 19:02:02 -04:00
Marc-Andre Ferland
364e364429 Added get_cached_task to replace task_cache.tryGet in server.py; updates the cache TTL on the /stream and temp-image endpoints, so images stay alive longer while the browser keeps reading them. 2022-10-22 13:52:13 -04:00
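A sketch of the keep-alive idea behind get_cached_task; this is an illustrative cache, not the project's task_cache, and the update_ttl flag and TTL value are assumptions.

```python
import time

class TaskCache:
    def __init__(self, ttl_seconds: float = 15 * 60):    # assumed TTL
        self.ttl = ttl_seconds
        self._items = {}                                  # task_id -> (expires_at, task)

    def put(self, task_id, task):
        self._items[task_id] = (time.time() + self.ttl, task)

    def get_cached_task(self, task_id, update_ttl: bool = True):
        entry = self._items.get(task_id)
        if entry is None or entry[0] < time.time():
            self._items.pop(task_id, None)                # expired or missing
            return None
        if update_ttl:
            self.put(task_id, entry[1])                   # refresh TTL while it's being read
        return entry[1]
```

Calling this from the /stream and temp-image handlers is what keeps partially viewed results from expiring while the browser is still reading them.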
Marc-Andre Ferland
cd6d49860f Missing an 'r' in progress 2022-10-22 01:23:39 -04:00
Marc-Andre Ferland
8a10fcf7ea Updated print statement. 2022-10-22 00:34:33 -04:00