Commit Graph

182 Commits

Author SHA1 Message Date
cmdr2
d7330b80a9 Revert "Update ddim_callback_sd2.patch" 2022-11-26 01:22:35 +05:30
jsuelwald
8114fa3f5d Update ddim_callback_sd2.patch 2022-11-25 20:46:24 +01:00
cmdr2
4bc5508f38 Rollback 2022-11-26 01:07:55 +05:30
cmdr2
e503c6092e Ddim decode for img2img 2022-11-26 00:55:39 +05:30
cmdr2
6a8985d8dd Update ddim_callback_sd2.patch 2022-11-26 00:49:15 +05:30
cmdr2
bee67fd883 Shape 2022-11-25 23:54:08 +05:30
cmdr2
a1d75d40aa Update runtime.py 2022-11-25 23:36:43 +05:30
cmdr2
29484867ca Typo 2022-11-25 23:32:56 +05:30
cmdr2
7fa983b971 Img2img sd2 attempt 2 2022-11-25 23:28:31 +05:30
cmdr2
617a8b2814 Fix for make_schedule error in sd2 2022-11-25 23:15:22 +05:30
cmdr2
b924d323d4 img2img attempt for sd2 2022-11-25 22:36:02 +05:30
cmdr2
642c114501 Working txt2img 2022-11-25 14:29:24 +05:30
cmdr2
02dd3e457d Tweaks to load sd1 models in sd2 code, typos 2022-11-25 13:57:15 +05:30
cmdr2
ea7b28c9d5 Placeholder changes for SD 2.0 support, haven't tested yet 2022-11-25 12:17:44 +05:30
cmdr2
7cbf62cf12 Revert whitespace fix 2022-11-22 23:30:03 +05:30
cmdr2
5af84b8e90 Fix whitespace during git apply 2022-11-22 22:21:54 +05:30
cmdr2
93bbfac29a Change the backend to a custom fork of SD, since basujindal's fork is no longer under development. This fork is intended to include the common models/tools used like RealESRGAN, GFPGAN, Codeformer etc, and is meant to be a community-developed project 2022-11-22 16:38:39 +05:30
cmdr2
9499685dda Check for enqueued tasks more frequently 2022-11-21 14:06:26 +05:30
cmdr2
2cf8b2a453 Use the correct device name when moving the model to cpu 2022-11-20 00:43:38 +05:30
cmdr2
c10e773401 Speed up the model move, by using the earlier function to move modelCS and modelFS to the cpu 2022-11-19 11:53:33 +05:30
cmdr2
025d4df774 Don't crash if a VAE file fails to load 2022-11-18 13:11:48 +05:30
cmdr2
97ee085f30 Fix a bug where Face Correction (GFPGAN) would fail on cuda:N (i.e. GPUs other than cuda:0), as well as fail on CPU if the system had an incompatible GPU. 2022-11-17 12:27:06 +05:30
cmdr2
06b41aee58 Fix - reduce the amount of VRAM occupied when the program starts up, this caused a regression and failures on GPUs with 4 gb or less of VRAM 2022-11-16 19:29:04 +05:30
cmdr2
e99d54d1f6 Merge main 2022-11-16 11:19:10 +05:30
cmdr2
9d2b944063 Remove unused variable 2022-11-15 13:18:00 +05:30
cmdr2
8e1ec5903b Don't throw an exception when an invalid device is being checked for compatibility. Report and return false 2022-11-15 12:41:10 +05:30
Marc-Andre Ferland
a108e5067d Typos in comments. 2022-11-14 22:20:21 -05:00
Marc-Andre Ferland
a4a24b1a1a Fixed calling get_device_delta with a single cuda device inside config.json at boot. 2022-11-14 22:14:03 -05:00
Marc-Andre Ferland
ffe0eb1544 Changed update_render_threads to use SetAppConfigRequest to set which devices are active. Keep ImageRequest.render_device for affinity only. (Send a task to an already active device.) 2022-11-14 21:54:24 -05:00
cmdr2
2967261acb Ensure that we only pick better GPUs than the current one, during the subsequent tasks 2022-11-14 21:13:24 +05:30
cmdr2
8707f88c07 Show mem free info 2022-11-14 20:35:47 +05:30
cmdr2
338ceffa6d Use 'auto' as the default render_device 2022-11-14 15:14:58 +05:30
cmdr2
371e104b00 Pick the device id 2022-11-14 13:43:37 +05:30
cmdr2
d5aba8eaf1 Show free/total mem while starting up 2022-11-14 13:40:55 +05:30
cmdr2
027b2e1b88 Use the 65 percentile of free_mem for GPU selection, instead of 75 percentile 2022-11-14 12:26:21 +05:30
cmdr2
d79eb5e1a6 Typo 2022-11-14 11:51:56 +05:30
cmdr2
f6651b03b5 Workaround to run gfpgan on cuda:0 even if it's not enabled in the multi-gpu setup 2022-11-14 11:51:18 +05:30
cmdr2
5f880a179c Remove idle CPU unloading (when GPUs are active), because now a CPU can never be used along with GPUs 2022-11-14 11:24:30 +05:30
cmdr2
ea03fd22db Start on multiple GPUs by default (top 75 percentile by free_mem); UI selection for 'cpu' or 'auto' or a list of specific GPUs, which is now linked to the backend; Dynamically start/stop render threads for the devices, without requiring a full program restart 2022-11-14 11:23:22 +05:30
cmdr2
a19ba40672 Typo 2022-11-12 13:31:59 +05:30
cmdr2
3983cb001f Save the VAE model to the metadata text file 2022-11-12 13:29:24 +05:30
Marc-Andre Ferland
aa21115e26 Always return a byte buffer. Sending the picture as URL text fails in some browsers. 2022-11-11 20:44:39 -05:00
cmdr2
a39f845835 current_vae_path needs to be global 2022-11-11 19:30:33 +05:30
cmdr2
3fdd8d91e2 Handle device init failures and record that as an error, if the GPU has less than 3 gb of VRAM 2022-11-11 16:13:27 +05:30
cmdr2
c13bccc7ae Fix the error where a device named 'None' would get assigned for incompatible GPUs 2022-11-11 15:43:20 +05:30
cmdr2
bd56795c62 Switch to using cuda:N instead of N (integer device ids) 2022-11-11 14:46:05 +05:30
cmdr2
b9a12d1562 Restrict device selection id to 'cpu' or integers (and 'auto' in the initial device selection functions) 2022-11-10 20:03:11 +05:30
cmdr2
058ce6fe82 UI-side changes for selecting multiple GPUs, and keeping the Use CPU option synchronized with the backend. This change isn't ready to be shipped, it still needs python-side changes to support the req.render_device config 2022-11-09 19:17:44 +05:30
cmdr2
8f1d214b12 Bring back CPU unloading of models when idle for a while (applies only when GPUs are present) 2022-11-08 19:23:35 +05:30
cmdr2
51fb1a43de Temporarily disable the idle CPU unloading behavior, since it's not clear whether it'll reload the model if a future request for the CPU is received after it has unloaded the model 2022-11-08 19:02:21 +05:30