Commit Graph

78 Commits

Author SHA1 Message Date
cmdr2
e45cbbf1ca Use the turbo setting if requested 2022-12-11 20:42:31 +05:30
cmdr2
1a5b6ef260 Rename runtime2.py to renderer.py; Will remove the old runtime soon 2022-12-11 20:21:25 +05:30
cmdr2
096556d8c9 Move the remaining model-related code into the model_manager 2022-12-11 20:13:44 +05:30
cmdr2
97919c7e87 Simplify the runtime code 2022-12-11 19:58:12 +05:30
cmdr2
6ce6dc3ff6 Get rid of the ugly copying around (and maintenance) of multiple request-related fields by splitting them into two objects: task-related fields and render-related fields. Also remove request-defined full precision; full precision can now be forced with the USE_FULL_PRECISION environment variable 2022-12-11 18:16:29 +05:30
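
A minimal sketch of how an environment-variable override like the one described above might be read at startup. The variable name USE_FULL_PRECISION comes from the commit message; the helper name and accepted values are illustrative.

```python
import os

def use_full_precision() -> bool:
    # Hypothetical helper: force full-precision rendering when the
    # USE_FULL_PRECISION environment variable is set to a truthy value.
    return os.environ.get("USE_FULL_PRECISION", "").lower() in ("1", "true", "yes")
```
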
cmdr2
afb88616d8 Load the models after the device init, to let the UI load before the models finish loading 2022-12-11 13:30:16 +05:30
cmdr2
a2af811ad2 Disable uvicorn access logging in favor of cleaner server-side logging, since we already get all that info from our own logs; print the request metadata 2022-12-09 22:47:34 +05:30
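
Uvicorn lets the access log be disabled when the server is launched; a sketch assuming the backend starts uvicorn programmatically (the module, app, and port names are placeholders):

```python
import uvicorn

# Turn off uvicorn's per-request access log; the application already prints
# the request metadata through its own logger.
uvicorn.run("server:app", host="0.0.0.0", port=9000, access_log=False)
```
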
cmdr2
cde8c2d3bd Use a logger 2022-12-09 21:30:18 +05:30
cmdr2
79cc84b611 Option to apply color correction (balances the histogram) during inpainting; Refactor the runtime to use a general-purpose dict 2022-12-09 19:39:56 +05:30
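
The commit doesn't show the implementation, but histogram-balancing color correction for inpainting is commonly done by matching the result's histogram to the source image, e.g. with scikit-image. A sketch of that general technique, not the project's actual code:

```python
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms  # scikit-image >= 0.19 for channel_axis

def apply_color_correction(result: Image.Image, original: Image.Image) -> Image.Image:
    # Match the color histogram of the inpainted result to the original image,
    # reducing the color drift that inpainting can introduce.
    corrected = match_histograms(np.asarray(result), np.asarray(original), channel_axis=-1)
    return Image.fromarray(np.clip(corrected, 0, 255).astype(np.uint8))
```
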
cmdr2
8820814002 Simplify the API for resolving model paths; Code cleanup 2022-12-09 15:45:36 +05:30
cmdr2
f4a6910ab4 Work-in-progress: refactored the end-to-end codebase. Missing: hypernetworks, turbo config, and SD 2. Not tested yet 2022-12-08 21:39:09 +05:30
cmdr2
bad89160cc Work-in-progress model loading 2022-12-08 13:50:46 +05:30
cmdr2
5782966d63 Merge branch 'beta' into refactor 2022-12-08 11:58:09 +05:30
Marc-Andre Ferland
ba2c966329 First draft of multi-task in a single session. (#622) 2022-12-08 11:12:46 +05:30
cmdr2
fb6a7e04f5 Work-in-progress refactor of the backend, to move most of the logic to diffusion-kit and keep this project as a UI around that engine. Does not work yet. 2022-12-07 22:15:35 +05:30
Guillaume Mercier
cbe91251ac Hypernetwork support (#619)
* Update README.md

* Update README.md

* Make on_sd_start.sh executable

* Merge pull request #542 from patriceac/patch-1

Fix restoration of model and VAE

* Merge pull request #541 from patriceac/patch-2

Fix restoration of parallel output setting

* Hypernetwork support

Adds support for hypernetworks. Hypernetworks are stored in /models/hypernetworks

* forgot to remove unused code

Co-authored-by: cmdr2 <secondary.cmdr2@gmail.com>
2022-12-07 11:24:16 +05:30
JeLuF
e7ca8090fd Make JPEG output quality user-controllable (#607)
Add a slider to the image options for the JPEG quality.
For PNG images, the slider is hidden.
2022-12-05 11:02:33 +05:30
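
On the backend, a JPEG-quality setting usually just flows through to the image encoder; a hedged sketch with Pillow (the function name and default values are illustrative):

```python
from io import BytesIO
from PIL import Image

def encode_image(img: Image.Image, output_format: str = "jpeg", quality: int = 75) -> bytes:
    buf = BytesIO()
    if output_format.lower() in ("jpeg", "jpg"):
        # Pillow's JPEG encoder honours the quality parameter (1-95 is the usual range).
        img.save(buf, format="JPEG", quality=quality)
    else:
        # PNG is lossless, so the quality slider does not apply.
        img.save(buf, format="PNG")
    return buf.getvalue()
```
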
cmdr2
ac605e9352 Typos and minor fixes for SD 2 2022-11-29 13:30:08 +05:30
cmdr2
e37be0f954 Remove the need to use yield in the core loop for streaming results. This removes the need to patch the Stable Diffusion code, which can be fragile 2022-11-29 13:03:57 +05:30
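
The commit doesn't say what replaced the yield, but one common way to stream intermediate results without touching the sampler's core loop is to push them through a callback into a queue that the HTTP layer drains. A sketch of that general pattern, with illustrative names:

```python
import queue

progress_queue = queue.Queue()

def on_image_step(image_bytes: bytes) -> None:
    # Invoked from the sampler's existing step callback, so the core loop
    # needs no patching and no yield.
    progress_queue.put(image_bytes)

def stream_results():
    # Generator used only at the HTTP layer, outside the sampler.
    while True:
        item = progress_queue.get()
        if item is None:  # sentinel: generation finished
            break
        yield item
```
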
cmdr2
9499685dda Check for enqueued tasks more frequently 2022-11-21 14:06:26 +05:30
cmdr2
025d4df774 Don't crash if a VAE file fails to load 2022-11-18 13:11:48 +05:30
cmdr2
97ee085f30 Fix a bug where Face Correction (GFPGAN) would fail on cuda:N (i.e. GPUs other than cuda:0), as well as fail on CPU if the system had an incompatible GPU. 2022-11-17 12:27:06 +05:30
cmdr2
9d2b944063 Remove unused variable 2022-11-15 13:18:00 +05:30
Marc-Andre Ferland
a108e5067d Typos in comments. 2022-11-14 22:20:21 -05:00
Marc-Andre Ferland
ffe0eb1544 Changed update_render_threads to use SetAppConfigRequest to set which devices are active.
Keep ImageRequest.render_device for affinity only. (Send a task to an already active device.)
2022-11-14 21:54:24 -05:00
cmdr2
8707f88c07 Show mem free info 2022-11-14 20:35:47 +05:30
cmdr2
338ceffa6d Use 'auto' as the default render_device 2022-11-14 15:14:58 +05:30
cmdr2
d79eb5e1a6 Typo 2022-11-14 11:51:56 +05:30
cmdr2
f6651b03b5 Workaround to run gfpgan on cuda:0 even if it's not enabled in the multi-gpu setup 2022-11-14 11:51:18 +05:30
cmdr2
5f880a179c Remove idle CPU unloading (when GPUs are active), because now a CPU can never be used along with GPUs 2022-11-14 11:24:30 +05:30
cmdr2
ea03fd22db Start on multiple GPUs by default (top 75 percentile by free_mem); UI selection for 'cpu' or 'auto' or a list of specific GPUs, which is now linked to the backend; Dynamically start/stop render threads for the devices, without requiring a full program restart 2022-11-14 11:23:22 +05:30
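
A sketch of how "top 75 percentile by free_mem" device selection could look with PyTorch; the threshold logic below is one plausible reading of the commit message (keep every GPU whose free VRAM is at least 75% of the best GPU's), not the project's exact rule:

```python
import torch

def pick_default_devices(threshold: float = 0.75) -> list:
    # Fall back to the CPU if no CUDA device is visible.
    if not torch.cuda.is_available():
        return ["cpu"]
    free_mem = {
        f"cuda:{i}": torch.cuda.mem_get_info(i)[0]  # (free_bytes, total_bytes)
        for i in range(torch.cuda.device_count())
    }
    best = max(free_mem.values())
    # Keep the GPUs whose free memory is close enough to the best one.
    return [device for device, free in free_mem.items() if free >= threshold * best]
```
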
Marc-Andre Ferland
aa21115e26 Always return a byte buffer. Sending the picture as URL text fails in some browsers. 2022-11-11 20:44:39 -05:00
cmdr2
a39f845835 current_vae_path needs to be global 2022-11-11 19:30:33 +05:30
cmdr2
3fdd8d91e2 Handle device init failures and record them as an error if the GPU has less than 3 GB of VRAM 2022-11-11 16:13:27 +05:30
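
A minimal sketch of that kind of guard; the 3 GB figure comes from the commit message, and the function name and error handling are illustrative:

```python
import torch

MIN_VRAM_BYTES = 3 * 1024 ** 3  # 3 GB, per the commit message

def validate_device(device: str) -> None:
    # Record a clear error instead of crashing later if the GPU is too small.
    total = torch.cuda.get_device_properties(device).total_memory
    if total < MIN_VRAM_BYTES:
        raise RuntimeError(f"{device} has less than 3 GB of VRAM and cannot be used for rendering")
```
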
cmdr2
c13bccc7ae Fix the error where a device named 'None' would get assigned for incompatible GPUs 2022-11-11 15:43:20 +05:30
cmdr2
bd56795c62 Switch to using cuda:N instead of N (integer device ids) 2022-11-11 14:46:05 +05:30
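
A small illustrative helper for the id normalization described here (hypothetical name, not the project's actual function):

```python
def normalize_device_id(device) -> str:
    # Accept legacy integer ids (0, 1, ...) and return the canonical 'cuda:N'
    # form; pass 'cpu', 'auto', and already-qualified names through unchanged.
    if isinstance(device, int):
        return f"cuda:{device}"
    return str(device)
```
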
cmdr2
b9a12d1562 Restrict device selection id to 'cpu' or integers (and 'auto' in the initial device selection functions) 2022-11-10 20:03:11 +05:30
cmdr2
058ce6fe82 UI-side changes for selecting multiple GPUs, and keeping the Use CPU option synchronized with the backend. This change isn't ready to be shipped, it still needs python-side changes to support the req.render_device config 2022-11-09 19:17:44 +05:30
cmdr2
8f1d214b12 Bring back CPU unloading of models when idle for a while (applies only when GPUs are present) 2022-11-08 19:23:35 +05:30
cmdr2
51fb1a43de Temporarily disable the idle CPU unloading behavior, since it's not clear whether it'll reload the model if a future request for the CPU is received after it has unloaded the model 2022-11-08 19:02:21 +05:30
Marc-Andre Ferland
b09b80933d Print the device name on task start and completion, so users aren't left wondering which device picked up the task. 2022-11-01 22:28:10 -04:00
Marc-Andre Ferland
eb596ba866 Allow start_render_thread to proceed faster in case of failure. 2022-10-30 06:04:06 -04:00
Marc-Andre Ferland
c687091ce9 Only return valid data for alive threads. 2022-10-30 01:38:32 -04:00
Marc-Andre Ferland
eb994716e6 Indentation... 2022-10-30 01:33:17 -04:00
Marc-Andre Ferland
099727d671 Added automatic unloading of the model on the CPU if GPUs are active. 2022-10-29 18:57:10 -04:00
Marc-Andre Ferland
b7a663ed20 Implement complete device selection in the backend. 2022-10-29 17:34:53 -04:00
Marc-Andre Ferland
86da27a7a1 Moved the wait outside the lock; now returns False on failure. 2022-10-28 22:52:00 -04:00
Marc-Andre Ferland
fc2a6567da Moved import before use of runtime.thread_data.device 2022-10-28 22:51:04 -04:00
cmdr2
a8c16e39b8 Support custom VAE files; Use vae-ft-mse-840000-ema-pruned as the default VAE, which can be overridden by putting a .vae.pt file inside models/stable-diffusion with the same name as the ckpt model file. The UI / System Settings allows setting the default VAE model to use 2022-10-28 20:06:44 +05:30
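
The override rule described here (a .vae.pt file with the same base name as the checkpoint beats the default vae-ft-mse-840000-ema-pruned) can be sketched as a small path-resolution helper; the function name and return convention are illustrative:

```python
import os
from typing import Optional

def find_custom_vae(ckpt_path: str) -> Optional[str]:
    # A .vae.pt file placed next to the checkpoint, with the same base name,
    # overrides the default VAE. Returning None means "use the default
    # (vae-ft-mse-840000-ema-pruned)".
    override = os.path.splitext(ckpt_path)[0] + ".vae.pt"
    return override if os.path.exists(override) else None
```
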
Marc-Andre Ferland
26562e445f Set online after preload. Fix indentation so the line is included in the if check. 2022-10-28 04:09:34 -04:00