Compare commits

...

125 Commits

Author SHA1 Message Date
fc1a9bf3a6 Support python paths in legacy installations in the dev console 2025-07-25 14:49:43 +05:30
d1c834d19b Use the launched python executable for installing new packages 2025-07-25 14:43:43 +05:30
374b411d6c Fix a conflict where the system-wide python gets picked up on some Windows PCs 2025-07-25 14:31:00 +05:30
f443e3c694 Ignore pickle scanning for .sft and .gguf files 2025-06-27 16:44:35 +05:30
3a07523d08 Remove an unnecessary check, the installer already checks for this 2025-06-27 14:07:15 +05:30
a9e5c81a47 Fix an install warning check - this field isn't saved anymore 2025-06-27 14:07:09 +05:30
8016453a2c Potential fix for #1942 - basicsr wasn't installing on mac 2025-06-27 13:59:01 +05:30
3925e50a7f Add support for AMD 9060/9070 2025-06-13 10:44:51 +05:30
1d27765150 Auto-fix cpu-only torch installations on NVIDIA 5060 2025-05-30 18:35:44 +05:30
14e45bfd24 Early support for AMD 9070 (and Navi 4x series) 2025-05-30 13:11:24 +05:30
46c4aca7b7 Recognize 5060 Ti and a few other recent GPUs 2025-05-30 10:58:36 +05:30
38eb8f934e Support more GPUs (workstation and mining); Fix torchruntime test 2025-04-29 10:40:24 +05:30
0393c907d7 Fix torchruntime to not force cu128 on older NVIDIA GPUs (which don't support it) 2025-04-29 10:08:42 +05:30
f9c664dc3c Tell the user to reinstall ED if they're trying to use a 50xx GPU with Python 3.8-based Easy Diffusion 2025-04-27 21:29:14 +05:30
59ca8b0336 Fix FileNotFoundError when installing torch on Windows PCs without powershell in their system PATH variable 2025-04-25 20:45:37 +05:30
e0a4d43087 Use pytorch 2.7 (with cuda 12.8) on new installations with NVIDIA gpus 2025-04-24 15:51:15 +05:30
a33737b991 Auto-upgrade torch for NVIDIA 50xx series. Fix for #1918 2025-04-23 15:23:54 +05:30
bac23290dd Fix for incorrect upgrade logic for torchruntime. Affects the workaround in #1918 2025-04-23 15:23:49 +05:30
8380ba97c5 Update README.md 2025-04-02 10:13:42 +05:30
45c6865b2d Fix for blackwell GPUs on new installations 2025-03-29 14:35:55 +05:30
1b3d048bf2 Use safetensors as the default model instead of ckpt 2025-03-11 10:25:13 +05:30
a675161e47 Fix nvidia 50xx support 2025-03-10 06:21:18 +05:30
3a9f71d17a Fix compatibility with python 3.9 and directml 2025-03-07 10:39:58 +05:30
b09be681e6 Check for half-precision on GeForce MX450 2025-03-07 10:28:28 +05:30
4330434835 Fix Granite Ridge APU device type 2025-03-06 12:48:46 +05:30
3836b91ae1 Recognize some more cards (5070, Granite Ridge) via a torchruntime upgrade 2025-03-06 12:07:42 +05:30
72c4e47619 Support Blackwell (NVIDIA 5060/5070/5080/5090) 2025-03-04 16:23:43 +05:30
afae421cee Use python 3.9 by default 2025-03-04 15:28:38 +05:30
0d0ec4ee56 sdkit 2.0.22.7/2.0.15.16 - python 3.9 compatibility 2025-03-04 15:10:17 +05:30
55a31c77e6 torchruntime upgrade - fix bug with multiple GPUs of the same model 2025-03-04 11:28:03 +05:30
43d2642b68 Update README.md 2025-02-21 17:08:30 +05:30
9dc2154027 sdkit upgrade - fixes loras with numpy arrays 2025-02-20 10:12:35 +05:30
fd49ba5dbc Update README.md 2025-02-18 11:59:33 +05:30
3e71054150 Potential fix for #1902 2025-02-18 11:14:42 +05:30
8d6c0de262 Recognize Phoenix3 and Phoenix4 AMD APUs 2025-02-18 11:14:37 +05:30
561fe0cc79 Use torchruntime for installing torch/torchvision on the users' PC, instead of the custom logic used here. torchruntime was built from the custom logic used here, and covers a lot more scenarios and supports more devices 2025-02-13 11:56:24 +05:30
26cbc30407 Use the correct size of the image when used as the input. Code credit: @AvidGameFan 2025-02-13 11:33:29 +05:30
7a1e2c4190 Hotfix for missing torchruntime on new installations 2025-02-10 19:31:26 +05:30
7b0a17a3ab Temporary fix for #1899 2025-02-10 18:15:31 +05:30
302426f5d4 Another fix for mps backend 2025-02-10 10:05:35 +05:30
9dc9ea3825 Fix broken mps backend 2025-02-10 09:55:31 +05:30
2a24a49f6b torchruntime 1.9.2 - fixes a bug on cpu platforms 2025-02-08 16:15:12 +05:30
5e5e39c285 changelog 2025-02-08 15:10:50 +05:30
cd8365558a Remove hardcoded references to torch.cuda; Use torchruntime and sdkit's device utilities instead 2025-02-08 15:08:46 +05:30
2e4623736a sdkit 2.0.22.3 or 2.0.15.12 - fixes a regression on mac 2025-02-03 21:32:06 +05:30
7f3a4383c7 sdkit 2.0.22.2 or 2.0.15.11 - install torchruntime 2025-02-03 21:32:02 +05:30
6d6b528aad sdkit 2.0.22 or 2.0.15.9 2025-02-03 10:58:47 +05:30
76485ab1e7 Move the half precision bug check logic to sdkit 2025-02-03 10:58:43 +05:30
68d67248f4 [sdkit update] v2.0.22 (and v2.0.15.8 for LTS) 2025-02-03 10:58:39 +05:30
81119a5893 Skip sdkit/diffusers install if it's in developer mode 2025-01-28 09:57:04 +05:30
554559f5ce changelog 2025-01-28 09:56:16 +05:30
b3cc415359 Temporarily remove torch 2.5 from the list, since it doesn't work with Python 3.8. More on this in future commits 2025-01-28 09:55:09 +05:30
5ac44de6c7 Even older rocm versions 2025-01-09 16:39:44 +05:30
a7a78a40d0 Allow older rocm versions 2025-01-09 16:00:18 +05:30
fea24cee90 Update README.md 2025-01-07 11:29:20 +05:30
20d77a85a1 Upgrade the version of torch used for rocm for Navi 30+, and point to the broader torch URL 2025-01-07 10:32:25 +05:30
0687e7b020 Update the index url for AMD ROCm torch install 2025-01-06 19:49:06 +05:30
75e4dc25dc Extend the list of supported torch, CUDA and ROCm versions 2025-01-06 19:32:10 +05:30
7e635caec8 version bump for wmic deprecation 2025-01-04 18:12:49 +05:30
dcb1f3351e Replace the use of wmic (deprecated) with a powershell call 2025-01-04 18:09:43 +05:30
8e9a9dda0f Workaround for when the context doesn't have a model_load_errors field; Not sure why it doesn't have it 2025-01-04 18:07:00 +05:30
546fc937b2 Annual 2024-12-13 15:56:10 +05:30
28badd5319 2024-12-12 00:30:08 +05:30
1a1f8f381b winter is coming 2024-12-12 00:16:48 +05:30
c246c7456a Pin wandb 2024-12-11 11:59:26 +05:30
77a9226720 Annotate with types for pydantic 2024-12-11 11:47:18 +05:30
74b05022f4 Merge pull request #1867 from tjcomserv/tjcomserv-patch-1-pydantic
Tjcomserv patch 1 pydantic
2024-12-11 11:45:24 +05:30
b3a961fc82 Update check_modules.py 2024-11-13 21:49:17 +05:30
0c8410c371 Pin huggingface-hub to 0.23.2 to fix broken deployments 2024-11-13 21:39:01 +05:30
5fe3acd44b Use 1.4 by default, instead of 1.5 2024-09-09 18:34:32 +05:30
716f30fecb Merge pull request #1810 from easydiffusion/beta
Beta
2024-06-14 09:49:27 +05:30
364902f8a1 Ignore text in the version string when comparing them 2024-06-14 09:48:49 +05:30
a261a2d47d Fix #1779 - add to PATH only if it isn't present, to avoid exploding the PATH variable each time the function is called 2024-06-14 09:43:30 +05:30
dea962dc89 Merge pull request #1808 from easydiffusion/beta
temp hotfix for rocm torch
2024-06-13 14:06:14 +05:30
d062c2149a temp hotfix for rocm torch 2024-06-13 14:05:11 +05:30
7d49dc105e Merge pull request #1806 from easydiffusion/beta
Don't crash if psutils fails to get cpu or memory usage
2024-06-11 18:28:07 +05:30
fcdc3f2dd0 Don't crash if psutils fails to get cpu or memory usage 2024-06-11 18:27:21 +05:30
d17b167a81 Merge pull request #1803 from easydiffusion/beta
Support legacy installations with torch 1.11, as well as an option for people to upgrade to the latest sdkit+diffusers
2024-06-07 10:33:56 +05:30
1fa83eda0e Merge pull request #1800 from siakc/uvicorn-run-programmatically
Enhancement - using uvicorn.run() instead of os.system()
2024-06-06 17:51:06 +05:30
969751a195 Use uvicorn.run since it's clearer to read 2024-06-06 17:50:41 +05:30
1ae8675487 typo 2024-06-06 16:20:04 +05:30
05f0bfebba Upgrade torch if using the newer sdkit versions 2024-06-06 16:18:11 +05:30
91ad53cd94 Enhancement - using uvicorn.run() instead of os.system() 2024-06-06 10:28:01 +03:30
de680dfd09 Print diffusers' version 2024-06-05 18:53:45 +05:30
4edeb14e94 Allow a user to opt-in to the latest sdkit+diffusers version, while keeping existing 2.0.15.x users restricted to diffusers 0.21.4. This avoids a lengthy upgrade for existing users, while allowing users to opt-in to the latest version. More to come. 2024-06-05 18:46:22 +05:30
e64cf9c9eb Merge pull request #1796 from easydiffusion/beta
Another typo
2024-06-01 09:02:08 +05:30
66d0c4726e Another typo 2024-06-01 09:01:35 +05:30
c923b44f56 Merge pull request #1795 from easydiffusion/beta
typo
2024-05-31 19:30:50 +05:30
b9c343195b typo 2024-05-31 19:30:29 +05:30
4427e8d3dd Merge pull request #1794 from easydiffusion/beta
Generalize the hotfix for missing sdkit dependencies. This is still a…
2024-05-31 19:28:32 +05:30
87c8fe2758 Generalize the hotfix for missing sdkit dependencies. This is still a temporary hotfix, but will ensure that missing packages are installed, not assume that having picklescan means everything's good 2024-05-31 19:27:30 +05:30
70acde7809 Merge pull request #1792 from easydiffusion/beta
Hotfix - sdkit's dependencies aren't getting pulled for some reason
2024-05-31 10:57:43 +05:30
c4b938f132 Hotfix - sdkit's dependencies aren't getting pulled for some reason 2024-05-31 10:56:43 +05:30
d6fdb8d5a9 Merge pull request #1788 from easydiffusion/beta
Hotfix for older accelerate version in the Windows installer
2024-05-30 17:51:32 +05:30
54ac1f7169 Hotfix for older accelerate version in the Windows installer 2024-05-30 17:50:36 +05:30
deebfc6850 Merge pull request #1787 from easydiffusion/beta
Controlnet Strength and SDXL Controlnet support for img2img and inpainting
2024-05-30 13:22:12 +05:30
21644adbe1 sdkit 2.0.15.6 - typo that prevented 0 controlnet strength 2024-05-29 10:01:04 +05:30
fe3c648a24 sdkit 2.0.15.5 - minor null check 2024-05-28 19:46:59 +05:30
05f3523364 Set the controlnet alpha correctly from older exports; Fix a bug with null lora model in exports 2024-05-28 19:16:48 +05:30
4d9b023378 changelog 2024-05-28 18:48:23 +05:30
44789bf16b sdkit 2.0.15.4 - Controlnet strength slider 2024-05-28 18:45:08 +05:30
ad649a8050 sdkit 2.0.15.3 - disable watermarking on SDXL ControlNets to avoid visual artifacts 2024-05-28 09:00:57 +05:30
723304204e diffusers 0.21.4 2024-05-27 15:26:09 +05:30
ddf54d589e v3.0.8 - use sdkit 2.0.15.1, to enable SDXL Controlnets for img2img and inpainting, using diffusers 0.21.4 2024-05-27 15:18:14 +05:30
a5c9c44e53 Merge pull request #1784 from easydiffusion/beta
Another hotfix for setuptools version on Windows and Linux/mac
2024-05-27 10:58:06 +05:30
4d28c78fcc Another hotfix for setuptools version on Windows and Linux/mac 2024-05-27 10:57:20 +05:30
7dc01370ea Merge pull request #1783 from easydiffusion/beta
Pin setuptools to 0.59
2024-05-27 10:45:49 +05:30
21ff109632 Pin setuptools to 0.59 2024-05-27 10:45:19 +05:30
9b0a654d32 Merge pull request #1782 from easydiffusion/beta
Hotfix to pin setuptools to 0.69 - for #1781
2024-05-27 10:38:41 +05:30
fb749dbe24 Potential hotfix for #1781 - pin setuptools to a specific version, until clip is upgraded 2024-05-27 10:37:09 +05:30
17ef1e04f7 Roll back sdkit 2.0.18 (again) 2024-03-19 20:06:49 +05:30
a5b9eefcf9 v3.0.8 - update diffusers to v0.26.3 2024-03-19 19:17:10 +05:30
e5519cda37 sdkit 2.0.18 (diffusers 0.26.3) 2024-03-19 19:12:00 +05:30
d1bd9e2a16 Prev version 2024-03-13 19:53:14 +05:30
5924d01789 Temporarily revert to sdkit 2.0.15 2024-03-13 19:05:55 +05:30
47432fe54e diffusers 0.26.3 2024-03-13 19:01:09 +05:30
8660a79ccd v3.0.8 - update diffusers to v0.26.3 2024-03-13 18:44:17 +05:30
dfb26ed781 Merge pull request #1702 from easydiffusion/beta
Beta
2023-12-12 18:10:46 +05:30
25272ce083 Kofi only 2023-12-12 12:48:13 +05:30
212fa77b47 Merge pull request #1700 from easydiffusion/beta
Beta
2023-12-12 12:47:24 +05:30
6489cd785d Merge pull request #1648 from michaelachrisco/main
Fix Sampler learn more link
2023-11-05 19:16:14 +05:30
a4e651e27e Click to learn more about samplers should go to wiki page 2023-10-28 23:40:20 -07:00
bedf176e62 Merge pull request #1630 from easydiffusion/beta
Beta
2023-10-12 10:06:42 +05:30
824e057d7b Merge pull request #1624 from easydiffusion/beta
sdkit 2.0.15 - fix for gfgpan/realesrgan in parallel threads
2023-10-06 09:54:40 +05:30
307b00cc05 Merge pull request #1622 from easydiffusion/beta
Beta
2023-10-03 19:38:11 +05:30
25 changed files with 466 additions and 1170 deletions

.github/FUNDING.yml
View File

@@ -1,4 +1,3 @@
# These are supported funding model platforms
ko_fi: easydiffusion
patreon: easydiffusion

View File

@@ -17,6 +17,11 @@
- **Major rewrite of the code** - We've switched to using diffusers under-the-hood, which allows us to release new features faster, and focus on making the UI and installer even easier to use.
### Detailed changelog
* 3.0.9c - 6 Feb 2025 - (Internal code change) Remove hardcoded references to `torch.cuda`, and replace with torchruntime's device utilities.
* 3.0.9b - 28 Jan 2025 - Fix a bug affecting older versions of Easy Diffusion, which tried to upgrade to an incompatible version of PyTorch.
* 3.0.9b - 4 Jan 2025 - Replace the use of WMIC (deprecated) with a powershell call.
* 3.0.9 - 28 May 2024 - Slider for controlling the strength of controlnets.
* 3.0.8 - 27 May 2024 - SDXL ControlNets for Img2Img and Inpainting.
* 3.0.7 - 11 Dec 2023 - Setting to enable/disable VAE tiling (in the Image Settings panel). Sometimes VAE tiling reduces the quality of the image, so this setting will help control that.
* 3.0.6 - 18 Sep 2023 - Add thumbnails to embeddings from the UI, using the new `Upload Thumbnail` button in the Embeddings popup. Thanks @JeLuf.
* 3.0.6 - 15 Sep 2023 - Fix broken embeddings dialog when LoRA information couldn't be fetched.
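The 3.0.9c entry above corresponds to the device_manager.py diff further down, which swaps direct torch.cuda calls for torchruntime's device utilities. A minimal sketch of the new enumeration pattern, using only the helpers imported in that diff (the print format is illustrative):

    from torchruntime.utils import get_installed_torch_platform, get_device, get_device_count, get_device_name
    from sdkit.utils import mem_get_info

    platform_name = get_installed_torch_platform()[0]  # e.g. "cuda", "rocm" or "cpu"
    device_count = get_device_count()
    for i in range(device_count):
        # single-GPU installs use the bare platform name; multi-GPU installs append an index
        device_id = f"{platform_name}:{i}" if device_count > 1 else platform_name
        device = get_device(device_id)
        mem_free, mem_total = mem_get_info(device)
        print(device_id, get_device_name(device), f"{mem_free / 1e9:.1f}/{mem_total / 1e9:.1f} GB free/total")

The same calls work unchanged on CUDA, ROCm and CPU-only builds of torch, which is what lets the hardcoded torch.cuda references go away.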

View File

@@ -1,9 +1,9 @@
# Easy Diffusion 3.0
### The easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your computer.
### An easy way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your computer.
Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community.
️‍🔥🎉 **New!** Support for SDXL, ControlNet, multiple LoRA files, embeddings (and a lot more) have been added!
️‍🔥🎉 **New!** Support for Flux has been added in the beta branch (v3.5 engine)!
[Installation guide](#installation) | [Troubleshooting guide](https://github.com/easydiffusion/easydiffusion/wiki/Troubleshooting) | [User guide](https://github.com/easydiffusion/easydiffusion/wiki) | <sub>[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB)</sub> <sup>(for support queries, and development discussions)</sup>
@@ -21,15 +21,15 @@ Click the download button for your operating system:
</p>
**Hardware requirements:**
- **Windows:** NVIDIA graphics card¹ (minimum 2 GB RAM), or run on your CPU.
- **Windows:** NVIDIA¹ or AMD graphics card (minimum 2 GB RAM), or run on your CPU.
- **Linux:** NVIDIA¹ or AMD² graphics card (minimum 2 GB RAM), or run on your CPU.
- **Mac:** M1 or M2, or run on your CPU.
- **Mac:** M1/M2/M3/M4 or AMD graphics card (Intel Mac), or run on your CPU.
- Minimum 8 GB of system RAM.
- At least 25 GB of space on the hard disk.
¹) [CUDA Compute capability](https://en.wikipedia.org/wiki/CUDA#GPUs_supported) level of 3.7 or higher required.
²) ROCm 5.2 support required.
²) ROCm 5.2 (or newer) support required.
The installer will take care of whatever is needed. If you face any problems, you can join the friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) and ask for assistance.

View File

@@ -4,7 +4,7 @@ echo "Opening Stable Diffusion UI - Developer Console.." & echo.
cd /d %~dp0
set PATH=C:\Windows\System32;%PATH%
set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
@rem set legacy and new installer's PATH, if they exist
if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
@@ -26,18 +26,23 @@ call conda --version
echo.
echo COMSPEC=%COMSPEC%
echo.
powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
@rem activate the legacy environment (if present) and set PYTHONPATH
if exist "installer_files\env" (
set PYTHONPATH=%cd%\installer_files\env\lib\site-packages
set PYTHON=%cd%\installer_files\env\python.exe
echo PYTHON=%PYTHON%
)
if exist "stable-diffusion\env" (
call conda activate .\stable-diffusion\env
set PYTHONPATH=%cd%\stable-diffusion\env\lib\site-packages
set PYTHON=%cd%\stable-diffusion\env\python.exe
echo PYTHON=%PYTHON%
)
call where python
call python --version
@REM call where python
call "%PYTHON%" --version
echo PYTHONPATH=%PYTHONPATH%
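Pinning %PYTHON% to the environment's own interpreter is the same idea behind the two fixes at the top of the commit list (d1c834d19b, 374b411d6c): always install packages through the launched executable, never through whatever python happens to be first on PATH. A hedged Python equivalent:

    import subprocess
    import sys

    # sys.executable is the interpreter that is actually running, so the package
    # lands in its environment rather than in a system-wide Python picked up
    # from PATH. The package spec is illustrative.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "torchruntime>=1.19.1"])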

View File

@@ -3,7 +3,7 @@
cd /d %~dp0
echo Install dir: %~dp0
set PATH=C:\Windows\System32;%PATH%
set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
set PYTHONHOME=
if exist "on_sd_start.bat" (
@@ -39,7 +39,7 @@ call where conda
call conda --version
echo .
echo COMSPEC=%COMSPEC%
wmic path win32_VideoController get name,AdapterRAM,DriverDate,DriverVersion
powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
@rem Download the rest of the installer and UI
call scripts\on_env_start.bat

View File

@@ -24,7 +24,7 @@ set USERPROFILE=%cd%\profile
@rem figure out whether git and conda needs to be installed
if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
set PACKAGES_TO_INSTALL=git python=3.8.5
set PACKAGES_TO_INSTALL=git python=3.9
if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" (
if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda

View File

@@ -46,7 +46,7 @@ if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
PACKAGES_TO_INSTALL=""
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.8.5"; fi
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.9"; fi
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi

File diff suppressed because it is too large

View File

@@ -26,19 +26,19 @@ if "%update_branch%"=="" (
set update_branch=main
)
@>nul findstr /m "conda_sd_ui_deps_installed" scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
echo. & echo "!!!! WARNING !!!!" & echo.
echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
echo "Not placing this folder at the top of a drive can cause errors on some computers."
echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
@REM @>nul findstr /m "sd_install_complete" scripts\install_status.txt
@REM @if "%ERRORLEVEL%" NEQ "0" (
@REM for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
@REM echo. & echo "!!!! WARNING !!!!" & echo.
@REM echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
@REM echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
@REM echo "Not placing this folder at the top of a drive can cause errors on some computers."
@REM echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
@REM echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
pause
)
)
@REM pause
@REM )
@REM )
@>nul findstr /m "sd_ui_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (

View File

@@ -66,11 +66,17 @@ set PYTHONNOUSERSITE=1
set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
echo PYTHONPATH=%PYTHONPATH%
@rem Download the required packages
call where python
call python --version
set PYTHON=%INSTALL_ENV_DIR%\python.exe
echo PYTHON=%PYTHON%
call python scripts\check_modules.py --launch-uvicorn
@rem Download the required packages
@REM call where python
call "%PYTHON%" --version
@rem this is outside check_modules.py to ensure that the required version of torchruntime is present
call "%PYTHON%" -m pip install -q "torchruntime>=1.19.1"
call "%PYTHON%" scripts\check_modules.py --launch-uvicorn
pause
exit /b

View File

@@ -46,6 +46,9 @@ fi
if [ -e "src" ]; then mv src src-old; fi
if [ -e "ldm" ]; then mv ldm ldm-old; fi
# this is outside check_modules.py to ensure that the required version of torchruntime is present
python -m pip install -q "torchruntime>=1.19.1"
cd ..
# Download the required packages
python scripts/check_modules.py --launch-uvicorn
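Both launcher scripts now install torchruntime before invoking check_modules.py, because (as the comments note) the checker itself needs that version of torchruntime to be importable. A self-bootstrapping sketch of the same ordering; the helper name is hypothetical, not the project's actual code, and the real scripts run pip unconditionally, which also upgrades an outdated copy:

    import subprocess
    import sys

    def ensure_torchruntime(spec="torchruntime>=1.19.1"):
        # Install the dependency first, so the module that needs it can import it.
        try:
            import torchruntime  # noqa: F401
        except ImportError:
            subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", spec])

    ensure_torchruntime()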

View File

@@ -54,8 +54,7 @@ OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
PRESERVE_CONFIG_VARS = ["FORCE_FULL_PRECISION"]
TASK_TTL = 15 * 60 # Discard last session's task timeout
APP_CONFIG_DEFAULTS = {
# auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
"render_devices": "auto", # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
"render_devices": "auto",
"update_branch": "main",
"ui": {
"open_browser_on_start": True,

View File

@@ -6,6 +6,15 @@ import traceback
import torch
from easydiffusion.utils import log
from torchruntime.utils import (
get_installed_torch_platform,
get_device,
get_device_count,
get_device_name,
SUPPORTED_BACKENDS,
)
from sdkit.utils import mem_get_info, is_cpu_device, has_half_precision_bug
"""
Set `FORCE_FULL_PRECISION` in the environment variables, or in `config.bat`/`config.sh` to set full precision (i.e. float32).
Otherwise the models will load at half-precision (i.e. float16).
@@ -22,33 +31,15 @@ mem_free_threshold = 0
def get_device_delta(render_devices, active_devices):
"""
render_devices: 'cpu', or 'auto', or 'mps' or ['cuda:N'...]
active_devices: ['cpu', 'mps', 'cuda:N'...]
render_devices: 'auto' or backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`
active_devices: [backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`]
"""
if render_devices in ("cpu", "auto", "mps"):
render_devices = [render_devices]
elif render_devices is not None:
if isinstance(render_devices, str):
render_devices = [render_devices]
if isinstance(render_devices, list) and len(render_devices) > 0:
render_devices = list(filter(lambda x: x.startswith("cuda:") or x == "mps", render_devices))
if len(render_devices) == 0:
raise Exception(
'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "mps"} or {"render_devices": "auto"}'
)
render_devices = render_devices or "auto"
render_devices = [render_devices] if isinstance(render_devices, str) else render_devices
render_devices = list(filter(lambda x: is_device_compatible(x), render_devices))
if len(render_devices) == 0:
raise Exception(
"Sorry, none of the render_devices configured in config.json are compatible with Stable Diffusion"
)
else:
raise Exception(
'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}'
)
else:
render_devices = ["auto"]
# check for backend support
validate_render_devices(render_devices)
if "auto" in render_devices:
render_devices = auto_pick_devices(active_devices)
@@ -64,47 +55,39 @@ def get_device_delta(render_devices, active_devices):
return devices_to_start, devices_to_stop
def is_mps_available():
return (
platform.system() == "Darwin"
and hasattr(torch.backends, "mps")
and torch.backends.mps.is_available()
and torch.backends.mps.is_built()
)
def validate_render_devices(render_devices):
supported_backends = ("auto",) + SUPPORTED_BACKENDS
unsupported_render_devices = [d for d in render_devices if not d.lower().startswith(supported_backends)]
def is_cuda_available():
return torch.cuda.is_available()
if unsupported_render_devices:
raise ValueError(
f"Invalid render devices in config: {unsupported_render_devices}. Valid render devices: {supported_backends}"
)
def auto_pick_devices(currently_active_devices):
global mem_free_threshold
if is_mps_available():
return ["mps"]
torch_platform_name = get_installed_torch_platform()[0]
if not is_cuda_available():
return ["cpu"]
device_count = torch.cuda.device_count()
if device_count == 1:
return ["cuda:0"] if is_device_compatible("cuda:0") else ["cpu"]
if is_cpu_device(torch_platform_name):
return [torch_platform_name]
device_count = get_device_count()
log.debug("Autoselecting GPU. Using most free memory.")
devices = []
for device in range(device_count):
device = f"cuda:{device}"
if not is_device_compatible(device):
continue
for device_id in range(device_count):
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
device = get_device(device_id)
mem_free, mem_total = torch.cuda.mem_get_info(device)
mem_free, mem_total = mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
device_name = torch.cuda.get_device_name(device)
device_name = get_device_name(device)
log.debug(
f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
f"{device_id} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
)
devices.append({"device": device, "device_name": device_name, "mem_free": mem_free})
devices.append({"device": device_id, "device_name": device_name, "mem_free": mem_free})
devices.sort(key=lambda x: x["mem_free"], reverse=True)
max_mem_free = devices[0]["mem_free"]
@@ -117,69 +100,45 @@ def auto_pick_devices(currently_active_devices):
# always be very low (since their VRAM contains the model).
# These already-running devices probably aren't terrible, since they were picked in the past.
# Worst case, the user can restart the program and that'll get rid of them.
devices = list(
filter(
(lambda x: x["mem_free"] > mem_free_threshold or x["device"] in currently_active_devices),
devices,
)
)
devices = list(map(lambda x: x["device"], devices))
devices = [
x["device"] for x in devices if x["mem_free"] >= mem_free_threshold or x["device"] in currently_active_devices
]
return devices
def device_init(context, device):
"""
This function assumes the 'device' has already been verified to be compatible.
`get_device_delta()` has already filtered out incompatible devices.
"""
def device_init(context, device_id):
context.device = device_id
validate_device_id(device, log_prefix="device_init")
if "cuda" not in device:
context.device = device
if is_cpu_device(context.torch_device):
context.device_name = get_processor_name()
context.half_precision = False
log.debug(f"Render device available as {context.device_name}")
return
else:
context.device_name = get_device_name(context.torch_device)
context.device_name = torch.cuda.get_device_name(device)
context.device = device
# Some graphics cards have bugs in their firmware that prevent image generation at half precision
if needs_to_force_full_precision(context.device_name):
log.warn(f"forcing full precision on this GPU, to avoid corrupted images. GPU: {context.device_name}")
context.half_precision = False
# Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
if needs_to_force_full_precision(context):
log.warn(f"forcing full precision on this GPU, to avoid green images. GPU detected: {context.device_name}")
# Apply force_full_precision now before models are loaded.
context.half_precision = False
log.info(f'Setting {device} as active, with precision: {"half" if context.half_precision else "full"}')
torch.cuda.device(device)
log.info(f'Setting {device_id} as active, with precision: {"half" if context.half_precision else "full"}')
def needs_to_force_full_precision(context):
def needs_to_force_full_precision(device_name):
if "FORCE_FULL_PRECISION" in os.environ:
return True
device_name = context.device_name.lower()
return (
("nvidia" in device_name or "geforce" in device_name or "quadro" in device_name)
and (
" 1660" in device_name
or " 1650" in device_name
or " 1630" in device_name
or " t400" in device_name
or " t550" in device_name
or " t600" in device_name
or " t1000" in device_name
or " t1200" in device_name
or " t2000" in device_name
)
) or ("tesla k40m" in device_name)
return has_half_precision_bug(device_name.lower())
def get_max_vram_usage_level(device):
if "cuda" in device:
_, mem_total = torch.cuda.mem_get_info(device)
else:
"Expects a torch.device as the argument"
if is_cpu_device(device):
return "high"
_, mem_total = mem_get_info(device)
if mem_total < 0.001: # probably a torch platform without a mem_get_info() implementation
return "high"
mem_total /= float(10**9)
@@ -191,51 +150,6 @@ def get_max_vram_usage_level(device):
return "high"
def validate_device_id(device, log_prefix=""):
def is_valid():
if not isinstance(device, str):
return False
if device == "cpu" or device == "mps":
return True
if not device.startswith("cuda:") or not device[5:].isnumeric():
return False
return True
if not is_valid():
raise EnvironmentError(
f"{log_prefix}: device id should be 'cpu', 'mps', or 'cuda:N' (where N is an integer index for the GPU). Got: {device}"
)
def is_device_compatible(device):
"""
Returns True/False, and prints any compatibility errors
"""
# static variable "history".
is_device_compatible.history = getattr(is_device_compatible, "history", {})
try:
validate_device_id(device, log_prefix="is_device_compatible")
except:
log.error(str(e))
return False
if device in ("cpu", "mps"):
return True
# Memory check
try:
_, mem_total = torch.cuda.mem_get_info(device)
mem_total /= float(10**9)
if mem_total < 1.9:
if is_device_compatible.history.get(device) == None:
log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion")
is_device_compatible.history[device] = 1
return False
except RuntimeError as e:
log.error(str(e))
return False
return True
def get_processor_name():
try:
import subprocess
@@ -243,7 +157,8 @@ def get_processor_name():
if platform.system() == "Windows":
return platform.processor()
elif platform.system() == "Darwin":
os.environ["PATH"] = os.environ["PATH"] + os.pathsep + "/usr/sbin"
if "/usr/sbin" not in os.environ["PATH"].split(os.pathsep):
os.environ["PATH"] = os.environ["PATH"] + os.pathsep + "/usr/sbin"
command = "sysctl -n machdep.cpu.brand_string"
return subprocess.check_output(command, shell=True).decode().strip()
elif platform.system() == "Linux":
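Taken together, the device_manager.py changes above replace the old cpu/mps/cuda special-casing with a single check against torchruntime's backend list. Roughly how the new validate_render_devices behaves; the rejected value below is made up for illustration:

    validate_render_devices(["auto"])              # accepted
    validate_render_devices(["cuda:0", "cuda:1"])  # accepted; prefixes are matched case-insensitively
    validate_render_devices(["foo:0"])             # ValueError listing ("auto",) + SUPPORTED_BACKENDS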

View File

@@ -76,7 +76,7 @@ def load_default_models(context: Context):
scan_model=context.model_paths[model_type] != None
and not context.model_paths[model_type].endswith(".safetensors"),
)
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
@@ -88,6 +88,8 @@ def load_default_models(context: Context):
log.exception(e)
del context.model_paths[model_type]
if not hasattr(context, "model_load_errors"):
context.model_load_errors = {}
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
@@ -179,11 +181,13 @@ def reload_models_if_necessary(context: Context, models_data: ModelsData, models
extra_params = models_data.model_params.get(model_type, {})
try:
action_fn(context, model_type, scan_model=False, **extra_params) # we've scanned them already
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
del context.model_load_errors[model_type]
except Exception as e:
log.exception(e)
if action_fn == load_model:
if not hasattr(context, "model_load_errors"):
context.model_load_errors = {}
context.model_load_errors[model_type] = str(e) # storing the entire Exception can lead to memory leaks
@@ -207,7 +211,7 @@ def resolve_model_paths(models_data: ModelsData):
def fail_if_models_did_not_load(context: Context):
for model_type in KNOWN_MODEL_TYPES:
if model_type in context.model_load_errors:
if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
e = context.model_load_errors[model_type]
raise Exception(f"Could not load the {model_type} model! Reason: " + e)
@@ -289,7 +293,7 @@ def make_model_folders():
def is_malicious_model(file_path):
try:
if file_path.endswith(".safetensors"):
if file_path.endswith((".safetensors", ".sft", ".gguf")):
return False
scan_result = scan_model(file_path)
if scan_result.issues_count > 0 or scan_result.infected_files > 0:
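The repeated hasattr guards in this file defend against contexts created before model_load_errors existed (the workaround from commit 8e9a9dda0f). A small helper could centralize the pattern; this one is hypothetical, not part of the diff:

    def record_model_load_error(context, model_type, error):
        if not hasattr(context, "model_load_errors"):
            context.model_load_errors = {}
        # store the message, not the Exception object - keeping the exception
        # (and its traceback) alive can leak memory, per the comment in the diff
        context.model_load_errors[model_type] = str(error)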

View File

@@ -7,7 +7,8 @@ from sdkit.utils import log
from easydiffusion import app
# future home of scripts/check_modules.py
# was meant to be a rewrite of scripts/check_modules.py
# but probably dead for now
manifest = {
"tensorrt": {

View File

@@ -196,11 +196,13 @@ def set_app_config_internal(req: SetAppConfigRequest):
def update_render_devices_in_config(config, render_devices):
if render_devices not in ("cpu", "auto") and not render_devices.startswith("cuda:"):
raise HTTPException(status_code=400, detail=f"Invalid render device requested: {render_devices}")
from easydiffusion.device_manager import validate_render_devices
if render_devices.startswith("cuda:"):
try:
render_devices = render_devices.split(",")
validate_render_devices(render_devices)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
config["render_devices"] = render_devices

View File

@@ -21,6 +21,9 @@ from easydiffusion.tasks import Task
from easydiffusion.utils import log
from sdkit.utils import gc
from torchruntime.utils import get_device_count, get_device, get_device_name, get_installed_torch_platform
from sdkit.utils import is_cpu_device, mem_get_info
THREAD_NAME_PREFIX = ""
ERR_LOCK_FAILED = " failed to acquire lock within timeout."
LOCK_TIMEOUT = 15 # Maximum locking time in seconds before failing a task.
@@ -329,34 +332,33 @@ def get_devices():
"active": {},
}
def get_device_info(device):
if device in ("cpu", "mps"):
def get_device_info(device_id):
if is_cpu_device(device_id):
return {"name": device_manager.get_processor_name()}
mem_free, mem_total = torch.cuda.mem_get_info(device)
device = get_device(device_id)
mem_free, mem_total = mem_get_info(device)
mem_free /= float(10**9)
mem_total /= float(10**9)
return {
"name": torch.cuda.get_device_name(device),
"name": get_device_name(device),
"mem_free": mem_free,
"mem_total": mem_total,
"max_vram_usage_level": device_manager.get_max_vram_usage_level(device),
}
# list the compatible devices
cuda_count = torch.cuda.device_count()
for device in range(cuda_count):
device = f"cuda:{device}"
if not device_manager.is_device_compatible(device):
continue
torch_platform_name = get_installed_torch_platform()[0]
device_count = get_device_count()
for device_id in range(device_count):
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
devices["all"].update({device: get_device_info(device)})
devices["all"].update({device_id: get_device_info(device_id)})
if device_manager.is_mps_available():
devices["all"].update({"mps": get_device_info("mps")})
devices["all"].update({"cpu": get_device_info("cpu")})
if torch_platform_name != "cpu":
devices["all"].update({"cpu": get_device_info("cpu")})
# list the activated devices
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
@@ -368,8 +370,8 @@ def get_devices():
weak_data = weak_thread_data.get(rthread)
if not weak_data or not "device" in weak_data or not "device_name" in weak_data:
continue
device = weak_data["device"]
devices["active"].update({device: get_device_info(device)})
device_id = weak_data["device"]
devices["active"].update({device_id: get_device_info(device_id)})
finally:
manager_lock.release()
@@ -427,12 +429,6 @@ def start_render_thread(device):
def stop_render_thread(device):
try:
device_manager.validate_device_id(device, log_prefix="stop_render_thread")
except:
log.error(traceback.format_exc())
return False
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
raise Exception("stop_render_thread" + ERR_LOCK_FAILED)
log.info(f"Stopping Rendering Thread on device: {device}")

View File

@@ -20,8 +20,8 @@ class GenerateImageRequest(BaseModel):
control_image: Any = None
control_alpha: Union[float, List[float]] = None
prompt_strength: float = 0.8
preserve_init_image_color_profile = False
strict_mask_border = False
preserve_init_image_color_profile: bool = False
strict_mask_border: bool = False
sampler_name: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
hypernetwork_strength: float = 0
@@ -100,7 +100,7 @@ class MergeRequest(BaseModel):
model1: str = None
ratio: float = None
out_path: str = "mix"
use_fp16 = True
use_fp16: bool = True
class Image:
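The annotations added above (commit 77a9226720, PR #1867) track a Pydantic requirement: Pydantic v2 rejects model fields that have a default but no type annotation. A minimal reproduction, assuming Pydantic v2 is installed:

    from pydantic import BaseModel

    class MergeRequestFixed(BaseModel):
        use_fp16: bool = True  # accepted: annotated field with a default

    # The old form, `use_fp16 = True` with no annotation, raises PydanticUserError
    # ("A non-annotated attribute was detected") as soon as the class is defined.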

View File

@@ -31,6 +31,7 @@ TASK_TEXT_MAPPING = {
"clip_skip": "Clip Skip",
"use_controlnet_model": "ControlNet model",
"control_filter_to_apply": "ControlNet Filter",
"control_alpha": "ControlNet Strength",
"use_vae_model": "VAE model",
"sampler_name": "Sampler",
"width": "Width",

View File

@@ -35,7 +35,7 @@
<h1>
<img id="logo_img" src="/media/images/icon-512x512.png" >
Easy Diffusion
<small><span id="version">v3.0.7</span> <span id="updateBranchLabel"></span></small>
<small><span id="version">v3.0.9c</span> <span id="updateBranchLabel"></span></small>
</h1>
</div>
<div id="server-status">
@@ -235,6 +235,8 @@
<label for="controlnet_model"><small>Model:</small></label> <input id="controlnet_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
<br/>
<label><small>Will download the necessary models, the first time.</small></label>
<br/>
<label for="controlnet_alpha_slider"><small>Strength:</small></label> <input id="controlnet_alpha_slider" name="controlnet_alpha_slider" class="editor-slider" value="10" type="range" min="0" max="10"> <input id="controlnet_alpha" name="controlnet_alpha" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal">
</div>
</td>
</tr>
@@ -266,7 +268,7 @@
<option value="unipc_tu_2" class="k_diffusion-only">UniPC TU 2</option>
<option value="unipc_tq" class="k_diffusion-only">UniPC TQ</option>
</select>
<a href="https://github.com/easydiffusion/easydiffusion/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
<a href="https://github.com/easydiffusion/easydiffusion/wiki/Samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
</td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td id="image-size-options">
<select id="width" name="width" value="512">

View File

@@ -57,6 +57,7 @@ const SETTINGS_IDS_LIST = [
"embedding-card-size-selector",
"lora_model",
"enable_vae_tiling",
"controlnet_alpha",
]
const IGNORE_BY_DEFAULT = ["prompt"]
@@ -177,23 +178,6 @@ function loadSettings() {
}
})
CURRENTLY_LOADING_SETTINGS = false
} else if (localStorage.length < 2) {
// localStorage is too short for OldSettings
// So this is likely the first time Easy Diffusion is running.
// Initialize vram_usage_level based on the available VRAM
function initGPUProfile(event) {
if (
"detail" in event &&
"active" in event.detail &&
"cuda:0" in event.detail.active &&
event.detail.active["cuda:0"].mem_total < 4.5
) {
vramUsageLevelField.value = "low"
vramUsageLevelField.dispatchEvent(new Event("change"))
}
document.removeEventListener("system_info_update", initGPUProfile)
}
document.addEventListener("system_info_update", initGPUProfile)
} else {
CURRENTLY_LOADING_SETTINGS = true
tryLoadOldSettings()

View File

@@ -309,10 +309,21 @@ const TASK_MAPPING = {
readUI: () => controlImageFilterField.value,
parse: (val) => val,
},
control_alpha: {
name: "ControlNet Strength",
setUI: (control_alpha) => {
control_alpha = control_alpha || 1.0
controlAlphaField.value = control_alpha
updateControlAlphaSlider()
},
readUI: () => parseFloat(controlAlphaField.value),
parse: (val) => val === null ? 1.0 : parseFloat(val),
},
use_lora_model: {
name: "LoRA model",
setUI: (use_lora_model) => {
let modelPaths = []
use_lora_model = use_lora_model === null ? "" : use_lora_model
use_lora_model = Array.isArray(use_lora_model) ? use_lora_model : [use_lora_model]
use_lora_model.forEach((m) => {
if (m.includes("models\\lora\\")) {
@@ -529,6 +540,11 @@ function restoreTaskToUI(task, fieldsToSkip) {
// listen for inpainter loading event, which happens AFTER the main image loads (which reloads the inpai
controlImagePreview.src = task.reqBody.control_image
}
if ("use_controlnet_model" in task.reqBody && task.reqBody.use_controlnet_model && !("control_alpha" in task.reqBody)) {
controlAlphaField.value = 1.0
updateControlAlphaSlider()
}
}
function readUI() {
const reqBody = {}
@@ -587,6 +603,7 @@ const TASK_TEXT_MAPPING = {
lora_alpha: "LoRA Strength",
use_controlnet_model: "ControlNet model",
control_filter_to_apply: "ControlNet Filter",
control_alpha: "ControlNet Strength",
tiling: "Seamless Tiling",
}
function parseTaskFromText(str) {

View File

@@ -51,6 +51,10 @@ const taskConfigSetup = {
preserve_init_image_color_profile: "Preserve Color Profile",
strict_mask_border: "Strict Mask Border",
use_controlnet_model: "ControlNet Model",
control_alpha: {
label: "ControlNet Strength",
visible: ({ reqBody }) => !!reqBody?.use_controlnet_model,
},
},
pluginTaskConfig: {},
getCSSKey: (key) =>
@@ -99,6 +103,8 @@ let controlImagePreview = document.querySelector("#control_image_preview")
let controlImageClearBtn = document.querySelector(".control_image_clear")
let controlImageContainer = document.querySelector("#control_image_wrapper")
let controlImageFilterField = document.querySelector("#control_image_filter")
let controlAlphaSlider = document.querySelector("#controlnet_alpha_slider")
let controlAlphaField = document.querySelector("#controlnet_alpha")
let applyColorCorrectionField = document.querySelector("#apply_color_correction")
let strictMaskBorderField = document.querySelector("#strict_mask_border")
let colorCorrectionSetting = document.querySelector("#apply_color_correction_setting")
@@ -605,6 +611,13 @@ function onUseAsInputClick(req, img) {
initImagePreview.src = imgData
maskSetting.checked = false
//Force the image settings size to match the input, as inpaint currently only works correctly
//if input image and generate sizes match.
addImageSizeOption(img.naturalWidth);
addImageSizeOption(img.naturalHeight);
widthField.value = img.naturalWidth;
heightField.value = img.naturalHeight;
}
function onUseForControlnetClick(req, img) {
@@ -1395,6 +1408,7 @@ function getCurrentUserRequest() {
if (controlnetModelField.value !== "" && IMAGE_REGEX.test(controlImagePreview.src)) {
newTask.reqBody.use_controlnet_model = controlnetModelField.value
newTask.reqBody.control_image = controlImagePreview.src
newTask.reqBody.control_alpha = parseFloat(controlAlphaField.value)
if (controlImageFilterField.value !== "") {
newTask.reqBody.control_filter_to_apply = controlImageFilterField.value
}
@@ -2015,6 +2029,27 @@ function updateHypernetworkStrengthContainer()
hypernetworkModelField.addEventListener("change", updateHypernetworkStrengthContainer)
updateHypernetworkStrengthContainer()
/********************* Controlnet Alpha **************************/
function updateControlAlpha() {
controlAlphaField.value = controlAlphaSlider.value / 10
controlAlphaField.dispatchEvent(new Event("change"))
}
function updateControlAlphaSlider() {
if (controlAlphaField.value < 0) {
controlAlphaField.value = 0
} else if (controlAlphaField.value > 10) {
controlAlphaField.value = 10
}
controlAlphaSlider.value = controlAlphaField.value * 10
controlAlphaSlider.dispatchEvent(new Event("change"))
}
controlAlphaSlider.addEventListener("input", updateControlAlpha)
controlAlphaField.addEventListener("input", updateControlAlphaSlider)
updateControlAlpha()
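The two handlers above keep the range input (integers 0-10) and the text field (the float sent as control_alpha, defaulting to 1.0) in sync by scaling by ten. The same mapping in Python, for illustration, mirroring the clamping in updateControlAlphaSlider:

    def field_from_slider(slider_value):
        # range input 0..10 -> alpha 0.0..1.0
        return slider_value / 10

    def slider_from_field(field_value):
        # typed values are clamped before being mirrored back to the slider
        field_value = max(0.0, min(float(field_value), 10.0))
        return field_value * 10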
/********************* JPEG/WEBP Quality **********************/
function updateOutputQuality() {
outputQualityField.value = 0 | outputQualitySlider.value

View File

@@ -642,7 +642,7 @@ function setDeviceInfo(devices) {
function ID_TO_TEXT(d) {
let info = devices.all[d]
if ("mem_free" in info && "mem_total" in info) {
if ("mem_free" in info && "mem_total" in info && info["mem_total"] > 0) {
return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(
1
)} Gb total)</small>`

View File

@@ -0,0 +1,80 @@
// christmas hack, courtesy: https://pajasevi.github.io/CSSnowflakes/
;(function(){
"use strict";
function makeItSnow() {
const styleSheet = document.createElement("style")
styleSheet.textContent = `
/* customizable snowflake styling */
.snowflake {
color: #fff;
font-size: 1em;
font-family: Arial, sans-serif;
text-shadow: 0 0 5px #000;
}
.snowflake,.snowflake .inner{animation-iteration-count:infinite;animation-play-state:running}@keyframes snowflakes-fall{0%{transform:translateY(0)}100%{transform:translateY(110vh)}}@keyframes snowflakes-shake{0%,100%{transform:translateX(0)}50%{transform:translateX(80px)}}.snowflake{position:fixed;top:-10%;z-index:9999;-webkit-user-select:none;user-select:none;cursor:default;animation-name:snowflakes-shake;animation-duration:3s;animation-timing-function:ease-in-out}.snowflake .inner{animation-duration:10s;animation-name:snowflakes-fall;animation-timing-function:linear}.snowflake:nth-of-type(0){left:1%;animation-delay:0s}.snowflake:nth-of-type(0) .inner{animation-delay:0s}.snowflake:first-of-type{left:10%;animation-delay:1s}.snowflake:first-of-type .inner,.snowflake:nth-of-type(8) .inner{animation-delay:1s}.snowflake:nth-of-type(2){left:20%;animation-delay:.5s}.snowflake:nth-of-type(2) .inner,.snowflake:nth-of-type(6) .inner{animation-delay:6s}.snowflake:nth-of-type(3){left:30%;animation-delay:2s}.snowflake:nth-of-type(11) .inner,.snowflake:nth-of-type(3) .inner{animation-delay:4s}.snowflake:nth-of-type(4){left:40%;animation-delay:2s}.snowflake:nth-of-type(10) .inner,.snowflake:nth-of-type(4) .inner{animation-delay:2s}.snowflake:nth-of-type(5){left:50%;animation-delay:3s}.snowflake:nth-of-type(5) .inner{animation-delay:8s}.snowflake:nth-of-type(6){left:60%;animation-delay:2s}.snowflake:nth-of-type(7){left:70%;animation-delay:1s}.snowflake:nth-of-type(7) .inner{animation-delay:2.5s}.snowflake:nth-of-type(8){left:80%;animation-delay:0s}.snowflake:nth-of-type(9){left:90%;animation-delay:1.5s}.snowflake:nth-of-type(9) .inner{animation-delay:3s}.snowflake:nth-of-type(10){left:25%;animation-delay:0s}.snowflake:nth-of-type(11){left:65%;animation-delay:2.5s}
`
document.head.appendChild(styleSheet)
const snowflakes = document.createElement("div")
snowflakes.id = "snowflakes-container"
snowflakes.innerHTML = `
<div class="snowflakes" aria-hidden="true">
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
<div class="snowflake">
<div class="inner">❅</div>
</div>
</div>`
document.body.appendChild(snowflakes)
const script = document.createElement("script")
script.innerHTML = `
$(document).ready(function() {
setTimeout(function() {
$("#snowflakes-container").fadeOut("slow", function() {$(this).remove()})
}, 10 * 1000)
})
`
document.body.appendChild(script)
}
let date = new Date()
if (date.getMonth() === 11 && date.getDate() >= 12) {
makeItSnow()
}
})()
})()