Mirror of https://github.com/easydiffusion/easydiffusion.git (synced 2025-08-15 18:53:02 +02:00)
Compare commits
127 Commits
SHA1
072dd685aa
6ca4f74a6f
8de2536982
4fb876c393
cce6a205a6
afcf85c3f7
0daf5ea9f6
491ee9ef1e
1adf7d1a6e
36f5ee97f8
ffaae89e7f
ec638b4343
3977565cfa
09f7250454
4e07966e54
5548e422e9
efd6dfaca5
8da94496ed
78538f5cfb
279ce2e263
889a070e62
497b996ce9
83b8028e0a
7315584904
9d86291b13
2c359e0e39
a7f0568bff
275897fcd4
9a71f23709
c845dd32a8
193a8dc7c5
712770d0f8
d5e4fc2e2f
58c6d02c41
e4dd2e26ab
40801de615
adfe24fdd0
1ed27e63f9
0e3b6a8609
16c76886fa
5c7625c425
4044d696e6
5285e8f5e8
7a9d25fb4f
0b24ef71a1
24e374228d
9c7f84bd55
5df082a98f
895801b667
f0671d407d
64be622285
80a4c6f295
7a383d7bc4
5cddfe78b2
1ec4547a68
827ec785e1
40bcbad797
03c49d5b47
e9667fefa9
19c6e10eda
55daf647a0
077c6c3ac9
d0f45f1f51
7a17adc46b
3380b58c18
7577a1f66c
a8b1bbe441
a1fb9bc65c
f737921eaa
b12a6b9537
2efef8043e
9e21d681a0
964aef6bc3
07105d7cfd
7aa4fe9c4b
03a5108cdd
c248231181
35f752b36d
d5a7c1bdf6
d082ac3519
e57599c01e
fd76a160ac
5337aa1a6e
89ada5bfa9
88b8e54ad8
be7adcc367
00fc1a81e0
5023619676
52aaef5e39
493035df16
4a62d4e76e
0af3fa8e98
74b25bdcb1
e084a35ffc
ad06e345c9
572b0329cf
33ca04b916
8cd6ca6269
49488ded01
2f9b907a6b
d5277cd38c
fe2443ec0c
b3136e5738
b302764265
6121f580e9
45a882731a
d38512d841
5a03b61aef
ca0dca4a0f
4228ec0df8
3c9ffcf7ca
b3a961fc82
0c8410c371
96ec3ed270
d0be4edf1d
d4ea34a013
459bfd4280
f751070c7f
3f03432580
d5edbfee8b
7e0d5893cd
d8c3d7cf92
dfb8313d1a
b7d46be530
1cc7c1afa0
89f5e07619
45f350239e
CHANGES.md (15 changed lines)
@@ -2,6 +2,7 @@
 ## v3.5 (preview)
 
 ### Major Changes
+- **Chroma** - support for the Chroma model, including quantized bnb and nf4 models.
 - **Flux** - full support for the Flux model, including quantized bnb and nf4 models.
 - **LyCORIS** - including `LoCon`, `Hada`, `IA3` and `Lokr`.
 - **11 new samplers** - `DDIM CFG++`, `DPM Fast`, `DPM++ 2m SDE Heun`, `DPM++ 3M SDE`, `Restart`, `Heun PP2`, `IPNDM`, `IPNDM_V`, `LCM`, `[Forge] Flux Realistic`, `[Forge] Flux Realistic (Slow)`.
@@ -14,6 +15,17 @@
 v3.5 is currently an optional upgrade, and you can switch between the v3.0 (diffusers) engine and the v3.5 (webui) engine using the `Settings` tab in the UI.
 
 ### Detailed changelog
+* 3.5.9 - 18 Jul 2025 - Stability fix for the Forge backend. Prevents unused Forge processes from hanging around even after closing Easy Diffusion.
+* 3.5.8 - 14 Jul 2025 - Support custom Text Encoders and Flux VAEs in the UI.
+* 3.5.7 - 27 Jun 2025 - Support for the Chroma model. Update Forge to the latest commit.
+* 3.5.6 - 17 Feb 2025 - Fix broken model merging.
+* 3.5.5 - 10 Feb 2025 - (Internal code change) Use `torchruntime` for installing torch/torchvision, instead of custom logic. This supports a lot more GPUs on various platforms, and was built using Easy Diffusion's torch-installation code.
+* 3.5.4 - 8 Feb 2025 - Fix a bug where the inpainting mask wasn't resized to the image size when using the WebUI/v3.5 backend. Thanks @AvidGameFan for their help in investigating and fixing this!
+* 3.5.3 - 6 Feb 2025 - (Internal code change) Remove hardcoded references to `torch.cuda`, and replace with torchruntime's device utilities.
+* 3.5.2 - 28 Jan 2025 - Fix for accidental jailbreak when using conda with WebUI - fixes the `type not subscriptable` error when using WebUI.
+* 3.5.2 - 28 Jan 2025 - Fix a bug affecting older versions of Easy Diffusion, which tried to upgrade to an incompatible version of PyTorch.
+* 3.5.2 - 4 Jan 2025 - Replace the use of WMIC (deprecated) with a powershell call.
+* 3.5.1 - 17 Dec 2024 - Update Forge to the latest commit.
 * 3.5.0 - 11 Oct 2024 - **Preview release** of the new v3.5 engine, powered by Forge WebUI (a fork of Automatic1111). This enables Flux, SD3, LyCORIS and lots of new features, while using the same familiar Easy Diffusion interface.
 
 ## v3.0
@@ -33,6 +45,9 @@ v3.5 is currently an optional upgrade, and you can switch between the v3.0 (diff
 - **Major rewrite of the code** - We've switched to using diffusers under-the-hood, which allows us to release new features faster, and focus on making the UI and installer even easier to use.
 
 ### Detailed changelog
+* 3.0.13 - 10 Feb 2025 - (Internal code change) Use `torchruntime` for installing torch/torchvision, instead of custom logic. This supports a lot more GPUs on various platforms, and was built using Easy Diffusion's torch-installation code.
+* 3.0.12 - 6 Feb 2025 - (Internal code change) Remove hardcoded references to `torch.cuda`, and replace with torchruntime's device utilities.
+* 3.0.11 - 4 Jan 2025 - Replace the use of WMIC (deprecated) with a powershell call.
 * 3.0.10 - 11 Oct 2024 - **Major Update** - An option to upgrade to v3.5, which enables Flux, Stable Diffusion 3, LyCORIS models and lots more.
 * 3.0.9 - 28 May 2024 - Slider for controlling the strength of controlnets.
 * 3.0.8 - 27 May 2024 - SDXL ControlNets for Img2Img and Inpainting.
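The recurring theme in the 3.5.x and 3.0.x entries above is the move from hand-rolled `torch.cuda` logic to `torchruntime` for torch installation and device detection. The diffs further down this page show the actual call sites; the following is a minimal sketch of that device-resolution pattern, assembled only from calls that appear verbatim in those diffs (illustrative, not the exact shipped code):

```python
# Minimal sketch of the torchruntime-based device flow used in the diffs below.
# All of these calls appear in this comparison; treat the wiring as illustrative.
from torchruntime.utils import get_installed_torch_platform, get_device, get_device_name
from sdkit.utils import is_cpu_device

torch_platform_name = get_installed_torch_platform()[0]  # e.g. "cuda" or "cpu"

if is_cpu_device(torch_platform_name):
    print("Rendering on the CPU")
else:
    device = get_device(0)  # first device of the installed torch platform
    print(f"Rendering on {get_device_name(device)}")
```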
@@ -3,6 +3,7 @@
 Target amd64-unicode
 Unicode True
 SetCompressor /FINAL lzma
+SetCompressorDictSize 64
 RequestExecutionLevel user
 !AddPluginDir /amd64-unicode "."
 ; HM NIS Edit Wizard helper defines
@@ -235,8 +236,8 @@ Section "MainSection" SEC01
 CreateDirectory "$SMPROGRAMS\Easy Diffusion"
 CreateShortCut "$SMPROGRAMS\Easy Diffusion\Easy Diffusion.lnk" "$INSTDIR\Start Stable Diffusion UI.cmd" "" "$INSTDIR\installer_files\cyborg_flower_girl.ico"
 
-DetailPrint 'Downloading the Stable Diffusion 1.5 model...'
-NScurl::http get "https://github.com/easydiffusion/sdkit-test-data/releases/download/assets/sd-v1-5.safetensors" "$INSTDIR\models\stable-diffusion\sd-v1-5.safetensors" /CANCEL /INSIST /END
+DetailPrint 'Downloading the Stable Diffusion 1.4 model...'
+NScurl::http get "https://github.com/easydiffusion/sdkit-test-data/releases/download/assets/sd-v1-4.safetensors" "$INSTDIR\models\stable-diffusion\sd-v1-4.safetensors" /CANCEL /INSIST /END
 
 DetailPrint 'Downloading the GFPGAN model...'
 NScurl::http get "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth" "$INSTDIR\models\gfpgan\GFPGANv1.4.pth" /CANCEL /INSIST /END
@@ -4,7 +4,7 @@ echo "Opening Stable Diffusion UI - Developer Console.." & echo.
 
 cd /d %~dp0
 
-set PATH=C:\Windows\System32;%PATH%
+set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
 
 @rem set legacy and new installer's PATH, if they exist
 if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
@@ -26,18 +26,23 @@ call conda --version
 echo.
 echo COMSPEC=%COMSPEC%
 echo.
+powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
 
 @rem activate the legacy environment (if present) and set PYTHONPATH
 if exist "installer_files\env" (
     set PYTHONPATH=%cd%\installer_files\env\lib\site-packages
+    set PYTHON=%cd%\installer_files\env\python.exe
+    echo PYTHON=%PYTHON%
 )
 if exist "stable-diffusion\env" (
     call conda activate .\stable-diffusion\env
     set PYTHONPATH=%cd%\stable-diffusion\env\lib\site-packages
+    set PYTHON=%cd%\stable-diffusion\env\python.exe
+    echo PYTHON=%PYTHON%
 )
 
-call where python
-call python --version
+@REM call where python
+call "%PYTHON%" --version
 
 echo PYTHONPATH=%PYTHONPATH%
 
@@ -3,7 +3,7 @@
 cd /d %~dp0
 echo Install dir: %~dp0
 
-set PATH=C:\Windows\System32;C:\Windows\System32\wbem;%PATH%
+set PATH=C:\Windows\System32;C:\Windows\System32\WindowsPowerShell\v1.0;%PATH%
 set PYTHONHOME=
 
 if exist "on_sd_start.bat" (
@@ -39,7 +39,7 @@ call where conda
 call conda --version
 echo .
 echo COMSPEC=%COMSPEC%
-wmic path win32_VideoController get name,AdapterRAM,DriverDate,DriverVersion
+powershell -Command "(Get-WmiObject Win32_VideoController | Select-Object Name, AdapterRAM, DriverDate, DriverVersion)"
 
 @rem Download the rest of the installer and UI
 call scripts\on_env_start.bat
@@ -24,7 +24,7 @@ set USERPROFILE=%cd%\profile
 @rem figure out whether git and conda needs to be installed
 if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
 
-set PACKAGES_TO_INSTALL=git python=3.8.5
+set PACKAGES_TO_INSTALL=git python=3.9
 
 if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" (
     if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda
@@ -46,7 +46,7 @@ if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
 
 PACKAGES_TO_INSTALL=""
 
-if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.8.5"; fi
+if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda python=3.9"; fi
 if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
 
 if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi
File diff suppressed because it is too large
@@ -26,19 +26,19 @@ if "%update_branch%"=="" (
     set update_branch=main
 )
 
-@>nul findstr /m "conda_sd_ui_deps_installed" scripts\install_status.txt
-@if "%ERRORLEVEL%" NEQ "0" (
-    for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
-        echo. & echo "!!!! WARNING !!!!" & echo.
-        echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
-        echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
-        echo "Not placing this folder at the top of a drive can cause errors on some computers."
-        echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
-        echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
+@REM @>nul findstr /m "sd_install_complete" scripts\install_status.txt
+@REM @if "%ERRORLEVEL%" NEQ "0" (
+@REM     for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
+@REM         echo. & echo "!!!! WARNING !!!!" & echo.
+@REM         echo "Your 'stable-diffusion-ui' folder is at %cd%" & echo.
+@REM         echo "The 'stable-diffusion-ui' folder needs to be at the top of your drive, for e.g. 'C:\stable-diffusion-ui' or 'D:\stable-diffusion-ui' etc."
+@REM         echo "Not placing this folder at the top of a drive can cause errors on some computers."
+@REM         echo. & echo "Recommended: Please close this window and move the 'stable-diffusion-ui' folder to the top of a drive. For e.g. 'C:\stable-diffusion-ui'. Then run the installer again." & echo.
+@REM         echo "Not Recommended: If you're sure that you want to install at the current location, please press any key to continue." & echo.
 
-        pause
-    )
-)
+@REM         pause
+@REM     )
+@REM )
 
 @>nul findstr /m "sd_ui_git_cloned" scripts\install_status.txt
 @if "%ERRORLEVEL%" EQU "0" (
@@ -67,11 +67,17 @@ set PYTHONNOUSERSITE=1
 set PYTHONPATH=%INSTALL_ENV_DIR%\lib\site-packages
 echo PYTHONPATH=%PYTHONPATH%
 
-@rem Download the required packages
-call where python
-call python --version
+set PYTHON=%INSTALL_ENV_DIR%\python.exe
+echo PYTHON=%PYTHON%
 
-call python scripts\check_modules.py --launch-uvicorn
+@rem Download the required packages
+@REM call where python
+call "%PYTHON%" --version
+
+@rem this is outside check_modules.py to ensure that the required version of torchruntime is present
+call "%PYTHON%" -m pip install -q "torchruntime>=1.19.1"
+
+call "%PYTHON%" scripts\check_modules.py --launch-uvicorn
 pause
 exit /b
 
@@ -50,6 +50,9 @@ fi
 if [ -e "src" ]; then mv src src-old; fi
 if [ -e "ldm" ]; then mv ldm ldm-old; fi
 
+# this is outside check_modules.py to ensure that the required version of torchruntime is present
+python -m pip install -q "torchruntime>=1.19.1"
+
 cd ..
 # Download the required packages
 python scripts/check_modules.py --launch-uvicorn
@@ -77,10 +77,10 @@ def print_env_info():
 
     print(f"PATH={os.environ['PATH']}")
 
-    if platform.system() == "Windows":
-        print(f"COMSPEC={os.environ['COMSPEC']}")
-        print("")
-        run("wmic path win32_VideoController get name,AdapterRAM,DriverDate,DriverVersion".split(" "))
+    # if platform.system() == "Windows":
+    #     print(f"COMSPEC={os.environ['COMSPEC']}")
+    #     print("")
+    #     run("wmic path win32_VideoController get name,AdapterRAM,DriverDate,DriverVersion".split(" "))
 
     print(f"PYTHONPATH={os.environ['PYTHONPATH']}")
     print("")
@@ -54,8 +54,7 @@ OUTPUT_DIRNAME = "Stable Diffusion UI"  # in the user's home folder
 PRESERVE_CONFIG_VARS = ["FORCE_FULL_PRECISION"]
 TASK_TTL = 15 * 60  # Discard last session's task timeout
 APP_CONFIG_DEFAULTS = {
-    # auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
-    "render_devices": "auto",
+    "render_devices": "auto",  # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
     "update_branch": "main",
     "ui": {
         "open_browser_on_start": True,
@@ -6,16 +6,20 @@ from threading import local
 import psutil
 import time
 import shutil
+import atexit
 
 from easydiffusion.app import ROOT_DIR, getConfig
 from easydiffusion.model_manager import get_model_dirs
 from easydiffusion.utils import log
+from torchruntime.utils import get_device, get_device_name, get_installed_torch_platform
+from sdkit.utils import is_cpu_device
 
 from . import impl
 from .impl import (
     ping,
     load_model,
     unload_model,
+    flush_model_changes,
     set_options,
     generate_images,
     filter_images,
@@ -33,7 +37,7 @@ ed_info = {
 }
 
 WEBUI_REPO = "https://github.com/lllyasviel/stable-diffusion-webui-forge.git"
-WEBUI_COMMIT = "f4d5e8cac16a42fa939e78a0956b4c30e2b47bb5"
+WEBUI_COMMIT = "dfdcbab685e57677014f05a3309b48cc87383167"
 
 BACKEND_DIR = os.path.abspath(os.path.join(ROOT_DIR, "webui"))
 SYSTEM_DIR = os.path.join(BACKEND_DIR, "system")
@@ -51,8 +55,18 @@ MODELS_TO_OVERRIDE = {
     "codeformer": "--codeformer-models-path",
     "embeddings": "--embeddings-dir",
     "controlnet": "--controlnet-dir",
+    "text-encoder": "--text-encoder-dir",
 }
 
+WEBUI_PATCHES = [
+    "forge_exception_leak_patch.patch",
+    "forge_model_crash_recovery.patch",
+    "forge_api_refresh_text_encoders.patch",
+    "forge_loader_force_gc.patch",
+    "forge_monitor_parent_process.patch",
+    "forge_disable_corrupted_model_renaming.patch",
+]
+
 backend_process = None
 conda = "conda"
 
@@ -81,34 +95,50 @@ def install_backend():
     # install python 3.10 and git in the conda env
     run([conda, "install", "-y", "--prefix", SYSTEM_DIR, "-c", "conda-forge", "python=3.10", "git"], cwd=ROOT_DIR)
 
+    env = dict(os.environ)
+    env.update(get_env())
+
     # print info
-    run_in_conda(["git", "--version"], cwd=ROOT_DIR)
-    run_in_conda(["python", "--version"], cwd=ROOT_DIR)
+    run_in_conda(["git", "--version"], cwd=ROOT_DIR, env=env)
+    run_in_conda(["python", "--version"], cwd=ROOT_DIR, env=env)
 
     # clone webui
-    run_in_conda(["git", "clone", WEBUI_REPO, WEBUI_DIR], cwd=ROOT_DIR)
+    run_in_conda(["git", "clone", WEBUI_REPO, WEBUI_DIR], cwd=ROOT_DIR, env=env)
 
-    # install cpu-only torch if the PC doesn't have a graphics card (for Windows and Linux).
-    # this avoids WebUI installing a CUDA version and trying to activate it
-    if OS_NAME in ("Windows", "Linux") and not has_discrete_graphics_card():
-        run_in_conda(["python", "-m", "pip", "install", "torch", "torchvision"], cwd=WEBUI_DIR)
+    # install the appropriate version of torch using torchruntime
+    run_in_conda(["python", "-m", "pip", "install", "torchruntime"], cwd=WEBUI_DIR, env=env)
+    run_in_conda(["python", "-m", "torchruntime", "install", "torch", "torchvision"], cwd=WEBUI_DIR, env=env)
 
 
 def start_backend():
     config = getConfig()
     backend_config = config.get("backend_config", {})
 
+    log.info(f"Expected WebUI backend dir: {BACKEND_DIR}")
+
     if not os.path.exists(BACKEND_DIR):
         install_backend()
 
+    env = dict(os.environ)
+    env.update(get_env())
+
     was_still_installing = not is_installed()
 
     if backend_config.get("auto_update", True):
-        run_in_conda(["git", "add", "-A", "."], cwd=WEBUI_DIR)
-        run_in_conda(["git", "stash"], cwd=WEBUI_DIR)
-        run_in_conda(["git", "reset", "--hard"], cwd=WEBUI_DIR)
-        run_in_conda(["git", "fetch"], cwd=WEBUI_DIR)
-        run_in_conda(["git", "-c", "advice.detachedHead=false", "checkout", WEBUI_COMMIT], cwd=WEBUI_DIR)
+        run_in_conda(["git", "add", "-A", "."], cwd=WEBUI_DIR, env=env)
+        run_in_conda(["git", "stash"], cwd=WEBUI_DIR, env=env)
+        run_in_conda(["git", "reset", "--hard"], cwd=WEBUI_DIR, env=env)
+        run_in_conda(["git", "fetch"], cwd=WEBUI_DIR, env=env)
+        run_in_conda(["git", "-c", "advice.detachedHead=false", "checkout", WEBUI_COMMIT], cwd=WEBUI_DIR, env=env)
+
+        # patch forge for various stability-related fixes
+        for patch in WEBUI_PATCHES:
+            patch_path = os.path.join(os.path.dirname(__file__), patch)
+            log.info(f"Applying WebUI patch: {patch_path}")
+            run_in_conda(["git", "apply", patch_path], cwd=WEBUI_DIR, env=env)
+
+        # workaround for the installations that broke out of conda and used ED's python 3.8 instead of WebUI conda's Py 3.10
+        run_in_conda(["python", "-m", "pip", "install", "-q", "--upgrade", "urllib3==2.2.3"], cwd=WEBUI_DIR, env=env)
 
     # hack to prevent webui-macos-env.sh from overwriting the COMMANDLINE_ARGS env variable
     mac_webui_file = os.path.join(WEBUI_DIR, "webui-macos-env.sh")
@@ -118,15 +148,12 @@ def start_backend():
     impl.WEBUI_HOST = backend_config.get("host", "localhost")
     impl.WEBUI_PORT = backend_config.get("port", "7860")
 
-    env = dict(os.environ)
-    env.update(get_env())
-
     def restart_if_webui_dies_after_starting():
         has_started = False
 
         while True:
             try:
-                impl.ping(timeout=1)
+                impl.ping(timeout=30)
 
                 is_first_start = not has_started
                 has_started = True
@@ -161,9 +188,15 @@ def start_backend():
 
     cmd = "webui.bat" if OS_NAME == "Windows" else "./webui.sh"
 
-    print("starting", cmd, WEBUI_DIR)
+    log.info(f"starting: {cmd} in {WEBUI_DIR}")
+    log.info(f"COMMANDLINE_ARGS: {env['COMMANDLINE_ARGS']}")
 
     backend_process = run_in_conda([cmd], cwd=WEBUI_DIR, env=env, wait=False, output_prefix="[WebUI] ")
 
+    # atexit.register isn't 100% reliable, that's why we also use `forge_monitor_parent_process.patch`
+    # which causes Forge to kill itself if the parent pid passed to it is no longer valid.
+    atexit.register(backend_process.terminate)
+
     restart_if_dead_thread = threading.Thread(target=restart_if_webui_dies_after_starting)
     restart_if_dead_thread.start()
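This hunk and `forge_monitor_parent_process.patch` (added further down this page) form a two-layer cleanup strategy: the parent registers an `atexit` hook to terminate the Forge child, and the patched child watches the parent's pid and exits if it disappears, covering the cases where `atexit` never runs. A rough sketch of the parent side of that pattern ("launch.py" stands in for the patched Forge entry point; `--parent-pid` is the flag the patch adds):

```python
# Sketch of the parent side of the two-layer shutdown used in this hunk.
import atexit
import os
import subprocess
import sys

child = subprocess.Popen([sys.executable, "launch.py", "--parent-pid", str(os.getpid())])

# atexit.register isn't 100% reliable (hard kills skip it), hence the child-side monitor too.
atexit.register(child.terminate)
```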
@@ -288,7 +321,8 @@ def create_context():
     context = local()
 
     # temp hack, throws an attribute not found error otherwise
-    context.device = "cuda:0"
+    context.torch_device = get_device(0)
+    context.device = f"{context.torch_device.type}:{context.torch_device.index}"
     context.half_precision = True
     context.vram_usage_level = None
 
@@ -311,7 +345,9 @@ def get_env():
         raise RuntimeError("The system folder is missing!")
 
     config = getConfig()
+    backend_config = config.get("backend_config", {})
     models_dir = config.get("models_dir", os.path.join(ROOT_DIR, "models"))
+    models_dir = models_dir.rstrip("/\\")
 
     model_path_args = get_model_path_args()
 
@@ -339,7 +375,9 @@ def get_env():
         "PIP_INSTALLER_LOCATION": [],  # [f"{dir}/python/get-pip.py"],
         "TRANSFORMERS_CACHE": [f"{dir}/transformers-cache"],
         "HF_HUB_DISABLE_SYMLINKS_WARNING": ["true"],
-        "COMMANDLINE_ARGS": [f'--api --models-dir "{models_dir}" {model_path_args} --skip-torch-cuda-test'],
+        "COMMANDLINE_ARGS": [
+            f'--api --models-dir "{models_dir}" {model_path_args} --skip-torch-cuda-test --disable-gpu-warning --port {impl.WEBUI_PORT}'
+        ],
         "SKIP_VENV": ["1"],
         "SD_WEBUI_RESTARTING": ["1"],
     }
@@ -350,6 +388,7 @@ def get_env():
         env_entries["PYTHONNOUSERSITE"] = ["1"]
         env_entries["PYTHON"] = [f"{dir}/python"]
         env_entries["GIT"] = [f"{dir}/Library/bin/git"]
+        env_entries["COMMANDLINE_ARGS"][0] += f" --parent-pid {os.getpid()}"
     else:
         env_entries["PATH"].append("/bin")
         env_entries["PATH"].append("/usr/bin")
@@ -371,17 +410,16 @@ def get_env():
         else:
             env_entries["TORCH_COMMAND"] = ["pip install torch==2.3.1 torchvision==0.18.1"]
     else:
-        import torch
-        from easydiffusion.device_manager import needs_to_force_full_precision, is_cuda_available
+        from easydiffusion.device_manager import needs_to_force_full_precision
+
+        torch_platform_name = get_installed_torch_platform()[0]
 
         vram_usage_level = config.get("vram_usage_level", "balanced")
-        if config.get("render_devices", "auto") == "cpu" or not has_discrete_graphics_card() or not is_cuda_available():
+        if config.get("render_devices", "auto") == "cpu" or is_cpu_device(torch_platform_name):
             env_entries["COMMANDLINE_ARGS"][0] += " --always-cpu"
         else:
-            c = local()
-            c.device_name = torch.cuda.get_device_name()
-
-            if needs_to_force_full_precision(c):
+            device = get_device(0)
+            if needs_to_force_full_precision(get_device_name(device)):
                 env_entries["COMMANDLINE_ARGS"][0] += " --no-half --precision full"
 
         if vram_usage_level == "low":
@@ -389,6 +427,10 @@ def get_env():
         elif vram_usage_level == "high":
             env_entries["COMMANDLINE_ARGS"][0] += " --always-high-vram"
 
+    cli_args = backend_config.get("COMMANDLINE_ARGS")
+    if cli_args:
+        env_entries["COMMANDLINE_ARGS"][0] += " " + cli_args
+
     env = {}
     for key, paths in env_entries.items():
         paths = [p.replace("/", os.path.sep) for p in paths]
|
|||||||
return env
|
return env
|
||||||
|
|
||||||
|
|
||||||
def has_discrete_graphics_card():
|
|
||||||
system = OS_NAME
|
|
||||||
|
|
||||||
if system == "Windows":
|
|
||||||
try:
|
|
||||||
output = subprocess.check_output(
|
|
||||||
["wmic", "path", "win32_videocontroller", "get", "name"], stderr=subprocess.STDOUT
|
|
||||||
)
|
|
||||||
# Filter for discrete graphics cards (NVIDIA, AMD, etc.)
|
|
||||||
discrete_gpus = ["NVIDIA", "AMD", "ATI"]
|
|
||||||
return any(gpu in output.decode() for gpu in discrete_gpus)
|
|
||||||
except subprocess.CalledProcessError:
|
|
||||||
return False
|
|
||||||
|
|
||||||
elif system == "Linux":
|
|
||||||
try:
|
|
||||||
output = subprocess.check_output(["lspci"], stderr=subprocess.STDOUT)
|
|
||||||
# Check for discrete GPUs (NVIDIA, AMD)
|
|
||||||
discrete_gpus = ["NVIDIA", "AMD", "Advanced Micro Devices"]
|
|
||||||
return any(gpu in line for line in output.decode().splitlines() for gpu in discrete_gpus)
|
|
||||||
except subprocess.CalledProcessError:
|
|
||||||
return False
|
|
||||||
|
|
||||||
elif system == "Darwin": # macOS
|
|
||||||
try:
|
|
||||||
output = subprocess.check_output(["system_profiler", "SPDisplaysDataType"], stderr=subprocess.STDOUT)
|
|
||||||
# Check for discrete GPU in the output
|
|
||||||
return "NVIDIA" in output.decode() or "AMD" in output.decode()
|
|
||||||
except subprocess.CalledProcessError:
|
|
||||||
return False
|
|
||||||
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
# https://stackoverflow.com/a/25134985
|
# https://stackoverflow.com/a/25134985
|
||||||
def kill(proc_pid):
|
def kill(proc_pid):
|
||||||
process = psutil.Process(proc_pid)
|
process = psutil.Process(proc_pid)
|
||||||
|
@ -0,0 +1,25 @@
|
|||||||
|
25 ui/easydiffusion/backends/webui/forge_api_refresh_text_encoders.patch (new file)

diff --git a/modules/api/api.py b/modules/api/api.py
index 9754be03..26d9eb9b 100644
--- a/modules/api/api.py
+++ b/modules/api/api.py
@@ -234,6 +234,7 @@ class Api:
         self.add_api_route("/sdapi/v1/refresh-embeddings", self.refresh_embeddings, methods=["POST"])
         self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
         self.add_api_route("/sdapi/v1/refresh-vae", self.refresh_vae, methods=["POST"])
+        self.add_api_route("/sdapi/v1/refresh-vae-and-text-encoders", self.refresh_vae_and_text_encoders, methods=["POST"])
         self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
         self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=models.CreateResponse)
         self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=models.MemoryResponse)
@@ -779,6 +780,12 @@ class Api:
         with self.queue_lock:
             shared_items.refresh_vae_list()
 
+    def refresh_vae_and_text_encoders(self):
+        from modules_forge.main_entry import refresh_models
+
+        with self.queue_lock:
+            refresh_models()
+
     def create_embedding(self, args: dict):
         try:
             shared.state.begin(job="create_embedding")
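On the Easy Diffusion side, `impl.py` (diffed later on this page) triggers this route from its `refresh_models()` helper. Calling it directly looks roughly like this, assuming the Forge server is running on the default host and port shown in `impl.py`:

```python
# Minimal sketch: hit the endpoint added by this patch.
# "localhost" and 7860 are the WEBUI_HOST/WEBUI_PORT defaults from impl.py below.
import requests

res = requests.post("http://localhost:7860/sdapi/v1/refresh-vae-and-text-encoders")
res.raise_for_status()  # returns once Forge has re-scanned the VAE and text-encoder folders
```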
35 ui/easydiffusion/backends/webui/forge_disable_corrupted_model_renaming.patch (new file)

diff --git a/modules_forge/patch_basic.py b/modules_forge/patch_basic.py
index 822e2838..5893efad 100644
--- a/modules_forge/patch_basic.py
+++ b/modules_forge/patch_basic.py
@@ -39,18 +39,18 @@ def build_loaded(module, loader_name):
         except Exception as e:
             result = None
             exp = str(e) + '\n'
-            for path in list(args) + list(kwargs.values()):
-                if isinstance(path, str):
-                    if os.path.exists(path):
-                        exp += f'File corrupted: {path} \n'
-                        corrupted_backup_file = path + '.corrupted'
-                        if os.path.exists(corrupted_backup_file):
-                            os.remove(corrupted_backup_file)
-                        os.replace(path, corrupted_backup_file)
-                        if os.path.exists(path):
-                            os.remove(path)
-                        exp += f'Forge has tried to move the corrupted file to {corrupted_backup_file} \n'
-                        exp += f'You may try again now and Forge will download models again. \n'
+            # for path in list(args) + list(kwargs.values()):
+            #     if isinstance(path, str):
+            #         if os.path.exists(path):
+            #             exp += f'File corrupted: {path} \n'
+            #             corrupted_backup_file = path + '.corrupted'
+            #             if os.path.exists(corrupted_backup_file):
+            #                 os.remove(corrupted_backup_file)
+            #             os.replace(path, corrupted_backup_file)
+            #             if os.path.exists(path):
+            #                 os.remove(path)
+            #             exp += f'Forge has tried to move the corrupted file to {corrupted_backup_file} \n'
+            #             exp += f'You may try again now and Forge will download models again. \n'
             raise ValueError(exp)
         return result
 
20 ui/easydiffusion/backends/webui/forge_exception_leak_patch.patch (new file)

diff --git a/modules_forge/main_thread.py b/modules_forge/main_thread.py
index eb3e7889..0cfcadd1 100644
--- a/modules_forge/main_thread.py
+++ b/modules_forge/main_thread.py
@@ -33,8 +33,8 @@ class Task:
         except Exception as e:
             traceback.print_exc()
             print(e)
-            self.exception = e
-            last_exception = e
+            self.exception = str(e)
+            last_exception = str(e)
 
 
 def loop():
@@ -74,4 +74,3 @@ def run_and_wait_result(func, *args, **kwargs):
     with lock:
         finished_list.remove(finished_task)
     return finished_task.result
-
27 ui/easydiffusion/backends/webui/forge_loader_force_gc.patch (new file)
diff --git a/backend/loader.py b/backend/loader.py
index a090adab..53fdb26d 100644
--- a/backend/loader.py
+++ b/backend/loader.py
@@ -1,4 +1,5 @@
 import os
+import gc
 import torch
 import logging
 import importlib
@@ -447,6 +448,8 @@ def preprocess_state_dict(sd):
 
 
 def split_state_dict(sd, additional_state_dicts: list = None):
+    gc.collect()
+
     sd = load_torch_file(sd)
     sd = preprocess_state_dict(sd)
     guess = huggingface_guess.guess(sd)
@@ -456,6 +459,7 @@ def split_state_dict(sd, additional_state_dicts: list = None):
         asd = load_torch_file(asd)
         sd = replace_state_dict(sd, asd, guess)
         del asd
+        gc.collect()
 
     guess.clip_target = guess.clip_target(sd)
     guess.model_type = guess.model_type(sd)
12 ui/easydiffusion/backends/webui/forge_model_crash_recovery.patch (new file)

diff --git a/modules/sd_models.py b/modules/sd_models.py
index 76940ecc..b07d84b6 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -482,6 +482,7 @@ def forge_model_reload():
 
     if model_data.sd_model:
         model_data.sd_model = None
+        model_data.forge_hash = None
         memory_management.unload_all_models()
         memory_management.soft_empty_cache()
         gc.collect()
85 ui/easydiffusion/backends/webui/forge_monitor_parent_process.patch (new file)

diff --git a/launch.py b/launch.py
index c0568c7b..3919f7dd 100644
--- a/launch.py
+++ b/launch.py
@@ -2,6 +2,7 @@
 # faulthandler.enable()
 
 from modules import launch_utils
+from modules import parent_process_monitor
 
 args = launch_utils.args
 python = launch_utils.python
@@ -28,6 +29,10 @@ start = launch_utils.start
 
 
 def main():
+    if args.parent_pid != -1:
+        print(f"Monitoring parent process for termination. Parent PID: {args.parent_pid}")
+        parent_process_monitor.start_monitor_thread(args.parent_pid)
+
     if args.dump_sysinfo:
         filename = launch_utils.dump_sysinfo()
 
diff --git a/modules/cmd_args.py b/modules/cmd_args.py
index fcd8a50f..7f684bec 100644
--- a/modules/cmd_args.py
+++ b/modules/cmd_args.py
@@ -148,3 +148,6 @@ parser.add_argument(
     help="Path to directory with annotator model directories",
     default=None,
 )
+
+# Easy Diffusion arguments
+parser.add_argument("--parent-pid", type=int, default=-1, help='parent process id, if running webui as a sub-process')
diff --git a/modules/parent_process_monitor.py b/modules/parent_process_monitor.py
new file mode 100644
index 00000000..cc3e2049
--- /dev/null
+++ b/modules/parent_process_monitor.py
@@ -0,0 +1,45 @@
+# monitors and kills itself when the parent process dies. required when running Forge as a sub-process.
+# modified version of https://stackoverflow.com/a/23587108
+
+import sys
+import os
+import threading
+import platform
+import time
+
+
+def _monitor_parent_posix(parent_pid):
+    print(f"Monitoring parent pid: {parent_pid}")
+    while True:
+        if os.getppid() != parent_pid:
+            os._exit(0)
+        time.sleep(1)
+
+
+def _monitor_parent_windows(parent_pid):
+    from ctypes import WinDLL, WinError
+    from ctypes.wintypes import DWORD, BOOL, HANDLE
+
+    SYNCHRONIZE = 0x00100000  # Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
+    kernel32 = WinDLL("kernel32.dll")
+    kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
+    kernel32.OpenProcess.restype = HANDLE
+
+    handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
+    if not handle:
+        raise WinError()
+
+    # Wait until parent exits
+    from ctypes import windll
+
+    print(f"Monitoring parent pid: {parent_pid}")
+    windll.kernel32.WaitForSingleObject(handle, -1)
+    os._exit(0)
+
+
+def start_monitor_thread(parent_pid):
+    if platform.system() == "Windows":
+        t = threading.Thread(target=_monitor_parent_windows, args=(parent_pid,), daemon=True)
+    else:
+        t = threading.Thread(target=_monitor_parent_posix, args=(parent_pid,), daemon=True)
+    t.start()
@@ -1,6 +1,6 @@
 import os
 import requests
-from requests.exceptions import ConnectTimeout, ConnectionError
+from requests.exceptions import ConnectTimeout, ConnectionError, ReadTimeout
 from typing import Union, List
 from threading import local as Context
 from threading import Thread
@@ -8,7 +8,7 @@ import uuid
 import time
 from copy import deepcopy
 
-from sdkit.utils import base64_str_to_img, img_to_base64_str
+from sdkit.utils import base64_str_to_img, img_to_base64_str, log
 
 WEBUI_HOST = "localhost"
 WEBUI_PORT = "7860"
@@ -27,6 +27,7 @@ webui_opts: dict = None
 curr_models = {
     "stable-diffusion": None,
     "vae": None,
+    "text-encoder": None,
 }
 
 
print(f"Error getting options: {e}")
|
print(f"Error getting options: {e}")
|
||||||
|
|
||||||
return True
|
return True
|
||||||
except (ConnectTimeout, ConnectionError) as e:
|
except (ConnectTimeout, ConnectionError, ReadTimeout) as e:
|
||||||
raise TimeoutError(e)
|
raise TimeoutError(e)
|
||||||
|
|
||||||
|
|
||||||
def load_model(context, model_type, **kwargs):
|
def load_model(context, model_type, **kwargs):
|
||||||
|
from easydiffusion.app import ROOT_DIR, getConfig
|
||||||
|
|
||||||
|
config = getConfig()
|
||||||
|
models_dir = config.get("models_dir", os.path.join(ROOT_DIR, "models"))
|
||||||
|
|
||||||
model_path = context.model_paths[model_type]
|
model_path = context.model_paths[model_type]
|
||||||
|
|
||||||
|
if model_type == "stable-diffusion":
|
||||||
|
base_dir = os.path.join(models_dir, model_type)
|
||||||
|
model_path = os.path.relpath(model_path, base_dir)
|
||||||
|
|
||||||
|
# print(f"load model: {model_type=} {model_path=} {curr_models=}")
|
||||||
|
curr_models[model_type] = model_path
|
||||||
|
|
||||||
|
|
||||||
|
def unload_model(context, model_type, **kwargs):
|
||||||
|
# print(f"unload model: {model_type=} {curr_models=}")
|
||||||
|
curr_models[model_type] = None
|
||||||
|
|
||||||
|
|
||||||
|
def flush_model_changes(context):
|
||||||
if webui_opts is None:
|
if webui_opts is None:
|
||||||
print("Server not ready, can't set the model")
|
print("Server not ready, can't set the model")
|
||||||
return
|
return
|
||||||
|
|
||||||
if model_type == "stable-diffusion":
|
modules = []
|
||||||
model_name = os.path.basename(model_path)
|
for model_type in ("vae", "text-encoder"):
|
||||||
model_name = os.path.splitext(model_name)[0]
|
if curr_models[model_type]:
|
||||||
print(f"setting sd model: {model_name}")
|
model_paths = curr_models[model_type]
|
||||||
if curr_models[model_type] != model_name:
|
model_paths = [model_paths] if not isinstance(model_paths, list) else model_paths
|
||||||
try:
|
modules += model_paths
|
||||||
res = webui_post("/sdapi/v1/options", json={"sd_model_checkpoint": model_name})
|
|
||||||
if res.status_code != 200:
|
|
||||||
raise Exception(res.text)
|
|
||||||
except Exception as e:
|
|
||||||
raise RuntimeError(
|
|
||||||
f"The engine failed to set the required options. Please check the logs in the command line window for more details."
|
|
||||||
)
|
|
||||||
|
|
||||||
curr_models[model_type] = model_name
|
opts = {"sd_model_checkpoint": curr_models["stable-diffusion"], "forge_additional_modules": modules}
|
||||||
elif model_type == "vae":
|
|
||||||
if curr_models[model_type] != model_path:
|
|
||||||
vae_model = [model_path] if model_path else []
|
|
||||||
|
|
||||||
opts = {"sd_model_checkpoint": curr_models["stable-diffusion"], "forge_additional_modules": vae_model}
|
print("Setting backend models", opts)
|
||||||
print("setting opts 2", opts)
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
res = webui_post("/sdapi/v1/options", json=opts)
|
res = webui_post("/sdapi/v1/options", json=opts)
|
||||||
if res.status_code != 200:
|
print("got res", res.status_code)
|
||||||
raise Exception(res.text)
|
if res.status_code != 200:
|
||||||
except Exception as e:
|
raise Exception(res.text)
|
||||||
raise RuntimeError(
|
except Exception as e:
|
||||||
f"The engine failed to set the required options. Please check the logs in the command line window for more details."
|
raise RuntimeError(
|
||||||
)
|
f"The engine failed to set the required options. Please check the logs in the command line window for more details."
|
||||||
|
)
|
||||||
curr_models[model_type] = model_path
|
|
||||||
|
|
||||||
|
|
||||||
def unload_model(context, model_type, **kwargs):
|
|
||||||
if model_type == "vae":
|
|
||||||
context.model_paths[model_type] = None
|
|
||||||
load_model(context, model_type)
|
|
||||||
|
|
||||||
|
|
||||||
def generate_images(
|
def generate_images(
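The restructuring above splits model switching into two phases: `load_model`/`unload_model` now only record the requested paths in `curr_models`, and `flush_model_changes` pushes everything to Forge in a single `/sdapi/v1/options` call (the checkpoint plus `forge_additional_modules` for VAEs and text encoders). A rough sketch of the resulting call order, with the context setup omitted:

```python
# Illustrative call order for the new two-phase model flow (context setup omitted).
load_model(context, "stable-diffusion")  # records the checkpoint, relative to the models dir
load_model(context, "vae")               # records the VAE path
load_model(context, "text-encoder")      # records one or more text-encoder paths
flush_model_changes(context)             # one POST with sd_model_checkpoint + forge_additional_modules
```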
cmd["init_images"] = [init_image]
|
cmd["init_images"] = [init_image]
|
||||||
cmd["denoising_strength"] = prompt_strength
|
cmd["denoising_strength"] = prompt_strength
|
||||||
if init_image_mask:
|
if init_image_mask:
|
||||||
cmd["mask"] = init_image_mask
|
cmd["mask"] = init_image_mask if isinstance(init_image_mask, str) else img_to_base64_str(init_image_mask)
|
||||||
cmd["include_init_images"] = True
|
cmd["include_init_images"] = True
|
||||||
cmd["inpainting_fill"] = 1
|
cmd["inpainting_fill"] = 1
|
||||||
cmd["initial_noise_multiplier"] = 1
|
cmd["initial_noise_multiplier"] = 1
|
||||||
@ -252,8 +254,13 @@ def generate_images(
|
|||||||
@@ -252,8 +254,13 @@ def generate_images(
     if res.status_code == 200:
         res = res.json()
     else:
+        if res.status_code == 500:
+            res = res.json()
+            log.error(f"Server error: {res}")
+            raise Exception(f"{res['message']}. Please check the logs in the command-line window for more details.")
+
         raise Exception(
-            "The engine failed while generating this image. Please check the logs in the command-line window for more details."
+            f"HTTP Status {res.status_code}. The engine failed while generating this image. Please check the logs in the command-line window for more details."
         )
 
     import json
@@ -346,7 +353,7 @@ def refresh_models():
             pass
 
     try:
-        for type in ("checkpoints", "vae"):
+        for type in ("checkpoints", "vae-and-text-encoders"):
             t = Thread(target=make_refresh_call, args=(type,))
             t.start()
     except Exception as e:
@@ -6,6 +6,15 @@ import traceback
 import torch
 from easydiffusion.utils import log
 
+from torchruntime.utils import (
+    get_installed_torch_platform,
+    get_device,
+    get_device_count,
+    get_device_name,
+    SUPPORTED_BACKENDS,
+)
+from sdkit.utils import mem_get_info, is_cpu_device, has_half_precision_bug
+
 """
 Set `FORCE_FULL_PRECISION` in the environment variables, or in `config.bat`/`config.sh` to set full precision (i.e. float32).
 Otherwise the models will load at half-precision (i.e. float16).
@@ -22,33 +31,15 @@ mem_free_threshold = 0
 
 def get_device_delta(render_devices, active_devices):
     """
-    render_devices: 'cpu', or 'auto', or 'mps' or ['cuda:N'...]
-    active_devices: ['cpu', 'mps', 'cuda:N'...]
+    render_devices: 'auto' or backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`
+    active_devices: [backends listed in `torchruntime.utils.SUPPORTED_BACKENDS`]
     """
 
-    if render_devices in ("cpu", "auto", "mps"):
-        render_devices = [render_devices]
-    elif render_devices is not None:
-        if isinstance(render_devices, str):
-            render_devices = [render_devices]
-        if isinstance(render_devices, list) and len(render_devices) > 0:
-            render_devices = list(filter(lambda x: x.startswith("cuda:") or x == "mps", render_devices))
-            if len(render_devices) == 0:
-                raise Exception(
-                    'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "mps"} or {"render_devices": "auto"}'
-                )
-
-            render_devices = list(filter(lambda x: is_device_compatible(x), render_devices))
-            if len(render_devices) == 0:
-                raise Exception(
-                    "Sorry, none of the render_devices configured in config.json are compatible with Stable Diffusion"
-                )
-        else:
-            raise Exception(
-                'Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}'
-            )
-    else:
-        render_devices = ["auto"]
+    render_devices = render_devices or "auto"
+    render_devices = [render_devices] if isinstance(render_devices, str) else render_devices
+
+    # check for backend support
+    validate_render_devices(render_devices)
 
     if "auto" in render_devices:
         render_devices = auto_pick_devices(active_devices)
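After this rewrite, any backend name that torchruntime supports passes straight through, and only clearly invalid entries are rejected by `validate_render_devices` (defined in the next hunk). A quick illustration of the new normalization, with hypothetical config values:

```python
# Illustrative: the new normalization and validation for render_devices.
render_devices = "cuda:1"  # a single string, as it may arrive from config.json
render_devices = [render_devices] if isinstance(render_devices, str) else render_devices

validate_render_devices(render_devices)  # passes: "cuda" is a torchruntime backend
validate_render_devices(["tpu:0"])       # raises ValueError, assuming "tpu" is not in SUPPORTED_BACKENDS
```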
def is_mps_available():
|
def validate_render_devices(render_devices):
|
||||||
return (
|
supported_backends = ("auto",) + SUPPORTED_BACKENDS
|
||||||
platform.system() == "Darwin"
|
unsupported_render_devices = [d for d in render_devices if not d.lower().startswith(supported_backends)]
|
||||||
and hasattr(torch.backends, "mps")
|
|
||||||
and torch.backends.mps.is_available()
|
|
||||||
and torch.backends.mps.is_built()
|
|
||||||
)
|
|
||||||
|
|
||||||
|
if unsupported_render_devices:
|
||||||
def is_cuda_available():
|
raise ValueError(
|
||||||
return torch.cuda.is_available()
|
f"Invalid render devices in config: {unsupported_render_devices}. Valid render devices: {supported_backends}"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
def auto_pick_devices(currently_active_devices):
|
def auto_pick_devices(currently_active_devices):
|
||||||
global mem_free_threshold
|
global mem_free_threshold
|
||||||
|
|
||||||
if is_mps_available():
|
torch_platform_name = get_installed_torch_platform()[0]
|
||||||
return ["mps"]
|
|
||||||
|
|
||||||
if not is_cuda_available():
|
if is_cpu_device(torch_platform_name):
|
||||||
return ["cpu"]
|
return [torch_platform_name]
|
||||||
|
|
||||||
device_count = torch.cuda.device_count()
|
|
||||||
if device_count == 1:
|
|
||||||
return ["cuda:0"] if is_device_compatible("cuda:0") else ["cpu"]
|
|
||||||
|
|
||||||
|
device_count = get_device_count()
|
||||||
log.debug("Autoselecting GPU. Using most free memory.")
|
log.debug("Autoselecting GPU. Using most free memory.")
|
||||||
devices = []
|
devices = []
|
||||||
for device in range(device_count):
|
for device_id in range(device_count):
|
||||||
device = f"cuda:{device}"
|
device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name
|
||||||
if not is_device_compatible(device):
|
device = get_device(device_id)
|
||||||
continue
|
|
||||||
|
|
||||||
mem_free, mem_total = torch.cuda.mem_get_info(device)
|
mem_free, mem_total = mem_get_info(device)
|
||||||
mem_free /= float(10**9)
|
mem_free /= float(10**9)
|
||||||
mem_total /= float(10**9)
|
mem_total /= float(10**9)
|
||||||
device_name = torch.cuda.get_device_name(device)
|
device_name = get_device_name(device)
|
||||||
log.debug(
|
log.debug(
|
||||||
f"{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
|
f"{device_id} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb"
|
||||||
)
|
)
|
||||||
devices.append({"device": device, "device_name": device_name, "mem_free": mem_free})
|
devices.append({"device": device_id, "device_name": device_name, "mem_free": mem_free})
|
||||||
|
|
||||||
devices.sort(key=lambda x: x["mem_free"], reverse=True)
|
devices.sort(key=lambda x: x["mem_free"], reverse=True)
|
||||||
max_mem_free = devices[0]["mem_free"]
|
max_mem_free = devices[0]["mem_free"]
|
||||||
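Note: `validate_render_devices` replaces the old CUDA-only checks with a simple prefix test. A minimal standalone sketch of its behavior, assuming `SUPPORTED_BACKENDS` is a tuple of backend name prefixes (the real constant lives elsewhere in `device_manager`; the value below is only illustrative):

```python
# Illustrative stand-in; the actual SUPPORTED_BACKENDS constant is defined in device_manager.
SUPPORTED_BACKENDS = ("cuda", "xpu", "mps", "directml", "cpu")


def validate_render_devices(render_devices):
    supported = ("auto",) + SUPPORTED_BACKENDS
    # "cuda:1" passes because str.startswith() accepts a tuple and "cuda:1" starts with "cuda"
    bad = [d for d in render_devices if not d.lower().startswith(supported)]
    if bad:
        raise ValueError(f"Invalid render devices in config: {bad}. Valid render devices: {supported}")


validate_render_devices(["cuda:0", "cuda:1"])  # ok
validate_render_devices(["auto"])              # ok
# validate_render_devices(["opencl:0"])        # would raise ValueError
```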
@@ -117,69 +100,45 @@ def auto_pick_devices(currently_active_devices):
     # always be very low (since their VRAM contains the model).
     # These already-running devices probably aren't terrible, since they were picked in the past.
     # Worst case, the user can restart the program and that'll get rid of them.
-    devices = list(
-        filter(
-            (lambda x: x["mem_free"] > mem_free_threshold or x["device"] in currently_active_devices),
-            devices,
-        )
-    )
-    devices = list(map(lambda x: x["device"], devices))
+    devices = [
+        x["device"] for x in devices if x["mem_free"] >= mem_free_threshold or x["device"] in currently_active_devices
+    ]

     return devices

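Note: the rewritten filter above keeps a GPU if its free memory is within the threshold, or if it is already rendering (an active GPU reports low free VRAM only because a model is loaded into it). A small worked example with made-up numbers:

```python
# Assume the threshold was computed earlier, e.g. as a fraction of the largest card's free memory.
mem_free_threshold = 7.5  # GB, illustrative

devices = [
    {"device": "cuda:0", "mem_free": 10.0},
    {"device": "cuda:1", "mem_free": 2.1},  # busy: a model already occupies its VRAM
    {"device": "cuda:2", "mem_free": 9.0},
]
currently_active_devices = ["cuda:1"]

picked = [
    x["device"]
    for x in devices
    if x["mem_free"] >= mem_free_threshold or x["device"] in currently_active_devices
]
print(picked)  # ['cuda:0', 'cuda:1', 'cuda:2'] - the busy device survives the cut
```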
-def device_init(context, device):
-    """
-    This function assumes the 'device' has already been verified to be compatible.
-    `get_device_delta()` has already filtered out incompatible devices.
-    """
-
-    validate_device_id(device, log_prefix="device_init")
-
-    if "cuda" not in device:
-        context.device = device
+def device_init(context, device_id):
+    context.device = device_id
+
+    if is_cpu_device(context.torch_device):
         context.device_name = get_processor_name()
         context.half_precision = False
-        log.debug(f"Render device available as {context.device_name}")
-        return
-
-    context.device_name = torch.cuda.get_device_name(device)
-    context.device = device
-
-    # Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
-    if needs_to_force_full_precision(context):
-        log.warn(f"forcing full precision on this GPU, to avoid green images. GPU detected: {context.device_name}")
-        # Apply force_full_precision now before models are loaded.
-        context.half_precision = False
-
-    log.info(f'Setting {device} as active, with precision: {"half" if context.half_precision else "full"}')
-    torch.cuda.device(device)
+    else:
+        context.device_name = get_device_name(context.torch_device)
+
+        # Some graphics cards have bugs in their firmware that prevent image generation at half precision
+        if needs_to_force_full_precision(context.device_name):
+            log.warn(f"forcing full precision on this GPU, to avoid corrupted images. GPU: {context.device_name}")
+            context.half_precision = False
+
+    log.info(f'Setting {device_id} as active, with precision: {"half" if context.half_precision else "full"}')


-def needs_to_force_full_precision(context):
+def needs_to_force_full_precision(device_name):
     if "FORCE_FULL_PRECISION" in os.environ:
         return True

-    device_name = context.device_name.lower()
-    return (
-        ("nvidia" in device_name or "geforce" in device_name or "quadro" in device_name)
-        and (
-            " 1660" in device_name
-            or " 1650" in device_name
-            or " 1630" in device_name
-            or " t400" in device_name
-            or " t550" in device_name
-            or " t600" in device_name
-            or " t1000" in device_name
-            or " t1200" in device_name
-            or " t2000" in device_name
-        )
-    ) or ("tesla k40m" in device_name)
+    return has_half_precision_bug(device_name.lower())


 def get_max_vram_usage_level(device):
-    if "cuda" in device:
-        _, mem_total = torch.cuda.mem_get_info(device)
-    else:
+    "Expects a torch.device as the argument"
+
+    if is_cpu_device(device):
+        return "high"
+
+    _, mem_total = mem_get_info(device)
+
+    if mem_total < 0.001:  # probably a torch platform without a mem_get_info() implementation
         return "high"

     mem_total /= float(10**9)
@@ -191,51 +150,6 @@ def get_max_vram_usage_level(device):
     return "high"


-def validate_device_id(device, log_prefix=""):
-    def is_valid():
-        if not isinstance(device, str):
-            return False
-        if device == "cpu" or device == "mps":
-            return True
-        if not device.startswith("cuda:") or not device[5:].isnumeric():
-            return False
-        return True
-
-    if not is_valid():
-        raise EnvironmentError(
-            f"{log_prefix}: device id should be 'cpu', 'mps', or 'cuda:N' (where N is an integer index for the GPU). Got: {device}"
-        )
-
-
-def is_device_compatible(device):
-    """
-    Returns True/False, and prints any compatibility errors
-    """
-    # static variable "history".
-    is_device_compatible.history = getattr(is_device_compatible, "history", {})
-    try:
-        validate_device_id(device, log_prefix="is_device_compatible")
-    except:
-        log.error(str(e))
-        return False
-
-    if device in ("cpu", "mps"):
-        return True
-    # Memory check
-    try:
-        _, mem_total = torch.cuda.mem_get_info(device)
-        mem_total /= float(10**9)
-        if mem_total < 1.9:
-            if is_device_compatible.history.get(device) == None:
-                log.warn(f"GPU {device} with less than 2 GB of VRAM is not compatible with Stable Diffusion")
-                is_device_compatible.history[device] = 1
-            return False
-    except RuntimeError as e:
-        log.error(str(e))
-        return False
-    return True
-
-
 def get_processor_name():
     try:
         import subprocess
@@ -3,13 +3,13 @@ import shutil
 from glob import glob
 import traceback
 from typing import Union
+from os import path

 from easydiffusion import app
 from easydiffusion.types import ModelsData
 from easydiffusion.utils import log
 from sdkit import Context
 from sdkit.models import scan_model, download_model, get_model_info_from_db
-from sdkit.models.model_loader.controlnet_filters import filters as cn_filters
 from sdkit.utils import hash_file_quick
 from sdkit.models.model_loader.embeddings import get_embedding_token
@@ -23,10 +23,11 @@ KNOWN_MODEL_TYPES = [
     "codeformer",
     "embeddings",
     "controlnet",
+    "text-encoder",
 ]
 MODEL_EXTENSIONS = {
     "stable-diffusion": [".ckpt", ".safetensors", ".sft", ".gguf"],
-    "vae": [".vae.pt", ".ckpt", ".safetensors", ".sft"],
+    "vae": [".vae.pt", ".ckpt", ".safetensors", ".sft", ".gguf"],
     "hypernetwork": [".pt", ".safetensors", ".sft"],
     "gfpgan": [".pth"],
     "realesrgan": [".pth"],
@@ -34,10 +35,11 @@ MODEL_EXTENSIONS = {
     "codeformer": [".pth"],
     "embeddings": [".pt", ".bin", ".safetensors", ".sft"],
     "controlnet": [".pth", ".safetensors", ".sft"],
+    "text-encoder": [".safetensors", ".sft", ".gguf"],
 }
 DEFAULT_MODELS = {
     "stable-diffusion": [
-        {"file_name": "sd-v1-4.ckpt", "model_id": "1.4"},
+        {"file_name": "sd-v1-5.safetensors", "model_id": "1.5-pruned-emaonly-fp16"},
     ],
     "gfpgan": [
         {"file_name": "GFPGANv1.4.pth", "model_id": "1.4"},
@@ -60,6 +62,7 @@ ALTERNATE_FOLDER_NAMES = {  # for WebUI compatibility
     "realesrgan": "RealESRGAN",
     "lora": "Lora",
     "controlnet": "ControlNet",
+    "text-encoder": "text_encoder",
 }

 known_models = {}
@@ -87,7 +90,7 @@ def load_default_models(context: Context):
                 scan_model=context.model_paths[model_type] != None
                 and not context.model_paths[model_type].endswith(".safetensors"),
             )
-            if model_type in context.model_load_errors:
+            if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
                 del context.model_load_errors[model_type]
         except Exception as e:
             log.error(f"[red]Error while loading {model_type} model: {context.model_paths[model_type]}[/red]")
@@ -99,6 +102,8 @@ def load_default_models(context: Context):
             log.exception(e)
             del context.model_paths[model_type]

+            if not hasattr(context, "model_load_errors"):
+                context.model_load_errors = {}
             context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks
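Note: this hunk (and the next one) add the same defensive pattern in two places: `context.model_load_errors` is now created lazily, so every reader must check for the attribute first. A tiny sketch of why the guard matters on a fresh context:

```python
class Context:  # bare stand-in for sdkit's Context, for illustration only
    pass


context = Context()

# Without hasattr() this would raise AttributeError, since the dict doesn't exist yet.
if hasattr(context, "model_load_errors") and "vae" in context.model_load_errors:
    del context.model_load_errors["vae"]

# Lazy initialization on the first recorded error:
if not hasattr(context, "model_load_errors"):
    context.model_load_errors = {}
context.model_load_errors["vae"] = "example error message"
```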
@@ -194,15 +199,24 @@ def reload_models_if_necessary(context: Context, models_data: ModelsData, models
         extra_params = models_data.model_params.get(model_type, {})
         try:
             action_fn(context, model_type, scan_model=False, **extra_params)  # we've scanned them already
-            if model_type in context.model_load_errors:
+            if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
                 del context.model_load_errors[model_type]
         except Exception as e:
             log.exception(e)
             if action_fn == backend.load_model:
+                if not hasattr(context, "model_load_errors"):
+                    context.model_load_errors = {}
                 context.model_load_errors[model_type] = str(e)  # storing the entire Exception can lead to memory leaks

+    if hasattr(backend, "flush_model_changes"):
+        backend.flush_model_changes(context)


 def resolve_model_paths(models_data: ModelsData):
+    from easydiffusion.backend_manager import backend
+
+    cn_filters = backend.list_controlnet_filters()
+
     model_paths = models_data.model_paths
     skip_models = cn_filters + [
         "latent_upscaler",
@@ -217,21 +231,32 @@ def resolve_model_paths(models_data: ModelsData):
     for model_type in model_paths:
         if model_type in skip_models:  # doesn't use model paths
             continue
-        if model_type == "codeformer" and model_paths[model_type]:
-            download_if_necessary("codeformer", "codeformer.pth", "codeformer-0.1.0")
-        elif model_type == "controlnet" and model_paths[model_type]:
-            model_id = model_paths[model_type]
-            model_info = get_model_info_from_db(model_type=model_type, model_id=model_id)
-            if model_info:
-                filename = model_info.get("url", "").split("/")[-1]
-                download_if_necessary("controlnet", filename, model_id, skip_if_others_exist=False)
+        if model_type in ("vae", "codeformer", "controlnet", "text-encoder") and model_paths[model_type]:
+            model_ids = model_paths[model_type]
+            model_ids = model_ids if isinstance(model_ids, list) else [model_ids]
+
+            new_model_paths = []
+
+            for model_id in model_ids:
+                # log.info(f"Checking for {model_id=}")
+                model_info = get_model_info_from_db(model_type=model_type, model_id=model_id)
+                if model_info:
+                    filename = model_info.get("url", "").split("/")[-1]
+                    download_if_necessary(model_type, filename, model_id, skip_if_others_exist=False)
+
+                    new_model_paths.append(path.splitext(filename)[0])
+                else:  # not in the model db, probably a regular file
+                    new_model_paths.append(model_id)
+
+            model_paths[model_type] = new_model_paths

         model_paths[model_type] = resolve_model_to_use(model_paths[model_type], model_type=model_type)


 def fail_if_models_did_not_load(context: Context):
     for model_type in KNOWN_MODEL_TYPES:
-        if model_type in context.model_load_errors:
+        if hasattr(context, "model_load_errors") and model_type in context.model_load_errors:
             e = context.model_load_errors[model_type]
             raise Exception(f"Could not load the {model_type} model! Reason: " + e)
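Note: the rewritten branch accepts either one model id or a list of ids (Flux-style text encoders are typically a pair such as `t5xxl_fp16` plus `clip_l`), normalizes to a list, and swaps each id known to the model db for its download filename. A sketch of just that normalization step, with a hypothetical `MODEL_DB` dict standing in for `get_model_info_from_db`:

```python
from os import path

# Hypothetical stand-in for the sdkit model db lookup.
MODEL_DB = {"t5xxl_fp16": {"url": "https://example.com/t5xxl_fp16.safetensors"}}


def normalize_and_resolve(model_ids):
    model_ids = model_ids if isinstance(model_ids, list) else [model_ids]
    resolved = []
    for model_id in model_ids:
        info = MODEL_DB.get(model_id)
        if info:
            filename = info.get("url", "").split("/")[-1]
            resolved.append(path.splitext(filename)[0])  # drop the extension
        else:  # not in the db, probably a regular file on disk
            resolved.append(model_id)
    return resolved


print(normalize_and_resolve("t5xxl_fp16"))           # ['t5xxl_fp16']
print(normalize_and_resolve(["my-custom-encoder"]))  # ['my-custom-encoder']
```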
@@ -249,17 +274,31 @@ def download_default_models_if_necessary():


 def download_if_necessary(model_type: str, file_name: str, model_id: str, skip_if_others_exist=True):
-    model_dir = get_model_dirs(model_type)[0]
-    model_path = os.path.join(model_dir, file_name)
+    from easydiffusion.backend_manager import backend
+
     expected_hash = get_model_info_from_db(model_type=model_type, model_id=model_id)["quick_hash"]

     other_models_exist = any_model_exists(model_type) and skip_if_others_exist
-    known_model_exists = os.path.exists(model_path)
-    known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash

-    if known_model_is_corrupt or (not other_models_exist and not known_model_exists):
-        print("> download", model_type, model_id)
-        download_model(model_type, model_id, download_base_dir=app.MODELS_DIR, download_config_if_available=False)
+    for model_dir in get_model_dirs(model_type):
+        model_path = os.path.join(model_dir, file_name)
+
+        known_model_exists = os.path.exists(model_path)
+        known_model_is_corrupt = known_model_exists and hash_file_quick(model_path) != expected_hash
+
+        needs_download = known_model_is_corrupt or (not other_models_exist and not known_model_exists)
+
+        # log.info(f"{model_path=} {needs_download=}")
+        # if known_model_exists:
+        #     log.info(f"{expected_hash=} {hash_file_quick(model_path)=}")
+        # log.info(f"{known_model_is_corrupt=} {other_models_exist=} {known_model_exists=}")
+
+        if not needs_download:
+            return
+
+        print("> download", model_type, model_id)
+        download_model(model_type, model_id, download_base_dir=app.MODELS_DIR, download_config_if_available=False)
+
+        backend.refresh_models()


 def migrate_legacy_model_location():
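Note: the download check now runs for every configured model folder rather than just the first one; the decision rule itself is unchanged. Expressed as a pure function for clarity (a sketch, not an API that exists in the repo):

```python
def needs_download(known_model_exists: bool, hash_matches: bool, other_models_exist: bool) -> bool:
    # Download when the known file is corrupt, or when nothing usable exists at all.
    known_model_is_corrupt = known_model_exists and not hash_matches
    return known_model_is_corrupt or (not other_models_exist and not known_model_exists)


assert needs_download(known_model_exists=True, hash_matches=False, other_models_exist=True)      # corrupt -> redownload
assert not needs_download(known_model_exists=True, hash_matches=True, other_models_exist=True)   # healthy -> skip
assert needs_download(known_model_exists=False, hash_matches=True, other_models_exist=False)     # nothing present -> download
```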
@@ -309,14 +348,16 @@ def make_model_folders():

         help_file_name = f"Place your {model_type} model files here.txt"
         help_file_contents = f'Supported extensions: {" or ".join(MODEL_EXTENSIONS.get(model_type))}'
-        with open(os.path.join(model_dir_path, help_file_name), "w", encoding="utf-8") as f:
-            f.write(help_file_contents)
+        try:
+            with open(os.path.join(model_dir_path, help_file_name), "w", encoding="utf-8") as f:
+                f.write(help_file_contents)
+        except Exception as e:
+            log.exception(e)


 def is_malicious_model(file_path):
     try:
-        if file_path.endswith(".safetensors"):
+        if file_path.endswith((".safetensors", ".sft", ".gguf")):
             return False
         scan_result = scan_model(file_path)
         if scan_result.issues_count > 0 or scan_result.infected_files > 0:
@@ -354,7 +395,7 @@ def getModels(scan_for_malicious: bool = True):
     models = {
         "options": {
             "stable-diffusion": [],
-            "vae": [],
+            "vae": [{"ae": "ae (Flux VAE fp16)"}],
             "hypernetwork": [],
             "lora": [],
             "codeformer": [{"codeformer": "CodeFormer"}],
@@ -374,6 +415,11 @@ def getModels(scan_for_malicious: bool = True):
                 # {"control_v11e_sd15_shuffle": "Shuffle"},
                 # {"control_v11f1e_sd15_tile": "Tile"},
             ],
+            "text-encoder": [
+                {"t5xxl_fp16": "T5 XXL fp16"},
+                {"clip_l": "CLIP L"},
+                {"clip_g": "CLIP G"},
+            ],
         },
     }
@@ -457,6 +503,7 @@ def getModels(scan_for_malicious: bool = True):
     listModels(model_type="lora")
     listModels(model_type="embeddings", nameFilter=get_embedding_token)
     listModels(model_type="controlnet")
+    listModels(model_type="text-encoder")

     if scan_for_malicious and models_scanned > 0:
         log.info(f"[green]Scanned {models_scanned} models. Nothing infected[/]")
@@ -5,7 +5,8 @@ from importlib.metadata import version as pkg_version

 from easydiffusion import app

-# future home of scripts/check_modules.py
+# was meant to be a rewrite of scripts/check_modules.py
+# but probably dead for now

 manifest = {
     "tensorrt": {
@ -38,7 +38,7 @@ def set_vram_optimizations(context):
|
|||||||
config = getConfig()
|
config = getConfig()
|
||||||
vram_usage_level = config.get("vram_usage_level", "balanced")
|
vram_usage_level = config.get("vram_usage_level", "balanced")
|
||||||
|
|
||||||
if vram_usage_level != context.vram_usage_level:
|
if hasattr(context, "vram_usage_level") and vram_usage_level != context.vram_usage_level:
|
||||||
context.vram_usage_level = vram_usage_level
|
context.vram_usage_level = vram_usage_level
|
||||||
return True
|
return True
|
||||||
|
|
||||||
|
@@ -208,11 +208,13 @@ def set_app_config_internal(req: SetAppConfigRequest):


 def update_render_devices_in_config(config, render_devices):
-    if render_devices not in ("cpu", "auto") and not render_devices.startswith("cuda:"):
-        raise HTTPException(status_code=400, detail=f"Invalid render device requested: {render_devices}")
+    from easydiffusion.device_manager import validate_render_devices

-    if render_devices.startswith("cuda:"):
+    try:
         render_devices = render_devices.split(",")
+        validate_render_devices(render_devices)
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))

     config["render_devices"] = render_devices
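Note: the server no longer hard-codes `cuda:` prefixes; it splits the comma-separated request value and defers to `validate_render_devices`, mapping a `ValueError` to an HTTP 400. A self-contained sketch of that flow, with stand-ins for `HTTPException` and the validator so it runs without the real dependencies:

```python
class HTTPException(Exception):  # stand-in for fastapi.HTTPException
    def __init__(self, status_code, detail):
        super().__init__(detail)
        self.status_code = status_code


def validate_render_devices(render_devices):  # stand-in for the device_manager helper
    if any(not d.startswith(("auto", "cpu", "cuda")) for d in render_devices):
        raise ValueError(f"Invalid render devices in config: {render_devices}")


def update_render_devices_in_config(config, render_devices):
    try:
        render_devices = render_devices.split(",")  # "cuda:0,cuda:1" -> ["cuda:0", "cuda:1"]
        validate_render_devices(render_devices)
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    config["render_devices"] = render_devices


config = {}
update_render_devices_in_config(config, "cuda:0,cuda:1")
print(config["render_devices"])  # ['cuda:0', 'cuda:1']
```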
@@ -366,7 +368,7 @@ def model_merge_internal(req: dict):

     mergeReq: MergeRequest = MergeRequest.parse_obj(req)

-    sd_model_dir = model_manager.get_model_dir("stable-diffusion")[0]
+    sd_model_dir = model_manager.get_model_dirs("stable-diffusion")[0]

     merge_models(
         model_manager.resolve_model_to_use(mergeReq.model0, "stable-diffusion"),
@@ -21,6 +21,9 @@ from easydiffusion import device_manager
 from easydiffusion.tasks import Task
 from easydiffusion.utils import log

+from torchruntime.utils import get_device_count, get_device, get_device_name, get_installed_torch_platform
+from sdkit.utils import is_cpu_device, mem_get_info
+
 THREAD_NAME_PREFIX = ""
 ERR_LOCK_FAILED = " failed to acquire lock within timeout."
 LOCK_TIMEOUT = 15  # Maximum locking time in seconds before failing a task.
@@ -339,34 +342,33 @@ def get_devices():
         "active": {},
     }

-    def get_device_info(device):
-        if device in ("cpu", "mps"):
+    def get_device_info(device_id):
+        if is_cpu_device(device_id):
             return {"name": device_manager.get_processor_name()}

-        mem_free, mem_total = torch.cuda.mem_get_info(device)
+        device = get_device(device_id)
+
+        mem_free, mem_total = mem_get_info(device)
         mem_free /= float(10**9)
         mem_total /= float(10**9)

         return {
-            "name": torch.cuda.get_device_name(device),
+            "name": get_device_name(device),
             "mem_free": mem_free,
             "mem_total": mem_total,
             "max_vram_usage_level": device_manager.get_max_vram_usage_level(device),
         }

     # list the compatible devices
-    cuda_count = torch.cuda.device_count()
-    for device in range(cuda_count):
-        device = f"cuda:{device}"
-        if not device_manager.is_device_compatible(device):
-            continue
+    torch_platform_name = get_installed_torch_platform()[0]
+    device_count = get_device_count()
+    for device_id in range(device_count):
+        device_id = f"{torch_platform_name}:{device_id}" if device_count > 1 else torch_platform_name

-        devices["all"].update({device: get_device_info(device)})
+        devices["all"].update({device_id: get_device_info(device_id)})

-    if device_manager.is_mps_available():
-        devices["all"].update({"mps": get_device_info("mps")})
-
-    devices["all"].update({"cpu": get_device_info("cpu")})
+    if torch_platform_name != "cpu":
+        devices["all"].update({"cpu": get_device_info("cpu")})

     # list the activated devices
     if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
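Note: device ids in the enumeration above are derived from the installed torch platform: a single-GPU install is addressed by the bare platform name, while multi-GPU installs get an index suffix. The naming rule, isolated as a sketch:

```python
def device_ids(torch_platform_name: str, device_count: int) -> list:
    return [
        f"{torch_platform_name}:{i}" if device_count > 1 else torch_platform_name
        for i in range(device_count)
    ]


print(device_ids("cuda", 2))  # ['cuda:0', 'cuda:1']
print(device_ids("xpu", 1))   # ['xpu']
```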
@@ -378,8 +380,8 @@ def get_devices():
             weak_data = weak_thread_data.get(rthread)
             if not weak_data or not "device" in weak_data or not "device_name" in weak_data:
                 continue
-            device = weak_data["device"]
-            devices["active"].update({device: get_device_info(device)})
+            device_id = weak_data["device"]
+            devices["active"].update({device_id: get_device_info(device_id)})
     finally:
         manager_lock.release()
@@ -437,12 +439,6 @@ def start_render_thread(device):


 def stop_render_thread(device):
-    try:
-        device_manager.validate_device_id(device, log_prefix="stop_render_thread")
-    except:
-        log.error(traceback.format_exc())
-        return False
-
     if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT):
         raise Exception("stop_render_thread" + ERR_LOCK_FAILED)
     log.info(f"Stopping Rendering Thread on device: {device}")
@@ -240,6 +240,10 @@ def generate_images_internal(
     if req.init_image is not None and int(req.num_inference_steps * req.prompt_strength) == 0:
         req.prompt_strength = 1 / req.num_inference_steps if req.num_inference_steps > 0 else 1

+    if req.init_image_mask:
+        req.init_image_mask = get_image(req.init_image_mask)
+        req.init_image_mask = resize_img(req.init_image_mask.convert("RGB"), req.width, req.height, clamp_to_8=True)
+
     backend.set_options(
         context,
         output_format=output_format.output_format,
@@ -80,6 +80,7 @@ class RenderTaskData(TaskData):
     latent_upscaler_steps: int = 10
     use_stable_diffusion_model: Union[str, List[str]] = "sd-v1-4"
     use_vae_model: Union[str, List[str]] = None
+    use_text_encoder_model: Union[str, List[str]] = None
     use_hypernetwork_model: Union[str, List[str]] = None
     use_lora_model: Union[str, List[str]] = None
     use_controlnet_model: Union[str, List[str]] = None
@@ -211,6 +212,7 @@ def convert_legacy_render_req_to_new(old_req: dict):
     # move the model info
     model_paths["stable-diffusion"] = old_req.get("use_stable_diffusion_model")
     model_paths["vae"] = old_req.get("use_vae_model")
+    model_paths["text-encoder"] = old_req.get("use_text_encoder_model")
     model_paths["hypernetwork"] = old_req.get("use_hypernetwork_model")
     model_paths["lora"] = old_req.get("use_lora_model")
     model_paths["controlnet"] = old_req.get("use_controlnet_model")
@@ -37,8 +37,8 @@
                 Easy Diffusion
                 <small>
                     <span id="version">
-                        <span class="gated-feature" data-feature-keys="backend_ed_classic backend_ed_diffusers">v3.0.10</span>
-                        <span class="gated-feature" data-feature-keys="backend_webui">v3.5.0</span>
+                        <span class="gated-feature" data-feature-keys="backend_ed_classic backend_ed_diffusers">v3.0.13</span>
+                        <span class="gated-feature" data-feature-keys="backend_webui">v3.5.9</span>
                     </span> <span id="updateBranchLabel"></span>
                     <div id="engine-logo" class="gated-feature" data-feature-keys="backend_webui">(Powered by <a id="backend-url" href="https://github.com/lllyasviel/stable-diffusion-webui-forge" target="_blank">Stable Diffusion WebUI Forge</a>)</div>
                 </small>
@@ -302,6 +302,14 @@
                                 <input id="vae_model" type="text" spellcheck="false" autocomplete="off" class="model-filter" data-path="" />
                                 <a href="https://github.com/easydiffusion/easydiffusion/wiki/VAE-Variational-Auto-Encoder" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about VAEs</span></i></a>
                             </td></tr>
+                            <tr id="text_encoder_model_container" class="pl-5 gated-feature" data-feature-keys="backend_webui">
+                                <td>
+                                    <label for="text_encoder_model">Text Encoder:</label>
+                                </td>
+                                <td>
+                                    <div id="text_encoder_model" data-path=""></div>
+                                </td>
+                            </tr>
                             <tr id="samplerSelection" class="pl-5"><td><label for="sampler_name">Sampler:</label></td><td>
                                 <select id="sampler_name" name="sampler_name">
                                     <option value="plms">PLMS</option>
@@ -339,7 +347,7 @@
                                 </select>
                                 <a href="https://github.com/easydiffusion/easydiffusion/wiki/How-to-Use#samplers" target="_blank"><i class="fa-solid fa-circle-question help-btn"><span class="simple-tooltip top-left">Click to learn more about samplers</span></i></a>
                             </td></tr>
-                            <tr class="pl-5 warning-label displayNone" id="fluxSamplerWarning"><td></td><td>Please avoid 'Euler Ancestral' with Flux!</td></tr>
+                            <tr class="pl-5 warning-label displayNone" id="fluxSamplerWarning"><td>Tip:</td><td>This sampler does not work well with Flux!</td></tr>
                             <tr id="schedulerSelection" class="pl-5 gated-feature" data-feature-keys="backend_webui"><td><label for="scheduler_name">Scheduler:</label></td><td>
                                 <select id="scheduler_name" name="scheduler_name">
                                     <option value="automatic">Automatic</option>
@@ -349,17 +357,18 @@
                                     <option value="polyexponential">Polyexponential</option>
                                     <option value="sgm_uniform">SGM Uniform</option>
                                     <option value="kl_optimal">KL Optimal</option>
-                                    <option value="align_your_steps">Align Your Steps</option>
                                     <option value="simple" selected>Simple</option>
                                     <option value="normal">Normal</option>
                                     <option value="ddim">DDIM</option>
                                     <option value="beta">Beta</option>
                                     <option value="turbo">Turbo</option>
+                                    <option value="align_your_steps">Align Your Steps</option>
                                     <option value="align_your_steps_GITS">Align Your Steps GITS</option>
                                     <option value="align_your_steps_11">Align Your Steps 11</option>
                                     <option value="align_your_steps_32">Align Your Steps 32</option>
                                 </select>
                             </td></tr>
+                            <tr class="pl-5 warning-label displayNone" id="fluxSchedulerWarning"><td>Tip:</td><td>This scheduler does not work well with Flux!</td></tr>
                             <tr class="pl-5"><td><label>Image Size: </label></td><td id="image-size-options">
                                 <select id="width" name="width" value="512">
                                     <option value="128">128</option>
@@ -437,10 +446,11 @@
                                 <div id="small_image_warning" class="displayNone warning-label">Small image sizes can cause bad image quality</div>
                             </td></tr>
                             <tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" type="number" min="1" step="1" style="width: 42pt" value="25" onkeypress="preventNonNumericalInput(event)" inputmode="numeric"></td></tr>
+                            <tr class="pl-5 warning-label displayNone" id="fluxSchedulerStepsWarning"><td>Tip:</td><td>This scheduler needs 15 steps or more</td></tr>
                             <tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="11" max="500"> <input id="guidance_scale" name="guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
                             <tr class="pl-5 displayNone warning-label" id="guidanceWarning"><td></td><td id="guidanceWarningText"></td></tr>
                             <tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"><br/></td></tr>
-                            <tr id="distilled_guidance_scale_container" class="pl-5 displayNone"><td><label for="distilled_guidance_scale_slider">Distilled Guidance:</label></td><td> <input id="distilled_guidance_scale_slider" name="distilled_guidance_scale_slider" class="editor-slider" value="35" type="range" min="11" max="500"> <input id="distilled_guidance_scale" name="distilled_guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
+                            <tr id="distilled_guidance_scale_container" class="pl-5 gated-feature" data-feature-keys="backend_webui"><td><label for="distilled_guidance_scale_slider">Distilled Guidance:</label></td><td> <input id="distilled_guidance_scale_slider" name="distilled_guidance_scale_slider" class="editor-slider" value="35" type="range" min="11" max="500"> <input id="distilled_guidance_scale" name="distilled_guidance_scale" size="4" pattern="^[0-9\.]+$" onkeypress="preventNonNumericalInput(event)" inputmode="decimal"></td></tr>
                             <tr id="lora_model_container" class="pl-5 gated-feature" data-feature-keys="backend_ed_diffusers backend_webui">
                                 <td>
                                     <label for="lora_model">LoRA:</label>
@@ -31,6 +31,8 @@ const SETTINGS_IDS_LIST = [
    "stream_image_progress",
    "use_face_correction",
    "gfpgan_model",
+    "codeformer_fidelity",
+    "codeformer_upscale_faces",
    "use_upscale",
    "upscale_amount",
    "latent_upscaler_steps",
@@ -58,6 +60,7 @@ const SETTINGS_IDS_LIST = [
    "extract_lora_from_prompt",
    "embedding-card-size-selector",
    "lora_model",
+    "text_encoder_model",
    "enable_vae_tiling",
    "controlnet_alpha",
 ]
@@ -180,23 +183,6 @@ function loadSettings() {
            }
        })
        CURRENTLY_LOADING_SETTINGS = false
-    } else if (localStorage.length < 2) {
-        // localStorage is too short for OldSettings
-        // So this is likely the first time Easy Diffusion is running.
-        // Initialize vram_usage_level based on the available VRAM
-        function initGPUProfile(event) {
-            if (
-                "detail" in event &&
-                "active" in event.detail &&
-                "cuda:0" in event.detail.active &&
-                event.detail.active["cuda:0"].mem_total < 4.5
-            ) {
-                vramUsageLevelField.value = "low"
-                vramUsageLevelField.dispatchEvent(new Event("change"))
-            }
-            document.removeEventListener("system_info_update", initGPUProfile)
-        }
-        document.addEventListener("system_info_update", initGPUProfile)
    } else {
        CURRENTLY_LOADING_SETTINGS = true
        tryLoadOldSettings()
@@ -394,6 +394,45 @@ const TASK_MAPPING = {
            return val
        },
    },
+    use_text_encoder_model: {
+        name: "Text Encoder model",
+        setUI: (use_text_encoder_model) => {
+            let modelPaths = []
+            use_text_encoder_model = use_text_encoder_model === null ? "" : use_text_encoder_model
+            use_text_encoder_model = Array.isArray(use_text_encoder_model) ? use_text_encoder_model : [use_text_encoder_model]
+            use_text_encoder_model.forEach((m) => {
+                if (m.includes("models\\text-encoder\\")) {
+                    m = m.split("models\\text-encoder\\")[1]
+                } else if (m.includes("models\\\\text-encoder\\\\")) {
+                    m = m.split("models\\\\text-encoder\\\\")[1]
+                } else if (m.includes("models/text-encoder/")) {
+                    m = m.split("models/text-encoder/")[1]
+                }
+                m = m.replaceAll("\\\\", "/")
+                m = getModelPath(m, [".safetensors", ".sft"])
+                modelPaths.push(m)
+            })
+            textEncoderModelField.modelNames = modelPaths
+        },
+        readUI: () => {
+            return textEncoderModelField.modelNames
+        },
+        parse: (val) => {
+            val = !val || val === "None" ? "" : val
+            if (typeof val === "string" && val.includes(",")) {
+                val = val.split(",")
+                val = val.map((v) => v.trim())
+                val = val.map((v) => v.replaceAll("\\", "\\\\"))
+                val = val.map((v) => v.replaceAll('"', ""))
+                val = val.map((v) => v.replaceAll("'", ""))
+                val = val.map((v) => '"' + v + '"')
+                val = "[" + val + "]"
+                val = JSON.parse(val)
+            }
+            val = Array.isArray(val) ? val : [val]
+            return val
+        },
+    },
    use_hypernetwork_model: {
        name: "Hypernetwork model",
        setUI: (use_hypernetwork_model) => {
@@ -620,6 +659,7 @@ const TASK_TEXT_MAPPING = {
    hypernetwork_strength: "Hypernetwork Strength",
    use_lora_model: "LoRA model",
    lora_alpha: "LoRA Strength",
+    use_text_encoder_model: "Text Encoder model",
    use_controlnet_model: "ControlNet model",
    control_filter_to_apply: "ControlNet Filter",
    control_alpha: "ControlNet Strength",
@@ -54,6 +54,7 @@ const taskConfigSetup = {
        label: "Hypernetwork Strength",
        visible: ({ reqBody }) => !!reqBody?.use_hypernetwork_model,
    },
+    use_text_encoder_model: { label: "Text Encoder", visible: ({ reqBody }) => !!reqBody?.use_text_encoder_model },
    use_lora_model: { label: "Lora Model", visible: ({ reqBody }) => !!reqBody?.use_lora_model },
    lora_alpha: { label: "Lora Strength", visible: ({ reqBody }) => !!reqBody?.use_lora_model },
    preserve_init_image_color_profile: "Preserve Color Profile",
@@ -141,6 +142,7 @@ let tilingField = document.querySelector("#tiling")
 let controlnetModelField = new ModelDropdown(document.querySelector("#controlnet_model"), "controlnet", "None", false)
 let vaeModelField = new ModelDropdown(document.querySelector("#vae_model"), "vae", "None")
 let loraModelField = new MultiModelSelector(document.querySelector("#lora_model"), "lora", "LoRA", 0.5, 0.02)
+let textEncoderModelField = new MultiModelSelector(document.querySelector("#text_encoder_model"), "text-encoder", "Text Encoder", 0.5, 0.02, false)
 let hypernetworkModelField = new ModelDropdown(document.querySelector("#hypernetwork_model"), "hypernetwork", "None")
 let hypernetworkStrengthSlider = document.querySelector("#hypernetwork_strength_slider")
 let hypernetworkStrengthField = document.querySelector("#hypernetwork_strength")
@@ -623,6 +625,13 @@ function onUseAsInputClick(req, img) {
    initImagePreview.src = imgData

    maskSetting.checked = false
+
+    //Force the image settings size to match the input, as inpaint currently only works correctly
+    //if input image and generate sizes match.
+    addImageSizeOption(img.naturalWidth);
+    addImageSizeOption(img.naturalHeight);
+    widthField.value = img.naturalWidth;
+    heightField.value = img.naturalHeight;
 }

 function onUseForControlnetClick(req, img) {
@@ -1389,6 +1398,7 @@ function getCurrentUserRequest() {
        newTask.reqBody.hypernetwork_strength = parseFloat(hypernetworkStrengthField.value)
    }
    if (testDiffusers.checked) {
+        // lora
        let loraModelData = loraModelField.value
        let modelNames = loraModelData["modelNames"]
        let modelStrengths = loraModelData["modelWeights"]
@@ -1401,6 +1411,18 @@ function getCurrentUserRequest() {
            newTask.reqBody.lora_alpha = modelStrengths
        }

+        // text encoder
+        let textEncoderModelNames = textEncoderModelField.modelNames
+
+        if (textEncoderModelNames.length > 0) {
+            textEncoderModelNames = textEncoderModelNames.length == 1 ? textEncoderModelNames[0] : textEncoderModelNames
+
+            newTask.reqBody.use_text_encoder_model = textEncoderModelNames
+        } else {
+            newTask.reqBody.use_text_encoder_model = ""
+        }
+
+        // vae tiling
        if (tilingField.value !== "none") {
            newTask.reqBody.tiling = tilingField.value
        }
@@ -1879,8 +1901,36 @@ controlImagePreview.addEventListener("load", onControlnetModelChange)
 controlImagePreview.addEventListener("unload", onControlnetModelChange)
 onControlnetModelChange()

-// tip for Flux
+document.addEventListener("refreshModels", function() {
+    onFixFaceModelChange()
+    onControlnetModelChange()
+})
+
+// utilities for Flux and Chroma
 let sdModelField = document.querySelector("#stable_diffusion_model")
+
+// function checkAndSetDependentModels() {
+//     let sdModel = sdModelField.value.toLowerCase()
+//     let isFlux = sdModel.includes("flux")
+//     let isChroma = sdModel.includes("chroma")
+
+//     if (isFlux || isChroma) {
+//         vaeModelField.value = "ae"
+
+//         if (isFlux) {
+//             textEncoderModelField.modelNames = ["t5xxl_fp16", "clip_l"]
+//         } else {
+//             textEncoderModelField.modelNames = ["t5xxl_fp16"]
+//         }
+//     } else {
+//         if (vaeModelField.value == "ae") {
+//             vaeModelField.value = ""
+//         }
+//         textEncoderModelField.modelNames = []
+//     }
+// }
+// sdModelField.addEventListener("change", checkAndSetDependentModels)

 function checkGuidanceValue() {
    let guidance = parseFloat(guidanceScaleField.value)
    let guidanceWarning = document.querySelector("#guidanceWarning")
@@ -1905,15 +1955,16 @@ sdModelField.addEventListener("change", checkGuidanceValue)
 guidanceScaleField.addEventListener("change", checkGuidanceValue)
 guidanceScaleSlider.addEventListener("change", checkGuidanceValue)

-function checkGuidanceScaleVisibility() {
-    let guidanceScaleContainer = document.querySelector("#distilled_guidance_scale_container")
-    if (sdModelField.value.toLowerCase().includes("flux")) {
-        guidanceScaleContainer.classList.remove("displayNone")
-    } else {
-        guidanceScaleContainer.classList.add("displayNone")
-    }
-}
-sdModelField.addEventListener("change", checkGuidanceScaleVisibility)
+// disabling until we can detect flux models more reliably
+// function checkGuidanceScaleVisibility() {
+//     let guidanceScaleContainer = document.querySelector("#distilled_guidance_scale_container")
+//     if (sdModelField.value.toLowerCase().includes("flux")) {
+//         guidanceScaleContainer.classList.remove("displayNone")
+//     } else {
+//         guidanceScaleContainer.classList.add("displayNone")
+//     }
+// }
+// sdModelField.addEventListener("change", checkGuidanceScaleVisibility)

 function checkFluxSampler() {
    let samplerWarning = document.querySelector("#fluxSamplerWarning")
@@ -1927,13 +1978,53 @@ function checkFluxSampler() {
            samplerWarning.classList.add("displayNone")
        }
    }
 }

+function checkFluxScheduler() {
+    const badSchedulers = ["automatic", "uniform", "turbo", "align_your_steps", "align_your_steps_GITS", "align_your_steps_11", "align_your_steps_32"]
+
+    let schedulerWarning = document.querySelector("#fluxSchedulerWarning")
+    if (sdModelField.value.toLowerCase().includes("flux")) {
+        if (badSchedulers.includes(schedulerField.value)) {
+            schedulerWarning.classList.remove("displayNone")
+        } else {
+            schedulerWarning.classList.add("displayNone")
+        }
+    } else {
+        schedulerWarning.classList.add("displayNone")
+    }
+}
+
+function checkFluxSchedulerSteps() {
+    const problematicSchedulers = ["karras", "exponential", "polyexponential"]
+
+    let schedulerWarning = document.querySelector("#fluxSchedulerStepsWarning")
+    if (sdModelField.value.toLowerCase().includes("flux") && parseInt(numInferenceStepsField.value) < 15) {
+        if (problematicSchedulers.includes(schedulerField.value)) {
+            schedulerWarning.classList.remove("displayNone")
+        } else {
+            schedulerWarning.classList.add("displayNone")
+        }
+    } else {
+        schedulerWarning.classList.add("displayNone")
+    }
+}
+
 sdModelField.addEventListener("change", checkFluxSampler)
 samplerField.addEventListener("change", checkFluxSampler)

+sdModelField.addEventListener("change", checkFluxScheduler)
+schedulerField.addEventListener("change", checkFluxScheduler)
+
+sdModelField.addEventListener("change", checkFluxSchedulerSteps)
+schedulerField.addEventListener("change", checkFluxSchedulerSteps)
+numInferenceStepsField.addEventListener("change", checkFluxSchedulerSteps)
+
 document.addEventListener("refreshModels", function() {
+    // checkAndSetDependentModels()
    checkGuidanceValue()
-    checkGuidanceScaleVisibility()
+    // checkGuidanceScaleVisibility()
    checkFluxSampler()
+    checkFluxScheduler()
+    checkFluxSchedulerSteps()
 })

 // function onControlImageFilterChange() {
@@ -10,6 +10,7 @@ class MultiModelSelector {
    root
    modelType
    modelNameFriendly
+    showWeights
    defaultWeight
    weightStep
@@ -35,13 +36,13 @@ class MultiModelSelector {
        if (typeof modelData !== "object") {
            throw new Error("Multi-model selector expects an object containing modelNames and modelWeights as keys!")
        }
-        if (!("modelNames" in modelData) || !("modelWeights" in modelData)) {
+        if (!("modelNames" in modelData) || (this.showWeights && !("modelWeights" in modelData))) {
            throw new Error("modelNames or modelWeights not present in the data passed to the multi-model selector")
        }

        let newModelNames = modelData["modelNames"]
        let newModelWeights = modelData["modelWeights"]
-        if (newModelNames.length !== newModelWeights.length) {
+        if (newModelWeights && newModelNames.length !== newModelWeights.length) {
            throw new Error("Need to pass an equal number of modelNames and modelWeights!")
        }
@ -50,7 +51,7 @@ class MultiModelSelector {
|
|||||||
// the root of all this unholiness is because searchable-models automatically dispatches an update event
|
// the root of all this unholiness is because searchable-models automatically dispatches an update event
|
||||||
// as soon as the value is updated via JS, which is against the DOM pattern of not dispatching an event automatically
|
// as soon as the value is updated via JS, which is against the DOM pattern of not dispatching an event automatically
|
||||||
// unless the caller explicitly dispatches the event.
|
// unless the caller explicitly dispatches the event.
|
||||||
this.modelWeights = newModelWeights
|
this.modelWeights = newModelWeights || []
|
||||||
this.modelNames = newModelNames
|
this.modelNames = newModelNames
|
||||||
}
|
}
|
||||||
get disabled() {
|
get disabled() {
|
||||||
@ -91,10 +92,11 @@ class MultiModelSelector {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
constructor(root, modelType, modelNameFriendly = undefined, defaultWeight = 0.5, weightStep = 0.02) {
|
constructor(root, modelType, modelNameFriendly = undefined, defaultWeight = 0.5, weightStep = 0.02, showWeights = true) {
|
||||||
this.root = root
|
this.root = root
|
||||||
this.modelType = modelType
|
this.modelType = modelType
|
||||||
this.modelNameFriendly = modelNameFriendly || modelType
|
this.modelNameFriendly = modelNameFriendly || modelType
|
||||||
|
this.showWeights = showWeights
|
||||||
this.defaultWeight = defaultWeight
|
this.defaultWeight = defaultWeight
|
||||||
this.weightStep = weightStep
|
this.weightStep = weightStep
|
||||||
|
|
||||||
@ -135,10 +137,13 @@ class MultiModelSelector {
|
|||||||
|
|
||||||
const modelElement = document.createElement("div")
|
const modelElement = document.createElement("div")
|
||||||
modelElement.className = "model_entry"
|
modelElement.className = "model_entry"
|
||||||
modelElement.innerHTML = `
|
let html = `<input id="${this.modelType}_${idx}" class="model_name model-filter" type="text" spellcheck="false" autocomplete="off" data-path="" />`
|
||||||
<input id="${this.modelType}_${idx}" class="model_name model-filter" type="text" spellcheck="false" autocomplete="off" data-path="" />
|
|
||||||
<input class="model_weight" type="number" step="${this.weightStep}" value="${this.defaultWeight}" pattern="^-?[0-9]*\.?[0-9]*$" onkeypress="preventNonNumericalInput(event)">
|
if (this.showWeights) {
|
||||||
`
|
html += `<input class="model_weight" type="number" step="${this.weightStep}" value="${this.defaultWeight}" pattern="^-?[0-9]*\.?[0-9]*$" onkeypress="preventNonNumericalInput(event)">`
|
||||||
|
}
|
||||||
|
modelElement.innerHTML = html
|
||||||
|
|
||||||
this.modelContainer.appendChild(modelElement)
|
this.modelContainer.appendChild(modelElement)
|
||||||
|
|
||||||
let modelNameEl = modelElement.querySelector(".model_name")
|
let modelNameEl = modelElement.querySelector(".model_name")
|
||||||
@ -160,8 +165,8 @@ class MultiModelSelector {
|
|||||||
|
|
||||||
modelNameEl.addEventListener("change", makeUpdateEvent("change"))
|
modelNameEl.addEventListener("change", makeUpdateEvent("change"))
|
||||||
modelNameEl.addEventListener("input", makeUpdateEvent("input"))
|
modelNameEl.addEventListener("input", makeUpdateEvent("input"))
|
||||||
modelWeightEl.addEventListener("change", makeUpdateEvent("change"))
|
modelWeightEl?.addEventListener("change", makeUpdateEvent("change"))
|
||||||
modelWeightEl.addEventListener("input", makeUpdateEvent("input"))
|
modelWeightEl?.addEventListener("input", makeUpdateEvent("input"))
|
||||||
|
|
||||||
let removeBtn = document.createElement("button")
|
let removeBtn = document.createElement("button")
|
||||||
removeBtn.className = "remove_model_btn"
|
removeBtn.className = "remove_model_btn"
|
||||||
@ -218,10 +223,14 @@ class MultiModelSelector {
|
|||||||
}
|
}
|
||||||
|
|
||||||
get modelWeights() {
|
get modelWeights() {
|
||||||
return this.getModelElements(true).map((e) => e.weight.value)
|
return this.getModelElements(true).map((e) => e.weight?.value)
|
||||||
}
|
}
|
||||||
|
|
||||||
set modelWeights(newModelWeights) {
|
set modelWeights(newModelWeights) {
|
||||||
|
if (!this.showWeights) {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
this.resizeEntryList(newModelWeights.length)
|
this.resizeEntryList(newModelWeights.length)
|
||||||
|
|
||||||
if (newModelWeights.length === 0) {
|
if (newModelWeights.length === 0) {
|
||||||
|
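Taken together, these changes make the weight column optional per selector instance. A usage sketch under stated assumptions — the container elements are invented; only the constructor signature and the weight behaviour come from the diff above:

```js
// hypothetical container elements; not part of the diff
const loraRoot = document.querySelector("#lora_container")
const embedRoot = document.querySelector("#embedding_container")

// default (showWeights = true): each entry renders a model-name input plus a weight input
const loraSelector = new MultiModelSelector(loraRoot, "lora", "LoRA", 0.5, 0.02)

// showWeights = false: no weight inputs are rendered, the modelWeights getter yields
// undefined entries, and the modelWeights setter returns early without doing anything
const embedSelector = new MultiModelSelector(embedRoot, "embedding", "Embedding", 0.5, 0.02, false)
```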
Device labels in `setDeviceInfo` skip memory stats when `mem_total` is zero:

```diff
@@ -658,7 +658,7 @@ function setDeviceInfo(devices) {
 
     function ID_TO_TEXT(d) {
         let info = devices.all[d]
-        if ("mem_free" in info && "mem_total" in info) {
+        if ("mem_free" in info && "mem_total" in info && info["mem_total"] > 0) {
             return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(
                 1
             )} Gb total)</small>`
```
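The added `info["mem_total"] > 0` guard keeps devices that report zeroed memory fields (a CPU entry, for instance) from rendering a meaningless "0.0Gb free / 0.0 Gb total" label; they fall through to whatever label the function builds without memory stats. A sketch with made-up device data:

```js
// made-up device entries for illustration; the field names match the diff above
const devices = {
    all: {
        "cuda:0": { name: "GeForce RTX 3060", mem_free: 10.2, mem_total: 12.0 },
        cpu: { name: "CPU", mem_free: 0, mem_total: 0 },
    },
}

for (const d in devices.all) {
    const info = devices.all[d]
    // old check: "cpu" passes and renders "0.0Gb free / 0.0 Gb total"
    // new check: info["mem_total"] > 0 rejects it
    const showMem = "mem_free" in info && "mem_total" in info && info["mem_total"] > 0
    console.log(d, showMem ? `${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(1)} Gb total` : "no memory stats")
}
```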
`ui/plugins/ui/snow.plugin.js` (new file, 80 lines):

```js
// christmas hack, courtesy: https://pajasevi.github.io/CSSnowflakes/

;(function(){
    "use strict";

    function makeItSnow() {
        const styleSheet = document.createElement("style")
        styleSheet.textContent = `
            /* customizable snowflake styling */
            .snowflake {
                color: #fff;
                font-size: 1em;
                font-family: Arial, sans-serif;
                text-shadow: 0 0 5px #000;
            }

            .snowflake,.snowflake .inner{animation-iteration-count:infinite;animation-play-state:running}@keyframes snowflakes-fall{0%{transform:translateY(0)}100%{transform:translateY(110vh)}}@keyframes snowflakes-shake{0%,100%{transform:translateX(0)}50%{transform:translateX(80px)}}.snowflake{position:fixed;top:-10%;z-index:9999;-webkit-user-select:none;user-select:none;cursor:default;animation-name:snowflakes-shake;animation-duration:3s;animation-timing-function:ease-in-out}.snowflake .inner{animation-duration:10s;animation-name:snowflakes-fall;animation-timing-function:linear}.snowflake:nth-of-type(0){left:1%;animation-delay:0s}.snowflake:nth-of-type(0) .inner{animation-delay:0s}.snowflake:first-of-type{left:10%;animation-delay:1s}.snowflake:first-of-type .inner,.snowflake:nth-of-type(8) .inner{animation-delay:1s}.snowflake:nth-of-type(2){left:20%;animation-delay:.5s}.snowflake:nth-of-type(2) .inner,.snowflake:nth-of-type(6) .inner{animation-delay:6s}.snowflake:nth-of-type(3){left:30%;animation-delay:2s}.snowflake:nth-of-type(11) .inner,.snowflake:nth-of-type(3) .inner{animation-delay:4s}.snowflake:nth-of-type(4){left:40%;animation-delay:2s}.snowflake:nth-of-type(10) .inner,.snowflake:nth-of-type(4) .inner{animation-delay:2s}.snowflake:nth-of-type(5){left:50%;animation-delay:3s}.snowflake:nth-of-type(5) .inner{animation-delay:8s}.snowflake:nth-of-type(6){left:60%;animation-delay:2s}.snowflake:nth-of-type(7){left:70%;animation-delay:1s}.snowflake:nth-of-type(7) .inner{animation-delay:2.5s}.snowflake:nth-of-type(8){left:80%;animation-delay:0s}.snowflake:nth-of-type(9){left:90%;animation-delay:1.5s}.snowflake:nth-of-type(9) .inner{animation-delay:3s}.snowflake:nth-of-type(10){left:25%;animation-delay:0s}.snowflake:nth-of-type(11){left:65%;animation-delay:2.5s}
        `
        document.head.appendChild(styleSheet)

        const snowflakes = document.createElement("div")
        snowflakes.id = "snowflakes-container"
        snowflakes.innerHTML = `
            <div class="snowflakes" aria-hidden="true">
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
                <div class="snowflake">
                    <div class="inner">❅</div>
                </div>
            </div>`

        document.body.appendChild(snowflakes)

        const script = document.createElement("script")
        script.innerHTML = `
            $(document).ready(function() {
                setTimeout(function() {
                    $("#snowflakes-container").fadeOut("slow", function() {$(this).remove()})
                }, 10 * 1000)
            })
        `
        document.body.appendChild(script)
    }

    let date = new Date()
    if (date.getMonth() === 11 && date.getDate() >= 12) {
        makeItSnow()
    }
})()
```
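Note that the teardown is injected as a `<script>` using `$(document).ready` and `fadeOut`, so the plugin assumes jQuery (`$`) is already loaded in the page. A dependency-free sketch of the same 10-second removal, in case that assumption doesn't hold:

```js
// vanilla-JS equivalent of the injected jQuery teardown (a sketch, not part of
// the plugin): fade the snowflakes out after 10 seconds, then remove them
setTimeout(() => {
    const el = document.getElementById("snowflakes-container")
    if (!el) return
    el.style.transition = "opacity 0.6s"
    el.style.opacity = "0"
    el.addEventListener("transitionend", () => el.remove(), { once: true })
}, 10 * 1000)
```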