mirror of https://github.com/easydiffusion/easydiffusion.git (synced 2024-11-22 16:23:28 +01:00)

Merge remote-tracking branch 'upstream/main' into beta

This commit is contained in: commit 9764d9109f
@@ -13,12 +13,11 @@ If you would like to contribute to this project, there is a discord for discussion:

 This is in-flux, but one way to get a development environment running for editing the UI of this project is:
 (swap `.sh` or `.bat` in instructions depending on your environment, and be sure to adjust any paths to match where you're working)

-1) `git clone` the repository, e.g. to `/projects/stable-diffusion-ui-repo`
-2) Download the pre-built end user archive from the link on github, and extract it, e.g. to `/projects/stable-diffusion-ui-archive`
-3) `cd /projects/stable-diffusion-ui-archive` and run the script to set up and start the project, e.g. `start.sh`
-4) Check you can view and generate images on `localhost:9000`
-5) Close the server, and edit `/projects/stable-diffusion-ui-archive/scripts/on_env_start.sh`
-6) Comment out the lines near the bottom that copy the `files/ui` folder, e.g.:
+1) Install the project to a new location using the [usual installation process](https://github.com/cmdr2/stable-diffusion-ui#installation), e.g. to `/projects/stable-diffusion-ui-archive`
+2) Start the newly installed project, and check that you can view and generate images on `localhost:9000`
+3) Next, clone the project repository using `git clone` (e.g. to `/projects/stable-diffusion-ui-repo`)
+4) Close the server (started in step 2), and edit `/projects/stable-diffusion-ui-archive/scripts/on_env_start.sh` (or `on_env_start.bat`)
+5) Comment out the lines near the bottom that copy the `files/ui` folder, e.g.:

 for `.sh`
 ```
@@ -33,13 +32,13 @@ REM @xcopy sd-ui-files\ui ui /s /i /Y
 REM @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
 REM @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
 ```
-7) Comment out the line at the top of `/projects/stable-diffusion-ui-archive/scripts/on_sd_start.sh` that copies `on_env_start`. For e.g. `@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y`
+6) Next, comment out the line at the top of `/projects/stable-diffusion-ui-archive/scripts/on_sd_start.sh` (or `on_sd_start.bat`) that copies `on_env_start`, e.g. `@rem @copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y`
 8) Delete the current `ui` folder at `/projects/stable-diffusion-ui-archive/ui`
 9) Now make a symlink between the repository clone (where you will be making changes) and this archive (where you will be running Stable Diffusion):
 `ln -s /projects/stable-diffusion-ui-repo/ui /projects/stable-diffusion-ui-archive/ui`
 or for Windows
-`mklink /D \projects\stable-diffusion-ui-archive\ui \projects\stable-diffusion-ui-repo\ui` (link name first, source repo dir second)
-9) Run the archive again `start.sh` and ensure you can still use the UI.
+`mklink /J \projects\stable-diffusion-ui-archive\ui \projects\stable-diffusion-ui-repo\ui` (link name first, source repo dir second)
+9) Run the project again (like in step 2) and ensure you can still use the UI.
 10) Congrats, now any changes you make in your repo `ui` folder are linked to this running archive of the app and can be previewed in the browser.
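
To sanity-check the link before moving on, you can confirm it resolves into the repo clone. A minimal sketch on Linux/Mac, using the example paths from the steps above:

```
ls -ld /projects/stable-diffusion-ui-archive/ui
# expected to end with: ui -> /projects/stable-diffusion-ui-repo/ui

# a file created in the repo should be visible through the link
touch /projects/stable-diffusion-ui-repo/ui/.link-test
ls /projects/stable-diffusion-ui-archive/ui/.link-test && rm /projects/stable-diffusion-ui-repo/ui/.link-test
```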

Check the `ui/frontend/build/README.md` for instructions on running and building the React code.
@@ -47,9 +46,5 @@ Check the `ui/frontend/build/README.md` for instructions on running and building the React code.

 ## Development environment for Installer changes
 Build the Windows installer using Windows, and the Linux installer using Linux. Don't mix the two, and don't use WSL. An Ubuntu VM is fine for building the Linux installer on a Windows host.

-1. Install Miniconda 3 or Anaconda.
-2. Install `conda install -c conda-forge -y conda-pack`
-3. Open the Anaconda Prompt. Do not use WSL if you're building for Windows.
-4. Run `build.bat` or `./build.sh` depending on whether you're in Windows or Linux.
-5. Compress the `stable-diffusion-ui` folder created inside the `dist` folder. Make a `zip` for Windows, and `tar.xz` for Linux (smaller files, and Linux users already have tar).
-6. Make a new GitHub release and upload the Windows and Linux installer builds.
+1. Run `build.bat` or `./build.sh` depending on whether you're in Windows or Linux.
+2. Make a new GitHub release and upload the Windows and Linux installer builds created inside the `dist` folder.

@@ -66,7 +66,7 @@ Useful for judging (and stopping) an image quickly, without waiting for it to finish.

 # System Requirements
 1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
 2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
-3. Minimum 8 GB of RAM.
+3. Minimum 8 GB of RAM and 25 GB of disk space.

 You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.

build.sh (1 change)

@@ -28,6 +28,7 @@ mkdir -p dist/linux-mac/stable-diffusion-ui/scripts

 cp scripts/on_env_start.sh dist/linux-mac/stable-diffusion-ui/scripts/
 cp scripts/bootstrap.sh dist/linux-mac/stable-diffusion-ui/scripts/
+cp scripts/functions.sh dist/linux-mac/stable-diffusion-ui/scripts/
 cp scripts/start.sh dist/linux-mac/stable-diffusion-ui/
 cp LICENSE dist/linux-mac/stable-diffusion-ui/
 cp "CreativeML Open RAIL-M License" dist/linux-mac/stable-diffusion-ui/
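
Since the shell scripts in this commit now `source scripts/functions.sh` (see the new file further below), the Linux/Mac build has to ship that helper alongside the other scripts. A quick post-build sanity check; the expected listing is inferred from this diff, not a full manifest:

```
./build.sh
ls dist/linux-mac/stable-diffusion-ui/scripts/
# should now include functions.sh next to bootstrap.sh and on_env_start.sh
```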

@@ -8,6 +8,8 @@ set PATH=C:\Windows\System32;%PATH%

 if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
+if exist "installer_files\env" set PATH=%cd%\installer_files\env;%cd%\installer_files\env\Library\bin;%cd%\installer_files\env\Scripts;%cd%\installer_files\Library\usr\bin;%PATH%

+set PYTHONPATH=%cd%\installer;%cd%\installer_files\env

 @rem activate the installer env
 call conda activate

@@ -1,5 +1,6 @@
 @echo off

+cd /d %~dp0
 set PATH=C:\Windows\System32;%PATH%

 @rem set legacy installer's PATH, if it exists

@@ -11,6 +12,8 @@ call scripts\bootstrap.bat

 @rem set new installer's PATH, if it downloaded any packages
+if exist "installer_files\env" set PATH=%cd%\installer_files\env;%cd%\installer_files\env\Library\bin;%cd%\installer_files\env\Scripts;%cd%\installer_files\Library\usr\bin;%PATH%

+set PYTHONPATH=%cd%\installer;%cd%\installer_files\env

 @rem Test the bootstrap
 call where git
 call git --version

@@ -13,6 +13,11 @@ set LEGACY_INSTALL_ENV_DIR=%cd%\installer

 set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
 set umamba_exists=F

+set OLD_APPDATA=%APPDATA%
+set OLD_USERPROFILE=%USERPROFILE%
+set APPDATA=%cd%\installer_files\appdata
+set USERPROFILE=%cd%\profile

 @rem figure out whether git and conda needs to be installed
 if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%

@@ -35,7 +40,16 @@ if "%PACKAGES_TO_INSTALL%" NEQ "" (

     echo "Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to %MAMBA_ROOT_PREFIX%\micromamba.exe"

     mkdir "%MAMBA_ROOT_PREFIX%"
-    call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe"
+    call curl -Lk "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe"

+    @REM if "%ERRORLEVEL%" NEQ "0" (
+    @REM     echo "There was a problem downloading micromamba. Cannot continue."
+    @REM     pause
+    @REM     exit /b
+    @REM )

+    mkdir "%APPDATA%"
+    mkdir "%USERPROFILE%"

 @rem test the mamba binary
 echo Micromamba version:

@@ -57,3 +71,7 @@ if "%PACKAGES_TO_INSTALL%" NEQ "" (
         exit /b
     )
 )

+@rem revert to the old APPDATA; it was only needed to bypass a bug in micromamba (with special characters)
+set APPDATA=%OLD_APPDATA%
+set USERPROFILE=%OLD_USERPROFILE%

@@ -6,6 +6,9 @@

 # This enables a user to install this project without manually installing conda and git.

+source ./scripts/functions.sh

+set -o pipefail

 OS_NAME=$(uname -s)
 case "${OS_NAME}" in

@@ -29,6 +32,7 @@ export MAMBA_ROOT_PREFIX="$(pwd)/installer_files/mamba"

 INSTALL_ENV_DIR="$(pwd)/installer_files/env"
 LEGACY_INSTALL_ENV_DIR="$(pwd)/installer"
 MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${OS_NAME}-${OS_ARCH}/latest"
+umamba_exists="F"

 # figure out whether git and conda needs to be installed
 if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi

@@ -38,15 +42,25 @@ PACKAGES_TO_INSTALL=""

 if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda"; fi
 if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi

+if "$MAMBA_ROOT_PREFIX/micromamba" --version &>/dev/null; then umamba_exists="T"; fi

 # (if necessary) install git and conda into a contained environment
 if [ "$PACKAGES_TO_INSTALL" != "" ]; then
     # download micromamba
-    if [ ! -e "$MAMBA_ROOT_PREFIX/micromamba" ]; then
+    if [ "$umamba_exists" == "F" ]; then
         echo "Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to $MAMBA_ROOT_PREFIX/micromamba"

         mkdir -p "$MAMBA_ROOT_PREFIX"
         curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvj bin/micromamba -O > "$MAMBA_ROOT_PREFIX/micromamba"

+        if [ "$?" != "0" ]; then
+            echo
+            echo "EE micromamba download failed"
+            echo "EE If the lines above contain 'bzip2: Cannot exec', your system doesn't have bzip2 installed"
+            echo "EE If there are network errors, please check your internet setup"
+            fail "micromamba download failed"
+        fi

         chmod u+x "$MAMBA_ROOT_PREFIX/micromamba"

     # test the mamba binary
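
Worth noting: the `$?` check above only catches a failed `curl` in the `curl | tar` pipeline because of the `set -o pipefail` added near the top of this script; by default `$?` reflects only the last command in the pipe. A minimal illustration:

```
false | true
echo "without pipefail: $?"   # prints 0 - the failing first command is masked

set -o pipefail
false | true
echo "with pipefail: $?"      # prints 1 - the failure propagates to $?
```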

@@ -56,15 +70,17 @@ if [ "$PACKAGES_TO_INSTALL" != "" ]; then

     # create the installer env
     if [ ! -e "$INSTALL_ENV_DIR" ]; then
-        "$MAMBA_ROOT_PREFIX/micromamba" create -y --prefix "$INSTALL_ENV_DIR"
+        "$MAMBA_ROOT_PREFIX/micromamba" create -y --prefix "$INSTALL_ENV_DIR" || fail "unable to create the install environment"
     fi

+    if [ ! -e "$INSTALL_ENV_DIR" ]; then
+        fail "There was a problem while installing$PACKAGES_TO_INSTALL using micromamba. Cannot continue."
+    fi

     echo "Packages to install:$PACKAGES_TO_INSTALL"

     "$MAMBA_ROOT_PREFIX/micromamba" install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge $PACKAGES_TO_INSTALL

-    if [ ! -e "$INSTALL_ENV_DIR" ]; then
-        echo "There was a problem while installing$PACKAGES_TO_INSTALL using micromamba. Cannot continue."
-        exit
+    if [ "$?" != "0" ]; then
+        fail "Installation of the packages '$PACKAGES_TO_INSTALL' failed."
     fi
 fi

@@ -1,5 +1,7 @@
 #!/bin/bash

+cd "$(dirname "${BASH_SOURCE[0]}")"

 if [ "$0" == "bash" ]; then
     echo "Opening Stable Diffusion UI - Developer Console.."
     echo ""

@@ -35,5 +37,6 @@ if [ "$0" == "bash" ]; then
     echo ""
 else
-    bash --init-file developer_console.sh
+    file_name=$(basename "${BASH_SOURCE[0]}")
+    bash --init-file "$file_name"
 fi

scripts/functions.sh (new file, 32 lines)

@@ -0,0 +1,32 @@
+#
+# utility functions for all scripts
+#

+fail() {
+    echo
+    echo "EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE"
+    echo
+    if [ "$1" != "" ]; then
+        echo ERROR: $1
+    else
+        echo An error occurred.
+    fi
+    cat <<EOF

+Error downloading Stable Diffusion UI. Sorry about that, please try to:
+ 1. Run this installer again.
+ 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting
+ 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB
+ 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues

+Thanks!

+EOF
+    read -p "Press any key to continue"
+    exit 1
+}
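
The convention the other scripts in this commit adopt with this helper is `command || fail "reason"`, so every fallible step prints the same troubleshooting text and exits non-zero. A minimal usage sketch (the clone URL is only a placeholder):

```
#!/bin/bash
source ./scripts/functions.sh

git clone https://github.com/example/some-repo.git sd-ui-files || fail "git clone failed"
```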

@@ -50,7 +50,7 @@ if "%update_branch%"=="" (
     )
 )

-@xcopy sd-ui-files\ui ui /s /i /Y
+@xcopy sd-ui-files\ui ui /s /i /Y /q
 @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
 @copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y
 @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y

@@ -1,5 +1,7 @@
 #!/bin/bash

+source ./scripts/functions.sh

 printf "\n\nStable Diffusion UI\n\n"

 if [ -f "scripts/config.sh" ]; then

@@ -27,9 +29,7 @@ else
     if git clone -b "$update_branch" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then
         echo sd_ui_git_cloned >> scripts/install_status.txt
     else
-        printf "\n\nError downloading Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "git clone failed"
     fi
 fi

@@ -71,6 +71,8 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"

 set TMP=%cd%\tmp
 set TEMP=%cd%\tmp

+set PYTHONPATH=%cd%;%cd%\env\lib\site-packages

 @call conda env create --prefix env -f environment.yaml || (
     @echo. & echo "Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
     pause

@@ -108,6 +110,8 @@ set PATH=C:\Windows\System32;%PATH%

 set TMP=%cd%\tmp
 set TEMP=%cd%\tmp

+set PYTHONPATH=%cd%;%cd%\env\lib\site-packages

 @call pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN || (
     @echo. & echo "Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
     pause

@@ -141,6 +145,8 @@ set PATH=C:\Windows\System32;%PATH%

 set TMP=%cd%\tmp
 set TEMP=%cd%\tmp

+set PYTHONPATH=%cd%;%cd%\env\lib\site-packages

 @call pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan || (
     @echo. & echo "Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
     pause

@@ -168,6 +174,8 @@ set PATH=C:\Windows\System32;%PATH%

 set TMP=%cd%\tmp
 set TEMP=%cd%\tmp

+set PYTHONPATH=%cd%;%cd%\env\lib\site-packages

 @call conda install -c conda-forge -y --prefix env uvicorn fastapi || (
     echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
     pause

@@ -375,6 +383,9 @@ call python --version

 @set SD_UI_PATH=%cd%\ui
 @cd stable-diffusion

-@uvicorn server:app --app-dir "%SD_UI_PATH%" --port 9000 --host 0.0.0.0
+@if NOT DEFINED SD_UI_BIND_PORT set SD_UI_BIND_PORT=9000
+@if NOT DEFINED SD_UI_BIND_IP set SD_UI_BIND_IP=0.0.0.0
+@uvicorn server:app --app-dir "%SD_UI_PATH%" --port %SD_UI_BIND_PORT% --host %SD_UI_BIND_IP%

 @pause

@@ -1,5 +1,7 @@
 #!/bin/bash

+source ./scripts/functions.sh

 cp sd-ui-files/scripts/on_env_start.sh scripts/
 cp sd-ui-files/scripts/bootstrap.sh scripts/

@@ -7,14 +9,14 @@ cp sd-ui-files/scripts/bootstrap.sh scripts/

 CONDA_BASEPATH=$(conda info --base)
 source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # avoids the 'shell not initialized' error

-conda activate
+conda activate || fail "Failed to activate conda"

 # remove the old version of the dev console script, if it's still present
 if [ -e "open_dev_console.sh" ]; then
     rm "open_dev_console.sh"
 fi

 python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"

 # Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
 # Note to self: Please rewrite this in Python. For the sake of your own sanity.

@@ -28,8 +30,8 @@ if [ -e "scripts/install_status.txt" ] && [ `grep -c sd_git_cloned scripts/install_status.txt` -gt "0" ]; then

     git pull
     git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c

-    git apply ../ui/sd_internal/ddim_callback.patch
-    git apply ../ui/sd_internal/env_yaml.patch
+    git apply ../ui/sd_internal/ddim_callback.patch || fail "ddim patch failed"
+    git apply ../ui/sd_internal/env_yaml.patch || fail "yaml patch failed"

     cd ..
 else

@@ -38,16 +40,14 @@ else

     if git clone https://github.com/basujindal/stable-diffusion.git ; then
         echo sd_git_cloned >> scripts/install_status.txt
     else
-        printf "\n\nError downloading Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "git clone of basujindal/stable-diffusion.git failed"
     fi

     cd stable-diffusion
     git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c

-    git apply ../ui/sd_internal/ddim_callback.patch
-    git apply ../ui/sd_internal/env_yaml.patch
+    git apply ../ui/sd_internal/ddim_callback.patch || fail "ddim patch failed"
+    git apply ../ui/sd_internal/env_yaml.patch || fail "yaml patch failed"

     cd ..
 fi
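
If one of the `git apply ... || fail` calls above trips on an already-patched checkout, `git apply --check` is a convenient dry run for telling a bad patch apart from a dirty working tree (run from the `stable-diffusion` folder the script works in):

```
git apply --check ../ui/sd_internal/ddim_callback.patch && echo "patch would apply cleanly"
```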

@@ -57,37 +57,32 @@ cd stable-diffusion

 if [ `grep -c conda_sd_env_created ../scripts/install_status.txt` -gt "0" ]; then
     echo "Packages necessary for Stable Diffusion were already installed"

-    conda activate ./env
+    conda activate ./env || fail "conda activate failed"
 else
     printf "\n\nDownloading packages necessary for Stable Diffusion..\n"
     printf "\n\n***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..\n\n"

     # prevent conda from using packages from the user's home directory, to avoid conflicts
     export PYTHONNOUSERSITE=1
+    export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"

     if conda env create --prefix env --force -f environment.yaml ; then
         echo "Installed. Testing.."
     else
-        printf "\n\nError installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "'conda env create' failed"
     fi

-    conda activate ./env
+    conda activate ./env || fail "conda activate failed"

     if conda install -c conda-forge --prefix ./env -y antlr4-python3-runtime=4.8 ; then
         echo "Installed. Testing.."
     else
-        printf "\n\nError installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error installing antlr4-python3-runtime"
     fi

     out_test=`python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"`
     if [ "$out_test" != "42" ]; then
-        printf "\n\nDependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Dependency test failed"
     fi

     echo conda_sd_env_created >> ../scripts/install_status.txt
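
A note on the `export PYTHONNOUSERSITE=1` above: it tells Python to skip the per-user `~/.local` site-packages, which is exactly the conflict source the comment mentions. The switch is easy to observe directly (illustrative one-liners, assuming a stock `python` on PATH):

```
python -c "import site; print(site.ENABLE_USER_SITE)"                      # typically True
PYTHONNOUSERSITE=1 python -c "import site; print(site.ENABLE_USER_SITE)"   # False: user site-packages are skipped
```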

@@ -99,20 +94,20 @@ else

     printf "\n\nDownloading packages necessary for GFPGAN (Face Correction)..\n"

     export PYTHONNOUSERSITE=1
+    export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"

     if pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN ; then
         echo "Installed. Testing.."
     else
-        printf "\n\nError installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error installing the packages necessary for GFPGAN (Face Correction)."
     fi

     out_test=`python -c "from gfpgan import GFPGANer; print(42)"`
     if [ "$out_test" != "42" ]; then
-        printf "\n\nDependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        echo "EE The dependency check has failed. This usually means that some system libraries are missing."
+        echo "EE On Debian/Ubuntu systems, these are often the packages: libsm6 libxext6 libxrender-dev"
+        echo "EE Other Linux distributions might have different package names for these libraries."
+        fail "GFPGAN dependency test failed"
     fi

     echo conda_sd_gfpgan_deps_installed >> ../scripts/install_status.txt

@@ -124,20 +119,17 @@ else

     printf "\n\nDownloading packages necessary for ESRGAN (Resolution Upscaling)..\n"

     export PYTHONNOUSERSITE=1
+    export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"

     if pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan ; then
         echo "Installed. Testing.."
     else
-        printf "\n\nError installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error installing the packages necessary for ESRGAN"
     fi

     out_test=`python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"`
     if [ "$out_test" != "42" ]; then
-        printf "\n\nDependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "ESRGAN dependency test failed"
     fi

     echo conda_sd_esrgan_deps_installed >> ../scripts/install_status.txt

@@ -149,19 +141,16 @@ else

     printf "\n\nDownloading packages necessary for Stable Diffusion UI..\n\n"

     export PYTHONNOUSERSITE=1
+    export PYTHONPATH="$(pwd):$(pwd)/env/lib/site-packages"

     if conda install -c conda-forge --prefix ./env -y uvicorn fastapi ; then
         echo "Installed. Testing.."
     else
-        printf "\n\nError installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "'conda install uvicorn' failed"
     fi

     if ! command -v uvicorn &> /dev/null; then
-        printf "\n\nUI packages not found! Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "UI packages not found!"
     fi

     echo conda_sd_ui_deps_installed >> ../scripts/install_status.txt

@@ -193,15 +182,10 @@ if [ ! -f "sd-v1-4.ckpt" ]; then

     if [ -f "sd-v1-4.ckpt" ]; then
         model_size=`find "sd-v1-4.ckpt" -printf "%s"`
         if [ ! "$model_size" == "4265380512" ]; then
-            printf "\n\nError: The downloaded model file was invalid! Bytes downloaded: $model_size\n\n"
-            printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-            read -p "Press any key to continue"
-            exit
+            fail "The downloaded model file was invalid! Bytes downloaded: $model_size"
         fi
     else
-        printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error downloading the data files (weights) for Stable Diffusion"
     fi
 fi
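
The same fetch-then-verify shape repeats for each model file below (GFPGAN and the two ESRGAN weights), differing only in file name and expected byte count. Distilled into a sketch for reference; the helper name and arguments are illustrative, and the real download URLs live in the elided parts of the script:

```
download_and_verify() {
    local url="$1" out="$2" expected_size="$3"
    curl -L "$url" > "$out" || fail "Error downloading $out"
    local actual_size=`find "$out" -printf "%s"`
    if [ "$actual_size" != "$expected_size" ]; then
        fail "The downloaded $out was invalid! Bytes downloaded: $actual_size"
    fi
}

# e.g. for the checkpoint above:
# download_and_verify "<model url>" "sd-v1-4.ckpt" "4265380512"
```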

@@ -225,15 +209,10 @@ if [ ! -f "GFPGANv1.3.pth" ]; then

     if [ -f "GFPGANv1.3.pth" ]; then
         model_size=`find "GFPGANv1.3.pth" -printf "%s"`
         if [ ! "$model_size" -eq "348632874" ]; then
-            printf "\n\nError: The downloaded GFPGAN model file was invalid! Bytes downloaded: $model_size\n\n"
-            printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-            read -p "Press any key to continue"
-            exit
+            fail "The downloaded GFPGAN model file was invalid! Bytes downloaded: $model_size"
         fi
     else
-        printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error downloading the data files (weights) for GFPGAN (Face Correction)."
     fi
 fi

@@ -257,15 +236,10 @@ if [ ! -f "RealESRGAN_x4plus.pth" ]; then

     if [ -f "RealESRGAN_x4plus.pth" ]; then
         model_size=`find "RealESRGAN_x4plus.pth" -printf "%s"`
         if [ ! "$model_size" -eq "67040989" ]; then
-            printf "\n\nError: The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: $model_size\n\n"
-            printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-            read -p "Press any key to continue"
-            exit
+            fail "The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: $model_size"
         fi
     else
-        printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus"
     fi
 fi

@@ -289,15 +263,10 @@ if [ ! -f "RealESRGAN_x4plus_anime_6B.pth" ]; then

     if [ -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
         model_size=`find "RealESRGAN_x4plus_anime_6B.pth" -printf "%s"`
         if [ ! "$model_size" -eq "17938799" ]; then
-            printf "\n\nError: The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: $model_size\n\n"
-            printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-            read -p "Press any key to continue"
-            exit
+            fail "The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: $model_size"
         fi
     else
-        printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
-        read -p "Press any key to continue"
-        exit
+        fail "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime."
     fi
 fi

@@ -352,6 +321,6 @@ cd ..

 export SD_UI_PATH=`pwd`/ui
 cd stable-diffusion

-uvicorn server:app --app-dir "$SD_UI_PATH" --port 9000 --host 0.0.0.0
+uvicorn server:app --app-dir "$SD_UI_PATH" --port ${SD_UI_BIND_PORT:-9000} --host ${SD_UI_BIND_IP:-0.0.0.0}

 read -p "Press any key to continue"
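
With the `${VAR:-default}` expansions above (and the matching `@if NOT DEFINED` defaults added to the `.bat` version earlier in this commit), the bind address and port become overridable from the environment without editing the script, e.g. to serve on a different port bound only to loopback (values here are arbitrary):

```
SD_UI_BIND_PORT=9001 SD_UI_BIND_IP=127.0.0.1 ./start.sh
```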

@@ -6,17 +6,17 @@ cd "$(dirname "${BASH_SOURCE[0]}")"

 if [ -e "installer" ]; then export PATH="$(pwd)/installer/bin:$PATH"; fi

 # Setup the packages required for the installer
-scripts/bootstrap.sh
+scripts/bootstrap.sh || exit 1

 # set new installer's PATH, if it downloaded any packages
 if [ -e "installer_files/env" ]; then export PATH="$(pwd)/installer_files/env/bin:$PATH"; fi

 # Test the bootstrap
 which git
-git --version
+git --version || exit 1

 which conda
-conda --version
+conda --version || exit 1

 # Download the rest of the installer and UI
 scripts/on_env_start.sh

@@ -6,8 +6,8 @@

     <link rel="icon" type="image/png" href="/media/images/favicon-16x16.png" sizes="16x16">
     <link rel="icon" type="image/png" href="/media/images/favicon-32x32.png" sizes="32x32">
     <link rel="stylesheet" href="/media/css/fonts.css?v=1">
-    <link rel="stylesheet" href="/media/css/themes.css?v=2">
-    <link rel="stylesheet" href="/media/css/main.css?v=9">
+    <link rel="stylesheet" href="/media/css/themes.css?v=3">
+    <link rel="stylesheet" href="/media/css/main.css?v=17">
     <link rel="stylesheet" href="/media/css/auto-save.css?v=5">
     <link rel="stylesheet" href="/media/css/modifier-thumbnails.css?v=4">
     <link rel="stylesheet" href="/media/css/fontawesome-all.min.css?v=1">

@@ -19,7 +19,7 @@

     <div id="container">
         <div id="top-nav">
             <div id="logo">
-                <h1>Stable Diffusion UI <small>v2.3.13 <span id="updateBranchLabel"></span></small></h1>
+                <h1>Stable Diffusion UI <small>v2.4.5 <span id="updateBranchLabel"></span></small></h1>
             </div>
             <div id="server-status">
                 <div id="server-status-color">●</div>

@@ -227,7 +227,10 @@

     <div id="preview" class="col-free">
         <div id="initial-text">
-            Type a prompt and press the "Make Image" button.<br/><br/>You can set an "Initial Image" if you want to guide the AI.<br/><br/>You can also add modifiers like "Realistic", "Pencil Sketch", "ArtStation" etc by browsing through the "Image Modifiers" section and selecting the desired modifiers.<br/><br/>Click "Advanced Settings" for additional settings like seed, image size, number of images to generate etc.<br/><br/>Enjoy! :)
+            Type a prompt and press the "Make Image" button.<br/><br/>You can set an "Initial Image" if you want to guide the AI.<br/><br/>
+            You can also add modifiers like "Realistic", "Pencil Sketch", "ArtStation" etc by browsing through the "Image Modifiers" section
+            and selecting the desired modifiers.<br/><br/>
+            Click "Image Settings" for additional settings like seed, image size, number of images to generate etc.<br/><br/>Enjoy! :)
         </div>
         <div id="preview-tools">
             <button id="clear-all-previews" class="secondaryButton"><i class="fa-solid fa-trash-can"></i> Clear All</button>

@@ -239,13 +242,20 @@

     <div id="system-settings" class="tab-content-inner">
         <h1>System Settings</h1>
         <table class="form-table"></table>
+        <br/>
+        <button id="save-system-settings-btn" class="primaryButton">Save</button>
+        <br/><br/>
+        <div>
+            <h3><i class="fa fa-microchip icon"></i> System Info</h3>
+            <div id="system-info"></div>
+        </div>
     </div>
 </div>
 <div id="tab-content-about" class="tab-content">
     <div class="tab-content-inner">
         <div class="float-container">
             <div class="float-child">
                 <h1>Help</h1>
                 <ul id="help-links">
                     <li><span class="help-section">Using the software</span>
                         <ul>

@@ -270,7 +280,7 @@

             </div>

             <div class="float-child">
                 <h1>Community</h1>
                 <ul id="community-links">
                     <li><a href="https://discord.com/invite/u9yhsFmEkB" target="_blank"><i class="fa-brands fa-discord fa-fw"></i> Discord user community</a></li>
                     <li><a href="https://www.reddit.com/r/StableDiffusionUI/" target="_blank"><i class="fa-brands fa-reddit fa-fw"></i> Reddit community</a></li>

@@ -317,15 +327,15 @@

     </div>
 </body>

-<script src="media/js/parameters.js?v=4"></script>
+<script src="media/js/parameters.js?v=9"></script>
 <script src="media/js/plugins.js?v=1"></script>
 <script src="media/js/utils.js?v=6"></script>
 <script src="media/js/inpainting-editor.js?v=1"></script>
 <script src="media/js/image-modifiers.js?v=6"></script>
-<script src="media/js/auto-save.js?v=7"></script>
-<script src="media/js/main.js?v=11"></script>
+<script src="media/js/auto-save.js?v=8"></script>
+<script src="media/js/main.js?v=22.1"></script>
 <script src="media/js/themes.js?v=4"></script>
-<script src="media/js/dnd.js?v=8"></script>
+<script src="media/js/dnd.js?v=9"></script>
 <script>
 async function init() {
     await initSettings()

@@ -122,6 +122,7 @@ label {

     padding: 16px;
     display: flex;
     flex-direction: column;
+    flex: 0 0 370pt;
 }
 #editor label {
     font-weight: normal;

@@ -160,7 +161,7 @@ label {

 #makeImage {
     flex: 0 0 70px;
     background: var(--accent-color);
-    border: var(--make-image-border);
+    border: var(--primary-button-border);
     color: rgb(255, 221, 255);
     width: 100%;
     height: 30pt;

@@ -177,6 +178,7 @@ label {

     height: 30pt;
     border-radius: 6px;
     display: none;
+    margin-top: 2pt;
 }
 #stopImage:hover {
     background: rgb(177, 27, 0);

@@ -390,7 +392,7 @@ img {

 }

 .imageTaskContainer {
-    border: 1px solid var(--background-color1);
+    border: 1px solid var(--background-color2);
     margin-bottom: 10pt;
     padding: 5pt;
     border-radius: 5pt;

@@ -418,6 +420,12 @@ img {

     border: 1px solid rgb(107, 75, 0);
     color:rgb(255, 242, 211)
 }
+.primaryButton {
+    flex: 0 0 70px;
+    background: var(--accent-color);
+    border: var(--primary-button-border);
+    color: rgb(255, 221, 255);
+}
 .secondaryButton {
     background: rgb(132, 8, 0);
     border: 1px solid rgb(122, 29, 0);

@@ -895,3 +903,22 @@ input::file-selector-button {

     margin-bottom: 15px;
     box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.15), 0 6px 20px 0 rgba(0, 0, 0, 0.15);
 }

+i.active {
+    background: var(--accent-color);
+}
+#system-info {
+    max-width: 800px;
+    font-size: 10pt;
+}
+#system-info .value {
+    text-align: left;
+    padding-left: 10pt;
+}
+#system-info label {
+    float: right;
+    font-weight: bold;
+}
+#save-system-settings-btn {
+    padding: 4pt 8pt;
+}

@@ -23,7 +23,7 @@

     --input-border-size: 1px;
     --accent-color: hsl(var(--accent-hue), 100%, var(--accent-lightness));
     --accent-color-hover: hsl(var(--accent-hue), 100%, var(--accent-lightness-hover));
-    --make-image-border: 2px solid hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) - 21%));
+    --primary-button-border: none;
 }

 .theme-light {

@@ -47,7 +47,7 @@

     --accent-hue: 235;
     --accent-lightness: 65%;
-    --make-image-border: none;
+    --primary-button-border: none;

     --button-color: var(--accent-color);
     --button-border: none;

@@ -69,7 +69,7 @@

     --background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));

     --accent-hue: 212;
-    --make-image-border: none;
+    --primary-button-border: none;

     --button-color: var(--accent-color);
     --button-border: none;

@@ -91,7 +91,7 @@

     --background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
     --background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));

-    --make-image-border: none;
+    --primary-button-border: none;

     --button-color: var(--accent-color);
     --button-border: none;

@@ -112,7 +112,7 @@

     --background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (2 * var(--value-step))));
     --background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (1.4 * var(--value-step))));

-    --make-image-border: none;
+    --primary-button-border: none;

     --button-color: var(--accent-color);
     --button-border: none;

@@ -134,7 +134,7 @@

     --background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));

     --accent-hue: 212;
-    --make-image-border: none;
+    --primary-button-border: none;

     --button-color: var(--accent-color);
     --button-border: none;

@@ -282,7 +282,12 @@ function tryLoadOldSettings() {

     Object.keys(individual_settings_map).forEach(localStorageKey => {
         var localStorageValue = localStorage.getItem(localStorageKey);
         if (localStorageValue !== null) {
-            var setting = SETTINGS[individual_settings_map[localStorageKey]]
+            let key = individual_settings_map[localStorageKey]
+            var setting = SETTINGS[key]
+            if (!setting) {
+                console.warn(`Attempted to map old setting ${key}, but no setting found`);
+                return null;
+            }
             if (setting.element.type == "checkbox" && (typeof localStorageValue === "string" || localStorageValue instanceof String)) {
                 localStorageValue = localStorageValue == "true"
             }

@@ -396,16 +396,44 @@ const TASK_REQ_NO_EXPORT = [

     "save_to_disk_path"
 ]

-// Adds a copy icon if the browser grants permission to write to clipboard.
+// Retrieve clipboard content and try to parse it
+async function pasteFromClipboard() {
+    //const text = await navigator.clipboard.readText()
+    let text = await navigator.clipboard.readText();
+    text = text.trim();
+    if (text.startsWith('{') && text.endsWith('}')) {
+        try {
+            const task = JSON.parse(text)
+            restoreTaskToUI(task)
+        } catch (e) {
+            console.warn(`Clipboard JSON couldn't be parsed.`, e)
+        }
+        return
+    }
+    // Normal txt file.
+    const task = parseTaskFromText(text)
+    if (task) {
+        restoreTaskToUI(task)
+    } else {
+        console.warn(`Clipboard content - File couldn't be parsed.`)
+    }
+}
+// Adds a copy and a paste icon if the browser grants permission to write to clipboard.
 function checkWriteToClipboardPermission (result) {
     if (result.state == "granted" || result.state == "prompt") {
         const resetSettings = document.getElementById('reset-image-settings')

+        // COPY ICON
         const copyIcon = document.createElement('i')
         // copyIcon.id = 'copy-image-settings'
         copyIcon.className = 'fa-solid fa-clipboard section-button'
         copyIcon.innerHTML = `<span class="simple-tooltip right">Copy Image Settings</span>`
         copyIcon.addEventListener('click', (event) => {
             event.stopPropagation()
+            // Add css class 'active'
+            copyIcon.classList.add('active')
+            // In 1000 ms remove the 'active' class
+            asyncDelay(1000).then(() => copyIcon.classList.remove('active'))
             const uiState = readUI()
             TASK_REQ_NO_EXPORT.forEach((key) => delete uiState.reqBody[key])
             if (uiState.reqBody.init_image && !IMAGE_REGEX.test(uiState.reqBody.init_image)) {

@@ -415,8 +443,24 @@ function checkWriteToClipboardPermission (result) {

             navigator.clipboard.writeText(JSON.stringify(uiState, undefined, 4))
         })
         resetSettings.parentNode.insertBefore(copyIcon, resetSettings)

+        // PASTE ICON
+        const pasteIcon = document.createElement('i')
+        pasteIcon.className = 'fa-solid fa-paste section-button'
+        pasteIcon.innerHTML = `<span class="simple-tooltip right">Paste Image Settings</span>`
+        pasteIcon.addEventListener('click', (event) => {
+            event.stopPropagation()
+            // Add css class 'active'
+            pasteIcon.classList.add('active')
+            // In 1000 ms remove the 'active' class
+            asyncDelay(1000).then(() => pasteIcon.classList.remove('active'))
+            pasteFromClipboard()
+        })
+        resetSettings.parentNode.insertBefore(pasteIcon, resetSettings)
     }
 }

 // Determine which access we have to the clipboard. Clipboard access is only available on localhost or via TLS.
 navigator.permissions.query({ name: "clipboard-write" }).then(checkWriteToClipboardPermission, (e) => {
     if (e instanceof TypeError && typeof navigator?.clipboard?.writeText === 'function') {
         // Fix for firefox https://bugzilla.mozilla.org/show_bug.cgi?id=1560373

@@ -25,14 +25,6 @@ let initImagePreview = document.querySelector("#init_image_preview")

 let initImageSizeBox = document.querySelector("#init_image_size_box")
 let maskImageSelector = document.querySelector("#mask")
 let maskImagePreview = document.querySelector("#mask_preview")
-let turboField = document.querySelector('#turbo')
-let useCPUField = document.querySelector('#use_cpu')
-let useGPUsField = document.querySelector('#use_gpus')
-let useFullPrecisionField = document.querySelector('#use_full_precision')
 let saveToDiskField = document.querySelector('#save_to_disk')
 let diskPathField = document.querySelector('#diskPath')
 // let allowNSFWField = document.querySelector("#allow_nsfw")
 let useBetaChannelField = document.querySelector("#use_beta_channel")
 let promptStrengthSlider = document.querySelector('#prompt_strength_slider')
 let promptStrengthField = document.querySelector('#prompt_strength')
 let samplerField = document.querySelector('#sampler')

@@ -59,21 +51,10 @@ let initialText = document.querySelector("#initial-text")

 let previewTools = document.querySelector("#preview-tools")
 let clearAllPreviewsBtn = document.querySelector("#clear-all-previews")

-// let maskSetting = document.querySelector('#editor-inputs-mask_setting')
-// let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
-// let maskImageClearBtn = document.querySelector('#mask_clear')
 let maskSetting = document.querySelector('#enable_mask')

 let imagePreview = document.querySelector("#preview")

-// let previewPrompt = document.querySelector('#preview-prompt')

 let showConfigToggle = document.querySelector('#configToggleBtn')
-// let configBox = document.querySelector('#config')
-// let outputMsg = document.querySelector('#outputMsg')

 let soundToggle = document.querySelector('#sound_toggle')

 let serverStatusColor = document.querySelector('#server-status-color')
 let serverStatusMsg = document.querySelector('#server-status-msg')

@@ -87,7 +68,6 @@ maskResetButton.style.fontWeight = 'normal'

 maskResetButton.style.fontSize = '10pt'

 let serverState = {'status': 'Offline', 'time': Date.now()}
-let lastPromptUsed = ''
 let bellPending = false

 let taskQueue = []

@@ -189,6 +169,34 @@ function playSound() {

         })
     }
 }
+function setSystemInfo(devices) {
+    let cpu = devices.all.cpu.name
+    let allGPUs = Object.keys(devices.all).filter(d => d != 'cpu')
+    let activeGPUs = Object.keys(devices.active)
+
+    function ID_TO_TEXT(d) {
+        let info = devices.all[d]
+        if ("mem_free" in info && "mem_total" in info) {
+            return `${info.name} <small>(${d}) (${info.mem_free.toFixed(1)}Gb free / ${info.mem_total.toFixed(1)} Gb total)</small>`
+        } else {
+            return `${info.name} <small>(${d}) (no memory info)</small>`
+        }
+    }
+
+    allGPUs = allGPUs.map(ID_TO_TEXT)
+    activeGPUs = activeGPUs.map(ID_TO_TEXT)
+
+    let systemInfo = `
+        <table>
+            <tr><td><label>Processor:</label></td><td class="value">${cpu}</td></tr>
+            <tr><td><label>Compatible Graphics Cards (all):</label></td><td class="value">${allGPUs.join('</br>')}</td></tr>
+            <tr><td></td><td> </td></tr>
+            <tr><td><label>Used for rendering 🔥:</label></td><td class="value">${activeGPUs.join('</br>')}</td></tr>
+        </table>`
+
+    let systemInfoEl = document.querySelector('#system-info')
+    systemInfoEl.innerHTML = systemInfo
+}

 async function healthCheck() {
     try {

@@ -222,8 +230,12 @@ async function healthCheck() {

                 setServerStatus('error', serverState.status.toLowerCase())
                 break
         }
+        if (serverState.devices) {
+            setSystemInfo(serverState.devices)
+        }
         serverState.time = Date.now()
     } catch (e) {
         console.log(e)
         serverState = {'status': 'Offline', 'time': Date.now()}
         setServerStatus('error', 'offline')
     }
@ -412,7 +424,7 @@ async function doMakeImage(task) {
|
||||
const RETRY_DELAY_IF_BUFFER_IS_EMPTY = 1000 // ms
|
||||
const RETRY_DELAY_IF_SERVER_IS_BUSY = 30 * 1000 // ms, status_code 503, already a task running
|
||||
const TASK_START_DELAY_ON_SERVER = 1500 // ms
|
||||
const SERVER_STATE_VALIDITY_DURATION = 10 * 1000 // ms
|
||||
const SERVER_STATE_VALIDITY_DURATION = 90 * 1000 // ms
|
||||
|
||||
const reqBody = task.reqBody
|
||||
const batchCount = task.batchCount
|
||||
@ -428,7 +440,6 @@ async function doMakeImage(task) {
|
||||
|
||||
let res = undefined
|
||||
try {
|
||||
const lastTask = serverState.task
|
||||
let renderRequest = undefined
|
||||
do {
|
||||
res = await fetch('/render', {
|
||||
@ -633,7 +644,6 @@ async function doMakeImage(task) {
|
||||
return false
|
||||
}
|
||||
|
||||
lastPromptUsed = reqBody['prompt']
|
||||
showImages(reqBody, stepUpdate, outputContainer, false)
|
||||
} catch (e) {
|
||||
console.log('request error', e)
|
||||
@ -773,7 +783,6 @@ function getCurrentUserRequest() {
|
||||
height: heightField.value,
|
||||
// allow_nsfw: allowNSFWField.checked,
|
||||
turbo: turboField.checked,
|
||||
render_device: getCurrentRenderDeviceSelection(),
|
||||
use_full_precision: useFullPrecisionField.checked,
|
||||
use_stable_diffusion_model: stableDiffusionModelField.value,
|
||||
use_vae_model: vaeModelField.value,
|
||||
@ -811,14 +820,6 @@ function getCurrentUserRequest() {
|
||||
return newTask
|
||||
}
|
||||
|
||||
function getCurrentRenderDeviceSelection() {
|
||||
if (useCPUField.checked) {
|
||||
return 'cpu'
|
||||
}
|
||||
|
||||
return $(useGPUsField).val().join(',')
|
||||
}
|
||||
|
||||
function makeImage() {
|
||||
if (!isServerAvailable()) {
|
||||
alert('The server is not available.')
|
||||
@ -835,21 +836,21 @@ function makeImage() {
|
||||
}
|
||||
|
||||
function createTask(task) {
|
||||
let taskConfig = `Seed: ${task.seed}, Sampler: ${task.reqBody.sampler}, Inference Steps: ${task.reqBody.num_inference_steps}, Guidance Scale: ${task.reqBody.guidance_scale}, Model: ${task.reqBody.use_stable_diffusion_model}`
|
||||
let taskConfig = `<b>Seed:</b> ${task.seed}, <b>Sampler:</b> ${task.reqBody.sampler}, <b>Inference Steps:</b> ${task.reqBody.num_inference_steps}, <b>Guidance Scale:</b> ${task.reqBody.guidance_scale}, <b>Model:</b> ${task.reqBody.use_stable_diffusion_model}`
|
||||
if (task.reqBody.use_vae_model.trim() !== '') {
|
||||
taskConfig += `, VAE: ${task.reqBody.use_vae_model}`
|
||||
taskConfig += `, <b>VAE:</b> ${task.reqBody.use_vae_model}`
|
||||
}
|
||||
if (task.reqBody.negative_prompt.trim() !== '') {
|
||||
taskConfig += `, Negative Prompt: ${task.reqBody.negative_prompt}`
|
||||
taskConfig += `, <b>Negative Prompt:</b> ${task.reqBody.negative_prompt}`
|
||||
}
|
||||
if (task.reqBody.init_image !== undefined) {
|
||||
taskConfig += `, Prompt Strength: ${task.reqBody.prompt_strength}`
|
||||
taskConfig += `, <b>Prompt Strength:</b> ${task.reqBody.prompt_strength}`
|
||||
}
|
||||
if (task.reqBody.use_face_correction) {
|
||||
taskConfig += `, Fix Faces: ${task.reqBody.use_face_correction}`
|
||||
taskConfig += `, <b>Fix Faces:</b> ${task.reqBody.use_face_correction}`
|
||||
}
|
||||
if (task.reqBody.use_upscale) {
|
||||
taskConfig += `, Upscale: ${task.reqBody.use_upscale}`
|
||||
taskConfig += `, <b>Upscale:</b> ${task.reqBody.use_upscale}`
|
||||
}
|
||||
|
||||
let taskEntry = document.createElement('div')
|
||||
@ -958,9 +959,10 @@ function getPrompts() {
|
||||
prompts = prompts.filter(prompt => prompt !== '')
|
||||
|
||||
if (activeTags.length > 0) {
|
||||
const promptTags = activeTags.map(x => x.name).join(", ")
|
||||
prompts = prompts.map((prompt) => `${prompt}, ${promptTags}`)
|
||||
const promptTags = activeTags.map(x => x.name).join(", ")
|
||||
prompts = prompts.map((prompt) => `${prompt}, ${promptTags}`)
|
||||
}
|
||||
|
||||
let promptsToMake = applySetOperator(prompts)
|
||||
promptsToMake = applyPermuteOperator(promptsToMake)
|
||||
|
||||
@ -1115,15 +1117,16 @@ function onDimensionChange() {
|
||||
}
|
||||
|
||||
diskPathField.disabled = !saveToDiskField.checked
|
||||
saveToDiskField.addEventListener('change', function(e) {
|
||||
diskPathField.disabled = !this.checked
|
||||
})
|
||||
|
||||
upscaleModelField.disabled = !useUpscalingField.checked
|
||||
useUpscalingField.addEventListener('change', function(e) {
|
||||
upscaleModelField.disabled = !this.checked
|
||||
})
|
||||
|
||||
if (useBetaChannelField.checked) {
|
||||
updateBranchLabel.innerText = "(beta)"
|
||||
}
|
||||
|
||||
makeImageBtn.addEventListener('click', makeImage)
|
||||
|
||||
document.onkeydown = function(e) {
|
||||
@ -1173,63 +1176,6 @@ promptStrengthSlider.addEventListener('input', updatePromptStrength)
|
||||
promptStrengthField.addEventListener('input', updatePromptStrengthSlider)
|
||||
updatePromptStrength()
|
||||
|
||||
useCPUField.addEventListener('click', function() {
|
||||
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
|
||||
if (this.checked) {
|
||||
gpuSettingEntry.style.display = 'none'
|
||||
} else if ($(useGPUsField).val().length >= MIN_GPUS_TO_SHOW_SELECTION) {
|
||||
gpuSettingEntry.style.display = ''
|
||||
}
|
||||
})
|
||||
|
||||
async function changeAppConfig(configDelta) {
|
||||
// if (!isServerAvailable()) {
|
||||
// // logError('The server is still starting up..')
|
||||
// alert('The server is still starting up..')
|
||||
// e.preventDefault()
|
||||
// return false
|
||||
// }
|
||||
|
||||
try {
|
||||
let res = await fetch('/app_config', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json'
|
||||
},
|
||||
body: JSON.stringify(configDelta)
|
||||
})
|
||||
res = await res.json()
|
||||
|
||||
console.log('set config status response', res)
|
||||
} catch (e) {
|
||||
console.log('set config status error', e)
|
||||
}
|
||||
}
|
||||
|
||||
useBetaChannelField.addEventListener('click', async function(e) {
|
||||
let updateBranch = (this.checked ? 'beta' : 'main')
|
||||
|
||||
await changeAppConfig({
|
||||
'update_branch': updateBranch
|
||||
})
|
||||
})
|
||||
|
||||
async function getAppConfig() {
|
||||
try {
|
||||
let res = await fetch('/get/app_config')
|
||||
const config = await res.json()
|
||||
|
||||
if (config.update_branch === 'beta') {
|
||||
useBetaChannelField.checked = true
|
||||
updateBranchLabel.innerText = "(beta)"
|
||||
}
|
||||
|
||||
console.log('get config status response', config)
|
||||
} catch (e) {
|
||||
console.log('get config status error', e)
|
||||
}
|
||||
}
|
||||
|
||||
async function getModels() {
|
||||
try {
|
||||
var sd_model_setting_key = "stable_diffusion_model"
|
||||
@ -1366,61 +1312,6 @@ promptsFromFileSelector.addEventListener('change', function() {
|
||||
}
|
||||
})
|
||||
|
||||
async function getDiskPath() {
|
||||
try {
|
||||
var diskPath = getSetting("diskPath")
|
||||
if (diskPath == '' || diskPath == undefined || diskPath == "undefined") {
|
||||
let res = await fetch('/get/output_dir')
|
||||
if (res.status === 200) {
|
||||
res = await res.json()
|
||||
res = res.output_dir
|
||||
|
||||
setSetting("diskPath", res)
|
||||
}
|
||||
}
|
||||
} catch (e) {
|
||||
console.log('error fetching output dir path', e)
|
||||
}
|
||||
}
|
||||
|
||||
async function getDevices() {
|
||||
try {
|
||||
let res = await fetch('/get/devices')
|
||||
if (res.status === 200) {
|
||||
res = await res.json()
|
||||
|
||||
let allDeviceIds = Object.keys(res['all']).filter(d => d !== 'cpu')
|
||||
let activeDeviceIds = Object.keys(res['active']).filter(d => d !== 'cpu')
|
||||
|
||||
if (activeDeviceIds.length === 0) {
|
||||
useCPUField.checked = true
|
||||
}
|
||||
|
||||
if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION) {
|
||||
let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
|
||||
gpuSettingEntry.style.display = 'none'
|
||||
|
||||
if (allDeviceIds.length === 0) {
|
||||
useCPUField.checked = true
|
||||
useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
|
||||
}
|
||||
}
|
||||
|
||||
useGPUsField.innerHTML = ''
|
||||
|
||||
allDeviceIds.forEach(device => {
|
||||
let deviceName = res['all'][device]
|
||||
let selected = (activeDeviceIds.includes(device) ? 'selected' : '')
|
||||
let deviceOption = `<option value="${device}" ${selected}>${deviceName}</option>`
|
||||
useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
|
||||
})
|
||||
}
|
||||
} catch (e) {
|
||||
console.log('error fetching devices', e)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/* setup popup handlers */
|
||||
document.querySelectorAll('.popup').forEach(popup => {
|
||||
popup.addEventListener('click', event => {
|
||||
|
@ -1,5 +1,3 @@


/**
 * Enum of parameter types
 * @readonly
@ -59,6 +57,13 @@ var PARAMETERS = [
        note: "plays a sound on task completion",
        default: true,
    },
    {
        id: "ui_open_browser_on_start",
        type: ParameterType.checkbox,
        label: "Open browser on startup",
        note: "starts the default browser on startup",
        default: true,
    },
    {
        id: "turbo",
        type: ParameterType.checkbox,
@ -73,11 +78,17 @@ var PARAMETERS = [
        note: "warning: this will be *very* slow",
        default: false,
    },
    {
        id: "auto_pick_gpus",
        type: ParameterType.checkbox,
        label: "Automatically pick the GPUs (experimental)",
        default: false,
    },
    {
        id: "use_gpus",
        type: ParameterType.select_multiple,
        label: "GPUs to use",
        note: "select multiple GPUs to process in parallel",
        label: "GPUs to use (experimental)",
        note: "to process in parallel",
        default: false,
    },
    {
@ -129,7 +140,7 @@ function getParameterElement(parameter) {
    }
}

var parametersTable = document.querySelector("#system-settings table")
let parametersTable = document.querySelector("#system-settings table")
/* fill in the system settings popup table */
function initParameters() {
    PARAMETERS.forEach(parameter => {
@ -144,5 +155,176 @@ function initParameters() {
    })
}

initParameters();
initParameters()

let turboField = document.querySelector('#turbo')
let useCPUField = document.querySelector('#use_cpu')
let autoPickGPUsField = document.querySelector('#auto_pick_gpus')
let useGPUsField = document.querySelector('#use_gpus')
let useFullPrecisionField = document.querySelector('#use_full_precision')
let saveToDiskField = document.querySelector('#save_to_disk')
let diskPathField = document.querySelector('#diskPath')
let useBetaChannelField = document.querySelector("#use_beta_channel")
let uiOpenBrowserOnStartField = document.querySelector("#ui_open_browser_on_start")

let saveSettingsBtn = document.querySelector('#save-system-settings-btn')

async function changeAppConfig(configDelta) {
    try {
        let res = await fetch('/app_config', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify(configDelta)
        })
        res = await res.json()

        console.log('set config status response', res)
    } catch (e) {
        console.log('set config status error', e)
    }
}

async function getAppConfig() {
    try {
        let res = await fetch('/get/app_config')
        const config = await res.json()

        if (config.update_branch === 'beta') {
            useBetaChannelField.checked = true
        }
        if (config.ui && config.ui.open_browser_on_start === false) {
            uiOpenBrowserOnStartField.checked = false
        }

        console.log('get config status response', config)
    } catch (e) {
        console.log('get config status error', e)
    }
}

saveToDiskField.addEventListener('change', function(e) {
    diskPathField.disabled = !this.checked
})

function getCurrentRenderDeviceSelection() {
    let selectedGPUs = $('#use_gpus').val()

    if (useCPUField.checked && !autoPickGPUsField.checked) {
        return 'cpu'
    }
    if (autoPickGPUsField.checked || selectedGPUs.length == 0) {
        return 'auto'
    }

    return selectedGPUs.join(',')
}
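For context, the value this function returns becomes the `render_devices` field saved via `changeAppConfig()`. A quick sketch of the three possible shapes it produces (illustrative values only), as interpreted by `get_device_delta()` in device_manager.py further below:

```
# Illustrative values only, not part of the codebase.
render_device_examples = {
    'cpu checked, auto-pick off': 'cpu',
    'auto-pick checked, or nothing selected': 'auto',
    'explicit selection': 'cuda:0,cuda:1',  # comma-joined ids from the multi-select
}
```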
useCPUField.addEventListener('click', function() {
    let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
    let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
    if (this.checked) {
        gpuSettingEntry.style.display = 'none'
        autoPickGPUSettingEntry.style.display = 'none'
        autoPickGPUsField.setAttribute('data-old-value', autoPickGPUsField.checked)
        autoPickGPUsField.checked = false
    } else if (useGPUsField.options.length >= MIN_GPUS_TO_SHOW_SELECTION) {
        gpuSettingEntry.style.display = ''
        autoPickGPUSettingEntry.style.display = ''
        let oldVal = autoPickGPUsField.getAttribute('data-old-value')
        if (oldVal === null || oldVal === undefined) { // the UI started with CPU selected by default
            autoPickGPUsField.checked = true
        } else {
            autoPickGPUsField.checked = (oldVal === 'true')
        }
        gpuSettingEntry.style.display = (autoPickGPUsField.checked ? 'none' : '')
    }
})

useGPUsField.addEventListener('click', function() {
    let selectedGPUs = $('#use_gpus').val()
    autoPickGPUsField.checked = (selectedGPUs.length === 0)
})

autoPickGPUsField.addEventListener('click', function() {
    if (this.checked) {
        $('#use_gpus').val([])
    }

    let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
    gpuSettingEntry.style.display = (this.checked ? 'none' : '')
})

async function getDiskPath() {
    try {
        var diskPath = getSetting("diskPath")
        if (diskPath == '' || diskPath == undefined || diskPath == "undefined") {
            let res = await fetch('/get/output_dir')
            if (res.status === 200) {
                res = await res.json()
                res = res.output_dir

                setSetting("diskPath", res)
            }
        }
    } catch (e) {
        console.log('error fetching output dir path', e)
    }
}

async function getDevices() {
    try {
        let res = await fetch('/get/devices')
        if (res.status === 200) {
            res = await res.json()

            let allDeviceIds = Object.keys(res['all']).filter(d => d !== 'cpu')
            let activeDeviceIds = Object.keys(res['active']).filter(d => d !== 'cpu')

            if (activeDeviceIds.length === 0) {
                useCPUField.checked = true
            }

            if (allDeviceIds.length < MIN_GPUS_TO_SHOW_SELECTION || useCPUField.checked) {
                let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
                gpuSettingEntry.style.display = 'none'
                let autoPickGPUSettingEntry = getParameterSettingsEntry('auto_pick_gpus')
                autoPickGPUSettingEntry.style.display = 'none'
            }

            if (allDeviceIds.length === 0) {
                useCPUField.checked = true
                useCPUField.disabled = true // no compatible GPUs, so make the CPU mandatory
            }

            autoPickGPUsField.checked = (res['config'] === 'auto')

            useGPUsField.innerHTML = ''
            allDeviceIds.forEach(device => {
                let deviceName = res['all'][device]['name']
                let deviceOption = `<option value="${device}">${deviceName} (${device})</option>`
                useGPUsField.insertAdjacentHTML('beforeend', deviceOption)
            })

            if (autoPickGPUsField.checked) {
                let gpuSettingEntry = getParameterSettingsEntry('use_gpus')
                gpuSettingEntry.style.display = 'none'
            } else {
                $('#use_gpus').val(activeDeviceIds)
            }
        }
    } catch (e) {
        console.log('error fetching devices', e)
    }
}

saveSettingsBtn.addEventListener('click', function() {
    let updateBranch = (useBetaChannelField.checked ? 'beta' : 'main')

    changeAppConfig({
        'render_devices': getCurrentRenderDeviceSelection(),
        'update_branch': updateBranch,
        'ui_open_browser_on_start': uiOpenBrowserOnStartField.checked
    })
})
ui/sd_internal/device_manager.py (new file, 168 lines)
@ -0,0 +1,168 @@
import os
import torch
import traceback
import re

COMPARABLE_GPU_PERCENTILE = 0.65 # if a GPU's free_mem is within this % of the GPU with the most free_mem, it will be picked

mem_free_threshold = 0

def get_device_delta(render_devices, active_devices):
    '''
    render_devices: 'cpu', or 'auto' or ['cuda:N'...]
    active_devices: ['cpu', 'cuda:N'...]
    '''

    if render_devices in ('cpu', 'auto'):
        render_devices = [render_devices]
    elif render_devices is not None:
        if isinstance(render_devices, str):
            render_devices = [render_devices]
        if isinstance(render_devices, list) and len(render_devices) > 0:
            render_devices = list(filter(lambda x: x.startswith('cuda:'), render_devices))
            if len(render_devices) == 0:
                raise Exception('Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}')

            render_devices = list(filter(lambda x: is_device_compatible(x), render_devices))
            if len(render_devices) == 0:
                raise Exception('Sorry, none of the render_devices configured in config.json are compatible with Stable Diffusion')
        else:
            raise Exception('Invalid render_devices value in config.json. Valid: {"render_devices": ["cuda:0", "cuda:1"...]}, or {"render_devices": "cpu"} or {"render_devices": "auto"}')
    else:
        render_devices = ['auto']

    if 'auto' in render_devices:
        render_devices = auto_pick_devices(active_devices)
        if 'cpu' in render_devices:
            print('WARNING: Could not find a compatible GPU. Using the CPU, but this will be very slow!')

    active_devices = set(active_devices)
    render_devices = set(render_devices)

    devices_to_start = render_devices - active_devices
    devices_to_stop = active_devices - render_devices

    return devices_to_start, devices_to_stop
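A minimal usage sketch of the set arithmetic above, assuming both GPUs exist and pass the compatibility check:

```
# Config asks for cuda:0 and cuda:1, while cuda:1 and the cpu are running.
# Only the difference between the two sets needs to change.
to_start, to_stop = get_device_delta(['cuda:0', 'cuda:1'], ['cuda:1', 'cpu'])
print(to_start)  # {'cuda:0'} - requested but not yet active, must be started
print(to_stop)   # {'cpu'}    - active but no longer requested
```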
def auto_pick_devices(currently_active_devices):
    global mem_free_threshold

    if not torch.cuda.is_available(): return ['cpu']

    device_count = torch.cuda.device_count()
    if device_count == 1:
        return ['cuda:0'] if is_device_compatible('cuda:0') else ['cpu']

    print('Autoselecting GPU. Using most free memory.')
    devices = []
    for device in range(device_count):
        device = f'cuda:{device}'
        if not is_device_compatible(device):
            continue

        mem_free, mem_total = torch.cuda.mem_get_info(device)
        mem_free /= float(10**9)
        mem_total /= float(10**9)
        device_name = torch.cuda.get_device_name(device)
        print(f'{device} detected: {device_name} - Memory (free/total): {round(mem_free, 2)}Gb / {round(mem_total, 2)}Gb')
        devices.append({'device': device, 'device_name': device_name, 'mem_free': mem_free})

    devices.sort(key=lambda x: x['mem_free'], reverse=True)
    max_mem_free = devices[0]['mem_free']
    curr_mem_free_threshold = COMPARABLE_GPU_PERCENTILE * max_mem_free
    mem_free_threshold = max(curr_mem_free_threshold, mem_free_threshold)

    # Auto-pick algorithm:
    # 1. Pick the GPUs whose free_mem is above COMPARABLE_GPU_PERCENTILE (65%) of the
    #    free_mem of the GPU with the most free memory.
    # 2. Also include already-running devices (GPU-only), otherwise their free_mem will
    #    always be very low (since their VRAM contains the model).
    #    These already-running devices probably aren't terrible, since they were picked in the past.
    #    Worst case, the user can restart the program and that'll get rid of them.
    devices = list(filter((lambda x: x['mem_free'] > mem_free_threshold or x['device'] in currently_active_devices), devices))
    devices = list(map(lambda x: x['device'], devices))
    return devices
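A worked example of the threshold (numbers invented): with three compatible GPUs reporting 10.0, 8.0 and 3.0 GB free, the cutoff is 0.65 * 10.0 = 6.5 GB, so only the first two are picked, unless the third is already running:

```
free_mem = {'cuda:0': 10.0, 'cuda:1': 8.0, 'cuda:2': 3.0}  # invented numbers
threshold = 0.65 * max(free_mem.values())                  # 6.5, per COMPARABLE_GPU_PERCENTILE
picked = [d for d, mem in free_mem.items() if mem > threshold]
print(picked)  # ['cuda:0', 'cuda:1']
```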
def device_init(thread_data, device):
    '''
    This function assumes the 'device' has already been verified to be compatible.
    `get_device_delta()` has already filtered out incompatible devices.
    '''

    validate_device_id(device, log_prefix='device_init')

    if device == 'cpu':
        thread_data.device = 'cpu'
        thread_data.device_name = get_processor_name()
        print('Render device CPU available as', thread_data.device_name)
        return

    thread_data.device_name = torch.cuda.get_device_name(device)
    thread_data.device = device

    # Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
    device_name = thread_data.device_name.lower()
    thread_data.force_full_precision = ('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)
    if thread_data.force_full_precision:
        print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', thread_data.device_name)
        # Apply force_full_precision now before models are loaded.
        thread_data.precision = 'full'

    print(f'Setting {device} as active')
    torch.cuda.device(device)

    return

def validate_device_id(device, log_prefix=''):
    def is_valid():
        if not isinstance(device, str):
            return False
        if device == 'cpu':
            return True
        if not device.startswith('cuda:') or not device[5:].isnumeric():
            return False
        return True

    if not is_valid():
        raise EnvironmentError(f"{log_prefix}: device id should be 'cpu', or 'cuda:N' (where N is an integer index for the GPU). Got: {device}")

def is_device_compatible(device):
    '''
    Returns True/False, and prints any compatibility errors
    '''
    try:
        validate_device_id(device, log_prefix='is_device_compatible')
    except EnvironmentError as e:
        print(str(e))
        return False

    if device == 'cpu': return True
    # Memory check
    try:
        _, mem_total = torch.cuda.mem_get_info(device)
        mem_total /= float(10**9)
        if mem_total < 3.0:
            print(f'GPU {device} with less than 3 GB of VRAM is not compatible with Stable Diffusion')
            return False
    except RuntimeError as e:
        print(str(e))
        return False
    return True
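Quick sanity checks for the two validators above (the GPU results naturally depend on the machine this runs on):

```
validate_device_id('cuda:1', log_prefix='example')  # passes silently
print(is_device_compatible('cpu'))                  # True
print(is_device_compatible('banana'))               # prints the error, then False
```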
def get_processor_name():
    try:
        import platform, subprocess
        if platform.system() == "Windows":
            return platform.processor()
        elif platform.system() == "Darwin":
            os.environ['PATH'] = os.environ['PATH'] + os.pathsep + '/usr/sbin'
            command = "sysctl -n machdep.cpu.brand_string"
            return subprocess.check_output(command, shell=True).decode().strip()
        elif platform.system() == "Linux":
            command = "cat /proc/cpuinfo"
            all_info = subprocess.check_output(command, shell=True).decode().strip()
            for line in all_info.split("\n"):
                if "model name" in line:
                    return re.sub(".*model name.*:", "", line, 1).strip()
    except:
        print(traceback.format_exc())
    return "cpu"
@ -35,8 +35,10 @@ logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
filename_regex = re.compile('[^a-zA-Z0-9]')
force_gfpgan_to_cuda0 = True # workaround: gfpgan currently works only on cuda:0

# api stuff
from sd_internal import device_manager
from . import Request, Response, Image as ResponseImage
import base64
from io import BytesIO
@ -45,62 +47,7 @@ from io import BytesIO
from threading import local as LocalThreadVars
thread_data = LocalThreadVars()

def get_processor_name():
    try:
        import platform, subprocess
        if platform.system() == "Windows":
            return platform.processor()
        elif platform.system() == "Darwin":
            os.environ['PATH'] = os.environ['PATH'] + os.pathsep + '/usr/sbin'
            command = "sysctl -n machdep.cpu.brand_string"
            return subprocess.check_output(command).strip()
        elif platform.system() == "Linux":
            command = "cat /proc/cpuinfo"
            all_info = subprocess.check_output(command, shell=True).decode().strip()
            for line in all_info.split("\n"):
                if "model name" in line:
                    return re.sub(".*model name.*:", "", line, 1).strip()
    except:
        print(traceback.format_exc())
    return "cpu"

def device_would_fail(device):
    if device == 'cpu': return None
    # Returns None when no issues found, otherwise returns the detected error str.
    # Memory check
    try:
        mem_free, mem_total = torch.cuda.mem_get_info(device)
        mem_total /= float(10**9)
        if mem_total < 3.0:
            return 'GPUs with less than 3 GB of VRAM are not compatible with Stable Diffusion'
    except RuntimeError as e:
        return str(e) # Return cuda errors from mem_get_info as strings
    return None

def device_select(device):
    if device == 'cpu': return True
    if not torch.cuda.is_available(): return False
    failure_msg = device_would_fail(device)
    if failure_msg:
        if 'invalid device' in failure_msg:
            raise NameError(f'GPU "{device}" could not be found. Remove this device from config.render_devices or use one of "auto" or "cuda".')
        print(failure_msg)
        return False

    thread_data.device_name = torch.cuda.get_device_name(device)
    thread_data.device = device

    # Force full precision on 1660 and 1650 NVIDIA cards to avoid creating green images
    device_name = thread_data.device_name.lower()
    thread_data.force_full_precision = ('nvidia' in device_name or 'geforce' in device_name) and (' 1660' in device_name or ' 1650' in device_name)
    if thread_data.force_full_precision:
        print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', thread_data.device_name)
        # Apply force_full_precision now before models are loaded.
        thread_data.precision = 'full'

    return True

def device_init(device_selection=None):
def thread_init(device):
    # Thread bound properties
    thread_data.stop_processing = False
    thread_data.temp_images = {}
@ -129,72 +76,7 @@ def device_init(device_selection=None):
    thread_data.force_full_precision = False
    thread_data.reduced_memory = True

    device_selection = device_selection.lower()

    if device_selection == 'cpu':
        thread_data.device = 'cpu'
        thread_data.device_name = get_processor_name()
        print('Render device CPU available as', thread_data.device_name)
        return
    if not torch.cuda.is_available():
        if device_selection == 'auto' or device_selection == 'current':
            print('WARNING: Could not find a compatible GPU. Using the CPU, but this will be very slow!')
            thread_data.device = 'cpu'
            thread_data.device_name = get_processor_name()
            return
        else:
            raise EnvironmentError(f'Could not find a compatible GPU for the requested device_selection: {device_selection}!')
    device_count = torch.cuda.device_count()
    if device_count <= 1 and device_selection == 'auto':
        device_selection = 'current' # Use 'auto' only when there is more than one compatible device found.
    if device_selection == 'auto':
        print('Autoselecting GPU. Using most free memory.')
        max_mem_free = 0
        best_device = None
        for device in range(device_count):
            mem_free, mem_total = torch.cuda.mem_get_info(device)
            mem_free /= float(10**9)
            mem_total /= float(10**9)
            device_name = torch.cuda.get_device_name(device)
            print(f'GPU:{device} detected: {device_name} - Memory: {round(mem_total - mem_free, 2)}Go / {round(mem_total, 2)}Go')
            if max_mem_free < mem_free:
                max_mem_free = mem_free
                best_device = device
        if best_device and device_select(device):
            print(f'Setting GPU:{device} as active')
            torch.cuda.device(device)
            return

    if device_selection.startswith('gpu:'):
        device_selection = int(device_selection[4:])

    if device_selection != 'cuda' and device_selection != 'current' and device_selection != 'gpu':
        if device_select(device_selection):
            if isinstance(device_selection, int):
                print(f'Setting GPU:{device_selection} as active')
            else:
                print(f'Setting {device_selection} as active')
            torch.cuda.device(device_selection)
            return
    # By default use current device.
    print('Checking current GPU...')
    device = torch.cuda.current_device()
    device_name = torch.cuda.get_device_name(device)
    print(f'GPU:{device} detected: {device_name}')
    if device_select(device):
        return
    print('WARNING: No compatible GPU found. Using the CPU, but this will be very slow!')
    thread_data.device = 'cpu'
    thread_data.device_name = get_processor_name()

def is_first_cuda_device(device):
    if device is None: return False
    if device == 0 or device == '0': return True
    if device == 'cuda' or device == 'cuda:0': return True
    if device == 'gpu' or device == 'gpu:0': return True
    if device == 'current': return True
    if device == torch.device(0): return True
    return False
    device_manager.device_init(thread_data, device)
def load_model_ckpt():
    if not thread_data.ckpt_file: raise ValueError(f'Thread ckpt_file is undefined.')
@ -209,7 +91,7 @@ def load_model_ckpt():
    if thread_data.device == 'cpu':
        thread_data.precision = 'full'

    print('loading', thread_data.ckpt_file + '.ckpt', 'to', thread_data.device, 'using precision', thread_data.precision)
    print('loading', thread_data.ckpt_file + '.ckpt', 'to device', thread_data.device, 'using precision', thread_data.precision)
    sd = load_model_from_config(thread_data.ckpt_file + '.ckpt')
    li, lo = [], []
    for key, value in sd.items():
@ -296,16 +178,28 @@ def load_model_ckpt():

def unload_filters():
    if thread_data.model_gfpgan is not None:
        if thread_data.device != 'cpu': thread_data.model_gfpgan.gfpgan.to('cpu')

        del thread_data.model_gfpgan
        thread_data.model_gfpgan = None

    if thread_data.model_real_esrgan is not None:
        if thread_data.device != 'cpu': thread_data.model_real_esrgan.model.to('cpu')

        del thread_data.model_real_esrgan
        thread_data.model_real_esrgan = None

    gc()

def unload_models():
    if thread_data.model is not None:
        print('Unloading models...')
        if thread_data.device != 'cpu':
            thread_data.modelFS.to('cpu')
            thread_data.modelCS.to('cpu')
            thread_data.model.model1.to("cpu")
            thread_data.model.model2.to("cpu")

        del thread_data.model
        del thread_data.modelCS
        del thread_data.modelFS
@ -314,12 +208,14 @@ def unload_models():
        thread_data.modelCS = None
        thread_data.modelFS = None

    gc()

def wait_model_move_to(model, target_device): # Send to target_device and wait until complete.
    if thread_data.device == target_device: return
    start_mem = torch.cuda.memory_allocated(thread_data.device) / 1e6
    if start_mem <= 0: return
    model_name = model.__class__.__name__
    print(f'Device:{thread_data.device} - Sending model {model_name} to {target_device} | Memory transfer starting. Memory Used: {round(start_mem)}Mo')
    print(f'Device {thread_data.device} - Sending model {model_name} to {target_device} | Memory transfer starting. Memory Used: {round(start_mem)}Mb')
    start_time = time.time()
    model.to(target_device)
    time_step = start_time
@ -334,25 +230,19 @@ def wait_model_move_to(model, target_device): # Send to target_device and wait until complete.
        if not is_transfering:
            break;
        if time.time() - time_step > WARNING_TIMEOUT: # Long delay, print to console to show activity.
            print(f'Device:{thread_data.device} - Waiting for Memory transfer. Memory Used: {round(mem)}Mo, Transfered: {round(start_mem - mem)}Mo')
            print(f'Device {thread_data.device} - Waiting for Memory transfer. Memory Used: {round(mem)}Mb, Transfered: {round(start_mem - mem)}Mb')
            time_step = time.time()
    print(f'Device:{thread_data.device} - {model_name} Moved: {round(start_mem - last_mem)}Mo in {round(time.time() - start_time, 3)} seconds to {target_device}')
    print(f'Device {thread_data.device} - {model_name} Moved: {round(start_mem - last_mem)}Mb in {round(time.time() - start_time, 3)} seconds to {target_device}')

def load_model_gfpgan():
    if thread_data.gfpgan_file is None: raise ValueError(f'Thread gfpgan_file is undefined.')
        #print('load_model_gfpgan called without setting gfpgan_file')
        #return
    if not is_first_cuda_device(thread_data.device):
        #TODO Remove when fixed - A bug with GFPGANer and facexlib needs to be fixed before use on other devices.
        raise Exception(f'Current device {torch.device(thread_data.device)} is not {torch.device(0)}. Cannot run GFPGANer.')
    model_path = thread_data.gfpgan_file + ".pth"
    thread_data.model_gfpgan = GFPGANer(device=torch.device(thread_data.device), model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
    device = 'cuda:0' if force_gfpgan_to_cuda0 else thread_data.device
    thread_data.model_gfpgan = GFPGANer(device=torch.device(device), model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
    print('loaded', thread_data.gfpgan_file, 'to', thread_data.model_gfpgan.device, 'precision', thread_data.precision)

def load_model_real_esrgan():
    if thread_data.real_esrgan_file is None: raise ValueError(f'Thread real_esrgan_file is undefined.')
        #print('load_model_real_esrgan called without setting real_esrgan_file')
        #return
    model_path = thread_data.real_esrgan_file + ".pth"

    RealESRGAN_models = {
@ -397,11 +287,11 @@ def get_base_path(disk_path, session_id, prompt, img_id, ext, suffix=None):
def apply_filters(filter_name, image_data, model_path=None):
    print(f'Applying filter {filter_name}...')
    gc() # Free space before loading new data.
    if isinstance(image_data, torch.Tensor):
        print(image_data)
        image_data.to(thread_data.device)

    if filter_name == 'gfpgan':
        if isinstance(image_data, torch.Tensor):
            image_data.to('cuda:0' if force_gfpgan_to_cuda0 else thread_data.device)

        if model_path is not None and model_path != thread_data.gfpgan_file:
            thread_data.gfpgan_file = model_path
            load_model_gfpgan()
@ -413,6 +303,9 @@ def apply_filters(filter_name, image_data, model_path=None):
        image_data = output[:,:,::-1]

    if filter_name == 'real_esrgan':
        if isinstance(image_data, torch.Tensor):
            image_data.to(thread_data.device)

        if model_path is not None and model_path != thread_data.real_esrgan_file:
            thread_data.real_esrgan_file = model_path
            load_model_real_esrgan()
@ -431,15 +324,11 @@ def mk_img(req: Request):
    except Exception as e:
        print(traceback.format_exc())

        if thread_data.reduced_memory:
        if thread_data.device != 'cpu':
            thread_data.modelFS.to('cpu')
            thread_data.modelCS.to('cpu')
            thread_data.model.model1.to("cpu")
            thread_data.model.model2.to("cpu")
        else:
            # Model crashed, release all resources in unknown state.
            unload_models()
            unload_filters()

        gc() # Release from memory.
        yield json.dumps({
@ -715,12 +604,12 @@ def do_mk_img(req: Request):
            # Filter Applied, move to next seed
            opt_seed += 1

    if thread_data.reduced_memory:
        unload_filters()
    # if thread_data.reduced_memory:
    #     unload_filters()
    del img_data
    gc()
    if thread_data.device != 'cpu':
        print(f'memory_final = {round(torch.cuda.memory_allocated(thread_data.device) / 1e6, 2)}Mo')
        print(f'memory_final = {round(torch.cuda.memory_allocated(thread_data.device) / 1e6, 2)}Mb')

    print('Task completed')
    yield json.dumps(res.json())
@ -744,6 +633,7 @@ Use Upscaling: {req.use_upscale}
Sampler: {req.sampler}
Negative Prompt: {req.negative_prompt}
Stable Diffusion model: {req.use_stable_diffusion_model + '.ckpt'}
VAE model: {req.use_vae_model}
'''
    try:
        with open(meta_out_path, 'w', encoding='utf-8') as f:
@ -871,12 +761,18 @@ def img_to_base64_str(img, output_format="PNG"):
    img.save(buffered, format=output_format)
    buffered.seek(0)
    img_byte = buffered.getvalue()
    img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode()
    mime_type = "image/png" if output_format.lower() == "png" else "image/jpeg"
    img_str = f"data:{mime_type};base64," + base64.b64encode(img_byte).decode()
    return img_str

def base64_str_to_img(img_str):
    img_str = img_str[len("data:image/png;base64,"):]
def base64_str_to_buffer(img_str):
    mime_type = "image/png" if img_str.startswith("data:image/png;") else "image/jpeg"
    img_str = img_str[len(f"data:{mime_type};base64,"):]
    data = base64.b64decode(img_str)
    buffered = BytesIO(data)
    return buffered

def base64_str_to_img(img_str):
    buffered = base64_str_to_buffer(img_str)
    img = Image.open(buffered)
    return img
|
||||
from typing import Any, Generator, Hashable, Optional, Union
|
||||
|
||||
from pydantic import BaseModel
|
||||
from sd_internal import Request, Response, runtime
|
||||
from sd_internal import Request, Response, runtime, device_manager
|
||||
|
||||
THREAD_NAME_PREFIX = 'Runtime-Render/'
|
||||
ERR_LOCK_FAILED = ' failed to acquire lock within timeout.'
|
||||
@ -22,7 +22,6 @@ LOCK_TIMEOUT = 15 # Maximum locking time in seconds before failing a task.
|
||||
# It's better to get an exception than a deadlock... ALWAYS use timeout in critical paths.
|
||||
|
||||
DEVICE_START_TIMEOUT = 60 # seconds - Maximum time to wait for a render device to init.
|
||||
CPU_UNLOAD_TIMEOUT = 4 * 60 # seconds - Idle time before CPU unload resource when GPUs are present.
|
||||
|
||||
class SymbolClass(type): # Print nicely formatted Symbol names.
|
||||
def __repr__(self): return self.__qualname__
|
||||
@ -40,7 +39,7 @@ class RenderTask(): # Task with output queue and completion lock.
|
||||
def __init__(self, req: Request):
|
||||
self.request: Request = req # Initial Request
|
||||
self.response: Any = None # Copy of the last reponse
|
||||
self.render_device = None
|
||||
self.render_device = None # Select the task affinity. (Not used to change active devices).
|
||||
self.temp_images:list = [None] * req.num_outputs * (1 if req.show_only_filtered_image else 2)
|
||||
self.error: Exception = None
|
||||
self.lock: threading.Lock = threading.Lock() # Locks at task start and unlocks when task is completed
|
||||
@ -72,7 +71,7 @@ class ImageRequest(BaseModel):
|
||||
save_to_disk_path: str = None
|
||||
turbo: bool = True
|
||||
use_cpu: bool = False ##TODO Remove after UI and plugins transition.
|
||||
render_device: str = None
|
||||
render_device: str = None # Select the task affinity. (Not used to change active devices).
|
||||
use_full_precision: bool = False
|
||||
use_face_correction: str = None # or "GFPGANv1.3"
|
||||
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
|
||||
@ -183,7 +182,7 @@ default_vae_to_load = None
|
||||
weak_thread_data = weakref.WeakKeyDictionary()
|
||||
|
||||
def preload_model(ckpt_file_path=None, vae_file_path=None):
|
||||
global current_state, current_state_error, current_model_path
|
||||
global current_state, current_state_error, current_model_path, current_vae_path
|
||||
if ckpt_file_path == None:
|
||||
ckpt_file_path = default_model_to_load
|
||||
if vae_file_path == None:
|
||||
@ -218,24 +217,17 @@ def thread_get_next_task():
|
||||
task = None
|
||||
try: # Select a render task.
|
||||
for queued_task in tasks_queue:
|
||||
if queued_task.request.use_face_correction: # TODO Remove when fixed - A bug with GFPGANer and facexlib needs to be fixed before use on other devices.
|
||||
if is_alive(0) <= 0: # Allows GFPGANer only on cuda:0.
|
||||
queued_task.error = Exception('cuda:0 is not available with the current config. Remove GFPGANer filter to run task.')
|
||||
task = queued_task
|
||||
break
|
||||
if queued_task.render_device == 'cpu':
|
||||
queued_task.error = Exception('Cpu cannot be used to run this task. Remove GFPGANer filter to run task.')
|
||||
task = queued_task
|
||||
break
|
||||
if not runtime.is_first_cuda_device(runtime.thread_data.device):
|
||||
continue # Wait for cuda:0
|
||||
if queued_task.request.use_face_correction and runtime.thread_data.device == 'cpu' and is_alive() == 1:
|
||||
queued_task.error = Exception('The CPU cannot be used to run this task currently. Please remove "Fix incorrect faces" from Image Settings and try again.')
|
||||
task = queued_task
|
||||
break
|
||||
if queued_task.render_device and runtime.thread_data.device != queued_task.render_device:
|
||||
# Is asking for a specific render device.
|
||||
if is_alive(queued_task.render_device) > 0:
|
||||
continue # requested device alive, skip current one.
|
||||
else:
|
||||
# Requested device is not active, return error to UI.
|
||||
queued_task.error = Exception(str(queued_task.render_device) + ' is not currently active.')
|
||||
queued_task.error = Exception(queued_task.render_device + ' is not currently active.')
|
||||
task = queued_task
|
||||
break
|
||||
if not queued_task.render_device and runtime.thread_data.device == 'cpu' and is_alive() > 1:
|
||||
@ -253,7 +245,7 @@ def thread_render(device):
|
||||
global current_state, current_state_error, current_model_path, current_vae_path
|
||||
from . import runtime
|
||||
try:
|
||||
runtime.device_init(device)
|
||||
runtime.thread_init(device)
|
||||
except Exception as e:
|
||||
print(traceback.format_exc())
|
||||
weak_thread_data[threading.current_thread()] = {
|
||||
@ -262,24 +254,24 @@ def thread_render(device):
|
||||
return
|
||||
weak_thread_data[threading.current_thread()] = {
|
||||
'device': runtime.thread_data.device,
|
||||
'device_name': runtime.thread_data.device_name
|
||||
'device_name': runtime.thread_data.device_name,
|
||||
'alive': True
|
||||
}
|
||||
if runtime.thread_data.device != 'cpu' or is_alive() == 1:
|
||||
preload_model()
|
||||
current_state = ServerStates.Online
|
||||
while True:
|
||||
task_cache.clean()
|
||||
if not weak_thread_data[threading.current_thread()]['alive']:
|
||||
print(f'Shutting down thread for device {runtime.thread_data.device}')
|
||||
runtime.unload_models()
|
||||
runtime.unload_filters()
|
||||
return
|
||||
if isinstance(current_state_error, SystemExit):
|
||||
current_state = ServerStates.Unavailable
|
||||
return
|
||||
task = thread_get_next_task()
|
||||
if task is None:
|
||||
if runtime.thread_data.device == 'cpu' and is_alive() > 1 and hasattr(runtime.thread_data, 'lastActive') and time.time() - runtime.thread_data.lastActive > CPU_UNLOAD_TIMEOUT:
|
||||
# GPUs present and CPU is idle. Unload resources.
|
||||
runtime.unload_models()
|
||||
runtime.unload_filters()
|
||||
del runtime.thread_data.lastActive
|
||||
print('unloaded models from CPU because it was idle for too long')
|
||||
time.sleep(1)
|
||||
continue
|
||||
if task.error is not None:
|
||||
@ -330,7 +322,8 @@ def thread_render(device):
|
||||
img_id = out_obj['path'][out_obj['path'].rindex('/') + 1:]
|
||||
task.temp_images[int(img_id)] = runtime.thread_data.temp_images[out_obj['path'][11:]]
|
||||
elif 'data' in out_obj:
|
||||
task.temp_images[result['output'].index(out_obj)] = out_obj['data']
|
||||
buf = runtime.base64_str_to_buffer(out_obj['data'])
|
||||
task.temp_images[result['output'].index(out_obj)] = buf
|
||||
# Before looping back to the generator, mark cache as still alive.
|
||||
task_cache.keep(task.request.session_id, TASK_TTL)
|
||||
except Exception as e:
|
||||
@ -362,15 +355,30 @@ def get_devices():
|
||||
'active': {},
|
||||
}
|
||||
|
||||
def get_device_info(device):
|
||||
if device == 'cpu':
|
||||
return {'name': device_manager.get_processor_name()}
|
||||
|
||||
mem_free, mem_total = torch.cuda.mem_get_info(device)
|
||||
mem_free /= float(10**9)
|
||||
mem_total /= float(10**9)
|
||||
|
||||
return {
|
||||
'name': torch.cuda.get_device_name(device),
|
||||
'mem_free': mem_free,
|
||||
'mem_total': mem_total,
|
||||
}
|
||||
|
||||
# list the compatible devices
|
||||
gpu_count = torch.cuda.device_count()
|
||||
for device in range(gpu_count):
|
||||
if runtime.device_would_fail(device):
|
||||
device = f'cuda:{device}'
|
||||
if not device_manager.is_device_compatible(device):
|
||||
continue
|
||||
|
||||
devices['all'].update({device: torch.cuda.get_device_name(device)})
|
||||
devices['all'].update({device: get_device_info(device)})
|
||||
|
||||
devices['all'].update({'cpu': runtime.get_processor_name()})
|
||||
devices['all'].update({'cpu': get_device_info('cpu')})
|
||||
|
||||
# list the activated devices
|
||||
if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('get_devices' + ERR_LOCK_FAILED)
|
||||
@ -381,30 +389,24 @@ def get_devices():
|
||||
weak_data = weak_thread_data.get(rthread)
|
||||
if not weak_data or not 'device' in weak_data or not 'device_name' in weak_data:
|
||||
continue
|
||||
devices['active'].update({weak_data['device']: weak_data['device_name']})
|
||||
device = weak_data['device']
|
||||
devices['active'].update({device: get_device_info(device)})
|
||||
finally:
|
||||
manager_lock.release()
|
||||
|
||||
return devices
|
||||
|
||||
def is_first_cuda_device(device):
    from . import runtime # When calling runtime from outside thread_render DO NOT USE thread specific attributes or functions.
    return runtime.is_first_cuda_device(device)

def is_alive(name=None):
def is_alive(device=None):
    if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('is_alive' + ERR_LOCK_FAILED)
    nbr_alive = 0
    try:
        for rthread in render_threads:
            if name is not None:
            if device is not None:
                weak_data = weak_thread_data.get(rthread)
                if weak_data is None or not 'device' in weak_data or weak_data['device'] is None:
                    continue
                thread_name = str(weak_data['device']).lower()
                if is_first_cuda_device(name):
                    if not is_first_cuda_device(thread_name):
                        continue
                elif thread_name != name:
                thread_device = weak_data['device']
                if thread_device != device:
                    continue
            if rthread.is_alive():
                nbr_alive += 1
@ -412,8 +414,8 @@ def is_alive(name=None):
    finally:
        manager_lock.release()

def start_render_thread(device='auto'):
    if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('start_render_threads' + ERR_LOCK_FAILED)
def start_render_thread(device):
    if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('start_render_thread' + ERR_LOCK_FAILED)
    print('Start new Rendering Thread on device', device)
    try:
        rthread = threading.Thread(target=thread_render, kwargs={'device': device})
@ -426,6 +428,7 @@ def start_render_thread(device='auto'):
    timeout = DEVICE_START_TIMEOUT
    while not rthread.is_alive() or not rthread in weak_thread_data or not 'device' in weak_thread_data[rthread]:
        if rthread in weak_thread_data and 'error' in weak_thread_data[rthread]:
            print(rthread, device, 'error:', weak_thread_data[rthread]['error'])
            return False
        if timeout <= 0:
            return False
@ -433,6 +436,59 @@ def start_render_thread(device='auto'):
        time.sleep(1)
    return True

def stop_render_thread(device):
    try:
        device_manager.validate_device_id(device, log_prefix='stop_render_thread')
    except:
        print(traceback.format_exc())
        return False

    if not manager_lock.acquire(blocking=True, timeout=LOCK_TIMEOUT): raise Exception('stop_render_thread' + ERR_LOCK_FAILED)
    print('Stopping Rendering Thread on device', device)

    try:
        thread_to_remove = None
        for rthread in render_threads:
            weak_data = weak_thread_data.get(rthread)
            if weak_data is None or not 'device' in weak_data or weak_data['device'] is None:
                continue
            thread_device = weak_data['device']
            if thread_device == device:
                weak_data['alive'] = False
                thread_to_remove = rthread
                break
        if thread_to_remove is not None:
            render_threads.remove(rthread)
            return True
    finally:
        manager_lock.release()

    return False

def update_render_threads(render_devices, active_devices):
    devices_to_start, devices_to_stop = device_manager.get_device_delta(render_devices, active_devices)
    print('devices_to_start', devices_to_start)
    print('devices_to_stop', devices_to_stop)

    for device in devices_to_stop:
        if is_alive(device) <= 0:
            print(device, 'is not alive')
            continue
        if not stop_render_thread(device):
            print(device, 'could not stop render thread')

    for device in devices_to_start:
        if is_alive(device) >= 1:
            print(device, 'already registered.')
            continue
        if not start_render_thread(device):
            print(device, 'failed to start.')

    if is_alive() <= 0: # No running devices, probably invalid user config.
        raise EnvironmentError('ERROR: No active render devices! Please verify the "render_devices" value in config.json')

    print('active devices', get_devices()['active'])
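The exact call site for this function lives outside this excerpt, but the intended flow is presumably along these lines (a sketch, not the actual wiring):

```
config = getConfig()                                    # from ui/server.py below
render_devices = config.get('render_devices', 'auto')   # 'cpu', 'auto' or ['cuda:N'...]
active_devices = list(get_devices()['active'].keys())   # devices with a live render thread
update_render_threads(render_devices, active_devices)   # starts/stops only the delta
```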
def shutdown_event(): # Signal render thread to close on shutdown
    global current_state_error
    current_state_error = SystemExit('Application shutting down.')
@ -479,7 +535,6 @@ def render(req : ImageRequest):
    r.stream_image_progress = False

    new_task = RenderTask(r)
    new_task.render_device = req.render_device

    if task_cache.put(r.session_id, new_task, TASK_TTL):
        # Use twice the normal timeout for adding user requests.

ui/server.py (183 lines)
@ -22,8 +22,11 @@ OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
TASK_TTL = 15 * 60 # Discard last session's task timeout
APP_CONFIG_DEFAULTS = {
    # auto: selects the cuda device with the most free memory, cuda: use the currently active cuda device.
    'render_devices': ['auto'], # ['cuda'] or ['CPU', 'GPU:0', 'GPU:1', ...] or ['cpu']
    'render_devices': 'auto', # valid entries: 'auto', 'cpu' or 'cuda:N' (where N is a GPU index)
    'update_branch': 'main',
    'ui': {
        'open_browser_on_start': True,
    },
}
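For illustration, a config.json consistent with these defaults (shown as a Python dict; all values are examples):

```
example_config = {
    'render_devices': 'cuda:0',  # also valid: 'auto', 'cpu', or ['cuda:0', 'cuda:1']
    'update_branch': 'beta',
    'ui': {'open_browser_on_start': False},
}
```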
APP_CONFIG_DEFAULT_MODELS = [
|
||||
# needed to support the legacy installations
|
||||
@ -56,23 +59,13 @@ NOCACHE_HEADERS={"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma
|
||||
app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media')), name="media")
|
||||
app.mount('/plugins', StaticFiles(directory=UI_PLUGINS_DIR), name="plugins")
|
||||
|
||||
config_cached = None
|
||||
config_last_mod_time = 0
|
||||
def getConfig(default_val=APP_CONFIG_DEFAULTS):
|
||||
global config_cached, config_last_mod_time
|
||||
try:
|
||||
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
|
||||
if not os.path.exists(config_json_path):
|
||||
return default_val
|
||||
if config_last_mod_time > 0 and config_cached is not None:
|
||||
# Don't read if file was not modified
|
||||
mtime = os.path.getmtime(config_json_path)
|
||||
if mtime <= config_last_mod_time:
|
||||
return config_cached
|
||||
with open(config_json_path, 'r', encoding='utf-8') as f:
|
||||
config_cached = json.load(f)
|
||||
config_last_mod_time = os.path.getmtime(config_json_path)
|
||||
return config_cached
|
||||
return json.load(f)
|
||||
except Exception as e:
|
||||
print(str(e))
|
||||
print(traceback.format_exc())
|
||||
@ -86,71 +79,38 @@ def setConfig(config):
    except:
        print(traceback.format_exc())

    if 'render_devices' in config:
        gpu_devices = list(filter(lambda dev: dev.lower().startswith('gpu') or dev.lower().startswith('cuda'), config['render_devices']))
    else:
        gpu_devices = []

    has_first_cuda_device = False
    for device in gpu_devices:
        if not task_manager.is_first_cuda_device(device.lower()): continue
        has_first_cuda_device = True
        break
    if len(gpu_devices) > 0 and not has_first_cuda_device:
        print('WARNING: GFPGANer only works on GPU:0, use CUDA_VISIBLE_DEVICES if GFPGANer is needed on a specific GPU.')
        print('Using CUDA_VISIBLE_DEVICES will remap the selected devices starting at GPU:0 fixing GFPGANer')

    try: # config.bat
        config_bat = [
            f"@set update_branch={config['update_branch']}"
        ]

        if os.getenv('CUDA_VISIBLE_DEVICES') is None:
            if len(gpu_devices) > 0 and not has_first_cuda_device:
                config_bat.append('::Set the devices visible inside SD-UI here')
                config_bat.append(f"::@set CUDA_VISIBLE_DEVICES={','.join(gpu_devices)}") # Needs better detection for edge cases, add as a comment for now.
                print('Add the line "@set CUDA_VISIBLE_DEVICES=N" where N is the GPUs to use to config.bat')
        else:
            config_bat.append(f"@set CUDA_VISIBLE_DEVICES={os.getenv('CUDA_VISIBLE_DEVICES')}")
            if len(gpu_devices) > 0 and not has_first_cuda_device:
                print('GPU:0 seems to be missing! Validate that CUDA_VISIBLE_DEVICES is set properly.')
        config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
        config_bat = []

        if 'update_branch' in config:
            config_bat.append(f"@set update_branch={config['update_branch']}")
        if os.getenv('SD_UI_BIND_PORT') is not None:
            config_bat.append(f"@set SD_UI_BIND_PORT={os.getenv('SD_UI_BIND_PORT')}")
        if os.getenv('SD_UI_BIND_IP') is not None:
            config_bat.append(f"@set SD_UI_BIND_IP={os.getenv('SD_UI_BIND_IP')}")


        with open(config_bat_path, 'w', encoding='utf-8') as f:
            f.write('\r\n'.join(config_bat))
    except Exception as e:
        if len(config_bat) > 0:
            with open(config_bat_path, 'w', encoding='utf-8') as f:
                f.write('\r\n'.join(config_bat))
    except:
        print(traceback.format_exc())

    try: # config.sh
        config_sh = [
            '#!/bin/bash',
            f"export update_branch={config['update_branch']}"
        ]
        if os.getenv('CUDA_VISIBLE_DEVICES') is None:
            if len(gpu_devices) > 0 and not has_first_cuda_device:
                config_sh.append('#Set the devices visible inside SD-UI here')
                config_sh.append(f"#CUDA_VISIBLE_DEVICES={','.join(gpu_devices)}") # Needs better detection for edge cases, add as a comment for now.
                print('Add the line "CUDA_VISIBLE_DEVICES=N" where N is the GPUs to use to config.sh')
        else:
            config_sh.append(f"export CUDA_VISIBLE_DEVICES=\"{os.getenv('CUDA_VISIBLE_DEVICES')}\"")
            if len(gpu_devices) > 0 and not has_first_cuda_device:
                print('GPU:0 seems to be missing! Validate that CUDA_VISIBLE_DEVICES is set properly.')
        config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
        config_sh = ['#!/bin/bash']

        if 'update_branch' in config:
            config_sh.append(f"export update_branch={config['update_branch']}")
        if os.getenv('SD_UI_BIND_PORT') is not None:
            config_sh.append(f"export SD_UI_BIND_PORT={os.getenv('SD_UI_BIND_PORT')}")
        if os.getenv('SD_UI_BIND_IP') is not None:
            config_sh.append(f"export SD_UI_BIND_IP={os.getenv('SD_UI_BIND_IP')}")

        config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
        with open(config_sh_path, 'w', encoding='utf-8') as f:
            f.write('\n'.join(config_sh))
    except Exception as e:
        if len(config_sh) > 1:
            with open(config_sh_path, 'w', encoding='utf-8') as f:
                f.write('\n'.join(config_sh))
    except:
        print(traceback.format_exc())

def resolve_model_to_use(model_name:str, model_type:str, model_dir:str, model_extensions:list, default_models=[]):
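Note: the rewritten `setConfig` drops the GPU-detection logic and only persists known keys, carrying `SD_UI_BIND_PORT` / `SD_UI_BIND_IP` over from the current environment, and skipping the file write when nothing beyond the shebang would be written. A sketch of that idea, using a hypothetical `build_config_sh` helper:

```
import os

def build_config_sh(config, env=os.environ):
    # Sketch of the new config.sh generation: only known keys are persisted,
    # and bind host/port are carried over from the current environment.
    lines = ['#!/bin/bash']
    if 'update_branch' in config:
        lines.append(f"export update_branch={config['update_branch']}")
    for var in ('SD_UI_BIND_PORT', 'SD_UI_BIND_IP'):
        if env.get(var) is not None:
            lines.append(f"export {var}={env[var]}")
    # Mirror the `len(config_sh) > 1` guard: skip the write when only the shebang remains.
    return '\n'.join(lines) if len(lines) > 1 else None

# Example: build_config_sh({'update_branch': 'beta'}, env={})
# -> '#!/bin/bash\nexport update_branch=beta'
```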
@ -199,28 +159,25 @@ class SetAppConfigRequest(BaseModel):
    update_branch: str = None
    render_devices: Union[List[str], List[int], str, int] = None
    model_vae: str = None
    ui_open_browser_on_start: bool = None

@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
    config = getConfig()
    if req.update_branch:
    if req.update_branch is not None:
        config['update_branch'] = req.update_branch
    if req.render_devices and hasattr(req.render_devices, "__len__"): # strings, array of strings or numbers.
        render_devices = []
        if isinstance(req.render_devices, str):
            req.render_devices = req.render_devices.split(',')
        if isinstance(req.render_devices, list):
            for gpu in req.render_devices:
                if isinstance(req.render_devices, int):
                    render_devices.append('GPU:' + gpu)
                else:
                    render_devices.append(gpu)
        if isinstance(req.render_devices, int):
            render_devices.append('GPU:' + req.render_devices)
        if len(render_devices) > 0:
            config['render_devices'] = render_devices
    if req.render_devices is not None:
        update_render_devices_in_config(config, req.render_devices)
    if req.ui_open_browser_on_start is not None:
        if 'ui' not in config:
            config['ui'] = {}
        config['ui']['open_browser_on_start'] = req.ui_open_browser_on_start
    try:
        setConfig(config)

        if req.render_devices:
            update_render_threads()

        return JSONResponse({'status': 'OK'}, headers=NOCACHE_HEADERS)
    except Exception as e:
        print(traceback.format_exc())
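Note: a sketch of how a client might exercise the reworked endpoint, assuming the `requests` package and the server's default `localhost:9000` address:

```
import requests  # hypothetical client; the endpoint path and fields come from the hunk above

resp = requests.post('http://localhost:9000/app_config', json={
    'update_branch': 'beta',
    'render_devices': 'cuda:0',          # also accepts a list, 'cpu', or 'auto'
    'ui_open_browser_on_start': False,
})
print(resp.json())  # expected on success: {'status': 'OK'}
```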
@ -279,10 +236,13 @@ def read_web_data(key:str=None):
    elif key == 'app_config':
        config = getConfig(default_val=None)
        if config is None:
            raise HTTPException(status_code=500, detail="Config file is missing or unreadable")
            config = APP_CONFIG_DEFAULTS
        return JSONResponse(config, headers=NOCACHE_HEADERS)
    elif key == 'devices':
        return JSONResponse(task_manager.get_devices(), headers=NOCACHE_HEADERS)
        config = getConfig()
        devices = task_manager.get_devices()
        devices['config'] = config.get('render_devices', "auto")
        return JSONResponse(devices, headers=NOCACHE_HEADERS)
    elif key == 'models':
        return JSONResponse(getModels(), headers=NOCACHE_HEADERS)
    elif key == 'modifiers': return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=NOCACHE_HEADERS)
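Note: the `devices` key now returns the task manager's device report with the configured `render_devices` merged in, defaulting to `"auto"` when the key is absent. A small sketch of that merge, with a hypothetical helper name (the exact shape of `task_manager.get_devices()` is defined elsewhere):

```
def devices_payload(task_devices: dict, config: dict) -> dict:
    # Merge the task manager's device report with the configured render_devices,
    # defaulting to 'auto' when the key is absent, as the hunk above does.
    payload = dict(task_devices)
    payload['config'] = config.get('render_devices', 'auto')
    return payload

# e.g. devices_payload({'active': {'cuda:0': {}}}, {})
# -> {'active': {'cuda:0': {}}, 'config': 'auto'}
```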
@ -315,6 +275,7 @@ def ping(session_id:str=None):
            response['session'] = 'completed'
        else:
            response['session'] = 'pending'
    response['devices'] = task_manager.get_devices()
    return JSONResponse(response, headers=NOCACHE_HEADERS)

def save_model_to_config(ckpt_model_name, vae_model_name):
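Note: since the ping response now carries the device report, a UI poller can read it directly. A sketch, assuming the handler is mounted at `/ping` and the `requests` package is available:

```
import requests  # hypothetical client; route path assumed from the ping() handler above

status = requests.get('http://localhost:9000/ping').json()
print(status.get('session'))  # 'completed' or 'pending'
print(status.get('devices'))  # new: the task manager's device report
```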
@ -330,17 +291,17 @@ def save_model_to_config(ckpt_model_name, vae_model_name):

    setConfig(config)

def update_render_devices_in_config(config, render_devices):
    if render_devices not in ('cpu', 'auto') and not render_devices.startswith('cuda:'):
        raise HTTPException(status_code=400, detail=f'Invalid render device requested: {render_devices}')

    if render_devices.startswith('cuda:'):
        render_devices = render_devices.split(',')

    config['render_devices'] = render_devices

@app.post('/render')
def render(req : task_manager.ImageRequest):
    if req.use_cpu: # TODO Remove after transition.
        print('WARNING Replace {use_cpu: true} by {render_device: "cpu"}')
        req.render_device = 'cpu'
        del req.use_cpu
    if req.render_device != 'cpu':
        req.render_device = int(req.render_device)
    if req.render_device and task_manager.is_alive(req.render_device) <= 0: raise HTTPException(status_code=403, detail=f'{req.render_device} rendering is not enabled in config.json or the thread has died...') # HTTP403 Forbidden
    if req.use_face_correction and task_manager.is_alive(0) <= 0: #TODO Remove when GFPGANer is fixed upstream.
        raise HTTPException(status_code=412, detail=f'GFPGANer only works GPU:0, use CUDA_VISIBLE_DEVICES if GFPGANer is needed on a specific GPU.') # HTTP412 Precondition Failed
    try:
        save_model_to_config(req.use_stable_diffusion_model, req.use_vae_model)
        req.use_stable_diffusion_model = resolve_ckpt_to_use(req.use_stable_diffusion_model)
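Note: the new `update_render_devices_in_config` only accepts `'cpu'`, `'auto'`, or `cuda:`-prefixed strings, splitting comma-joined lists. A standalone sketch of that validation (hypothetical helper name; string input assumed, which is what the code above handles):

```
from fastapi import HTTPException

def normalize_render_devices(render_devices: str):
    # Reject anything that is not 'cpu', 'auto', or a cuda:N device string.
    if render_devices not in ('cpu', 'auto') and not render_devices.startswith('cuda:'):
        raise HTTPException(status_code=400, detail=f'Invalid render device requested: {render_devices}')
    if render_devices.startswith('cuda:'):
        render_devices = render_devices.split(',')  # 'cuda:0,cuda:1' -> ['cuda:0', 'cuda:1']
    return render_devices

# normalize_render_devices('auto')          -> 'auto'
# normalize_render_devices('cuda:0,cuda:1') -> ['cuda:0', 'cuda:1']
# normalize_render_devices('gpu1')          -> raises HTTP 400
```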
@ -394,8 +355,6 @@ def get_image(session_id, img_id):
    if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
    try:
        img_data = task.temp_images[img_id]
        if isinstance(img_data, str):
            return img_data
        img_data.seek(0)
        return StreamingResponse(img_data, media_type='image/jpeg')
    except KeyError as e:
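Note: with the string branch removed, `get_image` always rewinds the buffer and streams it. A minimal sketch of that path with an in-memory buffer (placeholder bytes; the real buffers come from `task.temp_images`):

```
from io import BytesIO
from fastapi.responses import StreamingResponse

buf = BytesIO(b'\xff\xd8\xff')  # placeholder JPEG bytes, not a real image
buf.seek(0)                     # rewind before streaming, as get_image does
response = StreamingResponse(buf, media_type='image/jpeg')
```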
@ -419,45 +378,25 @@ class LogSuppressFilter(logging.Filter):
        return True
logging.getLogger('uvicorn.access').addFilter(LogSuppressFilter())

config = getConfig()

# Start the task_manager
task_manager.default_model_to_load = resolve_ckpt_to_use()
task_manager.default_vae_to_load = resolve_vae_to_use()
if 'render_devices' in config: # Start a new thread for each device.
    if isinstance(config['render_devices'], str):
        config['render_devices'] = config['render_devices'].split(',')
    if not isinstance(config['render_devices'], list):
        raise Exception('Invalid render_devices value in config.')
    for device in config['render_devices']:
        if task_manager.is_alive(device) >= 1:
            print(device, 'already registered.')
            continue
        if not task_manager.start_render_thread(device):
            print(device, 'failed to start.')
    if task_manager.is_alive() <= 0: # No running devices, probably invalid user config.
        print('WARNING: No active render devices after loading config. Validate "render_devices" in config.json')
        print('Loading default render devices to replace invalid render_devices field from config', config['render_devices'])

if task_manager.is_alive() <= 0: # Either no defaults or no devices after loading config.
    # Select best GPU device using free memory, if more than one device.
    if task_manager.start_render_thread('auto'): # Detect best device for renders
        # if cuda:0 is missing, another cuda device is better. try to start it...
        if task_manager.is_alive(0) <= 0 and task_manager.is_alive('cpu') <= 0 and not task_manager.start_render_thread('cuda'):
            print('Failed to start GPU:0...')
    else:
        print('Failed to start gpu device.')
    if task_manager.is_alive('cpu') <= 0 and not task_manager.start_render_thread('cpu'): # Allow CPU to be used for renders
        print('Failed to start CPU render device...')
def update_render_threads():
    config = getConfig()
    render_devices = config.get('render_devices', 'auto')
    active_devices = task_manager.get_devices()['active'].keys()

is_using_a_gpu = (task_manager.is_alive() > task_manager.is_alive('cpu'))
if is_using_a_gpu and task_manager.is_alive(0) <= 0:
    print('WARNING: GFPGANer only works on GPU:0, use CUDA_VISIBLE_DEVICES if GFPGANer is needed on a specific GPU.')
    print('Using CUDA_VISIBLE_DEVICES will remap the selected devices starting at GPU:0 fixing GFPGANer')
    print('Add the line "@set CUDA_VISIBLE_DEVICES=N" where N is the GPUs to use to config.bat')
    print('Add the line "CUDA_VISIBLE_DEVICES=N" where N is the GPUs to use to config.sh')
    print('requesting for render_devices', render_devices)
    task_manager.update_render_threads(render_devices, active_devices)

# print('active devices', task_manager.get_devices())
update_render_threads()

# start the browser ui
import webbrowser; webbrowser.open('http://localhost:9000')
def open_browser():
    config = getConfig()
    ui = config.get('ui', {})
    if ui.get('open_browser_on_start', True):
        import webbrowser; webbrowser.open('http://localhost:9000')

open_browser()
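Note: with `open_browser()` honoring `ui.open_browser_on_start` (default `True`), the auto-launched browser can now be switched off from `config.json`. A sketch of flipping that flag (key names come from the hunk above; the config file's location varies per install, so this assumes the working directory):

```
import json

with open('config.json', 'r+', encoding='utf-8') as f:
    config = json.load(f)
    config.setdefault('ui', {})['open_browser_on_start'] = False
    f.seek(0); json.dump(config, f); f.truncate()
```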