Compare commits


1 Commit

62 changed files with 2063 additions and 4546 deletions

README.md
View File

@ -1,74 +1,43 @@
# Stable Diffusion UI
### Easiest way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.
[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](Troubleshooting.md)
----
## Step 1: Download the installer
# Stable Diffusion UI v2
### A simple 1-click way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer. No dependencies or technical knowledge required.
<p float="left">
<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-win.png" width="200" /></a>
<a href="#installation"><img src="https://github.com/cmdr2/stable-diffusion-ui/raw/develop/media/download-linux.png" width="200" /></a>
</p>
## Step 2: Run the program
- On Windows: Double-click `Start Stable Diffusion UI.cmd`
- On Linux: Run `./start.sh` in a terminal
[![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](Troubleshooting.md)
## Step 3: There is no step 3!
It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.
️‍🔥🎉 **New!** Live Preview, More Samplers, In-Painting, Face Correction (GFPGAN) and Upscaling (RealESRGAN) have been added!
The installer will take care of whatever is needed. A friendly [Discord community](https://discord.com/invite/u9yhsFmEkB) will help you if you face any problems.
This distribution currently uses Stable Diffusion 1.4. Once the model for 1.5 becomes publicly available, the model in this distribution will be updated.
----
# Easy for new users, powerful features for advanced users
### Features:
# Features in the new v2 Version:
- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
- **Clutter-free UI**: a friendly and simple UI, while providing a lot of powerful features
- Supports "*Text to Image*" and "*Image to Image*"
- **Custom Models**: Use your own `.ckpt` file by placing it inside the `models/stable-diffusion` folder! (See the sketch after this feature list.)
- **Live Preview**: See the image as the AI is drawing it
- **Task Queue**: Queue up all your ideas, without waiting for the current task to finish
- **In-Painting**: Specify areas of your image to paint into
- **Face Correction (GFPGAN) and Upscaling (RealESRGAN)**
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **Loopback**: Use the output image as the input image for the next img2img task
- **Negative Prompt**: Specify aspects of the image to *remove*.
- **Attention/Emphasis:** () in the prompt increases the model's attention to enclosed words, and [] decreases it
- **Weighted Prompts:** Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2`
- **Prompt Matrix:** (in beta) Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`
- **In-Painting**
- **Live Preview**: See the image as the AI is drawing it
- **Lots of Samplers:** ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
- **Multiple Prompts File:** Queue multiple prompts by entering one prompt per line, or by running a text file
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **New UI**: with cleaner design
- **Waifu Model Support**: Just replace the `stable-diffusion\sd-v1-4.ckpt` file after installation with the Waifu model
- Supports "*Text to Image*" and "*Image to Image*"
- **NSFW Setting**: A setting in the UI to control *NSFW content*
- **JPEG/PNG output**
- **Save generated images to disk**
- **Use CPU setting**: Run on your CPU if you don't have a compatible graphics card.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- **Low Memory Usage**: Creates 512x512 images with less than 4GB of VRAM!
- **Developer Console**: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
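A minimal sketch of the **Custom Models** step above, assuming the default folder layout created by the installer; `my-custom-model.ckpt` is a hypothetical file name standing in for a checkpoint you've already downloaded:

```bash
# Copy a custom checkpoint into the folder the UI scans for models.
cp ~/Downloads/my-custom-model.ckpt stable-diffusion-ui/models/stable-diffusion/

# If the model doesn't show up in the UI, restart it:
cd stable-diffusion-ui && ./start.sh    # (or "Start Stable Diffusion UI.cmd" on Windows)
```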
### Easy for new users:
![Screenshot of the initial UI](media/shot-v10-simple.jpg?raw=true)
### Powerful features for advanced users:
![Screenshot of advanced settings](media/shot-v10.jpg?raw=true)
### Live Preview
Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.
![Screenshot of live preview](media/shot-v9.jpg?raw=true)
## Live Preview
![live-512](https://user-images.githubusercontent.com/844287/192097249-729a0a1e-a677-485e-9ccc-16a9e848fabe.gif)
### Task Queue
![Screenshot of task queue](media/task-queue-v1.jpg?raw=true)
# System Requirements
1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
3. Minimum 8 GB of RAM.
2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. But if you don't have a compatible graphics card, you can still use it with a "Use CPU" setting. It'll be very slow, but it should still work.
You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.
You do not need anything else. You do not need WSL, Docker or Conda. The installer will take care of it.
# Installation
1. **Download** [for Windows](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.16/stable-diffusion-ui-win64.zip) or [for Linux](https://github.com/cmdr2/stable-diffusion-ui/releases/download/v2.16/stable-diffusion-ui-linux.tar.xz).
@ -85,19 +54,49 @@ This will automatically install Stable Diffusion, set it up, and start the inter
**To Uninstall:** Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
# How to use?
Please use our [guide](https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use) to understand how to use the features in this UI.
# Usage
Open http://localhost:9000 in your browser (after running step 3 previously). It may take a few moments for the back-end to be ready.
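If you're scripting around the UI, here's a small sketch (assuming the default port 9000 mentioned above) that waits for the back-end before opening the page:

```bash
# Poll until the back-end at http://localhost:9000 starts responding.
until curl -s -o /dev/null http://localhost:9000; do
    echo "Waiting for the Stable Diffusion UI back-end to start..."
    sleep 2
done
echo "Ready - open http://localhost:9000 in your browser."
```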
## With a text description
1. Enter a text prompt, like `a photograph of an astronaut riding a horse` in the textbox.
2. Press `Make Image`. This will take some time, depending on your system's processing power.
3. See the image generated using your prompt.
## With an image
1. Click `Browse..` next to `Initial Image`. Select your desired image.
2. An optional text prompt can help you further describe the kind of image you want to generate.
3. Press `Make Image`. See the image generated using your prompt.
You can use Face Correction or Upscaling to improve the image further.
**Pro tip:** You can also click `Use as Input` on a generated image, to use it as the input image for your next generation. This can be useful for sequentially refining the generated image with a single click.
**Another tip:** Images with the same aspect ratio as your generated image work best, e.g. 1:1 if you're generating images sized 512x512.
## Problems? Troubleshooting
Please try the common [troubleshooting](Troubleshooting.md) steps. If that doesn't fix it, please ask on the [discord server](https://discord.com/invite/u9yhsFmEkB), or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
# Image Settings
You can also set the configuration like `seed`, `width`, `height`, `num_outputs`, `num_inference_steps` and `guidance_scale` using the 'show' button next to 'Image settings'.
Use the same `seed` number to get the same image for a certain prompt. This is useful for refining a prompt without losing the basic image design. Enable the `random images` checkbox to get random images.
![Screenshot of image settings](media/config-v6.jpg?raw=true)
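The same settings can be sent to the back-end directly if you want to reproduce an image from a script. This is only a sketch: the field names below are the ones listed in this section, but the endpoint path (`/image`) and the exact JSON shape are assumptions, so check your running server before relying on it.

```bash
# Hypothetical request to the local back-end; re-running with the same seed
# (and the same settings) should reproduce the same image.
curl -X POST http://localhost:9000/image \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "a photograph of an astronaut riding a horse",
        "seed": 42,
        "width": 512,
        "height": 512,
        "num_outputs": 1,
        "num_inference_steps": 50,
        "guidance_scale": 7.5
      }'
```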
# System Settings
The system settings are reachable via the cogwheel symbol on the top right. They can be used to configure whether all generated images should be saved automatically, or to tune the Stable Diffusion image generation.
![Screenshot of system settings](media/system-settings-v2.jpg?raw=true)
# Image Modifiers
![Screenshot of image modifiers](media/modifiers-v1.jpg?raw=true)
# Bug reports and code contributions welcome
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
We could really use help on these aspects (click to view tasks that need your help):
* [User Interface](https://github.com/users/cmdr2/projects/1/views/1)
* [Engine](https://github.com/users/cmdr2/projects/3/views/1)
* [Installer](https://github.com/users/cmdr2/projects/4/views/1)
* [Documentation](https://github.com/users/cmdr2/projects/5/views/1)
If you have any code contributions in mind, please feel free to say Hi to us on the [discord server](https://discord.com/invite/u9yhsFmEkB). We use the Discord server for development-related discussions, and for helping users.
Also, please feel free to submit a pull request, if you have any code contributions in mind. Join the [discord server](https://discord.com/invite/u9yhsFmEkB) for development-related discussions, and for helping other users.
# Disclaimer
The authors of this project are not responsible for any content generated using this interface.

View File

@ -1 +1,46 @@
Moved to https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting
Common issues and their solutions. If these solutions don't work, please feel free to ask at the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
## RuntimeError: CUDA out of memory
This can happen if your PC has less than 6GB of VRAM.
Try disabling the "Turbo mode" setting under "Advanced Settings", since that takes an additional 1 GB of VRAM (to increase the speed).
Additionally, a common reason for this error is that you're using an initial image larger than 768x768 pixels. Try using a smaller initial image.
Also try generating smaller sized images.
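To see how much VRAM you actually have (and how much is already in use), you can check with `nvidia-smi`, which ships with the NVIDIA driver:

```bash
# Show total and currently used VRAM for each GPU.
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```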
## No ldm found, or antlr4 or any other missing module, or ClobberError: This transaction has incompatible packages due to a shared path
On Windows, please ensure that you have placed the `stable-diffusion-ui` folder (after unzipping) at the root of C: or D: (or any drive), e.g. `C:\stable-diffusion-ui`. **Note:** This has to be done **before** you start the installation process. If you have already installed (and are facing this error), please delete the installed folder, and start fresh by unzipping and placing the folder at the top of your drive.
This error can also be caused if you already have conda/miniconda/anaconda installed, due to package conflicts. Please open your Anaconda Prompt, and run `conda clean --all` to clean up unused packages.
If nothing works, this could be due to a corrupted installation. Please try reinstalling this, by deleting the installed folder, and unzipping from the downloaded zip file.
## Killed uvicorn server:app --app-dir ... --port 9000 --host 0.0.0.0
This happens if your PC ran out of RAM. Stable Diffusion requires a lot of RAM, and needs at least 10 GB to work well. Try closing all other applications before running Stable Diffusion UI.
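On Linux you can quickly check whether RAM is the bottleneck before and while the server loads (Windows users can use the Task Manager's Performance tab instead):

```bash
# Show total, used, and available system RAM in a human-readable form.
free -h

# Or watch memory usage live while Stable Diffusion is loading:
watch -n 2 free -h
```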
## Green image generated
This usually happens if you're running an NVIDIA 1650 or 1660 Super. To solve this, please close and re-run the Stable Diffusion UI on your computer. If you're using the older Docker-based solution (v1), please upgrade to v2: https://github.com/cmdr2/stable-diffusion-ui/tree/v2#installation
If you're still seeing this error, please try enabling "Full Precision" under "Advanced Settings" in the Stable Diffusion UI.
## './docker-compose.yml' is invalid:
> ERROR: The Compose file './docker-compose.yml' is invalid because:
> services.stability-ai.deploy.resources.reservations value Additional properties are not allowed ('devices' was unexpected)
Please ensure you have `docker-compose` version 1.29 or higher. Check `docker-compose --version`, and if required [update it to 1.29](https://docs.docker.com/compose/install/). (Thanks [HVRyan](https://github.com/HVRyan))
## RuntimeError: Found no NVIDIA driver on your system:
If you have an NVIDIA GPU and the latest [NVIDIA driver](http://www.nvidia.com/Download/index.aspx), please ensure that you've installed [nvidia-container-toolkit](https://stackoverflow.com/a/58432877). (Thanks [u/exintrovert420](https://www.reddit.com/user/exintrovert420/))
## Some other process is already running at port 9000 / port 9000 could not be bound
You can override the port used. Please change `docker-compose.yml` inside the project directory, and update the line `9000:9000` to `1337:9000` (where 1337 is whichever port number you want).
After doing this, please restart your server, by running `./server restart`.
After this, you can access the server at `http://localhost:1337` (where 1337 is the new port you specified earlier).
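A minimal sketch of that port change, assuming GNU `sed` and that `docker-compose.yml` sits in the project directory:

```bash
# Map host port 1337 (or any other free port) to the container's port 9000.
sed -i 's/9000:9000/1337:9000/' docker-compose.yml

# Restart the server so the new mapping takes effect.
./server restart

# The UI is now reachable at http://localhost:1337
```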
## RuntimeError: CUDA error: unknown error
Please ensure that you have an NVIDIA GPU and the latest [NVIDIA driver](http://www.nvidia.com/Download/index.aspx), and that you've installed [nvidia-container-toolkit](https://stackoverflow.com/a/58432877).
Also, if you are using WSL (Windows), please ensure you have the latest WSL kernel by running `wsl --shutdown` and then `wsl --update`. (Thanks [AndrWeisR](https://github.com/AndrWeisR))

View File

@ -8,42 +8,40 @@
set /p answer=Are you a developer of this project (Y/N)?
if /i "%answer:~,1%" NEQ "Y" exit /b
mkdir dist\win\stable-diffusion-ui\scripts
@REM mkdir dist\linux-mac\stable-diffusion-ui\scripts
@set PYTHONNOUSERSITE=1
@rem copy the installer files for Windows
@mkdir dist\stable-diffusion-ui
copy scripts\on_env_start.bat dist\win\stable-diffusion-ui\scripts\
copy scripts\bootstrap.bat dist\win\stable-diffusion-ui\scripts\
copy "scripts\Start Stable Diffusion UI.cmd" dist\win\stable-diffusion-ui\
copy LICENSE dist\win\stable-diffusion-ui\
copy "CreativeML Open RAIL-M License" dist\win\stable-diffusion-ui\
copy "How to install and run.txt" dist\win\stable-diffusion-ui\
echo. > dist\win\stable-diffusion-ui\scripts\install_status.txt
@echo "Downloading components for the installer.."
@rem copy the installer files for Linux and Mac
@call conda env create --prefix installer -f environment.yaml
@call conda activate .\installer
@REM copy scripts\on_env_start.sh dist\linux-mac\stable-diffusion-ui\scripts\
@REM copy scripts\bootstrap.sh dist\linux-mac\stable-diffusion-ui\scripts\
@REM copy scripts\start.sh dist\linux-mac\stable-diffusion-ui\
@REM copy LICENSE dist\linux-mac\stable-diffusion-ui\
@REM copy "CreativeML Open RAIL-M License" dist\linux-mac\stable-diffusion-ui\
@REM copy "How to install and run.txt" dist\linux-mac\stable-diffusion-ui\
@REM echo. > dist\linux-mac\stable-diffusion-ui\scripts\install_status.txt
@echo "Creating a distributable package.."
@rem make the zip
@call conda install -c conda-forge -y conda-pack
@call conda pack --n-threads -1 --prefix installer --format tar
cd dist\win
call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-win-x64.zip
cd ..\..
@cd dist\stable-diffusion-ui
@mkdir installer
@REM cd dist\linux-mac
@REM call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-linux-x64.zip
@REM call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-linux-arm64.zip
@REM call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-mac-x64.zip
@REM call powershell Compress-Archive -Path stable-diffusion-ui -DestinationPath ..\stable-diffusion-ui-mac-arm64.zip
@REM cd ..\..
@call tar -xf ..\..\installer.tar -C installer
echo "Build ready. Upload the zip files inside the 'dist' folder."
@mkdir scripts
pause
@copy ..\..\scripts\on_env_start.bat scripts\
@copy "..\..\scripts\Start Stable Diffusion UI.cmd" .
@copy ..\..\LICENSE .
@copy "..\..\CreativeML Open RAIL-M License" .
@copy "..\..\How to install and run.txt" .
@echo. > scripts\install_status.txt
@echo "Build ready. Zip the 'dist\stable-diffusion-ui' folder."
@echo "Cleaning up.."
@cd ..\..
@rmdir /s /q installer
@del installer.tar

View File

@ -11,40 +11,45 @@ case $yn in
* ) exit;;
esac
# mkdir -p dist/win/stable-diffusion-ui/scripts
mkdir -p dist/linux-mac/stable-diffusion-ui/scripts
export PYTHONNOUSERSITE=1
# copy the installer files for Windows
mkdir -p dist/stable-diffusion-ui
# cp scripts/on_env_start.bat dist/win/stable-diffusion-ui/scripts/
# cp scripts/bootstrap.bat dist/win/stable-diffusion-ui/scripts/
# cp "scripts/Start Stable Diffusion UI.cmd" dist/win/stable-diffusion-ui/
# cp LICENSE dist/win/stable-diffusion-ui/
# cp "CreativeML Open RAIL-M License" dist/win/stable-diffusion-ui/
# cp "How to install and run.txt" dist/win/stable-diffusion-ui/
# echo "" > dist/win/stable-diffusion-ui/scripts/install_status.txt
echo "Downloading components for the installer.."
# copy the installer files for Linux and Mac
source ~/miniconda3/etc/profile.d/conda.sh
cp scripts/on_env_start.sh dist/linux-mac/stable-diffusion-ui/scripts/
cp scripts/bootstrap.sh dist/linux-mac/stable-diffusion-ui/scripts/
cp scripts/start.sh dist/linux-mac/stable-diffusion-ui/
cp LICENSE dist/linux-mac/stable-diffusion-ui/
cp "CreativeML Open RAIL-M License" dist/linux-mac/stable-diffusion-ui/
cp "How to install and run.txt" dist/linux-mac/stable-diffusion-ui/
echo "" > dist/linux-mac/stable-diffusion-ui/scripts/install_status.txt
conda install -c conda-forge -y conda-pack
# make the zip
conda env create --prefix installer -f environment.yaml
conda activate ./installer
# cd dist/win
# zip -r ../stable-diffusion-ui-win-x64.zip stable-diffusion-ui
# cd ../..
echo "Creating a distributable package.."
conda pack --n-threads -1 --prefix installer --format tar
cd dist/stable-diffusion-ui
mkdir installer
tar -xf ../../installer.tar -C installer
mkdir scripts
cp ../../scripts/on_env_start.sh scripts/
cp ../../scripts/start.sh .
cp ../../LICENSE .
cp "../../CreativeML Open RAIL-M License" .
cp "../../How to install and run.txt" .
echo "" > scripts/install_status.txt
chmod u+x start.sh
echo "Build ready. Zip the 'dist/stable-diffusion-ui' folder."
echo "Cleaning up.."
cd dist/linux-mac
zip -r ../stable-diffusion-ui-linux-x64.zip stable-diffusion-ui
zip -r ../stable-diffusion-ui-linux-arm64.zip stable-diffusion-ui
zip -r ../stable-diffusion-ui-mac-x64.zip stable-diffusion-ui
zip -r ../stable-diffusion-ui-mac-arm64.zip stable-diffusion-ui
cd ../..
echo "Build ready. Upload the zip files inside the 'dist' folder."
rm -rf installer
rm installer.tar

Binary image file removed (was 56 KiB)

Binary image file removed (was 139 KiB)

Binary image file removed (was 113 KiB)

Binary image file removed (was 155 KiB)

View File

@ -1,32 +0,0 @@
@echo off
echo "Opening Stable Diffusion UI - Developer Console.." & echo.
set PATH=C:\Windows\System32;%PATH%
@rem set legacy and new installer's PATH, if they exist
if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
if exist "installer_files\env" set PATH=%cd%\installer_files\env;%cd%\installer_files\env\Library\bin;%cd%\installer_files\env\Scripts;%cd%\installer_files\Library\usr\bin;%PATH%
@rem activate the installer env
call conda activate
@rem Test the environment
echo "Environment Info:"
call where git
call git --version
call where conda
call conda --version
echo.
@rem activate the environment
call conda activate .\stable-diffusion\env
call where python
call python --version
echo.
cmd /k

View File

@ -1,24 +1,19 @@
@echo off
set PATH=C:\Windows\System32;%PATH%
@REM Delete the post-activate hook from the old installer
if exist "installer\etc\conda\activate.d\post_activate.bat" (
echo. > installer\etc\conda\activate.d\post_activate.bat
)
@rem set legacy installer's PATH, if it exists
if exist "installer" set PATH=%cd%\installer;%cd%\installer\Library\bin;%cd%\installer\Scripts;%cd%\installer\Library\usr\bin;%PATH%
@call installer\Scripts\activate.bat
@rem Setup the packages required for the installer
call scripts\bootstrap.bat
@call conda-unpack
@rem set new installer's PATH, if it downloaded any packages
if exist "installer_files\env" set PATH=%cd%\installer_files\env;%cd%\installer_files\env\Library\bin;%cd%\installer_files\env\Scripts;%cd%\installer_files\Library\usr\bin;%PATH%
@call conda --version
@call git --version
@rem Test the bootstrap
call where git
call git --version
@cd installer
call where conda
call conda --version
@rem Download the rest of the installer and UI
call scripts\on_env_start.bat
@call ..\scripts\on_env_start.bat
@pause

View File

@ -1,55 +0,0 @@
@echo off
@rem This script will install git and conda (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git and conda, this step will be skipped.
@rem This enables a user to install this project without manually installing conda and git.
@rem config
set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env
set LEGACY_INSTALL_ENV_DIR=%cd%\installer
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
@rem figure out whether git and conda needs to be installed
if exist "%INSTALL_ENV_DIR%" set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
set PACKAGES_TO_INSTALL=
if not exist "%LEGACY_INSTALL_ENV_DIR%\etc\profile.d\conda.sh" (
if not exist "%INSTALL_ENV_DIR%\etc\profile.d\conda.sh" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% conda
)
call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git
@rem (if necessary) install git and conda into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
@rem download micromamba
if not exist "%MAMBA_ROOT_PREFIX%\micromamba.exe" (
echo "Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to %MAMBA_ROOT_PREFIX%\micromamba.exe"
mkdir "%MAMBA_ROOT_PREFIX%"
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > "%MAMBA_ROOT_PREFIX%\micromamba.exe"
@rem test the mamba binary
echo Micromamba version:
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" --version
)
@rem create the installer env
if not exist "%INSTALL_ENV_DIR%" (
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" create -y --prefix "%INSTALL_ENV_DIR%"
)
echo "Packages to install:%PACKAGES_TO_INSTALL%"
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%
if not exist "%INSTALL_ENV_DIR%" (
echo "There was a problem while installing%PACKAGES_TO_INSTALL% using micromamba. Cannot continue."
pause
exit /b
)
)

View File

@ -1,70 +0,0 @@
#!/bin/bash
# This script will install git and conda (if not found on the PATH variable)
# using micromamba (an 8mb static-linked single-file binary, conda replacement).
# For users who already have git and conda, this step will be skipped.
# This enables a user to install this project without manually installing conda and git.
OS_NAME=$(uname -s)
case "${OS_NAME}" in
Linux*) OS_NAME="linux";;
Darwin*) OS_NAME="osx";;
*) echo "Unknown OS: $OS_NAME! This script runs only on Linux or Mac" && exit
esac
OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
x86_64*) OS_ARCH="64";;
arm64*) OS_ARCH="arm64";;
*) echo "Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64" && exit
esac
# https://mamba.readthedocs.io/en/latest/installation.html
if [ "$OS_NAME" == "linux" ] && [ "$OS_ARCH" == "arm64" ]; then OS_ARCH="aarch64"; fi
# config
export MAMBA_ROOT_PREFIX="$(pwd)/installer_files/mamba"
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
LEGACY_INSTALL_ENV_DIR="$(pwd)/installer"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${OS_NAME}-${OS_ARCH}/latest"
# figure out whether git and conda needs to be installed
if [ -e "$INSTALL_ENV_DIR" ]; then export PATH="$INSTALL_ENV_DIR/bin:$PATH"; fi
PACKAGES_TO_INSTALL=""
if [ ! -e "$LEGACY_INSTALL_ENV_DIR/etc/profile.d/conda.sh" ] && [ ! -e "$INSTALL_ENV_DIR/etc/profile.d/conda.sh" ]; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL conda"; fi
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
# download micromamba
if [ ! -e "$MAMBA_ROOT_PREFIX/micromamba" ]; then
echo "Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to $MAMBA_ROOT_PREFIX/micromamba"
mkdir -p "$MAMBA_ROOT_PREFIX"
curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvj bin/micromamba -O > "$MAMBA_ROOT_PREFIX/micromamba"
chmod u+x "$MAMBA_ROOT_PREFIX/micromamba"
# test the mamba binary
echo "Micromamba version:"
"$MAMBA_ROOT_PREFIX/micromamba" --version
fi
# create the installer env
if [ ! -e "$INSTALL_ENV_DIR" ]; then
"$MAMBA_ROOT_PREFIX/micromamba" create -y --prefix "$INSTALL_ENV_DIR"
fi
echo "Packages to install:$PACKAGES_TO_INSTALL"
"$MAMBA_ROOT_PREFIX/micromamba" install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge $PACKAGES_TO_INSTALL
if [ ! -e "$INSTALL_ENV_DIR" ]; then
echo "There was a problem while installing$PACKAGES_TO_INSTALL using micromamba. Cannot continue."
exit
fi
fi

View File

@ -1,39 +0,0 @@
#!/bin/bash
if [ "$0" == "bash" ]; then
echo "Opening Stable Diffusion UI - Developer Console.."
echo ""
# set legacy and new installer's PATH, if they exist
if [ -e "installer" ]; then export PATH="$(pwd)/installer/bin:$PATH"; fi
if [ -e "installer_files/env" ]; then export PATH="$(pwd)/installer_files/env/bin:$PATH"; fi
# activate the installer env
CONDA_BASEPATH=$(conda info --base)
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # avoids the 'shell not initialized' error
conda activate
# test the environment
echo "Environment Info:"
which git
git --version
which conda
conda --version
echo ""
# activate the environment
CONDA_BASEPATH=$(conda info --base)
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script)
conda activate ./stable-diffusion/env
which python
python --version
echo ""
else
bash --init-file developer_console.sh
fi

View File

@ -4,6 +4,8 @@
set PATH=C:\Windows\System32;%PATH%
@cd ..
if exist "scripts\config.bat" (
@call scripts\config.bat
)
@ -12,7 +14,7 @@ if "%update_branch%"=="" (
set update_branch=main
)
@>nul findstr /m "conda_sd_ui_deps_installed" scripts\install_status.txt
@>nul grep -c "conda_sd_ui_deps_installed" scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
for /f "tokens=*" %%a in ('python -c "import os; parts = os.getcwd().split(os.path.sep); print(len(parts))"') do if "%%a" NEQ "2" (
echo. & echo "!!!! WARNING !!!!" & echo.
@ -26,14 +28,14 @@ if "%update_branch%"=="" (
)
)
@>nul findstr /m "sd_ui_git_cloned" scripts\install_status.txt
@>nul grep -c "sd_ui_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Stable Diffusion UI's git repository was already installed. Updating from %update_branch%.."
@cd sd-ui-files
@call git reset --hard
@call git -c advice.detachedHead=false checkout "%update_branch%"
@call git checkout "%update_branch%"
@call git pull
@cd ..
@ -44,7 +46,7 @@ if "%update_branch%"=="" (
@call git clone -b "%update_branch%" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files && (
@echo sd_ui_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
@echo "Error downloading Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
@exit /b
)
@ -52,9 +54,7 @@ if "%update_branch%"=="" (
@xcopy sd-ui-files\ui ui /s /i /Y
@copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
@copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y
@copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
@copy "sd-ui-files\scripts\Developer Console.cmd" . /Y
@call scripts\on_sd_start.bat

View File

@ -16,7 +16,7 @@ if [ -f "scripts/install_status.txt" ] && [ `grep -c sd_ui_git_cloned scripts/in
cd sd-ui-files
git reset --hard
git -c advice.detachedHead=false checkout "$update_branch"
git checkout "$update_branch"
git pull
cd ..
@ -27,7 +27,7 @@ else
if git clone -b "$update_branch" https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then
echo sd_ui_git_cloned >> scripts/install_status.txt
else
printf "\n\nError downloading Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@ -36,9 +36,7 @@ fi
rm -rf ui
cp -Rf sd-ui-files/ui .
cp sd-ui-files/scripts/on_sd_start.sh scripts/
cp sd-ui-files/scripts/bootstrap.sh scripts/
cp sd-ui-files/scripts/start.sh .
cp sd-ui-files/scripts/developer_console.sh .
./scripts/on_sd_start.sh

View File

@ -1,24 +1,13 @@
@echo off
@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y
@copy sd-ui-files\scripts\bootstrap.bat scripts\ /Y
if exist "%cd%\profile" (
set USERPROFILE=%cd%\profile
)
@rem activate the installer env
call conda activate
@REM Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
@REM Note to self: Please rewrite this in Python. For the sake of your own sanity.
@REM remove the old version of the dev console script, if it's still present
if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
@>nul findstr /m "sd_git_cloned" scripts\install_status.txt
@>nul grep -c "sd_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Stable Diffusion's git repository was already installed. Updating.."
@ -26,35 +15,37 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@call git reset --hard
@call git pull
@call git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git checkout d87bd29a6862996d8a0980c1343b6f0d4eb718b4
@call git apply ..\ui\sd_internal\ddim_callback.patch
@call git apply ..\ui\sd_internal\env_yaml.patch
@REM @call git apply ..\ui\sd_internal\ddim_callback.patch
@REM @call git apply ..\ui\sd_internal\env_yaml.patch
@call git apply ..\ui\sd_internal\custom_sd.patch
@cd ..
) else (
@echo. & echo "Downloading Stable Diffusion.." & echo.
@call git clone https://github.com/basujindal/stable-diffusion.git && (
@call git clone https://github.com/invoke-ai/InvokeAI.git stable-diffusion && (
@echo sd_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
@echo "Error downloading Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
@exit /b
)
@cd stable-diffusion
@call git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
@call git checkout d87bd29a6862996d8a0980c1343b6f0d4eb718b4
@call git apply ..\ui\sd_internal\ddim_callback.patch
@call git apply ..\ui\sd_internal\env_yaml.patch
@REM @call git apply ..\ui\sd_internal\ddim_callback.patch
@REM @call git apply ..\ui\sd_internal\env_yaml.patch
@call git apply ..\ui\sd_internal\custom_sd.patch
@cd ..
)
@cd stable-diffusion
@>nul findstr /m "conda_sd_env_created" ..\scripts\install_status.txt
@>nul grep -c "conda_sd_env_created" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for Stable Diffusion were already installed"
@ -67,12 +58,8 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@REM prevent conda from using packages from the user's home directory, to avoid conflicts
@set PYTHONNOUSERSITE=1
set USERPROFILE=%cd%\profile
set TMP=%cd%\tmp
set TEMP=%cd%\tmp
@call conda env create --prefix env -f environment.yaml || (
@echo. & echo "Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@ -80,13 +67,13 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
@call conda activate .\env
@call conda install -c conda-forge -y --prefix env antlr4-python3-runtime=4.8 || (
@echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@ -96,67 +83,7 @@ if exist "Open Developer Console.cmd" del "Open Developer Console.cmd"
set PATH=C:\Windows\System32;%PATH%
@>nul findstr /m "conda_sd_gfpgan_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for GFPGAN (Face Correction) were already installed"
) else (
@echo. & echo "Downloading packages necessary for GFPGAN (Face Correction).." & echo.
@set PYTHONNOUSERSITE=1
set USERPROFILE=%cd%\profile
set TMP=%cd%\tmp
set TEMP=%cd%\tmp
@call pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN || (
@echo. & echo "Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@call pip install basicsr==1.4.2 || (
@echo. & echo "Error installing the basicsr package necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from gfpgan import GFPGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_gfpgan_deps_installed >> ..\scripts\install_status.txt
)
@>nul findstr /m "conda_sd_esrgan_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Packages necessary for ESRGAN (Resolution Upscaling) were already installed"
) else (
@echo. & echo "Downloading packages necessary for ESRGAN (Resolution Upscaling).." & echo.
@set PYTHONNOUSERSITE=1
set USERPROFILE=%cd%\profile
set TMP=%cd%\tmp
set TEMP=%cd%\tmp
@call pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan || (
@echo. & echo "Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
for /f "tokens=*" %%a in ('python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"') do if "%%a" NEQ "42" (
@echo. & echo "Dependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@echo conda_sd_esrgan_deps_installed >> ..\scripts\install_status.txt
)
@>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
echo "Packages necessary for Stable Diffusion UI were already installed"
) else (
@ -164,35 +91,28 @@ set PATH=C:\Windows\System32;%PATH%
@set PYTHONNOUSERSITE=1
set USERPROFILE=%cd%\profile
set TMP=%cd%\tmp
set TEMP=%cd%\tmp
@call conda install -c conda-forge -y --prefix env uvicorn fastapi || (
echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!"
pause
exit /b
)
)
call WHERE uvicorn > .tmp
@>nul findstr /m "uvicorn" .tmp
@>nul grep -c "uvicorn" .tmp
@if "%ERRORLEVEL%" NEQ "0" (
@echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@>nul findstr /m "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
)
if not exist "..\models\stable-diffusion" mkdir "..\models\stable-diffusion"
echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" (
echo "Data files (weights) necessary for Stable Diffusion were already downloaded. Using the HuggingFace 4 GB Model."
@ -213,17 +133,17 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@if not exist "sd-v1-4.ckpt" (
@echo. & echo "Downloading data files (weights) for Stable Diffusion.." & echo.
@call curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
@call curl -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
@if exist "sd-v1-4.ckpt" (
for %%I in ("sd-v1-4.ckpt") do if "%%~zI" NEQ "4265380512" (
echo. & echo "Error: The downloaded model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@ -243,17 +163,17 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@if not exist "GFPGANv1.3.pth" (
@echo. & echo "Downloading data files (weights) for GFPGAN (Face Correction).." & echo.
@call curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth
@call curl -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth
@if exist "GFPGANv1.3.pth" (
for %%I in ("GFPGANv1.3.pth") do if "%%~zI" NEQ "348632874" (
echo. & echo "Error: The downloaded GFPGAN model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@ -273,17 +193,17 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@if not exist "RealESRGAN_x4plus.pth" (
@echo. & echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus.." & echo.
@call curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth
@call curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth
@if exist "RealESRGAN_x4plus.pth" (
for %%I in ("RealESRGAN_x4plus.pth") do if "%%~zI" NEQ "67040989" (
echo. & echo "Error: The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
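For reference, the installer hunks above all follow the same download-and-verify idiom: fetch the weights with a resumable curl call (-C - plus the retry flags) and refuse to continue unless the file's byte size matches the expected value. Below is a minimal bash sketch of that idiom; the helper name, URL and the example call are placeholders rather than values from the installer, and the size lookup uses stat instead of the batch %%~zI / find -printf forms used in the scripts.

    # Download with resume + retries, then verify the byte size (sketch only).
    download_and_verify() {
        local url="$1" out="$2" expected_size="$3"
        # -C - resumes a partial download; --retry-all-errors needs a fairly recent curl
        curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k "$url" -o "$out" || return 1
        local actual_size
        actual_size=$(stat -c%s "$out" 2>/dev/null || stat -f%z "$out")
        if [ "$actual_size" != "$expected_size" ]; then
            echo "Error: downloaded file was invalid! Bytes downloaded: $actual_size" >&2
            return 1
        fi
    }

    # e.g.: download_and_verify "https://example.com/model.pth" "model.pth" 67040989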
@@ -303,17 +223,17 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@if not exist "RealESRGAN_x4plus_anime_6B.pth" (
@echo. & echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime.." & echo.
@call curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth
@call curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth
@if exist "RealESRGAN_x4plus_anime_6B.pth" (
for %%I in ("RealESRGAN_x4plus_anime_6B.pth") do if "%%~zI" NEQ "17938799" (
echo. & echo "Error: The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: %%~zI" & echo.
echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
) else (
@echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
@echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
pause
exit /b
)
@@ -321,7 +241,7 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@>nul findstr /m "sd_install_complete" ..\scripts\install_status.txt
@>nul grep -c "sd_install_complete" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" NEQ "0" (
@echo sd_weights_downloaded >> ..\scripts\install_status.txt
@echo sd_install_complete >> ..\scripts\install_status.txt
@@ -336,13 +256,12 @@ echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt"
@cd ..\..\..
@echo PYTHONPATH=%PYTHONPATH%
call where python
call python --version
@cd ..
@set SD_UI_PATH=%cd%\ui
@cd stable-diffusion
@call python --version
@uvicorn server:app --app-dir "%SD_UI_PATH%" --port 9000 --host 0.0.0.0
@pause


@@ -1,18 +1,8 @@
#!/bin/bash
cp sd-ui-files/scripts/on_env_start.sh scripts/
cp sd-ui-files/scripts/bootstrap.sh scripts/
# activate the installer env
CONDA_BASEPATH=$(conda info --base)
source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # avoids the 'shell not initialized' error
conda activate
# remove the old version of the dev console script, if it's still present
if [ -e "open_dev_console.sh" ]; then
rm "open_dev_console.sh"
fi
source installer/etc/profile.d/conda.sh
python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
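The conda activation above works by sourcing conda's shell hook (etc/profile.d/conda.sh under the base install) before calling `conda activate`, which is what avoids the "shell not initialized" error in a non-interactive script. A minimal, self-contained sketch of that idiom (the env prefix in the trailing comment is illustrative, not from the installer):

    CONDA_BASEPATH=$(conda info --base)
    source "$CONDA_BASEPATH/etc/profile.d/conda.sh"   # registers the `conda` shell function
    conda activate                                    # base env; or e.g. `conda activate ./env`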
@@ -26,7 +16,7 @@ if [ -e "scripts/install_status.txt" ] && [ `grep -c sd_git_cloned scripts/insta
git reset --hard
git pull
git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply ../ui/sd_internal/ddim_callback.patch
git apply ../ui/sd_internal/env_yaml.patch
@@ -38,13 +28,13 @@ else
if git clone https://github.com/basujindal/stable-diffusion.git ; then
echo sd_git_cloned >> scripts/install_status.txt
else
printf "\n\nError downloading Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
cd stable-diffusion
git -c advice.detachedHead=false checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git checkout f6cfebffa752ee11a7b07497b8529d5971de916c
git apply ../ui/sd_internal/ddim_callback.patch
git apply ../ui/sd_internal/env_yaml.patch
@@ -68,7 +58,7 @@ else
if conda env create --prefix env --force -f environment.yaml ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -78,14 +68,14 @@ else
if conda install -c conda-forge --prefix ./env -y antlr4-python3-runtime=4.8 ; then
echo "Installed. Testing.."
else
printf "\n\nError installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
out_test=`python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"`
if [ "$out_test" != "42" ]; then
printf "\n\nDependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nDependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
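The dependency check above follows a simple pattern: run a throwaway python that imports each freshly installed package and prints a sentinel, then compare the captured output. A small bash helper generalizing that pattern (the helper name and structure are illustrative and not part of the installer):

    check_imports() {
        # Build "import a; import b; ...; print(42)" and run it in a child python.
        local sentinel=42 out
        out=$(python -c "import $(echo "$*" | sed 's/ /; import /g'); print($sentinel)" 2>/dev/null)
        [ "$out" = "$sentinel" ]
    }

    if ! check_imports torch ldm transformers numpy antlr4; then
        echo "Dependency test failed: one or more packages did not import cleanly." >&2
        exit 1
    fi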
@@ -103,14 +93,14 @@ else
if pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
out_test=`python -c "from gfpgan import GFPGANer; print(42)"`
if [ "$out_test" != "42" ]; then
printf "\n\nDependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nDependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -128,14 +118,14 @@ else
if pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
out_test=`python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"`
if [ "$out_test" != "42" ]; then
printf "\n\nDependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nDependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -153,13 +143,13 @@ else
if conda install -c conda-forge --prefix ./env -y uvicorn fastapi ; then
echo "Installed. Testing.."
else
printf "\n\nError installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
if ! command -v uvicorn &> /dev/null; then
printf "\n\nUI packages not found! Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nUI packages not found! Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -169,11 +159,8 @@ fi
mkdir -p "../models/stable-diffusion"
echo "" > "../models/stable-diffusion/Put your custom ckpt files here.txt"
if [ -f "sd-v1-4.ckpt" ]; then
model_size=`find "sd-v1-4.ckpt" -printf "%s"`
model_size=`ls -l sd-v1-4.ckpt | awk '{print $5}'`
if [ "$model_size" -eq "4265380512" ] || [ "$model_size" -eq "7703807346" ] || [ "$model_size" -eq "7703810927" ]; then
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
@@ -186,18 +173,18 @@ fi
if [ ! -f "sd-v1-4.ckpt" ]; then
echo "Downloading data files (weights) for Stable Diffusion.."
curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
curl -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
if [ -f "sd-v1-4.ckpt" ]; then
model_size=`find "sd-v1-4.ckpt" -printf "%s"`
model_size=`ls -l sd-v1-4.ckpt | awk '{print $5}'`
if [ ! "$model_size" == "4265380512" ]; then
printf "\n\nError: The downloaded model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -205,7 +192,7 @@ fi
if [ -f "GFPGANv1.3.pth" ]; then
model_size=`find "GFPGANv1.3.pth" -printf "%s"`
model_size=`ls -l GFPGANv1.3.pth | awk '{print $5}'`
if [ "$model_size" -eq "348632874" ]; then
echo "Data files (weights) necessary for GFPGAN (Face Correction) were already downloaded"
@@ -218,18 +205,18 @@ fi
if [ ! -f "GFPGANv1.3.pth" ]; then
echo "Downloading data files (weights) for GFPGAN (Face Correction).."
curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth
curl -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth
if [ -f "GFPGANv1.3.pth" ]; then
model_size=`find "GFPGANv1.3.pth" -printf "%s"`
model_size=`ls -l GFPGANv1.3.pth | awk '{print $5}'`
if [ ! "$model_size" -eq "348632874" ]; then
printf "\n\nError: The downloaded GFPGAN model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -237,7 +224,7 @@ fi
if [ -f "RealESRGAN_x4plus.pth" ]; then
model_size=`find "RealESRGAN_x4plus.pth" -printf "%s"`
model_size=`ls -l RealESRGAN_x4plus.pth | awk '{print $5}'`
if [ "$model_size" -eq "67040989" ]; then
echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus were already downloaded"
@@ -250,18 +237,18 @@ fi
if [ ! -f "RealESRGAN_x4plus.pth" ]; then
echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus.."
curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth
curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth
if [ -f "RealESRGAN_x4plus.pth" ]; then
model_size=`find "RealESRGAN_x4plus.pth" -printf "%s"`
model_size=`ls -l RealESRGAN_x4plus.pth | awk '{print $5}'`
if [ ! "$model_size" -eq "67040989" ]; then
printf "\n\nError: The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -269,7 +256,7 @@ fi
if [ -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
model_size=`find "RealESRGAN_x4plus_anime_6B.pth" -printf "%s"`
model_size=`ls -l RealESRGAN_x4plus_anime_6B.pth | awk '{print $5}'`
if [ "$model_size" -eq "17938799" ]; then
echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus_anime were already downloaded"
@@ -282,18 +269,18 @@ fi
if [ ! -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime.."
curl -C - --retry 20 --retry-all-errors --retry-delay 5 -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth
curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth
if [ -f "RealESRGAN_x4plus_anime_6B.pth" ]; then
model_size=`find "RealESRGAN_x4plus_anime_6B.pth" -printf "%s"`
model_size=`ls -l RealESRGAN_x4plus_anime_6B.pth | awk '{print $5}'`
if [ ! "$model_size" -eq "17938799" ]; then
printf "\n\nError: The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: $model_size\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
else
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
printf "\n\nError downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
read -p "Press any key to continue"
exit
fi
@@ -308,16 +295,15 @@ fi
printf "\n\nStable Diffusion is ready!\n\n"
SD_PATH=`pwd`
export PYTHONPATH="$SD_PATH:$SD_PATH/env/lib/python3.8/site-packages"
export PYTHONPATH="$SD_PATH;$SD_PATH/env/lib/python3.8/site-packages"
echo "PYTHONPATH=$PYTHONPATH"
which python
python --version
cd ..
export SD_UI_PATH=`pwd`/ui
cd stable-diffusion
python --version
uvicorn server:app --app-dir "$SD_UI_PATH" --port 9000 --host 0.0.0.0
read -p "Press any key to continue"
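To make the launch sequence above easier to follow: PYTHONPATH entries are separated by ':' on Linux, SD_UI_PATH points at the ui/ folder so uvicorn can find server:app via --app-dir, and the server is started from inside the stable-diffusion folder, as in the script. A condensed sketch of launching the server by hand, with paths mirroring the script above:

    cd stable-diffusion
    SD_PATH=$(pwd)
    export PYTHONPATH="$SD_PATH:$SD_PATH/env/lib/python3.8/site-packages"   # ':' is the POSIX path separator
    export SD_UI_PATH="$(cd .. && pwd)/ui"
    uvicorn server:app --app-dir "$SD_UI_PATH" --port 9000 --host 0.0.0.0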

scripts/start.sh Executable file → Normal file

@@ -1,20 +1,10 @@
#!/bin/bash
# set legacy installer's PATH, if it exists
if [ -e "installer" ]; then export PATH="$(pwd)/installer/bin:$PATH"; fi
source installer/bin/activate
# Setup the packages required for the installer
scripts/bootstrap.sh
conda-unpack
# set new installer's PATH, if it downloaded any packages
if [ -e "installer_files/env" ]; then export PATH="$(pwd)/installer_files/env/bin:$PATH"; fi
# Test the bootstrap
which git
conda --version
git --version
which conda
conda --version
# Download the rest of the installer and UI
scripts/on_env_start.sh
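The bootstrap above boils down to: prepend the bundled environment's bin directory to PATH when it exists, confirm that the expected tools actually resolve, and only then hand off to the next script. A compact sketch of that pattern (directory names follow the script above; the loop and error handling are illustrative):

    if [ -e "installer_files/env" ]; then
        export PATH="$(pwd)/installer_files/env/bin:$PATH"
    fi
    for tool in git conda; do
        if ! command -v "$tool" > /dev/null; then
            echo "Error: $tool not found on PATH" >&2
            exit 1
        fi
        "$tool" --version
    done
    scripts/on_env_start.sh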


@@ -2,29 +2,26 @@
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" type="image/png" href="/media/images/favicon-16x16.png" sizes="16x16">
<link rel="icon" type="image/png" href="/media/images/favicon-32x32.png" sizes="32x32">
<link rel="stylesheet" href="/media/css/fonts.css?v=1">
<link rel="stylesheet" href="/media/css/themes.css?v=1">
<link rel="stylesheet" href="/media/css/main.css?v=3">
<link rel="stylesheet" href="/media/css/auto-save.css?v=2">
<link rel="stylesheet" href="/media/css/modifier-thumbnails.css?v=2">
<link rel="stylesheet" href="/media/css/fontawesome-all.min.css?v=1">
<link rel="stylesheet" href="/media/css/drawingboard.min.css">
<script src="/media/js/jquery-3.6.1.min.js"></script>
<script src="/media/js/drawingboard.min.js"></script>
<link rel="icon" type="image/png" href="/media/favicon-16x16.png" sizes="16x16">
<link rel="icon" type="image/png" href="/media/favicon-32x32.png" sizes="32x32">
<link rel="stylesheet" href="/media/main.css?v=21">
<link rel="stylesheet" href="/media/modifier-thumbnails.css?v=1">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.0/css/all.min.css">
<link rel="stylesheet" href="/media/drawingboard.min.css">
<script src="/media/jquery-3.6.1.min.js"></script>
<script src="/media/drawingboard.min.js"></script>
</head>
<body>
<div id="container">
<div id="top-nav">
<div id="logo">
<h1>Stable Diffusion UI <small>v2.3.5 <span id="updateBranchLabel"></span></small></h1>
<h1>Stable Diffusion UI <small>v2.2 <span id="updateBranchLabel"></span></small></h1>
</div>
<ul id="top-nav-items">
<li class="dropdown">
<span><i class="fa fa-comments icon"></i> Help & Community</span>
<ul id="community-links" class="dropdown-content">
<li><a href="https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting" target="_blank"><i class="fa-solid fa-circle-question fa-fw"></i> Usual problems and solutions</a></li>
<li><a href="https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" target="_blank"><i class="fa-solid fa-circle-question fa-fw"></i> Usual problems and solutions</a></li>
<li><a href="https://discord.com/invite/u9yhsFmEkB" target="_blank"><i class="fa-brands fa-discord fa-fw"></i> Discord user community</a></li>
<li><a href="https://www.reddit.com/r/StableDiffusionUI/" target="_blank"><i class="fa-brands fa-reddit fa-fw"></i> Reddit community</a></li>
<li><a href="https://github.com/cmdr2/stable-diffusion-ui" target="_blank"><i class="fa-brands fa-github fa-fw"></i> Source code on GitHub</a></li>
@@ -36,18 +33,11 @@
<ul id="system-settings-entries">
<li><b class="settings-subheader">System Settings</b></li>
<br/>
<li><label for="theme">Theme: </label><select id="theme" name="theme"><option value="theme-default">Default</option></select></li>
<li><input id="save_to_disk" name="save_to_disk" type="checkbox"> <label for="save_to_disk">Automatically save to <input id="diskPath" name="diskPath" size="40" disabled></label></li>
<li><input id="sound_toggle" name="sound_toggle" type="checkbox" checked> <label for="sound_toggle">Play sound on task completion</label></li>
<li><input id="turbo" name="turbo" type="checkbox" checked> <label for="turbo">Turbo mode <small>(generates images faster, but uses an additional 1 GB of GPU memory)</small></label></li>
<li><input id="use_cpu" name="use_cpu" type="checkbox"> <label for="use_cpu">Use CPU instead of GPU <small>(warning: this will be *very* slow)</small></label></li>
<li><input id="use_full_precision" name="use_full_precision" type="checkbox"> <label for="use_full_precision">Use full precision <small>(for GPU-only. warning: this will consume more VRAM)</small></label></li>
<li>
<input id="auto_save_settings" name="auto_save_settings" checked type="checkbox">
<label for="auto_save_settings">Automatically save settings <small>(settings restored on browser load)</small></label>
<br/>
<button id="configureSettingsSaveBtn">Configure</button>
</li>
<!-- <li><input id="allow_nsfw" name="allow_nsfw" type="checkbox"> <label for="allow_nsfw">Allow NSFW Content (You confirm you are above 18 years of age)</label></li> -->
<br/>
<li><input id="use_beta_channel" name="use_beta_channel" type="checkbox"> <label for="use_beta_channel">🔥Beta channel. Get the latest features immediately (but could be less stable). Please restart the program after changing this.</label></li>
@@ -65,25 +55,16 @@
</div>
<div id="editor-inputs">
<div id="editor-inputs-prompt" class="row">
<label for="prompt"><b>Enter Prompt</b></label> <small>or</small> <button id="promptsFromFileBtn">Load from a file</button>
<label for="prompt">Prompt</label>
<textarea id="prompt" class="col-free">a photograph of an astronaut riding a horse</textarea>
<input id="prompt_from_file" name="prompt_from_file" type="file" /> <!-- hidden -->
<label for="negative_prompt" class="collapsible" id="negative_prompt_handle">Negative Prompt <small>(optional)</small></label>
<div class="collapsible-content">
<input id="negative_prompt" name="negative_prompt" placeholder="list the things to remove from the image (e.g. fog, green)">
</div>
</div>
<div id="editor-inputs-init-image" class="row">
<label for="init_image">Initial Image (img2img) <small>(optional)</small> </label> <input id="init_image" name="init_image" type="file" /><br/>
<label for="init_image"><b>Initial Image:</b> (optional) </label> <input id="init_image" name="init_image" type="file" /><br/>
<div id="init_image_preview_container" class="image_preview_container">
<div id="init_image_wrapper">
<img id="init_image_preview" src="" />
<span id="init_image_size_box"></span>
<img id="init_image_preview" src="" width="100" height="100" />
<button class="init_image_clear image_clear_btn">X</button>
</div>
<br/>
<input id="enable_mask" name="enable_mask" type="checkbox"> <label for="enable_mask">In-Painting (beta) <small>(select the area which the AI will paint into)</small></label>
@@ -103,37 +84,24 @@
<div class="line-separator">&nbsp;</div>
<div id="editor-settings" class="panel-box settings-box">
<h4 class="collapsible">
Image Settings
<i id="reset-image-settings" class="fa-solid fa-arrow-rotate-left">
<span class="simple-tooltip right">
Reset Image Settings
</span>
</i>
</h4>
<h4 class="collapsible">Image Settings</h4>
<ul id="editor-settings-entries" class="collapsible-content">
<li><table>
<tr><b class="settings-subheader">Image Settings</b></tr>
<tr class="pl-5"><td><label for="seed">Seed:</label></td><td><input id="seed" name="seed" size="10" value="30000"> <input id="random_seed" name="random_seed" type="checkbox" checked><label for="random_seed">Random</label></td></tr>
<tr class="pl-5"><td><label for="num_outputs_total">Number of Images:</label></td><td><input id="num_outputs_total" name="num_outputs_total" value="1" size="1"> <label><small>(total)</small></label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1"> <label for="num_outputs_parallel"><small>(in parallel)</small></label></td></tr>
<tr class="pl-5"><td><label for="stable_diffusion_model">Model:</label></td><td>
<select id="stable_diffusion_model" name="stable_diffusion_model">
<!-- <option value="sd-v1-4" selected>sd-v1-4</option> -->
</select>
</td></tr>
<tr id="samplerSelection" class="pl-5"><td><label for="sampler">Sampler:</label></td><td>
<li><b class="settings-subheader">Image Settings</b></li>
<li class="pl-5"><label for="seed">Seed:</label> <input id="seed" name="seed" size="10" value="30000"> <input id="random_seed" name="random_seed" type="checkbox" checked> <label for="random_seed">Random Image</label></li>
<li class="pl-5"><label for="num_outputs_total">Number of images to make:</label> <input id="num_outputs_total" name="num_outputs_total" value="1" size="1"> <label for="num_outputs_parallel">Generate in parallel:</label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="1"> (images at once)</li>
<li id="samplerSelection" class="pl-5"><label for="sampler">Sampler:</label>
<select id="sampler" name="sampler">
<option value="plms">plms</option>
<option value="plms" selected>plms</option>
<option value="ddim">ddim</option>
<option value="heun">heun</option>
<option value="euler">euler</option>
<option value="euler_a" selected>euler_a</option>
<option value="euler_a">euler_a</option>
<option value="dpm2">dpm2</option>
<option value="dpm2_a">dpm2_a</option>
<option value="lms">lms</option>
</select>
</td></tr>
<tr class="pl-5"><td><label>Image Size: </label></td><td>
</li>
<li class="pl-5"><label>Image Size: </label>
<select id="width" name="width" value="512">
<option value="128">128 (*)</option>
<option value="192">192</option>
@@ -178,39 +146,37 @@
<option value="2048">2048</option>
</select>
<label for="height"><small>(height)</small></label>
</td></tr>
<tr class="pl-5"><td><label for="num_inference_steps">Inference Steps:</label></td><td> <input id="num_inference_steps" name="num_inference_steps" size="4" value="25"></td></tr>
<tr class="pl-5"><td><label for="guidance_scale_slider">Guidance Scale:</label></td><td> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="10" max="500"> <input id="guidance_scale" name="guidance_scale" size="4"></td></tr>
<tr id="prompt_strength_container" class="pl-5"><td><label for="prompt_strength_slider">Prompt Strength:</label></td><td> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4"><br/></td></tr></span>
<tr class="pl-5"><td><label for="output_format">Output Format:</label></td><td>
<select id="output_format" name="output_format">
<option value="jpeg" selected>jpeg</option>
<option value="png">png</option>
</select>
</td></tr>
</li></table>
</li>
<li class="pl-5"><label for="num_inference_steps">Number of inference steps:</label> <input id="num_inference_steps" name="num_inference_steps" size="4" value="50"></li>
<li class="pl-5"><label for="guidance_scale_slider">Guidance Scale:</label> <input id="guidance_scale_slider" name="guidance_scale_slider" class="editor-slider" value="75" type="range" min="10" max="500"> <input id="guidance_scale" name="guidance_scale" size="4"></li>
<li class="pl-5"><span id="prompt_strength_container"><label for="prompt_strength_slider">Prompt Strength:</label> <input id="prompt_strength_slider" name="prompt_strength_slider" class="editor-slider" value="80" type="range" min="0" max="99"> <input id="prompt_strength" name="prompt_strength" size="4"><br/></span></li>
<br/>
<li><b class="settings-subheader">Prompt Settings</b></li>
<li class="pl-5"><label for="negative_prompt">Negative Prompt:</label> <input id="negative_prompt" name="negative_prompt" size="55"></li>
<br/>
<li><b class="settings-subheader">Render Settings</b></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview <small>(uses more VRAM, slightly slower image creation)</small></label></li>
<li class="pl-5"><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview of the image <small>(uses more VRAM, slightly slower image creation)</small></label></li>
<li class="pl-5"><input id="use_face_correction" name="use_face_correction" type="checkbox" checked> <label for="use_face_correction">Fix incorrect faces and eyes <small>(uses GFPGAN)</small></label></li>
<li class="pl-5">
<input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale image by 4x with </label>
<input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale the image to 4x resolution using </label>
<select id="upscale_model" name="upscale_model">
<option value="RealESRGAN_x4plus" selected>RealESRGAN_x4plus</option>
<option value="RealESRGAN_x4plus_anime_6B">RealESRGAN_x4plus_anime_6B</option>
</select>
</li>
<li class="pl-5"><input id="show_only_filtered_image" name="show_only_filtered_image" type="checkbox" checked> <label for="show_only_filtered_image">Show only the corrected/upscaled image</label></li>
<br/>
<li><small>The system-related settings have been moved to the top-right corner.</small></li>
</ul>
</div>
<div id="editor-modifiers" class="panel-box">
<button id="modifier-settings-btn" title="Add custom modifiers"><i class="fa fa-gear"></i></button>
<h4 class="collapsible">Image Modifiers (art styles, tags etc)</h4>
<div id="editor-modifiers-entries" class="collapsible-content">
<div id="editor-modifiers-entries-toolbar">
<label for="preview-image">Image Style:</label>
<select id="preview-image" name="preview-image" value="portrait">
<option value="portrait" selected="">Face</option>
@@ -222,7 +188,6 @@
</div>
</div>
</div>
</div>
<div id="preview" class="col-free">
<div id="initial-text">
@@ -234,30 +199,10 @@
</div>
</div>
<div id="save-settings-config" style="display:none">
<div>
<span id="save-settings-config-close-btn">X</span>
<h1>Save Settings Configuration</h1>
<p>Select which settings should be remembered when restarting the browser</p>
<table id="save-settings-config-table">
</table>
</div>
</div>
<div id="modifier-settings-config" style="display:none">
<div>
<span id="modifier-settings-config-close-btn">X</span>
<h1>Modifier Settings</h1>
<p>Set your custom modifiers (one per line)</p>
<textarea id="custom-modifiers-input" placeholder="Enter your custom modifiers, one-per-line"></textarea>
<p><small><b>Tip:</b> You can include special characters like {} () [] and |. You can also put multiple comma-separated phrases in a single line, to make a single modifier that combines all of those.</small></p>
</div>
</div>
<div class="line-separator">&nbsp;</div>
<div id="footer" class="panel-box">
<p>If you found this project useful and want to help keep it alive, please <a href="https://ko-fi.com/cmdr2_stablediffusion_ui" target="_blank"><img src="/media/images/kofi.png" id="coffeeButton"></a> to help cover the cost of development and maintenance! Thank you for your support!</p>
<p>If you found this project useful and want to help keep it alive, please <a href="https://ko-fi.com/cmdr2_stablediffusion_ui" target="_blank"><img src="media/kofi.png" id="coffeeButton"></a> to help cover the cost of development and maintenance! Thank you for your support!</p>
<p>Please feel free to join the <a href="https://discord.com/invite/u9yhsFmEkB" target="_blank">discord community</a> or <a href="https://github.com/cmdr2/stable-diffusion-ui/issues" target="_blank">file an issue</a> if you have any problems or suggestions in using this interface.</p>
<div id="footer-legal">
<p><b>Disclaimer:</b> The authors of this project are not responsible for any content generated using this interface.</p>
@@ -268,21 +213,12 @@
</div>
</body>
<script src="media/js/plugins.js?v=1"></script>
<script src="media/js/utils.js?v=4"></script>
<script src="media/js/inpainting-editor.js?v=1"></script>
<script src="media/js/image-modifiers.js?v=3"></script>
<script src="media/js/auto-save.js?v=2"></script>
<script src="media/js/main.js?v=5"></script>
<script src="media/js/themes.js?v=2"></script>
<script src="media/main.js?v=21"></script>
<script>
async function init() {
await initSettings()
await getModels()
await loadModifiers()
await getDiskPath()
await getAppConfig()
await loadModifiers()
await loadUIPlugins()
setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000)
healthCheck()
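The init() script above schedules healthCheck() on a fixed interval to keep the UI's server status current. For troubleshooting from outside the browser, a rough shell equivalent is below; it assumes a hypothetical /ping endpoint on the default port 9000 (the endpoint path is an assumption, not taken from the code above):

    # Poll the local backend every 5 seconds (illustrative only).
    while true; do
        code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:9000/ping)
        if [ "$code" = "200" ]; then
            echo "server is up"
        else
            echo "server is not responding (HTTP $code)"
        fi
        sleep 5
    done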


@@ -1,74 +0,0 @@
/* Auto-Settings Styling */
#auto_save_settings ~ button {
margin: 5px;
}
#auto_save_settings:not(:checked) ~ button {
display: none;
}
#save-settings-config {
position: absolute;
background: rgba(32, 33, 36, 50%);
top: 0px;
left: 0px;
right: 0px;
bottom: 0px;
z-index: 1000;
}
#save-settings-config > div {
background: var(--background-color3);
max-width: 600px;
margin: auto;
margin-top: 50px;
border-radius: 6px;
padding: 30px;
text-align: center;
}
#save-settings-config-table {
margin: auto;
}
#save-settings-config-table th {
padding-top: 15px;
padding-bottom: 5px;
}
#save-settings-config-table td:first-child,
#save-settings-config-table th:first-child {
float: right;
}
#save-settings-config-table td:last-child,
#save-settings-config-table th:last-child {
float: left;
}
#save-settings-config-table td small {
color: rgb(153, 153, 153);
}
#save-settings-config-close-btn {
float: right;
cursor: pointer;
padding: 10px;
transform: translate(50%, -50%) scaleX(130%);
}
#reset-image-settings {
cursor: pointer;
float: right;
padding: 8px;
opacity: 1;
transition: opacity 0.5;
}
.collapsible:not(.active) #reset-image-settings {
display: none;
}
#reset-image-settings.hidden {
opacity: 0;
pointer-events: none;
}

File diff suppressed because one or more lines are too long


@@ -1,40 +0,0 @@
/* work-sans-regular - latin */
@font-face {
font-family: 'Work Sans';
font-style: normal;
font-weight: 400;
src: local(''),
url('/media/fonts/work-sans-v18-latin-regular.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('/media/fonts/work-sans-v18-latin-regular.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* work-sans-600 - latin */
@font-face {
font-family: 'Work Sans';
font-style: normal;
font-weight: 600;
src: local(''),
url('/media/fonts/work-sans-v18-latin-600.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('/media/fonts/work-sans-v18-latin-600.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* work-sans-700 - latin */
@font-face {
font-family: 'Work Sans';
font-style: normal;
font-weight: 700;
src: local(''),
url('/media/fonts/work-sans-v18-latin-700.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('/media/fonts/work-sans-v18-latin-700.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* work-sans-800 - latin */
@font-face {
font-family: 'Work Sans';
font-style: normal;
font-weight: 800;
src: local(''),
url('/media/fonts/work-sans-v18-latin-800.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('/media/fonts/work-sans-v18-latin-800.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}


@@ -1,146 +0,0 @@
:root {
--background-color1: rgb(32, 33, 36); /* main parts of the page */
--background-color2: rgb(44, 45, 48); /* main panels */
--background-color3: rgb(47, 49, 53);
--background-color4: rgb(18, 18, 19); /* settings dropdowns */
--accent-hue: 266;
--accent-lightness: 36%;
--accent-lightness-hover: 40%;
--text-color: #eee;
--input-text-color: black;
--input-background-color: #e9e9ed;
--input-border-color: #8f8f9d;
--button-text-color: var(--input-text-color);
--button-color: #e9e9ed;
--button-border: 1px solid #8f8f9d;
/* other */
--input-border-radius: 4px;
--input-border-size: 1px;
--accent-color: hsl(var(--accent-hue), 100%, var(--accent-lightness));
--accent-color-hover: hsl(var(--accent-hue), 100%, var(--accent-lightness-hover));
--make-image-border: 2px solid hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) - 21%));
}
.theme-light {
--background-color1: white;
--background-color2: #dddddd;
--background-color3: #e7e9eb;
--background-color4: #cccccc;
--text-color: black;
--input-text-color: black;
--input-background-color: #f8f9fa;
--input-border-color: grey;
}
.theme-discord {
--background-color1: #36393f;
--background-color2: #2f3136;
--background-color3: #292b2f;
--background-color4: #202225;
--accent-hue: 235;
--accent-lightness: 65%;
--make-image-border: none;
--button-color: var(--accent-color);
--button-border: none;
--input-text-color: #ccc;
--input-border-size: 2px;
--input-background-color: #202225;
--input-border-color: var(--input-background-color);
}
.theme-cool-blue {
--main-hue: 222;
--main-saturation: 18%;
--value-base: 19%;
--value-step: 3%;
--background-color1: hsl(var(--main-hue), var(--main-saturation), var(--value-base));
--background-color2: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (1 * var(--value-step))));
--background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--accent-hue: 212;
--make-image-border: none;
--button-color: var(--accent-color);
--button-border: none;
--input-border-size: 1px;
--input-background-color: var(--background-color3);
--input-text-color: #ccc;
--input-border-color: var(--background-color4);
}
.theme-blurple {
--main-hue: 235;
--main-saturation: 18%;
--value-base: 16%;
--value-step: 3%;
--background-color1: hsl(var(--main-hue), var(--main-saturation), var(--value-base));
--background-color2: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (1 * var(--value-step))));
--background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--make-image-border: none;
--button-color: var(--accent-color);
--button-border: none;
--input-border-size: 1px;
--input-background-color: var(--background-color3);
--input-text-color: #ccc;
--input-border-color: var(--background-color4);
}
.theme-super-dark {
--main-hue: 222;
--main-saturation: 18%;
--value-base: 5%;
--value-step: 5%;
--background-color1: hsl(var(--main-hue), var(--main-saturation), var(--value-base));
--background-color2: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (1 * var(--value-step))));
--background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (2 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) + (3 * var(--value-step))));
--make-image-border: none;
--button-color: var(--accent-color);
--button-border: none;
--input-border-size: 0px;
--input-background-color: var(--background-color3);
--input-text-color: #ccc;
--input-border-color: var(--background-color4);
}
.theme-wild {
--main-hue: 128;
--main-saturation: 18%;
--value-base: 20%;
--value-step: 5%;
--background-color1: hsl(var(--main-hue), var(--main-saturation), var(--value-base));
--background-color2: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (1 * var(--value-step))));
--background-color3: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--background-color4: hsl(var(--main-hue), var(--main-saturation), calc(var(--value-base) - (3 * var(--value-step))));
--accent-hue: 212;
--make-image-border: none;
--button-color: var(--accent-color);
--button-border: none;
--input-border-size: 1px;
--input-background-color: hsl(222, var(--main-saturation), calc(var(--value-base) - (2 * var(--value-step))));
--input-text-color: red;
--input-border-color: green;
}

View File

Image file: 466 B before → 466 B after (preview dimensions not captured)

View File

Image file: 973 B before → 973 B after (preview dimensions not captured)

Binary files not shown (6 files changed)

View File

@ -1,285 +0,0 @@
// Saving settings
let saveSettingsConfigTable = document.getElementById("save-settings-config-table")
let saveSettingsConfigOverlay = document.getElementById("save-settings-config")
let resetImageSettingsButton = document.getElementById("reset-image-settings")
const SETTINGS_KEY = "user_settings_v2"
const SETTINGS = {} // key=id. dict initialized in initSettings. { element, default, value, ignore }
const SETTINGS_IDS_LIST = [
"prompt",
"seed",
"random_seed",
"num_outputs_total",
"num_outputs_parallel",
"stable_diffusion_model",
"sampler",
"width",
"height",
"num_inference_steps",
"guidance_scale",
"prompt_strength",
"output_format",
"negative_prompt",
"stream_image_progress",
"use_face_correction",
"use_upscale",
"show_only_filtered_image",
"upscale_model",
"preview-image",
"modifier-card-size-slider",
"theme",
"save_to_disk",
"diskPath",
"sound_toggle",
"turbo",
"use_cpu",
"use_full_precision",
"auto_save_settings"
]
const IGNORE_BY_DEFAULT = [
"prompt"
]
const SETTINGS_SECTIONS = [ // gets the "keys" property filled in with an ordered list of settings in this section via initSettings
{ id: "editor-inputs", name: "Prompt" },
{ id: "editor-settings", name: "Image Settings" },
{ id: "system-settings", name: "System Settings" },
{ id: "container", name: "Other" }
]
async function initSettings() {
SETTINGS_IDS_LIST.forEach(id => {
var element = document.getElementById(id)
var label = document.querySelector(`label[for='${element.id}']`)
SETTINGS[id] = {
key: id,
element: element,
label: getSettingLabel(element),
default: getSetting(element),
value: getSetting(element),
ignore: IGNORE_BY_DEFAULT.includes(id)
}
element.addEventListener("input", settingChangeHandler)
element.addEventListener("change", settingChangeHandler)
})
var unsorted_settings_ids = [...SETTINGS_IDS_LIST]
SETTINGS_SECTIONS.forEach(section => {
var name = section.name
var element = document.getElementById(section.id)
var children = Array.from(element.querySelectorAll(unsorted_settings_ids.map(id => `#${id}`).join(",")))
section.keys = []
children.forEach(e => {
section.keys.push(e.id)
})
unsorted_settings_ids = unsorted_settings_ids.filter(id => children.find(e => e.id == id) == undefined)
})
loadSettings()
}
function getSetting(element) {
if (typeof element === "string" || element instanceof String) {
element = SETTINGS[element].element
}
if (element.type == "checkbox") {
return element.checked
}
return element.value
}
function setSetting(element, value) {
if (typeof element === "string" || element instanceof String) {
element = SETTINGS[element].element
}
SETTINGS[element.id].value = value
if (getSetting(element) == value) {
return // no setting necessary
}
if (element.type == "checkbox") {
element.checked = value
}
else {
element.value = value
}
element.dispatchEvent(new Event("input"))
element.dispatchEvent(new Event("change"))
}
function saveSettings() {
var saved_settings = Object.values(SETTINGS).map(setting => {
return {
key: setting.key,
value: setting.value,
ignore: setting.ignore
}
})
localStorage.setItem(SETTINGS_KEY, JSON.stringify(saved_settings))
}
var CURRENTLY_LOADING_SETTINGS = false
function loadSettings() {
var saved_settings_text = localStorage.getItem(SETTINGS_KEY)
if (saved_settings_text) {
var saved_settings = JSON.parse(saved_settings_text)
if (saved_settings.find(s => s.key == "auto_save_settings").value == false) {
setSetting("auto_save_settings", false)
return
}
CURRENTLY_LOADING_SETTINGS = true
saved_settings.map(saved_setting => {
var setting = SETTINGS[saved_setting.key]
setting.ignore = saved_setting.ignore
if (!setting.ignore) {
setting.value = saved_setting.value
setSetting(setting.element, setting.value)
}
})
CURRENTLY_LOADING_SETTINGS = false
}
else {
CURRENTLY_LOADING_SETTINGS = true
tryLoadOldSettings();
CURRENTLY_LOADING_SETTINGS = false
saveSettings()
}
}
function loadDefaultSettingsSection(section_id) {
CURRENTLY_LOADING_SETTINGS = true
var section = SETTINGS_SECTIONS.find(s => s.id == section_id);
section.keys.forEach(key => {
var setting = SETTINGS[key];
setting.value = setting.default
setSetting(setting.element, setting.value)
})
CURRENTLY_LOADING_SETTINGS = false
saveSettings()
}
function settingChangeHandler(event) {
if (!CURRENTLY_LOADING_SETTINGS) {
var element = event.target
var value = getSetting(element)
if (value != SETTINGS[element.id].value) {
SETTINGS[element.id].value = value
saveSettings()
}
}
}
function getSettingLabel(element) {
var labelElement = document.querySelector(`label[for='${element.id}']`)
var label = labelElement?.innerText || element.id
var truncate_length = 30
if (label.includes(" (")) {
label = label.substring(0, label.indexOf(" ("))
}
if (label.length > truncate_length) {
label = label.substring(0, truncate_length - 3) + "..."
}
label = label.replace("➕", "")
label = label.replace("➖", "")
return label
}
function fillSaveSettingsConfigTable() {
saveSettingsConfigTable.textContent = ""
SETTINGS_SECTIONS.forEach(section => {
var section_row = `<tr><th>${section.name}</th><td></td></tr>`
saveSettingsConfigTable.insertAdjacentHTML("beforeend", section_row)
section.keys.forEach(key => {
var setting = SETTINGS[key]
var element = setting.element
var checkbox_id = `shouldsave_${element.id}`
var is_checked = setting.ignore ? "" : "checked"
var value = setting.value
var value_truncate_length = 30
if ((typeof value === "string" || value instanceof String) && value.length > value_truncate_length) {
value = value.substring(0, value_truncate_length - 3) + "..."
}
var newrow = `<tr><td><label for="${checkbox_id}">${setting.label}</label></td><td><input id="${checkbox_id}" name="${checkbox_id}" ${is_checked} type="checkbox" ></td><td><small>(${value})</small></td></tr>`
saveSettingsConfigTable.insertAdjacentHTML("beforeend", newrow)
var checkbox = document.getElementById(checkbox_id)
checkbox.addEventListener("input", event => {
setting.ignore = !checkbox.checked
saveSettings()
})
})
})
}
document.getElementById("save-settings-config-close-btn").addEventListener('click', () => {
saveSettingsConfigOverlay.style.display = 'none'
})
document.getElementById("configureSettingsSaveBtn").addEventListener('click', () => {
fillSaveSettingsConfigTable()
saveSettingsConfigOverlay.style.display = 'block'
})
saveSettingsConfigOverlay.addEventListener('click', (event) => {
if (event.target.id == saveSettingsConfigOverlay.id) {
saveSettingsConfigOverlay.style.display = 'none'
}
})
document.getElementById("save-settings-config-close-btn").addEventListener('click', () => {
saveSettingsConfigOverlay.style.display = 'none'
})
resetImageSettingsButton.addEventListener('click', event => {
loadDefaultSettingsSection("editor-settings");
event.stopPropagation()
})
function tryLoadOldSettings() {
console.log("Loading old user settings")
// load v1 auto-save.js settings
var old_map = {
"guidance_scale_slider": "guidance_scale",
"prompt_strength_slider": "prompt_strength"
}
var settings_key_v1 = "user_settings"
var saved_settings_text = localStorage.getItem(settings_key_v1)
if (saved_settings_text) {
var saved_settings = JSON.parse(saved_settings_text)
Object.keys(saved_settings.should_save).forEach(key => {
key = key in old_map ? old_map[key] : key
SETTINGS[key].ignore = !saved_settings.should_save[key]
});
Object.keys(saved_settings.values).forEach(key => {
key = key in old_map ? old_map[key] : key
var setting = SETTINGS[key]
if (!setting.ignore) {
setting.value = saved_settings.values[key]
setSetting(setting.element, setting.value)
}
});
localStorage.removeItem(settings_key_v1)
}
// load old individually stored items
var individual_settings_map = { // maps old localStorage-key to new SETTINGS-key
"soundEnabled": "sound_toggle",
"saveToDisk": "save_to_disk",
"useCPU": "use_cpu",
"useFullPrecision": "use_full_precision",
"useTurboMode": "turbo",
"diskPath": "diskPath",
"useFaceCorrection": "use_face_correction",
"useUpscaling": "use_upscale",
"showOnlyFilteredImage": "show_only_filtered_image",
"streamImageProgress": "stream_image_progress",
"outputFormat": "output_format",
"autoSaveSettings": "auto_save_settings",
};
Object.keys(individual_settings_map).forEach(localStorageKey => {
var localStorageValue = localStorage.getItem(localStorageKey);
if (localStorageValue !== null) {
var setting = SETTINGS[individual_settings_map[localStorageKey]]
if (setting.element.type == "checkbox" && (typeof localStorageValue === "string" || localStorageValue instanceof String)) {
localStorageValue = localStorageValue == "true"
}
setting.value = localStorageValue
setSetting(setting.element, setting.value)
localStorage.removeItem(localStorageKey);
}
})
}
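
For orientation, a small usage sketch of the helpers in the auto-save.js shown above. The setting keys are real IDs from SETTINGS_IDS_LIST, but the values and the scenario are made up, and it assumes initSettings() has already run on the page:

// illustrative sketch, not part of the original file
let currentSeed = getSetting("seed")      // read a control by its settings key
setSetting("random_seed", false)          // write back; re-fires input/change events

// when the user edits a control, settingChangeHandler() detects the change and
// saveSettings() persists an array of this shape under "user_settings_v2":
// [
//   { "key": "seed",        "value": "1234", "ignore": false },
//   { "key": "random_seed", "value": false,  "ignore": false },
//   ...
// ]
console.log(localStorage.getItem(SETTINGS_KEY))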

View File

@ -1,294 +0,0 @@
let activeTags = []
let modifiers = []
let customModifiersGroupElement = undefined
let editorModifierEntries = document.querySelector('#editor-modifiers-entries')
let editorModifierTagsList = document.querySelector('#editor-inputs-tags-list')
let editorTagsContainer = document.querySelector('#editor-inputs-tags-container')
let modifierCardSizeSlider = document.querySelector('#modifier-card-size-slider')
let previewImageField = document.querySelector('#preview-image')
let modifierSettingsBtn = document.querySelector('#modifier-settings-btn')
let modifierSettingsOverlay = document.querySelector('#modifier-settings-config')
let customModifiersTextBox = document.querySelector('#custom-modifiers-input')
let customModifierEntriesToolbar = document.querySelector('#editor-modifiers-entries-toolbar')
const modifierThumbnailPath = 'media/modifier-thumbnails'
const activeCardClass = 'modifier-card-active'
const CUSTOM_MODIFIERS_KEY = "customModifiers"
function createModifierCard(name, previews) {
const modifierCard = document.createElement('div')
modifierCard.className = 'modifier-card'
modifierCard.innerHTML = `
<div class="modifier-card-overlay"></div>
<div class="modifier-card-image-container">
<div class="modifier-card-image-overlay">+</div>
<p class="modifier-card-error-label"></p>
<img onerror="this.remove()" alt="Modifier Image" class="modifier-card-image">
</div>
<div class="modifier-card-container">
<div class="modifier-card-label"><p></p></div>
</div>`
const image = modifierCard.querySelector('.modifier-card-image')
const errorText = modifierCard.querySelector('.modifier-card-error-label')
const label = modifierCard.querySelector('.modifier-card-label')
errorText.innerText = 'No Image'
if (typeof previews == 'object') {
image.src = previews[0]; // portrait
image.setAttribute('preview-type', 'portrait')
} else {
image.remove()
}
const maxLabelLength = 30
const nameWithoutBy = name.replace('by ', '')
if(nameWithoutBy.length <= maxLabelLength) {
label.querySelector('p').innerText = nameWithoutBy
} else {
const tooltipText = document.createElement('span')
tooltipText.className = 'tooltip-text'
tooltipText.innerText = name
label.classList.add('tooltip')
label.appendChild(tooltipText)
label.querySelector('p').innerText = nameWithoutBy.substring(0, maxLabelLength) + '...'
}
return modifierCard
}
function createModifierGroup(modifierGroup, initiallyExpanded) {
const title = modifierGroup.category
const modifiers = modifierGroup.modifiers
const titleEl = document.createElement('h5')
titleEl.className = 'collapsible'
titleEl.innerText = title
const modifiersEl = document.createElement('div')
modifiersEl.classList.add('collapsible-content', 'editor-modifiers-leaf')
if (initiallyExpanded === true) {
titleEl.className += ' active'
modifiersEl.style.display = 'block'
}
modifiers.forEach(modObj => {
const modifierName = modObj.modifier
const modifierPreviews = modObj?.previews?.map(preview => `${modifierThumbnailPath}/${preview.path}`)
const modifierCard = createModifierCard(modifierName, modifierPreviews)
if(typeof modifierCard == 'object') {
modifiersEl.appendChild(modifierCard)
modifierCard.addEventListener('click', () => {
if (activeTags.map(x => x.name).includes(modifierName)) {
// remove modifier from active array
activeTags = activeTags.filter(x => x.name != modifierName)
modifierCard.classList.remove(activeCardClass)
modifierCard.querySelector('.modifier-card-image-overlay').innerText = '+'
} else {
// add modifier to active array
activeTags.push({
'name': modifierName,
'element': modifierCard.cloneNode(true),
'originElement': modifierCard,
'previews': modifierPreviews
})
modifierCard.classList.add(activeCardClass)
modifierCard.querySelector('.modifier-card-image-overlay').innerText = '-'
}
refreshTagsList()
})
}
})
let brk = document.createElement('br')
brk.style.clear = 'both'
modifiersEl.appendChild(brk)
let e = document.createElement('div')
e.appendChild(titleEl)
e.appendChild(modifiersEl)
editorModifierEntries.insertBefore(e, customModifierEntriesToolbar.nextSibling)
return e
}
async function loadModifiers() {
try {
let res = await fetch('/get/modifiers')
if (res.status === 200) {
res = await res.json()
modifiers = res; // update global variable
res.reverse()
res.forEach((modifierGroup, idx) => {
createModifierGroup(modifierGroup, idx === res.length - 1)
})
createCollapsibles(editorModifierEntries)
}
} catch (e) {
console.log('error fetching modifiers', e)
}
loadCustomModifiers()
}
function refreshTagsList() {
editorModifierTagsList.innerHTML = ''
if (activeTags.length == 0) {
editorTagsContainer.style.display = 'none'
return
} else {
editorTagsContainer.style.display = 'block'
}
activeTags.forEach((tag, index) => {
tag.element.querySelector('.modifier-card-image-overlay').innerText = '-'
tag.element.classList.add('modifier-card-tiny')
editorModifierTagsList.appendChild(tag.element)
tag.element.addEventListener('click', () => {
let idx = activeTags.indexOf(tag)
if (idx !== -1) {
activeTags[idx].originElement.classList.remove(activeCardClass)
activeTags[idx].originElement.querySelector('.modifier-card-image-overlay').innerText = '+'
activeTags.splice(idx, 1)
refreshTagsList()
}
})
})
let brk = document.createElement('br')
brk.style.clear = 'both'
editorModifierTagsList.appendChild(brk)
}
function changePreviewImages(val) {
const previewImages = document.querySelectorAll('.modifier-card-image-container img')
let previewArr = []
modifiers.map(x => x.modifiers).forEach(x => previewArr.push(...x.map(m => m.previews)))
previewArr = previewArr.map(x => {
let obj = {}
x.forEach(preview => {
obj[preview.name] = preview.path
})
return obj
})
previewImages.forEach(previewImage => {
const currentPreviewType = previewImage.getAttribute('preview-type')
const relativePreviewPath = previewImage.src.split(modifierThumbnailPath + '/').pop()
const previews = previewArr.find(preview => relativePreviewPath == preview[currentPreviewType])
if(typeof previews == 'object') {
let preview = null
if (val == 'portrait') {
preview = previews.portrait
}
else if (val == 'landscape') {
preview = previews.landscape
}
if(preview != null) {
previewImage.src = `${modifierThumbnailPath}/${preview}`
previewImage.setAttribute('preview-type', val)
}
}
})
}
function resizeModifierCards(val) {
const cardSizePrefix = 'modifier-card-size_'
const modifierCardClass = 'modifier-card'
const modifierCards = document.querySelectorAll(`.${modifierCardClass}`)
const cardSize = n => `${cardSizePrefix}${n}`
modifierCards.forEach(card => {
// remove existing size classes
const classes = card.className.split(' ').filter(c => !c.startsWith(cardSizePrefix))
card.className = classes.join(' ').trim()
if(val != 0) {
card.classList.add(cardSize(val))
}
})
}
modifierCardSizeSlider.onchange = () => resizeModifierCards(modifierCardSizeSlider.value)
previewImageField.onchange = () => changePreviewImages(previewImageField.value)
modifierSettingsBtn.addEventListener('click', function() {
modifierSettingsOverlay.style.display = 'block'
})
document.getElementById("modifier-settings-config-close-btn").addEventListener('click', () => {
modifierSettingsOverlay.style.display = 'none'
})
modifierSettingsOverlay.addEventListener('click', (event) => {
if (event.target.id == modifierSettingsOverlay.id) {
modifierSettingsOverlay.style.display = 'none'
}
})
function saveCustomModifiers() {
localStorage.setItem(CUSTOM_MODIFIERS_KEY, customModifiersTextBox.value.trim())
loadCustomModifiers()
}
function loadCustomModifiers() {
let customModifiers = localStorage.getItem(CUSTOM_MODIFIERS_KEY, '')
customModifiersTextBox.value = customModifiers
if (customModifiersGroupElement !== undefined) {
customModifiersGroupElement.remove()
}
if (customModifiers && customModifiers.trim() !== '') {
customModifiers = customModifiers.split('\n')
customModifiers = customModifiers.filter(m => m.trim() !== '')
customModifiers = customModifiers.map(function(m) {
return {
"modifier": m
}
})
let customGroup = {
'category': 'Custom Modifiers',
'modifiers': customModifiers
}
customModifiersGroupElement = createModifierGroup(customGroup, true)
createCollapsibles(customModifiersGroupElement)
}
}
customModifiersTextBox.addEventListener('change', saveCustomModifiers)
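
The shape of the /get/modifiers response consumed above is implied by createModifierGroup() and changePreviewImages(): a list of categories, each holding modifiers that may carry named preview thumbnails. A sketch of that assumed shape with invented names, assuming the page's modifier containers exist:

// illustrative data only: category, modifier and file names are made up
const exampleModifierGroups = [
    {
        category: "Drawing Style",
        modifiers: [
            {
                modifier: "Pencil Sketch",
                previews: [
                    { name: "portrait",  path: "pencil-sketch-portrait.jpg" },
                    { name: "landscape", path: "pencil-sketch-landscape.jpg" }
                ]
            },
            { modifier: "Cel Shading" }   // no previews -> card shows "No Image"
        ]
    }
]
// each group becomes a collapsible card grid, much as loadModifiers() does
// after reversing the fetched list:
exampleModifierGroups.forEach((group, idx) =>
    createModifierGroup(group, idx === exampleModifierGroups.length - 1))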

View File

@ -1,41 +0,0 @@
const INPAINTING_EDITOR_SIZE = 450
let inpaintingEditorContainer = document.querySelector('#inpaintingEditor')
let inpaintingEditor = new DrawingBoard.Board('inpaintingEditor', {
color: "#ffffff",
background: false,
size: 30,
webStorage: false,
controls: [{'DrawingMode': {'filler': false}}, 'Size', 'Navigation']
})
let inpaintingEditorCanvasBackground = document.querySelector('.drawing-board-canvas-wrapper')
function resizeInpaintingEditor(widthValue, heightValue) {
if (widthValue === heightValue) {
widthValue = INPAINTING_EDITOR_SIZE
heightValue = INPAINTING_EDITOR_SIZE
} else if (widthValue > heightValue) {
heightValue = (heightValue / widthValue) * INPAINTING_EDITOR_SIZE
widthValue = INPAINTING_EDITOR_SIZE
} else {
widthValue = (widthValue / heightValue) * INPAINTING_EDITOR_SIZE
heightValue = INPAINTING_EDITOR_SIZE
}
if (inpaintingEditor.opts.aspectRatio === (widthValue / heightValue).toFixed(3)) {
// Same ratio, don't reset the canvas.
return
}
inpaintingEditor.opts.aspectRatio = (widthValue / heightValue).toFixed(3)
inpaintingEditorContainer.style.width = widthValue + 'px'
inpaintingEditorContainer.style.height = heightValue + 'px'
inpaintingEditor.opts.enlargeYourContainer = true
inpaintingEditor.opts.size = inpaintingEditor.ctx.lineWidth
inpaintingEditor.resize()
inpaintingEditor.ctx.lineCap = "round"
inpaintingEditor.ctx.lineJoin = "round"
inpaintingEditor.ctx.lineWidth = inpaintingEditor.opts.size
inpaintingEditor.setColor(inpaintingEditor.opts.color)
}
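
A quick worked example of the scaling above, with illustrative dimensions: the longer side is pinned to INPAINTING_EDITOR_SIZE (450px) and the shorter side keeps the aspect ratio.

// e.g. a 512x768 request: height > width, so
//   widthValue  = (512 / 768) * 450 = 300px
//   heightValue = 450px
// (square requests snap straight to 450x450)
resizeInpaintingEditor(512, 768)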

File diff suppressed because it is too large

View File

@ -1,47 +0,0 @@
const PLUGIN_API_VERSION = "1.0"
const PLUGINS = {
/**
* Register new buttons to show on each output image.
*
* Example:
* PLUGINS['IMAGE_INFO_BUTTONS'].push({
* text: 'Make a Similar Image',
* on_click: function(origRequest, image) {
* let newTaskRequest = getCurrentUserRequest()
* newTaskRequest.reqBody = Object.assign({}, origRequest, {
* init_image: image.src,
* prompt_strength: 0.7,
* seed: Math.floor(Math.random() * 10000000)
* })
* newTaskRequest.seed = newTaskRequest.reqBody.seed
* createTask(newTaskRequest)
* },
* filter: function(origRequest, image) {
* // this is an optional function. return true/false to show/hide the button
* // if this function isn't set, the button will always be visible
* return true
* }
* })
*/
IMAGE_INFO_BUTTONS: []
}
async function loadUIPlugins() {
try {
let res = await fetch('/get/ui_plugins')
if (res.status === 200) {
res = await res.json()
res.forEach(pluginPath => {
let script = document.createElement('script')
script.src = pluginPath + '?t=' + Date.now()
console.log('loading plugin', pluginPath)
document.head.appendChild(script)
})
}
} catch (e) {
console.log('error fetching plugin paths', e)
}
}
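
Since loadUIPlugins() simply appends each returned path as a <script> tag, a plugin is an ordinary JS file that pushes into PLUGINS. A minimal sketch of such a file follows; the file name, button text and the guidance_scale tweak are invented, while getCurrentUserRequest() and createTask() are the main.js helpers referenced in the comment above:

// hypothetical ui/plugins/redo-higher-guidance.plugin.js (illustration only)
(function() {
    PLUGINS['IMAGE_INFO_BUTTONS'].push({
        text: 'Redo with Higher Guidance',
        on_click: function(origRequest, image) {
            let newTaskRequest = getCurrentUserRequest()
            newTaskRequest.reqBody = Object.assign({}, origRequest, {
                guidance_scale: (origRequest.guidance_scale || 7.5) + 2
            })
            newTaskRequest.seed = newTaskRequest.reqBody.seed
            createTask(newTaskRequest)
        },
        filter: function(origRequest, image) {
            // optional: only offer the button for txt2img results
            return origRequest.init_image === undefined
        }
    })
})()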

View File

@ -1,73 +0,0 @@
const themeField = document.getElementById("theme");
var DEFAULT_THEME = {};
var THEMES = []; // initialized in initTheme from data in css
function getThemeName(theme) {
theme = theme.replace("theme-", "");
theme = theme.split("-").map(word => word.charAt(0).toUpperCase() + word.slice(1)).join(" ");
return theme;
}
// init themefield
function initTheme() {
Array.from(document.styleSheets)
.filter(sheet => sheet.href?.startsWith(window.location.origin))
.flatMap(sheet => Array.from(sheet.cssRules))
.forEach(rule => {
var selector = rule.selectorText; // TODO: also do selector == ":root", re-run un-set props
if (selector && selector.startsWith(".theme-")) {
var theme_key = selector.substring(1);
THEMES.push({
key: theme_key,
name: getThemeName(theme_key),
rule: rule
})
}
if (selector && selector == ":root") {
DEFAULT_THEME = {
key: "theme-default",
name: "Default",
rule: rule
};
}
});
THEMES.forEach(theme => {
var new_option = document.createElement("option");
new_option.setAttribute("value", theme.key);
new_option.innerText = theme.name;
themeField.appendChild(new_option);
});
// setup the style transitions a second after app initializes, so initial style is instant
setTimeout(() => {
var body = document.querySelector("body");
var style = document.createElement('style');
style.innerHTML = "* { transition: background 0.5s, color 0.5s, background-color 0.5s; }";
body.appendChild(style);
}, 1000);
}
initTheme();
function themeFieldChanged() {
var theme_key = themeField.value;
var body = document.querySelector("body");
body.classList.remove(...THEMES.map(theme => theme.key));
body.classList.add(theme_key);
//
body.style = "";
var theme = THEMES.find(t => t.key == theme_key);
if (theme) {
// refresh variables in case they are back-referencing
Array.from(DEFAULT_THEME.rule.style)
.filter(cssVariable => !Array.from(theme.rule.style).includes(cssVariable))
.forEach(cssVariable => {
body.style.setProperty(cssVariable, DEFAULT_THEME.rule.style.getPropertyValue(cssVariable));
});
}
}
themeField.addEventListener('change', themeFieldChanged);
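
Because initTheme() discovers themes by scanning same-origin stylesheet rules whose selector starts with ".theme-", adding a theme to the dropdown only needs a new CSS class; the display name is derived from the class name. Two verifiable examples from the code above, plus a hypothetical one:

// getThemeName() turns the CSS class into the label shown in the <select>:
getThemeName("theme-cool-blue")    // -> "Cool Blue"
getThemeName("theme-super-dark")   // -> "Super Dark"
// so a hypothetical `.theme-my-palette { ... }` rule in the UI's own stylesheet
// would appear automatically as "My Palette" after the page loads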

View File

@ -1,344 +0,0 @@
// https://gomakethings.com/finding-the-next-and-previous-sibling-elements-that-match-a-selector-with-vanilla-js/
function getNextSibling(elem, selector) {
// Get the next sibling element
var sibling = elem.nextElementSibling
// If there's no selector, return the first sibling
if (!selector) return sibling
// If the sibling matches our selector, use it
// If not, jump to the next sibling and continue the loop
while (sibling) {
if (sibling.matches(selector)) return sibling
sibling = sibling.nextElementSibling
}
}
/* Panel Stuff */
// true = open
var COLLAPSIBLES_INITIALIZED = false;
const COLLAPSIBLES_KEY = "collapsibles";
const COLLAPSIBLE_PANELS = []; // filled in by createCollapsibles with all the elements matching .collapsible
// on-init call this for any panels that are marked open
function toggleCollapsible(element) {
var collapsibleHeader = element.querySelector(".collapsible");
var handle = element.querySelector(".collapsible-handle");
collapsibleHeader.classList.toggle("active")
let content = getNextSibling(collapsibleHeader, '.collapsible-content')
if (content.style.display === "block") {
content.style.display = "none"
handle.innerHTML = '&#x2795;' // plus
} else {
content.style.display = "block"
handle.innerHTML = '&#x2796;' // minus
}
if (COLLAPSIBLES_INITIALIZED && COLLAPSIBLE_PANELS.includes(element)) {
saveCollapsibles()
}
}
function saveCollapsibles() {
var values = {}
COLLAPSIBLE_PANELS.forEach(element => {
var value = element.querySelector(".collapsible").className.indexOf("active") !== -1
values[element.id] = value
})
localStorage.setItem(COLLAPSIBLES_KEY, JSON.stringify(values))
}
function createCollapsibles(node) {
var save = false
if (!node) {
node = document
save = true
}
let collapsibles = node.querySelectorAll(".collapsible")
collapsibles.forEach(function(c) {
if (save && c.parentElement.id) {
COLLAPSIBLE_PANELS.push(c.parentElement)
}
let handle = document.createElement('span')
handle.className = 'collapsible-handle'
if (c.className.indexOf('active') !== -1) {
handle.innerHTML = '&#x2796;' // minus
} else {
handle.innerHTML = '&#x2795;' // plus
}
c.insertBefore(handle, c.firstChild)
c.addEventListener('click', function() {
toggleCollapsible(c.parentElement)
})
})
if (save) {
var saved = localStorage.getItem(COLLAPSIBLES_KEY)
if (!saved) {
saved = tryLoadOldCollapsibles();
}
if (!saved) {
saveCollapsibles()
saved = localStorage.getItem(COLLAPSIBLES_KEY)
}
var values = JSON.parse(saved)
COLLAPSIBLE_PANELS.forEach(element => {
var value = element.querySelector(".collapsible").className.indexOf("active") !== -1
if (values[element.id] != value) {
toggleCollapsible(element)
}
})
COLLAPSIBLES_INITIALIZED = true
}
}
function tryLoadOldCollapsibles() {
var old_map = {
"advancedPanelOpen": "editor-settings",
"modifiersPanelOpen": "editor-modifiers",
"negativePromptPanelOpen": "editor-inputs-prompt"
};
if (localStorage.getItem(Object.keys(old_map)[0])) {
var result = {};
Object.keys(old_map).forEach(key => {
var value = localStorage.getItem(key);
if (value !== null) {
result[old_map[key]] = value == true || value == "true"
localStorage.removeItem(key)
}
});
result = JSON.stringify(result)
localStorage.setItem(COLLAPSIBLES_KEY, result)
return result
}
return null;
}
function permute(arr) {
let permutations = []
let n = arr.length
let n_permutations = Math.pow(2, n)
for (let i = 0; i < n_permutations; i++) {
let perm = []
let mask = Number(i).toString(2).padStart(n, '0')
for (let idx = 0; idx < mask.length; idx++) {
if (mask[idx] === '1' && arr[idx].trim() !== '') {
perm.push(arr[idx])
}
}
if (perm.length > 0) {
permutations.push(perm)
}
}
return permutations
}
// https://stackoverflow.com/a/8212878
function millisecondsToStr(milliseconds) {
function numberEnding (number) {
return (number > 1) ? 's' : ''
}
var temp = Math.floor(milliseconds / 1000)
var hours = Math.floor((temp %= 86400) / 3600)
var s = ''
if (hours) {
s += hours + ' hour' + numberEnding(hours) + ' '
}
var minutes = Math.floor((temp %= 3600) / 60)
if (minutes) {
s += minutes + ' minute' + numberEnding(minutes) + ' '
}
var seconds = temp % 60
if (!hours && minutes < 4 && seconds) {
s += seconds + ' second' + numberEnding(seconds)
}
return s
}
// https://rosettacode.org/wiki/Brace_expansion#JavaScript
function BraceExpander() {
'use strict'
// Index of any closing brace matching the opening
// brace at iPosn,
// with the indices of any immediately-enclosed commas.
function bracePair(tkns, iPosn, iNest, lstCommas) {
if (iPosn >= tkns.length || iPosn < 0) return null;
var t = tkns[iPosn],
n = (t === '{') ? (
iNest + 1
) : (t === '}' ? (
iNest - 1
) : iNest),
lst = (t === ',' && iNest === 1) ? (
lstCommas.concat(iPosn)
) : lstCommas;
return n ? bracePair(tkns, iPosn + 1, n, lst) : {
close: iPosn,
commas: lst
};
}
// Parse of a SYNTAGM subtree
function andTree(dctSofar, tkns) {
if (!tkns.length) return [dctSofar, []];
var dctParse = dctSofar ? dctSofar : {
fn: and,
args: []
},
head = tkns[0],
tail = head ? tkns.slice(1) : [],
dctBrace = head === '{' ? bracePair(
tkns, 0, 0, []
) : null,
lstOR = dctBrace && (
dctBrace.close
) && dctBrace.commas.length ? (
splitAt(dctBrace.close + 1, tkns)
) : null;
return andTree({
fn: and,
args: dctParse.args.concat(
lstOR ? (
orTree(dctParse, lstOR[0], dctBrace.commas)
) : head
)
}, lstOR ? (
lstOR[1]
) : tail);
}
// Parse of a PARADIGM subtree
function orTree(dctSofar, tkns, lstCommas) {
if (!tkns.length) return [dctSofar, []];
var iLast = lstCommas.length;
return {
fn: or,
args: splitsAt(
lstCommas, tkns
).map(function (x, i) {
var ts = x.slice(
1, i === iLast ? (
-1
) : void 0
);
return ts.length ? ts : [''];
}).map(function (ts) {
return ts.length > 1 ? (
andTree(null, ts)[0]
) : ts[0];
})
};
}
// List of unescaped braces and commas, and remaining strings
function tokens(str) {
// Filter function excludes empty splitting artefacts
var toS = function (x) {
return x.toString();
};
return str.split(/(\\\\)/).filter(toS).reduce(function (a, s) {
return a.concat(s.charAt(0) === '\\' ? s : s.split(
/(\\*[{,}])/
).filter(toS));
}, []);
}
// PARSE TREE OPERATOR (1 of 2)
// Each possible head * each possible tail
function and(args) {
var lng = args.length,
head = lng ? args[0] : null,
lstHead = "string" === typeof head ? (
[head]
) : head;
return lng ? (
1 < lng ? lstHead.reduce(function (a, h) {
return a.concat(
and(args.slice(1)).map(function (t) {
return h + t;
})
);
}, []) : lstHead
) : [];
}
// PARSE TREE OPERATOR (2 of 2)
// Each option flattened
function or(args) {
return args.reduce(function (a, b) {
return a.concat(b);
}, []);
}
// One list split into two (first sublist length n)
function splitAt(n, lst) {
return n < lst.length + 1 ? [
lst.slice(0, n), lst.slice(n)
] : [lst, []];
}
// One list split into several (sublist lengths [n])
function splitsAt(lstN, lst) {
return lstN.reduceRight(function (a, x) {
return splitAt(x, a[0]).concat(a.slice(1));
}, [lst]);
}
// Value of the parse tree
function evaluated(e) {
return typeof e === 'string' ? e :
e.fn(e.args.map(evaluated));
}
// JSON prettyprint (for parse tree, token list etc)
function pp(e) {
return JSON.stringify(e, function (k, v) {
return typeof v === 'function' ? (
'[function ' + v.name + ']'
) : v;
}, 2)
}
// ----------------------- MAIN ------------------------
// s -> [s]
this.expand = function(s) {
// BRACE EXPRESSION PARSED
var dctParse = andTree(null, tokens(s))[0];
// ABSTRACT SYNTAX TREE LOGGED
// console.log(pp(dctParse));
// AST EVALUATED TO LIST OF STRINGS
return evaluated(dctParse);
}
}
function asyncDelay(timeout) {
return new Promise(function(resolve, reject) {
setTimeout(resolve, timeout, true)
})
}
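
A few of the helpers above are easiest to read from their outputs; the return values in the comments below follow directly from the code as written:

// permute() enumerates every non-empty combination of the (non-blank) parts:
permute(['a', 'b'])
// -> [ ['b'], ['a'], ['a', 'b'] ]

// BraceExpander implements shell-style brace expansion (per the Rosetta Code
// reference linked above), useful for prompt variations:
let expander = new BraceExpander()
expander.expand('a {photo,painting} of a {cat,dog}')
// -> [ 'a photo of a cat', 'a photo of a dog',
//      'a painting of a cat', 'a painting of a dog' ]

// millisecondsToStr() gives a rough human-readable duration:
millisecondsToStr(95000)   // -> "1 minute 35 seconds"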

View File

Image file: 11 KiB before → 11 KiB after (preview dimensions not captured)

View File

@ -1,16 +1,8 @@
* {
font-family: Work Sans, Verdana, Geneva, sans-serif;
box-sizing: border-box;
}
html {
position: relative;
}
body {
font-family: Arial, Helvetica, sans-serif;
font-size: 11pt;
background-color: var(--background-color1);
color: var(--text-color);
background-color: rgb(32, 33, 36);
color: #eee;
}
a {
color: rgb(0, 102, 204);
@ -24,8 +16,12 @@ label {
#prompt {
width: 100%;
height: 65pt;
font-size: 13px;
margin-bottom: 6px;
box-sizing: border-box;
}
@media screen and (max-width: 600px) {
#prompt {
width: 95%;
}
}
.image_preview_container {
/* display: none; */
@ -33,7 +29,7 @@ label {
}
.image_clear_btn {
position: absolute;
transform: translate(30%, -30%);
transform: translateX(-50%) translateY(-35%);
background: black;
color: white;
border: 2pt solid #ccc;
@ -45,8 +41,6 @@ label {
height: 16pt;
font-family: Verdana;
font-size: 8pt;
top: 0px;
right: 0px;
}
.settings-box ul {
font-size: 9pt;
@ -77,7 +71,7 @@ label {
}
.imgSeedLabel {
font-size: 0.8em;
background-color: var(--background-color2);
background-color: rgb(44, 45, 48);
border-radius: 3px;
padding: 5px;
}
@ -107,7 +101,7 @@ label {
margin-bottom: 7px;
}
#container {
width: 95%;
width: 90%;
margin-left: auto;
margin-right: auto;
}
@ -127,7 +121,6 @@ label {
}
.settings-box label small {
color: rgb(153, 153, 153);
margin-right: 10px;
}
#preview {
padding: 5px;
@ -150,14 +143,14 @@ label {
}
#makeImage {
flex: 0 0 70px;
background: var(--accent-color);
border: var(--make-image-border);
background: rgb(80, 0, 185);
border: 2px solid rgb(40, 0, 78);
color: rgb(255, 221, 255);
width: 100%;
height: 30pt;
}
#makeImage:hover {
background: hsl(var(--accent-hue), 100%, calc(var(--accent-lightness) + 6%));
background: rgb(93, 0, 214);
}
#stopImage {
flex: 0 0 70px;
@ -174,13 +167,12 @@ label {
}
.flex-container {
display: flex;
width: 100%;
}
.col-50 {
flex: 50%;
}
.col-fixed-10 {
flex: 0 0 350pt;
flex: 0 0 380pt;
}
.col-free {
flex: 1;
@ -202,8 +194,8 @@ label {
padding-right: 5px;
}
.panel-box {
background: var(--background-color2);
border: 1px solid var(--background-color3);
background: rgb(44, 45, 48);
border: 1px solid rgb(47, 49, 53);
border-radius: 7px;
padding: 5px;
margin-bottom: 15px;
@ -242,18 +234,18 @@ img {
height: 8pt;
border-radius: 4pt; */
font-size: 14pt;
color: rgb(200, 139, 0);
color: rgb(128, 87, 0);
/* background-color: rgb(197, 1, 1); */
/* transform: translateY(15%); */
display: inline;
}
#server-status-msg {
color: rgb(200, 139, 0);
color: rgb(128, 87, 0);
padding-left: 2pt;
font-size: 10pt;
}
.preview-prompt {
font-size: 13pt;
font-size: 16pt;
margin-bottom: 10pt;
}
#coffeeButton {
@ -269,9 +261,6 @@ img {
.drawing-board-canvas-wrapper {
background-size: 100% 100%;
}
.drawing-board-controls {
min-width: 273px;
}
.drawing-board-control > button {
background-color: #eee;
border-radius: 3pt;
@ -354,7 +343,7 @@ img {
padding-right: 2pt;
}
#community-links li a {
color: var(--text-color);
color: white;
text-decoration: none;
}
.dropdown {
@ -365,8 +354,8 @@ img {
position: absolute;
z-index: 2;
background: var(--background-color4);
border: 2px solid var(--background-color2);
background: rgb(18, 18, 19);
border: 2px solid rgb(37, 38, 41);
border-radius: 7px;
padding: 5px;
margin-bottom: 15px;
@ -377,7 +366,7 @@ img {
}
.imageTaskContainer {
border: 1px solid var(--background-color2);
border: 1px solid #333;
margin-bottom: 10pt;
padding: 5pt;
border-radius: 5pt;
@ -386,7 +375,7 @@ img {
.taskStatusLabel {
float: left;
font-size: 8pt;
background:var(--background-color2);
background:rgb(44, 45, 48);
border: 1px solid rgb(61, 62, 66);
padding: 2pt 4pt;
border-radius: 2pt;
@ -395,12 +384,7 @@ img {
.activeTaskLabel {
background:rgb(0, 90, 30);
border: 1px solid rgb(0, 75, 19);
color:rgb(222, 253, 230)
}
.waitingTaskLabel {
background:rgb(128, 89, 0);
border: 1px solid rgb(107, 75, 0);
color:rgb(255, 242, 211)
color:rgb(204, 255, 217)
}
.secondaryButton {
background: rgb(132, 8, 0);
@ -427,217 +411,3 @@ img {
.img-batch {
display: inline;
}
#prompt_from_file {
display: none;
}
#init_image_preview {
max-width: 150px;
max-height: 150px;
object-fit: contain;
border-radius: 6px;
transition: all 1s ease-in-out;
}
#init_image_preview:hover {
max-width: 500px;
max-height: 1000px;
transition: all 1s 0.5s ease-in-out;
}
#init_image_wrapper {
position: relative;
width: fit-content;
}
#init_image_size_box {
position: absolute;
right: 0px;
bottom: 3px;
padding: 3px;
background: black;
color: white;
text-shadow: 0px 0px 4px black;
opacity: 60%;
font-size: 12px;
border-radius: 6px 0px;
}
#editor-settings-entries table td {
padding: 0px;
line-height: 28px;
}
#editor-settings-entries table td:first-child {
float: right;
padding-right: 4px;
white-space: nowrap;
}
#negative_prompt {
width: 100%;
}
button,
input[type="file"],
input[type="checkbox"],
select,
option {
cursor: pointer;
}
input,
select,
textarea {
border-radius: var(--input-border-radius);
padding: 4px;
accent-color: var(--accent-color);
background: var(--input-background-color);
border: var(--input-border-size) solid var(--input-border-color);
color: var(--input-text-color);
font-size: 9pt;
}
input:hover {
accent-color: var(--accent-color-hover);
}
input {
padding: 4px 6px;
}
input:focus,
select:focus,
textarea:focus {
outline: 2px solid var(--accent-color);
}
input[disabled],
select[disabled],
textarea[disabled] {
opacity: 0.5;
}
input[type="file"] {
width: 100%;
padding: 2px;
}
button,
input::file-selector-button {
padding: 2px 4px;
border-radius: 4px;
background: var(--button-color);
color: var(--button-text-color);
border: var(--button-border);
}
input::file-selector-button {
padding: 0px 4px;
height: 19px;
}
@media screen and (max-width: 700px) {
body {
margin: 0px;
}
#container {
margin: 0px;
padding: 10px
}
.flex-container {
flex-direction: column;
}
#preview {
margin: 0px;
padding: 0px;
}
#preview .collapsible-content {
padding: 0px;
}
#preview .collapsible-content {
padding: 0px;
}
.imgItem {
margin-right: 0px;
}
.imgItem img {
height: 100%;
width: 100%;
object-fit: contain;
}
.dropdown-content {
width: auto !important;
transform: none !important;
left: 0px;
right: 0px;
}
}
#promptsFromFileBtn {
font-size: 9pt;
}
#reset-image-settings {
position: relative;
transform: translateY(-13%);
}
.simple-tooltip {
border-radius: 3px;
font-weight: bold;
font-size: 16px;
background-color: var(--background-color3);
visibility: hidden;
opacity: 0;
position: absolute;
white-space: nowrap;
padding: 8px 12px;
transition: 0.3s all;
pointer-events: none;
}
@media (hover: hover) {
:hover > .simple-tooltip {
opacity: 1;
visibility: visible;
}
}
/* position specific */
.simple-tooltip.right {
right: 0px;
top: 50%;
transform: translate(calc(100% - 15%), -50%);
}
:hover > .simple-tooltip.right {
transform: translate(100%, -50%);
}
.simple-tooltip.top {
top: 0px;
left: 50%;
transform: translate(-50%, calc(-100% + 15%));
}
:hover > .simple-tooltip.top {
transform: translate(-50%, -100%);
}
.simple-tooltip.left {
left: 0px;
top: 50%;
transform: translate(calc(-100% + 15%), -50%);
}
:hover > .simple-tooltip.left {
transform: translate(-100%, -50%);
}
.simple-tooltip.bottom {
bottom: 0px;
left: 50%;
transform: translate(-50%, calc(100% - 15%));
}
:hover > .simple-tooltip.bottom {
transform: translate(-50%, 100%);
}

ui/media/main.js (Normal file, 1345 changed lines)

File diff suppressed because it is too large

View File

@ -214,36 +214,3 @@
margin-bottom: 0.5em;
vertical-align: middle;
}
#modifier-settings-btn {
float: right;
}
#modifier-settings-config {
position: fixed;
background: rgba(32, 33, 36, 50%);
top: 0px;
left: 0px;
width: 100vw;
height: 100vh;
z-index: 1000;
}
#modifier-settings-config > div {
background: var(--background-color2);
max-width: 600px;
margin: auto;
margin-top: 100px;
border-radius: 6px;
padding: 30px;
text-align: center;
}
#modifier-settings-config-close-btn {
float: right;
cursor: pointer;
padding: 10px;
transform: translate(50%, -50%) scaleX(130%);
}
#modifier-settings-config textarea {
width: 90%;
height: 150px;
}

Binary file not shown.

Image file: 21 KiB before → 91 KiB after (preview dimensions not captured)

Binary file not shown.

Image file: 21 KiB before → 47 KiB after (preview dimensions not captured)

View File

@ -22,9 +22,8 @@ class Request:
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
output_format: str = "jpeg" # "png", "jpeg"
stream_progress_updates: bool = False
stream_image_progress: bool = False
@ -44,7 +43,6 @@ class Request:
"sampler": self.sampler,
"use_face_correction": self.use_face_correction,
"use_upscale": self.use_upscale,
"use_stable_diffusion_model": self.use_stable_diffusion_model,
"output_format": self.output_format,
}
@ -66,7 +64,6 @@ class Request:
use_full_precision: {self.use_full_precision}
use_face_correction: {self.use_face_correction}
use_upscale: {self.use_upscale}
use_stable_diffusion_model: {self.use_stable_diffusion_model}
show_only_filtered_image: {self.show_only_filtered_image}
output_format: {self.output_format}
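
For context, a hedged sketch of the kind of JSON body the browser UI posts for these fields. The field names come from the Request class above; the endpoint path and the sample values are assumptions, not taken from this diff:

// illustrative request body (values invented); the '/image' path is an assumption
let exampleReqBody = {
    prompt: "a photograph of an astronaut riding a horse",
    seed: 42,
    num_outputs: 1,
    num_inference_steps: 25,
    guidance_scale: 7.5,
    width: 512,
    height: 512,
    sampler: "plms",
    use_face_correction: "GFPGANv1.3",
    use_upscale: "RealESRGAN_x4plus",
    show_only_filtered_image: false,
    output_format: "jpeg",            // or "png"
    stream_progress_updates: true,
    stream_image_progress: false
}
fetch('/image', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(exampleReqBody)
})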

View File

@ -0,0 +1,46 @@
diff --git a/ldm/dream/conditioning.py b/ldm/dream/conditioning.py
index dfa1089..e4908ad 100644
--- a/ldm/dream/conditioning.py
+++ b/ldm/dream/conditioning.py
@@ -12,8 +12,8 @@ log_tokenization() print out colour-coded tokens and warn if trunca
import re
import torch
-def get_uc_and_c(prompt, model, log_tokens=False, skip_normalize=False):
- uc = model.get_learned_conditioning([''])
+def get_uc_and_c(prompt, model, log_tokens=False, skip_normalize=False, negative_prompt=''):
+ uc = model.get_learned_conditioning([negative_prompt])
# get weighted sub-prompts
weighted_subprompts = split_weighted_subprompts(
diff --git a/ldm/generate.py b/ldm/generate.py
index 8f67403..d88ce2d 100644
--- a/ldm/generate.py
+++ b/ldm/generate.py
@@ -205,6 +205,7 @@ class Generate:
init_mask = None,
fit = False,
strength = None,
+ init_img_is_path = True,
# these are specific to GFPGAN/ESRGAN
gfpgan_strength= 0,
save_original = False,
@@ -303,11 +304,15 @@ class Generate:
uc, c = get_uc_and_c(
prompt, model=self.model,
skip_normalize=skip_normalize,
- log_tokens=self.log_tokenization
+ log_tokens=self.log_tokenization,
+ negative_prompt=(args['negative_prompt'] if 'negative_prompt' in args else '')
)
- (init_image,mask_image) = self._make_images(init_img,init_mask, width, height, fit)
-
+ if init_img_is_path:
+ (init_image,mask_image) = self._make_images(init_img,init_mask, width, height, fit)
+ else:
+ (init_image,mask_image) = (init_img, init_mask)
+
if (init_image is not None) and (mask_image is not None):
generator = self._make_inpaint()
elif init_image is not None:

View File

@ -1,64 +1,47 @@
import json
import os, re
import traceback
import sys
import os
import uuid
import re
import torch
import traceback
import numpy as np
from omegaconf import OmegaConf
from PIL import Image, ImageOps
from tqdm import tqdm, trange
from itertools import islice
from pytorch_lightning import logging
from einops import rearrange
import time
from pytorch_lightning import seed_everything
from torch import autocast
from contextlib import nullcontext
from einops import rearrange, repeat
from ldm.util import instantiate_from_config
from optimizedSD.optimUtils import split_weighted_subprompts
from transformers import logging
from PIL import Image, ImageOps, ImageChops
from ldm.generate import Generate
import transformers
from gfpgan import GFPGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
import uuid
transformers.logging.set_verbosity_error()
logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
filename_regex = re.compile('[^a-zA-Z0-9]')
# api stuff
from . import Request, Response, Image as ResponseImage
import base64
import json
from io import BytesIO
#from colorama import Fore
filename_regex = re.compile('[^a-zA-Z0-9]')
generator = None
gfpgan_file = None
real_esrgan_file = None
model_gfpgan = None
model_real_esrgan = None
device = None
precision = 'autocast'
has_valid_gpu = False
force_full_precision = False
# local
stop_processing = False
temp_images = {}
ckpt_file = None
gfpgan_file = None
real_esrgan_file = None
model = None
modelCS = None
modelFS = None
model_gfpgan = None
model_real_esrgan = None
model_is_half = False
model_fs_is_half = False
device = None
unet_bs = 1
precision = 'autocast'
sampler_plms = None
sampler_ddim = None
has_valid_gpu = False
force_full_precision = False
try:
gpu = torch.cuda.current_device()
gpu_name = torch.cuda.get_device_name(gpu)
@ -79,80 +62,45 @@ except:
print('WARNING: No compatible GPU found. Using the CPU, but this will be very slow!')
pass
def load_model_ckpt(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_use=1, precision_to_use='autocast'):
global ckpt_file, model, modelCS, modelFS, model_is_half, device, unet_bs, precision, model_fs_is_half
def load_model_ckpt(ckpt_to_use, device_to_use='cuda', precision_to_use='autocast'):
global generator
device = device_to_use if has_valid_gpu else 'cpu'
precision = precision_to_use if not force_full_precision else 'full'
unet_bs = unet_bs_to_use
unload_model()
try:
config = 'configs/models.yaml'
model = 'stable-diffusion-1.4'
if device == 'cpu':
precision = 'full'
models = OmegaConf.load(config)
width = models[model].width
height = models[model].height
config = models[model].config
weights = ckpt_to_use + '.ckpt'
except (FileNotFoundError, IOError, KeyError) as e:
print(f'{e}. Aborting.')
sys.exit(-1)
sd = load_model_from_config(f"{ckpt_to_use}.ckpt")
li, lo = [], []
for key, value in sd.items():
sp = key.split(".")
if (sp[0]) == "model":
if "input_blocks" in sp:
li.append(key)
elif "middle_block" in sp:
li.append(key)
elif "time_embed" in sp:
li.append(key)
else:
lo.append(key)
for key in li:
sd["model1." + key[6:]] = sd.pop(key)
for key in lo:
sd["model2." + key[6:]] = sd.pop(key)
generator = Generate(
width=width,
height=height,
sampler_name='ddim',
weights=weights,
full_precision=(precision == 'full'),
config=config,
grid=False,
# this is solely for recreating the prompt
seamless=False,
embedding_path=None,
device_type=device,
ignore_ctrl_c=True,
)
config = OmegaConf.load(f"{config_yaml}")
# gets rid of annoying messages about random seed
logging.getLogger('pytorch_lightning').setLevel(logging.ERROR)
model = instantiate_from_config(config.modelUNet)
_, _ = model.load_state_dict(sd, strict=False)
model.eval()
model.cdevice = device
model.unet_bs = unet_bs
model.turbo = turbo
modelCS = instantiate_from_config(config.modelCondStage)
_, _ = modelCS.load_state_dict(sd, strict=False)
modelCS.eval()
modelCS.cond_stage_model.device = device
modelFS = instantiate_from_config(config.modelFirstStage)
_, _ = modelFS.load_state_dict(sd, strict=False)
modelFS.eval()
del sd
if device != "cpu" and precision == "autocast":
model.half()
modelCS.half()
modelFS.half()
model_is_half = True
model_fs_is_half = True
else:
model_is_half = False
model_fs_is_half = False
ckpt_file = ckpt_to_use
print('loaded ', ckpt_file, 'to', device, 'precision', precision)
def unload_model():
global model, modelCS, modelFS
if model is not None:
del model
del modelCS
del modelFS
model = None
modelCS = None
modelFS = None
# preload the model
generator.load_model()
def load_model_gfpgan(gfpgan_to_use):
global gfpgan_file, model_gfpgan
@ -191,41 +139,12 @@ def load_model_real_esrgan(real_esrgan_to_use):
model_real_esrgan.device = torch.device('cpu')
model_real_esrgan.model.to('cpu')
else:
model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=model_is_half)
model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=(precision != 'full'))
model_real_esrgan.model.name = real_esrgan_to_use
print('loaded ', real_esrgan_to_use, 'to', device, 'precision', precision)
def get_base_path(disk_path, session_id, prompt, img_id, ext, suffix=None):
if disk_path is None: return None
if session_id is None: return None
if ext is None: raise Exception('Missing ext')
session_out_path = os.path.join(disk_path, session_id)
os.makedirs(session_out_path, exist_ok=True)
prompt_flattened = filename_regex.sub('_', prompt)[:50]
if suffix is not None:
return os.path.join(session_out_path, f"{prompt_flattened}_{img_id}_{suffix}.{ext}")
return os.path.join(session_out_path, f"{prompt_flattened}_{img_id}.{ext}")
def apply_filters(filter_name, image_data):
print(f'Applying filter {filter_name}...')
gc()
if filter_name == 'gfpgan':
_, _, output = model_gfpgan.enhance(image_data[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True)
image_data = output[:,:,::-1]
if filter_name == 'real_esrgan':
output, _ = model_real_esrgan.enhance(image_data[:,:,::-1])
image_data = output[:,:,::-1]
return image_data
def mk_img(req: Request):
try:
yield from do_mk_img(req)
@ -234,14 +153,14 @@ def mk_img(req: Request):
gc()
if device != "cpu":
modelFS.to("cpu")
modelCS.to("cpu")
# if device != "cpu":
# modelFS.to("cpu")
# modelCS.to("cpu")
model.model1.to("cpu")
model.model2.to("cpu")
# model.model1.to("cpu")
# model.model2.to("cpu")
gc()
# gc()
yield json.dumps({
"status": 'failed',
@ -249,125 +168,100 @@ def mk_img(req: Request):
})
def do_mk_img(req: Request):
global ckpt_file
global model, modelCS, modelFS, device
global model_gfpgan, model_real_esrgan
global stop_processing
stop_processing = False
res = Response()
res.request = req
res.images = []
temp_images.clear()
# custom model support:
# the req.use_stable_diffusion_model needs to be a valid path
# to the ckpt file (without the extension).
needs_model_reload = False
ckpt_to_use = ckpt_file
if ckpt_to_use != req.use_stable_diffusion_model:
ckpt_to_use = req.use_stable_diffusion_model
needs_model_reload = True
model.turbo = req.turbo
if req.use_cpu:
if device != 'cpu':
device = 'cpu'
if model_is_half:
load_model_ckpt(ckpt_to_use, device)
needs_model_reload = False
load_model_gfpgan(gfpgan_file)
load_model_real_esrgan(real_esrgan_file)
else:
if has_valid_gpu:
prev_device = device
device = 'cuda'
if (precision == 'autocast' and (req.use_full_precision or not model_is_half)) or \
(precision == 'full' and not req.use_full_precision and not force_full_precision):
load_model_ckpt(ckpt_to_use, device, req.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'))
needs_model_reload = False
if prev_device != device:
load_model_gfpgan(gfpgan_file)
load_model_real_esrgan(real_esrgan_file)
if needs_model_reload:
load_model_ckpt(ckpt_to_use, device, req.turbo, unet_bs, precision)
if req.use_face_correction != gfpgan_file:
load_model_gfpgan(req.use_face_correction)
if req.use_upscale != real_esrgan_file:
load_model_real_esrgan(req.use_upscale)
model.cdevice = device
modelCS.cond_stage_model.device = device
init_image = None
init_mask = None
opt_prompt = req.prompt
opt_seed = req.seed
opt_n_iter = 1
opt_C = 4
opt_f = 8
opt_ddim_eta = 0.0
opt_init_img = req.init_image
if req.init_image is not None:
image = base64_str_to_img(req.init_image)
print(req.to_string(), '\n device', device)
w, h = image.size
print(f"loaded input image of size ({w}, {h}) from base64")
if req.width is not None and req.height is not None:
h, w = req.height, req.width
print('\n\n Using precision:', precision)
w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
image = image.resize((w, h), resample=Image.Resampling.LANCZOS)
init_image = generator._create_init_image(image)
seed_everything(opt_seed)
if generator._has_transparency(image) and req.mask is None: # if image has a transparent area and no mask was provided, then try to generate mask
print('>> Initial image has transparent areas. Will inpaint in these regions.')
if generator._check_for_erasure(image):
print(
'>> WARNING: Colors underneath the transparent region seem to have been erased.\n',
'>> Inpainting will be suboptimal. Please preserve the colors when making\n',
'>> a transparency mask, or provide mask explicitly using --init_mask (-M).'
)
init_mask = generator._create_init_mask(image) # this returns a torch tensor
batch_size = req.num_outputs
prompt = opt_prompt
assert prompt is not None
data = [batch_size * [prompt]]
if precision == "autocast" and device != "cpu":
precision_scope = autocast
else:
precision_scope = nullcontext
mask = None
if req.init_image is None:
handler = _txt2img
init_latent = None
t_enc = None
else:
handler = _img2img
init_image = load_img(req.init_image, req.width, req.height)
init_image = init_image.to(device)
if device != "cpu" and precision == "autocast":
if device != "cpu" and precision != "full":
init_image = init_image.half()
modelFS.to(device)
init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image)) # move to latent space
if req.mask is not None:
mask = load_mask(req.mask, req.width, req.height, init_latent.shape[2], init_latent.shape[3], True).to(device)
mask = mask[0][0].unsqueeze(0).repeat(4, 1, 1).unsqueeze(0)
mask = repeat(mask, '1 ... -> b ...', b=batch_size)
image = base64_str_to_img(req.mask)
if device != "cpu" and precision == "autocast":
mask = mask.half()
image = ImageChops.invert(image)
move_fs_to_cpu()
w, h = image.size
print(f"loaded input image of size ({w}, {h}) from base64")
if req.width is not None and req.height is not None:
h, w = req.height, req.width
assert 0. <= req.prompt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
t_enc = int(req.prompt_strength * req.num_inference_steps)
print(f"target t_enc is {t_enc} steps")
w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
image = image.resize((w, h), resample=Image.Resampling.LANCZOS)
init_mask = generator._create_init_mask(image)
if init_mask is not None:
req.sampler = 'plms' # hack to force the underlying implementation to initialize DDIM properly
result = generator.prompt2image(
req.prompt,
iterations = req.num_outputs,
steps = req.num_inference_steps,
seed = req.seed,
cfg_scale = req.guidance_scale,
ddim_eta = 0.0,
skip_normalize = False,
image_callback = None,
step_callback = None,
width = req.width,
height = req.height,
sampler_name = req.sampler,
seamless = False,
log_tokenization= False,
with_variations = None,
variation_amount = 0.0,
# these are specific to img2img and inpaint
init_img = init_image,
init_mask = init_mask,
fit = False,
strength = req.prompt_strength,
init_img_is_path = False,
# these are specific to GFPGAN/ESRGAN
gfpgan_strength= 0,
save_original = False,
upscale = None,
negative_prompt= req.negative_prompt,
)
has_filters = (req.use_face_correction is not None and req.use_face_correction.startswith('GFPGAN')) or \
(req.use_upscale is not None and req.use_upscale.startswith('RealESRGAN'))
print('has filter', has_filters)
return_orig_img = not has_filters or not req.show_only_filtered_image
res = Response()
res.request = req
res.images = []
if req.save_to_disk_path is not None:
session_out_path = os.path.join(req.save_to_disk_path, req.session_id)
@ -375,152 +269,63 @@ def do_mk_img(req: Request):
else:
session_out_path = None
seeds = ""
with torch.no_grad():
for n in trange(opt_n_iter, desc="Sampling"):
for prompts in tqdm(data, desc="data"):
with precision_scope("cuda"):
modelCS.to(device)
uc = None
if req.guidance_scale != 1.0:
uc = modelCS.get_learned_conditioning(batch_size * [req.negative_prompt])
if isinstance(prompts, tuple):
prompts = list(prompts)
subprompts, weights = split_weighted_subprompts(prompts[0])
if len(subprompts) > 1:
c = torch.zeros_like(uc)
totalWeight = sum(weights)
# normalize each "sub prompt" and add it
for i in range(len(subprompts)):
weight = weights[i]
# if not skip_normalize:
weight = weight / totalWeight
c = torch.add(c, modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = modelCS.get_learned_conditioning(prompts)
modelFS.to(device)
partial_x_samples = None
last_callback_time = -1
def img_callback(x_samples, i):
nonlocal partial_x_samples, last_callback_time
partial_x_samples = x_samples
if req.stream_progress_updates:
n_steps = req.num_inference_steps if req.init_image is None else t_enc
step_time = time.time() - last_callback_time if last_callback_time != -1 else -1
last_callback_time = time.time()
progress = {"step": i, "total_steps": n_steps, "step_time": step_time}
if req.stream_image_progress and i % 5 == 0:
partial_images = []
for i in range(batch_size):
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
buf = BytesIO()
img.save(buf, format='JPEG')
buf.seek(0)
del img, x_sample, x_samples_ddim
# don't delete x_samples, it is used in the code that called this callback
temp_images[str(req.session_id) + '/' + str(i)] = buf
partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
progress['output'] = partial_images
yield json.dumps(progress)
if stop_processing:
raise UserInitiatedStop("User requested that we stop processing")
# run the handler
try:
if handler == _txt2img:
x_samples = _txt2img(req.width, req.height, req.num_outputs, req.num_inference_steps, req.guidance_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, req.sampler)
else:
x_samples = _img2img(init_latent, t_enc, batch_size, req.guidance_scale, c, uc, req.num_inference_steps, opt_ddim_eta, opt_seed, img_callback, mask)
yield from x_samples
x_samples = partial_x_samples
except UserInitiatedStop:
if partial_x_samples is None:
continue
x_samples = partial_x_samples
print("saving images")
for i in range(batch_size):
img_id = base64.b64encode(int(time.time()+i).to_bytes(8, 'big')).decode() # Generate unique ID based on time.
img_id = img_id.translate({43:None, 47:None, 61:None})[-8:] # Remove + / = and keep last 8 chars.
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
x_sample = x_sample.astype(np.uint8)
img = Image.fromarray(x_sample)
has_filters = (req.use_face_correction is not None and req.use_face_correction.startswith('GFPGAN')) or \
(req.use_upscale is not None and req.use_upscale.startswith('RealESRGAN'))
return_orig_img = not has_filters or not req.show_only_filtered_image
if stop_processing:
return_orig_img = True
for img, seed in result:
if req.save_to_disk_path is not None:
if return_orig_img:
img_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, req.output_format)
save_image(img, img_out_path)
meta_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, 'txt')
save_metadata(meta_out_path, req, prompts[0], opt_seed)
prompt_flattened = filename_regex.sub('_', req.prompt)
prompt_flattened = prompt_flattened[:50]
img_id = str(uuid.uuid4())[-8:]
file_path = f"{prompt_flattened}_{img_id}"
img_out_path = os.path.join(session_out_path, f"{file_path}.{req.output_format}")
meta_out_path = os.path.join(session_out_path, f"{file_path}.txt")
if return_orig_img:
img_data = img_to_base64_str(img, req.output_format)
res_image_orig = ResponseImage(data=img_data, seed=opt_seed)
save_image(img, img_out_path)
save_metadata(meta_out_path, req.prompt, seed, req.width, req.height, req.num_inference_steps, req.guidance_scale, req.prompt_strength, req.use_face_correction, req.use_upscale, req.sampler, req.negative_prompt)
if return_orig_img:
img_data = img_to_base64_str(img)
res_image_orig = ResponseImage(data=img_data, seed=seed)
res.images.append(res_image_orig)
if req.save_to_disk_path is not None:
res_image_orig.path_abs = img_out_path
del img
if has_filters and not stop_processing:
print('Applying filters..')
gc()
filters_applied = []
np_img = img.convert('RGB')
np_img = np.array(np_img, dtype=np.uint8)
if req.use_face_correction:
x_sample = apply_filters('gfpgan', x_sample)
_, _, np_img = model_gfpgan.enhance(np_img, has_aligned=False, only_center_face=False, paste_back=True)
filters_applied.append(req.use_face_correction)
if req.use_upscale:
x_sample = apply_filters('real_esrgan', x_sample)
np_img, _ = model_real_esrgan.enhance(np_img)
filters_applied.append(req.use_upscale)
if (len(filters_applied) > 0):
filtered_image = Image.fromarray(x_sample)
filtered_img_data = img_to_base64_str(filtered_image, req.output_format)
response_image = ResponseImage(data=filtered_img_data, seed=opt_seed)
res.images.append(response_image)
filtered_image = Image.fromarray(np_img)
filtered_img_data = img_to_base64_str(filtered_image)
res_image_filtered = ResponseImage(data=filtered_img_data, seed=seed)
res.images.append(res_image_filtered)
filters_applied = "_".join(filters_applied)
if req.save_to_disk_path is not None:
filtered_img_out_path = get_base_path(req.save_to_disk_path, req.session_id, prompts[0], img_id, req.output_format, "_".join(filters_applied))
filtered_img_out_path = os.path.join(session_out_path, f"{file_path}_{filters_applied}.{req.output_format}")
save_image(filtered_image, filtered_img_out_path)
response_image.path_abs = filtered_img_out_path
res_image_filtered.path_abs = filtered_img_out_path
del filtered_image
seeds += str(opt_seed) + ","
opt_seed += 1
move_fs_to_cpu()
gc()
del x_samples, x_samples_ddim, x_sample
print("memory_final = ", torch.cuda.memory_allocated() / 1e6)
del img
print('Task completed')
@@ -532,88 +337,15 @@ def save_image(img, img_out_path):
except:
print('could not save the file', traceback.format_exc())
def save_metadata(meta_out_path, req, prompt, opt_seed):
metadata = f"""{prompt}
Width: {req.width}
Height: {req.height}
Seed: {opt_seed}
Steps: {req.num_inference_steps}
Guidance Scale: {req.guidance_scale}
Prompt Strength: {req.prompt_strength}
Use Face Correction: {req.use_face_correction}
Use Upscaling: {req.use_upscale}
Sampler: {req.sampler}
Negative Prompt: {req.negative_prompt}
Stable Diffusion Model: {req.use_stable_diffusion_model + '.ckpt'}
"""
def save_metadata(meta_out_path, prompt, seed, width, height, num_inference_steps, guidance_scale, prompt_strength, use_correct_face, use_upscale, sampler_name, negative_prompt):
metadata = f"{prompt}\nWidth: {width}\nHeight: {height}\nSeed: {seed}\nSteps: {num_inference_steps}\nGuidance Scale: {guidance_scale}\nPrompt Strength: {prompt_strength}\nUse Face Correction: {use_correct_face}\nUse Upscaling: {use_upscale}\nSampler: {sampler_name}\nNegative Prompt: {negative_prompt}"
try:
with open(meta_out_path, 'w', encoding='utf-8') as f:
with open(meta_out_path, 'w') as f:
f.write(metadata)
except:
print('could not save the file', traceback.format_exc())
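For reference, the single-line variant of save_metadata above produces a sidecar .txt file along these lines (values illustrative; the multi-line variant additionally records the Stable Diffusion model name):
```
a photograph of an astronaut riding a horse
Width: 512
Height: 512
Seed: 42
Steps: 50
Guidance Scale: 7.5
Prompt Strength: 0.8
Use Face Correction: GFPGANv1.3
Use Upscaling: RealESRGAN_x4plus
Sampler: plms
Negative Prompt: blurry, low quality
```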
def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, sampler_name):
shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelCS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
if sampler_name == 'ddim':
model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False)
samples_ddim = model.sample(
S=opt_ddim_steps,
conditioning=c,
seed=opt_seed,
shape=shape,
verbose=False,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
eta=opt_ddim_eta,
x_T=start_code,
img_callback=img_callback,
mask=mask,
sampler = sampler_name,
)
yield from samples_ddim
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask):
# encode (scaled latent)
z_enc = model.stochastic_encode(
init_latent,
torch.tensor([t_enc] * batch_size).to(device),
opt_seed,
opt_ddim_eta,
opt_ddim_steps,
)
x_T = None if mask is None else init_latent
# decode it
samples_ddim = model.sample(
t_enc,
c,
z_enc,
unconditional_guidance_scale=opt_scale,
unconditional_conditioning=uc,
img_callback=img_callback,
mask=mask,
x_T=x_T,
sampler = 'ddim'
)
yield from samples_ddim
def move_fs_to_cpu():
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelFS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
def gc():
if device == 'cpu':
return
@@ -621,25 +353,6 @@ def gc():
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
# internal
def chunk(it, size):
it = iter(it)
return iter(lambda: tuple(islice(it, size)), ())
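chunk splits any iterable into fixed-size tuples, with a shorter final tuple when the length is not a multiple of size. A quick standalone usage example (the helper is copied here so the snippet runs on its own):
```python
from itertools import islice

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())

print(list(chunk(range(5), 2)))  # [(0, 1), (2, 3), (4,)]
print(list(chunk("abcdef", 3)))  # [('a', 'b', 'c'), ('d', 'e', 'f')]
```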
def load_model_from_config(ckpt, verbose=False):
print(f"Loading model from {ckpt}")
pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
return sd
# utils
class UserInitiatedStop(Exception):
pass
def load_img(img_str, w0, h0):
image = base64_str_to_img(img_str).convert("RGB")
w, h = image.size
@@ -680,9 +393,9 @@ def load_mask(mask_str, h0, w0, newH, newW, invert=False):
return image
# https://stackoverflow.com/a/61114178
def img_to_base64_str(img, output_format="PNG"):
def img_to_base64_str(img):
buffered = BytesIO()
img.save(buffered, format=output_format)
img.save(buffered, format="PNG")
buffered.seek(0)
img_byte = buffered.getvalue()
img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode()

View File

@@ -1,299 +0,0 @@
import json
import traceback
TASK_TTL = 15 * 60 # seconds; discard a session's task data after this timeout
import queue, threading, time
from typing import Any, Generator, Hashable, Optional, Union
from pydantic import BaseModel
from sd_internal import Request, Response
class SymbolClass(type): # Print nicely formatted Symbol names.
def __repr__(self): return self.__qualname__
def __str__(self): return self.__name__
class Symbol(metaclass=SymbolClass): pass
class ServerStates:
class Init(Symbol): pass
class LoadingModel(Symbol): pass
class Online(Symbol): pass
class Rendering(Symbol): pass
class Unavailable(Symbol): pass
class RenderTask(): # Task with output queue and completion lock.
def __init__(self, req: Request):
self.request: Request = req # Initial Request
self.response: Any = None # Copy of the last response
self.temp_images: list = [None] * req.num_outputs * (1 if req.show_only_filtered_image else 2)
self.error: Exception = None
self.lock: threading.Lock = threading.Lock() # Locks at task start and unlocks when task is completed
self.buffer_queue: queue.Queue = queue.Queue() # Queue of JSON string segments
async def read_buffer_generator(self):
try:
while not self.buffer_queue.empty():
res = self.buffer_queue.get(block=False)
self.buffer_queue.task_done()
yield res
except queue.Empty as e: yield
# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
session_id: str = "session"
prompt: str = ""
negative_prompt: str = ""
init_image: str = None # base64
mask: str = None # base64
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
width: int = 512
height: int = 512
seed: int = 42
prompt_strength: float = 0.8
sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
# allow_nsfw: bool = False
save_to_disk_path: str = None
turbo: bool = True
use_cpu: bool = False
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
use_stable_diffusion_model: str = "sd-v1-4"
show_only_filtered_image: bool = False
output_format: str = "jpeg" # or "png"
stream_progress_updates: bool = False
stream_image_progress: bool = False
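A minimal request body matching the ImageRequest model above; only the fields you want to override need to be sent, everything else falls back to the defaults (values illustrative):
```python
import json

request_body = {
    "session_id": "my-session",
    "prompt": "a photograph of an astronaut riding a horse",
    "negative_prompt": "blurry, low quality",
    "num_outputs": 1,
    "num_inference_steps": 50,
    "guidance_scale": 7.5,
    "width": 512,
    "height": 512,
    "seed": 42,
    "sampler": "plms",
    "use_face_correction": "GFPGANv1.3",
    "use_upscale": "RealESRGAN_x4plus",
    "use_stable_diffusion_model": "sd-v1-4",
    "output_format": "jpeg",
    "stream_progress_updates": True,
    "stream_image_progress": True,
}
print(json.dumps(request_body, indent=2))
```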
# Temporary cache that allows querying task results for a short time after they are completed.
class TaskCache():
def __init__(self):
self._base = dict()
self._lock: threading.Lock = threading.RLock()
def _get_ttl_time(self, ttl: int) -> int:
return int(time.time()) + ttl
def _is_expired(self, timestamp: int) -> bool:
return int(time.time()) >= timestamp
def clean(self) -> None:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.clean failed to acquire lock within timeout.')
try:
# Create a list of expired keys to delete
to_delete = []
for key in self._base:
ttl, _ = self._base[key]
if self._is_expired(ttl):
to_delete.append(key)
# Remove Items
for key in to_delete:
del self._base[key]
print(f'Session {key} expired. Data removed.')
finally:
self._lock.release()
def clear(self) -> None:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.clear failed to acquire lock within timeout.')
try: self._base.clear()
finally: self._lock.release()
def delete(self, key: Hashable) -> bool:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.delete failed to acquire lock within timeout.')
try:
if key not in self._base:
return False
del self._base[key]
return True
finally:
self._lock.release()
def keep(self, key: Hashable, ttl: int) -> bool:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.keep failed to acquire lock within timeout.')
try:
if key in self._base:
_, value = self._base.get(key)
self._base[key] = (self._get_ttl_time(ttl), value)
return True
return False
finally:
self._lock.release()
def put(self, key: Hashable, value: Any, ttl: int) -> bool:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.put failed to acquire lock within timeout.')
try:
self._base[key] = (
self._get_ttl_time(ttl), value
)
except Exception as e:
print(str(e))
print(traceback.format_exc())
return False
else:
return True
finally:
self._lock.release()
def tryGet(self, key: Hashable) -> Any:
if not self._lock.acquire(blocking=True, timeout=10): raise Exception('TaskCache.tryGet failed to acquire lock within timeout.')
try:
ttl, value = self._base.get(key, (None, None))
if ttl is not None and self._is_expired(ttl):
print(f'Session {key} expired. Discarding data.')
self.delete(key)
return None
return value
finally:
self._lock.release()
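A usage sketch of TaskCache, assuming the class defined above is in scope (in the repository it lives in sd_internal/task_manager.py): put stores a value with a TTL, keep refreshes the TTL, tryGet evicts lazily on read, and clean is the periodic sweep run by the render thread.
```python
import time

cache = TaskCache()                                     # class defined above
cache.put("session-1", {"status": "pending"}, ttl=2)    # store with a 2-second TTL
print(cache.tryGet("session-1"))                        # {'status': 'pending'}
cache.keep("session-1", ttl=2)                          # refresh the TTL while the task is still running
time.sleep(3)
print(cache.tryGet("session-1"))                        # None -- expired and discarded on read
cache.clean()                                           # periodic sweep, as called at the top of thread_render
```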
current_state = ServerStates.Init
current_state_error:Exception = None
current_model_path = None
tasks_queue = queue.Queue()
task_cache = TaskCache()
default_model_to_load = None
def preload_model(file_path=None):
global current_state, current_state_error, current_model_path
if file_path is None:
file_path = default_model_to_load
if file_path == current_model_path:
return
current_state = ServerStates.LoadingModel
try:
from . import runtime
runtime.load_model_ckpt(ckpt_to_use=file_path)
current_model_path = file_path
current_state_error = None
current_state = ServerStates.Online
except Exception as e:
current_model_path = None
current_state_error = e
current_state = ServerStates.Unavailable
print(traceback.format_exc())
def thread_render():
global current_state, current_state_error, current_model_path
from . import runtime
current_state = ServerStates.Online
preload_model()
while True:
task_cache.clean()
if isinstance(current_state_error, SystemExit):
current_state = ServerStates.Unavailable
return
task = None
try:
task = tasks_queue.get(timeout=1)
except queue.Empty as e:
if isinstance(current_state_error, SystemExit):
current_state = ServerStates.Unavailable
return
else: continue
#if current_model_path != task.request.use_stable_diffusion_model:
# preload_model(task.request.use_stable_diffusion_model)
if current_state_error:
task.error = current_state_error
continue
print(f'Session {task.request.session_id} starting task {id(task)}')
try:
task.lock.acquire(blocking=False)
res = runtime.mk_img(task.request)
if current_model_path == task.request.use_stable_diffusion_model:
current_state = ServerStates.Rendering
else:
current_state = ServerStates.LoadingModel
except Exception as e:
task.error = e
task.lock.release()
tasks_queue.task_done()
print(traceback.format_exc())
continue
dataQueue = None
if task.request.stream_progress_updates:
dataQueue = task.buffer_queue
for result in res:
if current_state == ServerStates.LoadingModel:
current_state = ServerStates.Rendering
current_model_path = task.request.use_stable_diffusion_model
if isinstance(current_state_error, SystemExit) or isinstance(current_state_error, StopAsyncIteration) or isinstance(task.error, StopAsyncIteration):
runtime.stop_processing = True
if isinstance(current_state_error, StopAsyncIteration):
task.error = current_state_error
current_state_error = None
print(f'Session {task.request.session_id} sent cancel signal for task {id(task)}')
if dataQueue:
dataQueue.put(result)
if isinstance(result, str):
result = json.loads(result)
task.response = result
if 'output' in result:
for out_obj in result['output']:
if 'path' in out_obj:
img_id = out_obj['path'][out_obj['path'].rindex('/') + 1:]
task.temp_images[int(img_id)] = runtime.temp_images[out_obj['path'][11:]]
elif 'data' in out_obj:
task.temp_images[result['output'].index(out_obj)] = out_obj['data']
task_cache.keep(task.request.session_id, TASK_TTL)
# Task completed
task.lock.release()
tasks_queue.task_done()
task_cache.keep(task.request.session_id, TASK_TTL)
if isinstance(task.error, StopAsyncIteration):
print(f'Session {task.request.session_id} task {id(task)} cancelled!')
elif task.error is not None:
print(f'Session {task.request.session_id} task {id(task)} failed!')
else:
print(f'Session {task.request.session_id} task {id(task)} completed.')
current_state = ServerStates.Online
render_thread = threading.Thread(target=thread_render)
def start_render_thread():
# Start Rendering Thread
render_thread.daemon = True
render_thread.start()
def shutdown_event(): # Signal render thread to close on shutdown
global current_state_error
current_state_error = SystemExit('Application shutting down.')
def render(req : ImageRequest):
if not render_thread.is_alive(): # Render thread is dead
raise ChildProcessError('Rendering thread has died.')
# Alive, check if task in cache
task = task_cache.tryGet(req.session_id)
if task and not task.response and not task.error and not task.lock.locked():
# Unstarted task pending, deny queueing more than one.
raise ConnectionRefusedError(f'Session {req.session_id} has an already pending task.')
#
from . import runtime
r = Request()
r.session_id = req.session_id
r.prompt = req.prompt
r.negative_prompt = req.negative_prompt
r.init_image = req.init_image
r.mask = req.mask
r.num_outputs = req.num_outputs
r.num_inference_steps = req.num_inference_steps
r.guidance_scale = req.guidance_scale
r.width = req.width
r.height = req.height
r.seed = req.seed
r.prompt_strength = req.prompt_strength
r.sampler = req.sampler
# r.allow_nsfw = req.allow_nsfw
r.turbo = req.turbo
r.use_cpu = req.use_cpu
r.use_full_precision = req.use_full_precision
r.save_to_disk_path = req.save_to_disk_path
r.use_upscale: str = req.use_upscale
r.use_face_correction = req.use_face_correction
r.use_stable_diffusion_model = req.use_stable_diffusion_model
r.show_only_filtered_image = req.show_only_filtered_image
r.output_format = req.output_format
r.stream_progress_updates = True # the underlying implementation only supports streaming
r.stream_image_progress = req.stream_image_progress
if not req.stream_progress_updates:
r.stream_image_progress = False
new_task = RenderTask(r)
if task_cache.put(r.session_id, new_task, TASK_TTL):
tasks_queue.put(new_task, block=True, timeout=30)
return new_task
raise RuntimeError('Failed to add task to cache.')

View File

@@ -4,178 +4,170 @@ import traceback
import sys
import os
SD_DIR = os.getcwd()
print('started in ', SD_DIR)
SCRIPT_DIR = os.getcwd()
print('started in ', SCRIPT_DIR)
SD_UI_DIR = os.getenv('SD_UI_PATH', None)
sys.path.append(os.path.dirname(SD_UI_DIR))
CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, '..', 'scripts'))
MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'models'))
UI_PLUGINS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'plugins', 'ui'))
CONFIG_DIR = os.path.join(SD_UI_DIR, '..', 'scripts')
OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
TASK_TTL = 15 * 60 # seconds; discard a session's task data after this timeout
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
from starlette.responses import FileResponse, JSONResponse, StreamingResponse
from starlette.responses import FileResponse, StreamingResponse
from pydantic import BaseModel
import logging
import queue, threading, time
from typing import Any, Generator, Hashable, Optional, Union
from sd_internal import Request, Response, task_manager
from sd_internal import Request, Response
app = FastAPI()
model_loaded = False
model_is_loading = False
modifiers_cache = None
outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)
os.makedirs(UI_PLUGINS_DIR, exist_ok=True)
# don't show access log entries for URLs that start with the given prefix
ACCESS_LOG_SUPPRESS_PATH_PREFIXES = ['/ping', '/image', '/modifier-thumbnails']
ACCESS_LOG_SUPPRESS_PATH_PREFIXES = ['/ping', '/modifier-thumbnails']
NOCACHE_HEADERS={"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media/')), name="media")
app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media')), name="media")
app.mount('/plugins', StaticFiles(directory=UI_PLUGINS_DIR), name="plugins")
# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
session_id: str = "session"
prompt: str = ""
negative_prompt: str = ""
init_image: str = None # base64
mask: str = None # base64
num_outputs: int = 1
num_inference_steps: int = 50
guidance_scale: float = 7.5
width: int = 512
height: int = 512
seed: int = 42
prompt_strength: float = 0.8
sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms"
# allow_nsfw: bool = False
save_to_disk_path: str = None
turbo: bool = True
use_cpu: bool = False
use_full_precision: bool = False
use_face_correction: str = None # or "GFPGANv1.3"
use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B"
show_only_filtered_image: bool = False
output_format: str = "jpeg" # "png", "jpeg"
stream_progress_updates: bool = False
stream_image_progress: bool = False
class SetAppConfigRequest(BaseModel):
update_branch: str = "main"
# needs to support the legacy installations
def get_initial_model_to_load():
custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt')
ckpt_to_use = "sd-v1-4" if not os.path.exists(custom_weight_path) else "custom-model"
ckpt_to_use = os.path.join(SD_DIR, ckpt_to_use)
config = getConfig()
if 'model' in config and 'stable-diffusion' in config['model']:
model_name = config['model']['stable-diffusion']
model_path = resolve_model_to_use(model_name)
if os.path.exists(model_path + '.ckpt'):
ckpt_to_use = model_path
else:
print('Could not find the configured custom model at:', model_path + '.ckpt', '. Using the default one:', ckpt_to_use + '.ckpt')
return ckpt_to_use
def resolve_model_to_use(model_name):
if model_name in ('sd-v1-4', 'custom-model'):
model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name)
legacy_model_path = os.path.join(SD_DIR, model_name)
if not os.path.exists(model_path + '.ckpt') and os.path.exists(legacy_model_path + '.ckpt'):
model_path = legacy_model_path
else:
model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name)
return model_path
@app.on_event("shutdown")
def shutdown_event(): # Signal render thread to close on shutdown
task_manager.current_state_error = SystemExit('Application shutting down.')
@app.get('/')
def read_root():
return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=NOCACHE_HEADERS)
headers = {"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=headers)
@app.get('/ping') # Get server and optionally session status.
def ping(session_id:str=None):
if not task_manager.render_thread.is_alive(): # Render thread is dead.
if task_manager.current_state_error: raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
raise HTTPException(status_code=500, detail='Render thread is dead.')
if task_manager.current_state_error and not isinstance(task_manager.current_state_error, StopAsyncIteration): raise HTTPException(status_code=500, detail=str(task_manager.current_state_error))
# Alive
response = {'status': str(task_manager.current_state)}
if session_id:
task = task_manager.task_cache.tryGet(session_id)
if task:
response['task'] = id(task)
if task.lock.locked():
response['session'] = 'running'
elif isinstance(task.error, StopAsyncIteration):
response['session'] = 'stopped'
elif task.error:
response['session'] = 'error'
elif not task.buffer_queue.empty():
response['session'] = 'buffer'
elif task.response:
response['session'] = 'completed'
else:
response['session'] = 'pending'
return JSONResponse(response, headers=NOCACHE_HEADERS)
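An illustrative /ping response for a session whose task is currently rendering; status mirrors ServerStates and session reflects the task lifecycle checks above:
```python
ping_response = {
    "status": "Rendering",   # str(task_manager.current_state)
    "task": 139805163,       # id(task) -- an opaque identifier, value illustrative
    "session": "running",    # one of: running, stopped, error, buffer, completed, pending
}
print(ping_response)
```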
@app.get('/ping')
async def ping():
global model_loaded, model_is_loading
def save_model_to_config(model_name):
config = getConfig()
if 'model' not in config:
config['model'] = {}
config['model']['stable-diffusion'] = model_name
setConfig(config)
@app.post('/render')
def render(req : task_manager.ImageRequest):
try:
save_model_to_config(req.use_stable_diffusion_model)
req.use_stable_diffusion_model = resolve_model_to_use(req.use_stable_diffusion_model)
new_task = task_manager.render(req)
response = {
'status': str(task_manager.current_state),
'queue': task_manager.tasks_queue.qsize(),
'stream': f'/image/stream/{req.session_id}/{id(new_task)}',
'task': id(new_task)
}
return JSONResponse(response, headers=NOCACHE_HEADERS)
except ChildProcessError as e: # Render thread is dead
raise HTTPException(status_code=500, detail=f'Rendering thread has died.') # HTTP500 Internal Server Error
except ConnectionRefusedError as e: # Unstarted task pending, deny queueing more than one.
raise HTTPException(status_code=503, detail=f'Session {req.session_id} has an already pending task.') # HTTP503 Service Unavailable
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
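A minimal client call against the /render endpoint above, assuming the requests library and the default local address that the browser is pointed at near the end of this file:
```python
import requests  # third-party; assumed available for this sketch

resp = requests.post("http://localhost:9000/render", json={
    "session_id": "my-session",
    "prompt": "a photograph of an astronaut riding a horse",
    "num_inference_steps": 50,
    "stream_progress_updates": True,
    "stream_image_progress": True,
})
resp.raise_for_status()
info = resp.json()
print(info["status"], info["queue"])  # e.g. "Online" 0
print(info["stream"])                 # "/image/stream/my-session/<task id>" -- poll this for progress
```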
if model_loaded:
return {'OK'}
@app.get('/image/stream/{session_id:str}/{task_id:int}')
def stream(session_id:str, task_id:int):
# TODO: Move to WebSockets?
task = task_manager.task_cache.tryGet(session_id)
if not task: raise HTTPException(status_code=410, detail='No request received.') # HTTP410 Gone
if (id(task) != task_id): raise HTTPException(status_code=409, detail=f'Wrong task id received. Expected:{id(task)}, Received:{task_id}') # HTTP409 Conflict
if task.buffer_queue.empty() and not task.lock.locked():
if task.response:
#print(f'Session {session_id} sending cached response')
return JSONResponse(task.response, headers=NOCACHE_HEADERS)
raise HTTPException(status_code=425, detail='Too Early, task not started yet.') # HTTP425 Too Early
#print(f'Session {session_id} opened live render stream {id(task.buffer_queue)}')
return StreamingResponse(task.read_buffer_generator(), media_type='application/json')
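The stream returned above is a plain concatenation of JSON objects (one per progress update, then the final response), so a client has to peel objects off the buffer itself. A sketch using json.JSONDecoder.raw_decode, assuming the requests library and the stream URL returned by /render:
```python
import json
import requests  # third-party; assumed available for this sketch

def iter_stream_objects(url):
    decoder = json.JSONDecoder()
    buffer = ""
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=None):
            buffer += chunk.decode("utf-8")
            while buffer:
                try:
                    obj, end = decoder.raw_decode(buffer)
                except json.JSONDecodeError:
                    break                      # incomplete object, wait for more data
                yield obj
                buffer = buffer[end:].lstrip()

# The path comes from the "stream" field of the /render response (value illustrative).
for update in iter_stream_objects("http://localhost:9000/image/stream/my-session/139805163"):
    if "total_steps" in update:
        print(f"step {update['step']}/{update['total_steps']}")
    if "output" in update:
        print("images in this update:", len(update["output"]))
```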
if model_is_loading:
return {'ERROR'}
model_is_loading = True
from sd_internal import runtime
custom_weight_path = os.path.join(SCRIPT_DIR, 'custom-model.ckpt')
ckpt_to_use = "sd-v1-4" if not os.path.exists(custom_weight_path) else "custom-model"
runtime.load_model_ckpt(ckpt_to_use=ckpt_to_use)
model_loaded = True
model_is_loading = False
return {'OK'}
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.post('/image')
def image(req : ImageRequest):
from sd_internal import runtime
r = Request()
r.session_id = req.session_id
r.prompt = req.prompt
r.negative_prompt = req.negative_prompt
r.init_image = req.init_image
r.mask = req.mask
r.num_outputs = req.num_outputs
r.num_inference_steps = req.num_inference_steps
r.guidance_scale = req.guidance_scale
r.width = req.width
r.height = req.height
r.seed = req.seed
r.prompt_strength = req.prompt_strength
r.sampler = req.sampler
# r.allow_nsfw = req.allow_nsfw
r.turbo = req.turbo
r.use_cpu = req.use_cpu
r.use_full_precision = req.use_full_precision
r.save_to_disk_path = req.save_to_disk_path
r.use_upscale: str = req.use_upscale
r.use_face_correction = req.use_face_correction
r.show_only_filtered_image = req.show_only_filtered_image
r.output_format = req.output_format
r.stream_progress_updates = True # the underlying implementation only supports streaming
r.stream_image_progress = req.stream_image_progress
try:
if not req.stream_progress_updates:
r.stream_image_progress = False
res = runtime.mk_img(r)
if req.stream_progress_updates:
return StreamingResponse(res, media_type='application/json')
else: # compatibility mode: buffer the streaming responses, and return the last one
last_result = None
for result in res:
last_result = result
return json.loads(last_result)
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.get('/image/stop')
def stop(session_id:str=None):
if not session_id:
if task_manager.current_state == task_manager.ServerStates.Online or task_manager.current_state == task_manager.ServerStates.Unavailable:
raise HTTPException(status_code=409, detail='Not currently running any tasks.') # HTTP409 Conflict
task_manager.current_state_error = StopAsyncIteration('')
return {'OK'}
task = task_manager.task_cache.tryGet(session_id)
if not task: raise HTTPException(status_code=404, detail=f'Session {session_id} has no active task.') # HTTP404 Not Found
if isinstance(task.error, StopAsyncIteration): raise HTTPException(status_code=409, detail=f'Session {session_id} task is already stopped.') # HTTP409 Conflict
task.error = StopAsyncIteration('')
return {'OK'}
@app.get('/image/tmp/{session_id}/{img_id:int}')
def get_image(session_id, img_id):
task = task_manager.task_cache.tryGet(session_id)
if not task: raise HTTPException(status_code=410, detail=f'Session {session_id} has not submitted a task.') # HTTP410 Gone
if not task.temp_images[img_id]: raise HTTPException(status_code=425, detail='Too Early, task data is not available yet.') # HTTP425 Too Early
def stop():
try:
img_data = task.temp_images[img_id]
if isinstance(img_data, str):
return img_data
img_data.seek(0)
return StreamingResponse(img_data, media_type='image/jpeg')
except KeyError as e:
raise HTTPException(status_code=500, detail=str(e))
if model_is_loading:
return {'ERROR'}
from sd_internal import runtime
runtime.stop_processing = True
return {'OK'}
except Exception as e:
print(traceback.format_exc())
return HTTPException(status_code=500, detail=str(e))
@app.get('/image/tmp/{session_id}/{img_id}')
def get_image(session_id, img_id):
from sd_internal import runtime
buf = runtime.temp_images[session_id + '/' + img_id]
buf.seek(0)
return StreamingResponse(buf, media_type='image/jpeg')
@app.post('/app_config')
async def setAppConfig(req : SetAppConfigRequest):
@@ -192,95 +184,44 @@ async def setAppConfig(req : SetAppConfigRequest):
config_bat_path = os.path.join(CONFIG_DIR, 'config.bat')
config_sh_path = os.path.join(CONFIG_DIR, 'config.sh')
with open(config_json_path, 'w', encoding='utf-8') as f:
with open(config_json_path, 'w') as f:
f.write(config_json_str)
with open(config_bat_path, 'w', encoding='utf-8') as f:
with open(config_bat_path, 'w') as f:
f.write(config_bat_str)
with open(config_sh_path, 'w', encoding='utf-8') as f:
with open(config_sh_path, 'w') as f:
f.write(config_sh_str)
return {'OK'}
except Exception as e:
print(traceback.format_exc())
raise HTTPException(status_code=500, detail=str(e))
return HTTPException(status_code=500, detail=str(e))
def getConfig(default_val={}):
@app.get('/app_config')
def getAppConfig():
try:
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
if not os.path.exists(config_json_path):
return default_val
with open(config_json_path, 'r', encoding='utf-8') as f:
return json.load(f)
return HTTPException(status_code=500, detail="No config file")
with open(config_json_path, 'r') as f:
config_json_str = f.read()
config = json.loads(config_json_str)
return config
except Exception as e:
print(str(e))
print(traceback.format_exc())
return default_val
return HTTPException(status_code=500, detail=str(e))
def setConfig(config):
try:
config_json_path = os.path.join(CONFIG_DIR, 'config.json')
with open(config_json_path, 'w', encoding='utf-8') as f:
return json.dump(config, f)
except Exception as e:
print(str(e))
print(traceback.format_exc())
@app.get('/modifiers.json')
def read_modifiers():
headers = {"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=headers)
def getModels():
models = {
'active': {
'stable-diffusion': 'sd-v1-4',
},
'options': {
'stable-diffusion': ['sd-v1-4'],
},
}
# custom models
sd_models_dir = os.path.join(MODELS_DIR, 'stable-diffusion')
for file in os.listdir(sd_models_dir):
if file.endswith('.ckpt'):
model_name = os.path.splitext(file)[0]
models['options']['stable-diffusion'].append(model_name)
# legacy
custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt')
if os.path.exists(custom_weight_path):
models['active']['stable-diffusion'] = 'custom-model'
models['options']['stable-diffusion'].append('custom-model')
config = getConfig()
if 'model' in config and 'stable-diffusion' in config['model']:
models['active']['stable-diffusion'] = config['model']['stable-diffusion']
return models
def getUIPlugins():
plugins = []
for file in os.listdir(UI_PLUGINS_DIR):
if file.endswith('.plugin.js'):
plugins.append(f'/plugins/{file}')
return plugins
@app.get('/get/{key:path}')
def read_web_data(key:str=None):
if not key: # /get without parameters, stable-diffusion easter egg.
raise HTTPException(status_code=418, detail="StableDiffusion is drawing a teapot!") # HTTP418 I'm a teapot
elif key == 'app_config':
config = getConfig(default_val=None)
if config is None:
raise HTTPException(status_code=500, detail="Config file is missing or unreadable")
return JSONResponse(config, headers=NOCACHE_HEADERS)
elif key == 'models':
return JSONResponse(getModels(), headers=NOCACHE_HEADERS)
elif key == 'modifiers': return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=NOCACHE_HEADERS)
elif key == 'output_dir': return JSONResponse({ 'output_dir': outpath }, headers=NOCACHE_HEADERS)
elif key == 'ui_plugins': return JSONResponse(getUIPlugins(), headers=NOCACHE_HEADERS)
else:
raise HTTPException(status_code=404, detail=f'Request for unknown {key}') # HTTP404 Not Found
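Illustrative responses for the consolidated /get/{key} routes above (the shape of models follows getModels(); paths and plugin names are examples only):
```python
examples = {
    "/get/models": {
        "active": {"stable-diffusion": "sd-v1-4"},
        "options": {"stable-diffusion": ["sd-v1-4", "custom-model"]},
    },
    "/get/output_dir": {"output_dir": "/home/user/Stable Diffusion UI"},
    "/get/ui_plugins": ["/plugins/example.plugin.js"],
}
print(examples)
```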
@app.get('/output_dir')
def read_home_dir():
return {outpath}
# don't log certain requests
class LogSuppressFilter(logging.Filter):
@@ -289,11 +230,10 @@ class LogSuppressFilter(logging.Filter):
for prefix in ACCESS_LOG_SUPPRESS_PATH_PREFIXES:
if path.find(prefix) != -1:
return False
return True
logging.getLogger('uvicorn.access').addFilter(LogSuppressFilter())
task_manager.default_model_to_load = get_initial_model_to_load()
task_manager.start_render_thread()
return True
logging.getLogger('uvicorn.access').addFilter(LogSuppressFilter())
# start the browser ui
import webbrowser; webbrowser.open('http://localhost:9000')