mirror of https://github.com/easydiffusion/easydiffusion.git
synced 2025-08-13 17:57:20 +02:00
Compare commits
76 Commits
Author | SHA1 | Date | |
---|---|---|---|
94835c46a0 | |||
58b3d31526 | |||
b023f5c0da | |||
78b87c6ddd | |||
edde4dc2fa | |||
d9a6e41265 | |||
64f0f1aa2c | |||
b54c057c83 | |||
baa4acaf79 | |||
3c6bb41939 | |||
3f64d3729e | |||
21dc2ece1b | |||
fcac6c4f8c | |||
e5dc932717 | |||
6b65b05e2f | |||
d52b973b44 | |||
79d3f4ca9e | |||
06dd22d89a | |||
e79e425cf5 | |||
b4b2c351b4 | |||
73acaadf70 | |||
18ef36bbc3 | |||
021315f0f5 | |||
a4ee103ff0 | |||
618173c5f0 | |||
4cb906571b | |||
6092b6c4cf | |||
9bf17a1c8d | |||
542379dcf4 | |||
a565bb5889 | |||
9a96ff2edc | |||
9fa2e363cc | |||
a9939a31cf | |||
64fffbcdec | |||
a65f8f5d5c | |||
c5475fb028 | |||
f267c46595 | |||
b1f67a9a65 | |||
044a7524a3 | |||
495b15e065 | |||
f4e6c399f2 | |||
4519acb77e | |||
cf1ba6d459 | |||
c28cb67484 | |||
9017ee9a40 | |||
52086a2d39 | |||
facec59fe8 | |||
75eb79bd55 | |||
307209945c | |||
8db9f40001 | |||
472b8d0e51 | |||
dbebd32a6e | |||
ad8d2a913b | |||
38bd247d64 | |||
6a4e972de6 | |||
cc14ac0bac | |||
2e0d7fdbb8 | |||
2284eea2d8 | |||
867b5b2ee4 | |||
85e29fffc9 | |||
fa84d812f1 | |||
4ffa8420dd | |||
7dc1c54578 | |||
05434d3575 | |||
ddea9b9f38 | |||
9445ee41cf | |||
8325b4e5aa | |||
1ff9db3714 | |||
12ff102a21 | |||
f660111751 | |||
bac08306fb | |||
b860dbd9a6 | |||
c7713f559d | |||
7c60189f29 | |||
35dc13ffcf | |||
6b55f385c7 |
3  .gitignore (vendored)
@@ -1 +1,4 @@
__pycache__
installer
installer.tar
dist
15  Dockerfile
@@ -1,15 +0,0 @@
FROM python:3.9

RUN mkdir /app
WORKDIR /app

RUN apt update

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 9000

ENTRYPOINT ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "9000"]
24  How to install and run.txt (Normal file)
@@ -0,0 +1,24 @@
Congrats on downloading Stable Diffusion UI, version 2!

If you haven't downloaded Stable Diffusion UI yet, please download from https://github.com/cmdr2/stable-diffusion-ui

After downloading, to install please follow these instructions:

For Windows:
- Please double-click the "Start Stable Diffusion UI.cmd" file inside the "stable-diffusion-ui" folder.

For Linux:
- Please open a terminal, and go to the "stable-diffusion-ui" directory. Then run ./start.sh

That file will automatically install everything. After that it will start the Stable Diffusion interface in a web browser.

To start the UI in the future, please run the same command mentioned above.


If you have any problems, please:
1. Try the troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting
2. Or, seek help from the community at https://discord.com/invite/u9yhsFmEkB
3. Or, file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues

Thanks
cmdr2 (and contributors to the project)
@@ -1,15 +0,0 @@
FROM python:3.9

RUN mkdir /app
WORKDIR /app

RUN apt update

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

ENTRYPOINT ["uvicorn", "old_port_main:app", "--host", "0.0.0.0", "--port", "8000"]
72  README.md
@@ -1,41 +1,41 @@
# Stable Diffusion UI
### A simple way to install and use [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion) on your own computer
# Stable Diffusion UI - v2 (beta)
### A simple way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer (Win 10/11, Linux). No dependencies or technical knowledge required.

---
[](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion)

🎉 **New!** `img2img` and `inpaint` (masking) are now supported! You can provide an image to generate new images based on it (and an optional text prompt). You can also use the generated image as the new input image in 1-click, to refine it further. (Thanks [Andreas](https://github.com/andreasjansson)!)

# What does this do?
Two things:
1. Automatically downloads and installs Stable Diffusion on your own computer (no need to mess with conda or environments)
2. Gives you a simple browser-based UI to talk to your local Stable Diffusion. Enter text prompts and view the generated image. No API keys required.

All the processing will happen on your computer locally, it does not transmit your prompts or process on any remote server.

<p float="left">
<img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/shot-v3a.jpg" height="500" />
<img src="https://github.com/cmdr2/stable-diffusion-ui/raw/main/media/shot-v6a.jpg" height="500" />
</p>
# Features in the new v2 Version:
- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **New UI**: with cleaner design
- Supports "*Text to Image*" and "*Image to Image*"
- **NSFW Setting**: A setting in the UI to control *NSFW content*
- **Use CPU setting**: If you don't have a compatible graphics card, but still want to run it on your CPU.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.

# System Requirements
1. Computer capable of running Stable Diffusion.
2. Linux or Windows 11 (with [WSL](https://docs.microsoft.com/en-us/windows/wsl/install)) or Windows 10 v2004+ (Build 19041+) with [WSL](https://docs.microsoft.com/en-us/windows/wsl/install).
3. Requires (a) [Docker](https://docs.docker.com/engine/install/), (b) [docker-compose v1.29](https://docs.docker.com/compose/install/), and (c) [nvidia-container-toolkit](https://stackoverflow.com/a/58432877).
1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
2. An NVIDIA graphics card, preferably with 6GB or more of VRAM. But if you don't have a compatible graphics card, you can still use it with a "Use CPU" setting. It'll be very slow, but it should still work.

**Important:** If you're using Windows, please install docker inside your [WSL](https://docs.microsoft.com/en-us/windows/wsl/install)'s Linux. Install docker for the Linux distro in your WSL. **Don't install Docker for Windows.**
You do not need anything else. You do not need WSL, Docker or Conda. The installer will take care of it.

# Installation
1. Clone this repository: `git clone https://github.com/cmdr2/stable-diffusion-ui.git` or [download the zip file](https://github.com/cmdr2/stable-diffusion-ui/archive/refs/heads/main.zip) and unzip.
2. Open your terminal, and in the project directory run: `./server` (warning: this will take some time during the first run, since it'll download Stable Diffusion's [docker image](https://replicate.com/stability-ai/stable-diffusion), nearly 17 GiB)
3. Open http://localhost:9000 in your browser. That's it!
1. Download [for Windows](https://drive.google.com/file/d/1MY5gzsQHV_KREbYs3gw33QL4gGIlQRqj/view?usp=sharing) or [for Linux](https://drive.google.com/file/d/1Gwz1LVQUCart8HhCjrmXkS6TWKbTsLsR/view?usp=sharing) (this will be hosted on GitHub in the future).

If you're getting errors, please check the [Troubleshooting](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) page.
2. Extract:
- For Windows: After unzipping the file, please move the `stable-diffusion-ui` folder to your `C:` (or any drive like D: at the top root level). For e.g. `C:\stable-diffusion-ui`. This will avoid a common problem with Windows (of file path length limits).
- For Linux: After extracting the .tar.xz file, please open a terminal, and go to the `stable-diffusion-ui` directory.

3. Run:
- For Windows: `Start Stable Diffusion UI.cmd` by double-clicking it.
- For Linux: In the terminal, run `./start.sh` (or `bash start.sh`)

This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.

To stop the server, please run `./server stop`

# Usage
Open http://localhost:9000 in your browser (after running `./server` from step 2 previously).
Open http://localhost:9000 in your browser (after running step 3 previously).

## With a text description
1. Enter a text prompt, like `a photograph of an astronaut riding a horse` in the textbox.
@@ -47,34 +47,34 @@ Open http://localhost:9000 in your browser (after running `./server` from step 2
2. An optional text prompt can help you further describe the kind of image you want to generate.
3. Press `Make Image`. See the image generated using your prompt.

You can also set an `Image Mask` for telling Stable Diffusion to draw in only the black areas in your image mask. White areas in your mask will be ignored.

**Pro tip:** You can also click `Use as Input` on a generated image, to use it as the input image for your next generation. This can be useful for sequentially refining the generated image with a single click.

**Another tip:** Images with the same aspect ratio of your generated image work best. E.g. 1:1 if you're generating images sized 512x512.

## Problems?
Please [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues) if this did not work for you (after trying the common [troubleshooting](#troubleshooting) steps)!
Please ask on the new [discord server](https://discord.com/invite/u9yhsFmEkB), or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues) if this did not work for you (after trying the common [troubleshooting](#troubleshooting) steps)!

# Advanced Settings
You can also set the configuration like `seed`, `width`, `height`, `num_outputs`, `num_inference_steps` and `guidance_scale` using the 'show' button next to 'Advanced settings'.

Use the same `seed` number to get the same image for a certain prompt. This is useful for refining a prompt without losing the basic image design. Enable the `random images` checkbox to get random images.

# Troubleshooting
The [Troubleshooting wiki page](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) contains some common errors and their solutions. Please check that, and if it doesn't work, feel free to [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
The [Troubleshooting wiki page](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) contains some common errors and their solutions. Please check that, and if it doesn't work, feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).

# Behind the scenes
This project is a quick way to get started with Stable Diffusion. You do not need to have Stable Diffusion already installed, and do not need any API keys. This project will automatically download Stable Diffusion's docker image, the first time it is run.
# What is this? Why no Docker?
This version is a 1-click installer. You don't need WSL or Docker or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all. Even if you don't have a compatible GPU, you can run it on your CPU (albeit very slowly).

This project runs Stable Diffusion in a docker container behind the scenes, using Stable Diffusion's [Docker image](https://replicate.com/stability-ai/stable-diffusion) on replicate.com.
It'll download the necessary files from the original [Stable Diffusion](https://github.com/CompVis/stable-diffusion) git repository, and set it up. It'll then start the browser-based interface like before.

The NSFW option is currently off (temporarily), so it'll allow NSFW images, for those people who are unable to run their prompts without hitting the NSFW filter incorrectly.

# Bugs reports and code contributions welcome
If there are any problems or suggestions, please feel free to [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).

Also, please feel free to submit a pull request, if you have any code contributions in mind.
Also, please feel free to submit a pull request, if you have any code contributions in mind. Join the [discord server](https://discord.com/invite/u9yhsFmEkB) for development-related discussions, and for helping other users.

# Disclaimer
The authors of this project are not responsible for any content generated using this interface.
39  build.bat (Normal file)
@@ -0,0 +1,39 @@
@mkdir dist\stable-diffusion-ui

@echo "Downloading components for the installer.."

@call conda env create --prefix installer -f environment.yaml
@call conda activate .\installer

@echo "Setting up startup scripts.."

@mkdir installer\etc\conda\activate.d
@copy scripts\post_activate.bat installer\etc\conda\activate.d\

@echo "Creating a distributable package.."

@call conda install -c conda-forge -y conda-pack
@call conda pack --n-threads -1 --prefix installer --format tar

@cd dist\stable-diffusion-ui
@mkdir installer

@call tar -xf ..\..\installer.tar -C installer

@mkdir scripts

@copy ..\..\scripts\on_env_start.bat scripts\
@copy "..\..\scripts\Start Stable Diffusion UI.cmd" .
@copy ..\..\LICENSE .
@copy "..\..\CreativeML Open RAIL-M License" .
@copy "..\..\How to install and run.txt" .

@echo "Build ready. Zip the 'dist\stable-diffusion-ui' folder."

@echo "Cleaning up.."

@cd ..\..

@rmdir /s /q installer

@del installer.tar
39  build.sh (Normal file)
@@ -0,0 +1,39 @@
#!/bin/bash

mkdir -p dist/stable-diffusion-ui

echo "Downloading components for the installer.."

source ~/miniconda3/etc/profile.d/conda.sh

conda install -c conda-forge -y conda-pack

conda env create --prefix installer -f environment.yaml
conda activate ./installer

echo "Creating a distributable package.."

conda pack --n-threads -1 --prefix installer --format tar

cd dist/stable-diffusion-ui
mkdir installer

tar -xf ../../installer.tar -C installer

mkdir scripts

cp ../../scripts/on_env_start.sh scripts/
cp "../../scripts/start.sh" .
cp ../../LICENSE .
cp "../../CreativeML Open RAIL-M License" .
cp "../../How to install and run.txt" .

echo "Build ready. Zip the 'dist/stable-diffusion-ui' folder."

echo "Cleaning up.."

cd ../..

rm -rf installer

rm installer.tar
@@ -1,38 +0,0 @@
version: '3.3'

services:
  stability-ai:
    container_name: sd
    ports:
      - '5000:5000'
    image: 'r8.im/stability-ai/stable-diffusion@sha256:be04660a5b93ef2aff61e3668dedb4cbeb14941e62a3fd5998364a32d613e35e'
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]

  stable-diffusion-ui:
    container_name: sd-ui
    ports:
      - '9000:9000'
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    depends_on:
      - stability-ai

  stable-diffusion-old-port-redirect:
    container_name: sd-old-port-redirect
    ports:
      - '8000:8000'
    build:
      context: .
      dockerfile: OldPortDockerfile
    volumes:
      - .:/app

networks:
  default:
7  environment.yaml (Normal file)
@@ -0,0 +1,7 @@
name: stable-diffusion-ui-installer
channels:
  - defaults
  - conda-forge
dependencies:
  - conda
  - git
505
index.html
505
index.html
@ -1,505 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<style>
|
||||
body {
|
||||
font-family: Arial, Helvetica, sans-serif;
|
||||
font-size: 11pt;
|
||||
}
|
||||
a {
|
||||
color: rgb(0, 102, 204);
|
||||
}
|
||||
a:visited {
|
||||
color: rgb(0, 102, 204);
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
body {
|
||||
background-color: rgb(32, 33, 36);
|
||||
color: #eee;
|
||||
}
|
||||
}
|
||||
label {
|
||||
font-size: 10pt;
|
||||
}
|
||||
#prompt {
|
||||
width: 50vw;
|
||||
height: 50pt;
|
||||
}
|
||||
@media screen and (max-width: 600px) {
|
||||
#prompt {
|
||||
width: 95%;
|
||||
}
|
||||
}
|
||||
.image_preview_container {
|
||||
display: none;
|
||||
}
|
||||
.image_clear_btn {
|
||||
position: absolute;
|
||||
transform: translateX(-50%);
|
||||
background: black;
|
||||
color: white;
|
||||
border: 2pt solid #ccc;
|
||||
padding: 0;
|
||||
cursor: pointer;
|
||||
outline: inherit;
|
||||
border-radius: 8pt;
|
||||
width: 16pt;
|
||||
height: 16pt;
|
||||
font-size: 10pt;
|
||||
}
|
||||
#configHeader {
|
||||
margin-top: 5px;
|
||||
margin-bottom: 5px;
|
||||
font-size: 10pt;
|
||||
}
|
||||
#config {
|
||||
font-size: 9pt;
|
||||
margin-bottom: 5px;
|
||||
padding-left: 10px;
|
||||
}
|
||||
#outputMsg {
|
||||
font-size: small;
|
||||
}
|
||||
#footer {
|
||||
border-top: 1px solid #999;
|
||||
margin-top: 10px;
|
||||
padding-top: 10px;
|
||||
font-size: small;
|
||||
}
|
||||
.imgUseBtn {
|
||||
position: absolute;
|
||||
transform: translateX(-100%);
|
||||
margin-top: 5pt;
|
||||
margin-left: -5pt;
|
||||
}
|
||||
.imgSaveBtn {
|
||||
position: absolute;
|
||||
transform: translateX(-100%);
|
||||
margin-top: 30pt;
|
||||
margin-left: -5pt;
|
||||
}
|
||||
.imgItem {
|
||||
display: inline;
|
||||
padding-right: 10px;
|
||||
}
|
||||
</style>
|
||||
</html>
|
||||
<body>
|
||||
<div id="status">Server status: <span id="serverStatus">checking..</span> | Request status: <span id="reqStatus">n/a</span></div>
|
||||
|
||||
<br/>
|
||||
|
||||
<b>Prompt:</b><br/>
|
||||
<textarea id="prompt">a photograph of an astronaut riding a horse</textarea><br/>
|
||||
|
||||
<label for="init_image"><b>Initial Image:</b> (optional) </label> <input id="init_image" name="init_image" type="file" /> </button><br/>
|
||||
<div id="init_image_preview_container" class="image_preview_container">
|
||||
<img id="init_image_preview" src="" width="100" height="100" />
|
||||
<button id="init_image_clear" class="image_clear_btn">X</button>
|
||||
</div><br/>
|
||||
|
||||
<div id="mask_setting">
|
||||
<label for="mask"><b>Image Mask:</b> (optional) </label> <input id="mask" name="mask" type="file" /> </button><br/>
|
||||
<div id="mask_preview_container" class="image_preview_container">
|
||||
<img id="mask_preview" src="" width="100" height="100" />
|
||||
<button id="mask_clear" class="image_clear_btn">X</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div id="configHeader"><b>Advanced settings:</b> [<a id="configToggleBtn" href="#">show</a>]</div>
|
||||
<div id="config">
|
||||
<label for="seed">Seed:</label> <input id="seed" name="seed" value="30000"> <input id="random_seed" name="random_seed" type="checkbox" checked> <label for="random_seed">Random Image</label> <br/>
|
||||
<label for="num_outputs">Number of outputs:</label> <select id="num_outputs" name="num_outputs" value="1"><option value="1" selected>1</option><option value="4">4</option></select><br/>
|
||||
<label for="width">Width:</label> <select id="width" name="width" value="512"><option value="128">128</option><option value="256">256</option><option value="512" selected>512</option><option value="768">768</option><option value="1024">1024</option></select><br/>
|
||||
<label for="height">Height:</label> <select id="height" name="height" value="512"><option value="128">128</option><option value="256">256</option><option value="512" selected>512</option><option value="768">768</option></select><br/>
|
||||
<label for="num_inference_steps">Number of inference steps:</label> <input id="num_inference_steps" name="num_inference_steps" value="50"><br/>
|
||||
<label for="guidance_scale">Guidance Scale:</label> <input id="guidance_scale" name="guidance_scale" value="75" type="range" min="10" max="200"> <span id="guidance_scale_value"></span><br/>
|
||||
<span id="prompt_strength_container"><label for="prompt_strength">Prompt Strength:</label> <input id="prompt_strength" name="prompt_strength" value="8" type="range" min="0" max="10"> <span id="prompt_strength_value"></span><br/></span><br/>
|
||||
<input id="sound_toggle" name="sound_toggle" type="checkbox" checked> <label for="sound_toggle">Play sound on task completion</label><br/>
|
||||
</div>
|
||||
|
||||
<button id="makeImage">Make Image</button> <br/><br/>
|
||||
|
||||
<div id="outputMsg"></div>
|
||||
|
||||
<div id="images"></div>
|
||||
|
||||
<div id="footer">
|
||||
<p>Please feel free to <a href="https://github.com/cmdr2/stable-diffusion-ui/issues" target="_blank">file an issue</a> if you have any problems or suggestions in using this interface.</p>
|
||||
<p><b>Disclaimer:</b> The authors of this project are not responsible for any content generated using this interface.</p>
|
||||
<p>This license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, <br/>spread misinformation and target vulnerable groups. For the full list of restrictions please read <a href="https://github.com/cmdr2/stable-diffusion-ui/blob/main/LICENSE" target="_blank">the license</a>.</p>
|
||||
<p>By using this software, you consent to the terms and conditions of the license.</p>
|
||||
</div>
|
||||
</body>
|
||||
|
||||
<script>
|
||||
const SOUND_ENABLED_KEY = "soundEnabled"
|
||||
const HEALTH_PING_INTERVAL = 5 // seconds
|
||||
|
||||
let promptField = document.querySelector('#prompt')
|
||||
let numOutputsField = document.querySelector('#num_outputs')
|
||||
let numInferenceStepsField = document.querySelector('#num_inference_steps')
|
||||
let guidanceScaleField = document.querySelector('#guidance_scale')
|
||||
let guidanceScaleValueLabel = document.querySelector('#guidance_scale_value')
|
||||
let randomSeedField = document.querySelector("#random_seed")
|
||||
let seedField = document.querySelector('#seed')
|
||||
let widthField = document.querySelector('#width')
|
||||
let heightField = document.querySelector('#height')
|
||||
let initImageSelector = document.querySelector("#init_image")
|
||||
let initImagePreview = document.querySelector("#init_image_preview")
|
||||
let maskImageSelector = document.querySelector("#mask")
|
||||
let maskImagePreview = document.querySelector("#mask_preview")
|
||||
let promptStrengthField = document.querySelector('#prompt_strength')
|
||||
let promptStrengthValueLabel = document.querySelector('#prompt_strength_value')
|
||||
|
||||
let makeImageBtn = document.querySelector('#makeImage')
|
||||
|
||||
let imagesContainer = document.querySelector('#images')
|
||||
let initImagePreviewContainer = document.querySelector('#init_image_preview_container')
|
||||
let initImageClearBtn = document.querySelector('#init_image_clear')
|
||||
let promptStrengthContainer = document.querySelector('#prompt_strength_container')
|
||||
|
||||
let maskSetting = document.querySelector('#mask_setting')
|
||||
let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
|
||||
let maskImageClearBtn = document.querySelector('#mask_clear')
|
||||
|
||||
let showConfigToggle = document.querySelector('#configToggleBtn')
|
||||
let configBox = document.querySelector('#config')
|
||||
let outputMsg = document.querySelector('#outputMsg')
|
||||
|
||||
let soundToggle = document.querySelector('#sound_toggle')
|
||||
|
||||
let serverStatus = 'offline'
|
||||
|
||||
function isSoundEnabled() {
|
||||
if (localStorage.getItem(SOUND_ENABLED_KEY) === 'false') {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
function setStatus(statusType, msg, msgType) {
|
||||
let el = ''
|
||||
|
||||
if (statusType === 'server') {
|
||||
el = '#serverStatus'
|
||||
serverStatus = msg
|
||||
} else if (statusType === 'request') {
|
||||
el = '#reqStatus'
|
||||
}
|
||||
|
||||
if (msgType == 'error') {
|
||||
msg = '<span style="color: red">' + msg + '<span>'
|
||||
} else if (msgType == 'success') {
|
||||
msg = '<span style="color: green">' + msg + '<span>'
|
||||
}
|
||||
|
||||
if (el) {
|
||||
document.querySelector(el).innerHTML = msg
|
||||
}
|
||||
}
|
||||
|
||||
function playSound() {
|
||||
const audio = new Audio('/media/ding.mp3')
|
||||
audio.volume = 0.2
|
||||
audio.play()
|
||||
}
|
||||
|
||||
async function healthCheck() {
|
||||
try {
|
||||
let res = await fetch('/ping')
|
||||
res = await res.json()
|
||||
|
||||
if (res[0] == 'OK') {
|
||||
setStatus('server', 'online', 'success')
|
||||
} else {
|
||||
setStatus('server', 'offline', 'error')
|
||||
}
|
||||
} catch (e) {
|
||||
setStatus('server', 'offline', 'error')
|
||||
}
|
||||
}
|
||||
|
||||
async function makeImage() {
|
||||
setStatus('request', 'fetching..')
|
||||
|
||||
makeImageBtn.innerHTML = 'Processing..'
|
||||
makeImageBtn.disabled = true
|
||||
|
||||
outputMsg.innerHTML = 'Fetching..'
|
||||
|
||||
function logError(msg, res) {
|
||||
outputMsg.innerHTML = '<span style="color: red">Error: ' + msg + '</span>'
|
||||
console.log('request error', res)
|
||||
setStatus('request', 'error', 'error')
|
||||
}
|
||||
|
||||
let seed = (randomSeedField.checked ? Math.floor(Math.random() * 10000) : seedField.value)
|
||||
|
||||
let reqBody = {
|
||||
prompt: promptField.value,
|
||||
num_outputs: numOutputsField.value,
|
||||
num_inference_steps: numInferenceStepsField.value,
|
||||
guidance_scale: guidanceScaleField.value / 10,
|
||||
width: widthField.value,
|
||||
height: heightField.value,
|
||||
seed: seed,
|
||||
}
|
||||
|
||||
if (initImagePreview.src.indexOf('data:image/png;base64') !== -1) {
|
||||
reqBody['init_image'] = initImagePreview.src
|
||||
reqBody['prompt_strength'] = promptStrengthField.value / 10
|
||||
|
||||
if (maskImagePreview.src.indexOf('data:image/png;base64') !== -1) {
|
||||
reqBody['mask'] = maskImagePreview.src
|
||||
}
|
||||
}
|
||||
|
||||
let res = ''
|
||||
let time = new Date().getTime()
|
||||
|
||||
try {
|
||||
res = await fetch('/image', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json'
|
||||
},
|
||||
body: JSON.stringify(reqBody)
|
||||
})
|
||||
|
||||
if (res.status != 200) {
|
||||
if (serverStatus === 'online') {
|
||||
logError('Stable Diffusion had an error: ' + await res.text() + '. This happens sometimes. Maybe modify the prompt or seed a little bit?', res)
|
||||
} else {
|
||||
logError("Stable Diffusion is still starting up, please wait. If this goes on beyond a few minutes, Stable Diffusion has probably crashed.", res)
|
||||
}
|
||||
res = undefined
|
||||
} else {
|
||||
res = await res.json()
|
||||
|
||||
if (res.status !== 'succeeded') {
|
||||
let msg = ''
|
||||
if (res.detail !== undefined) {
|
||||
msg = res.detail[0].msg + " in " + JSON.stringify(res.detail[0].loc)
|
||||
} else {
|
||||
msg = res
|
||||
}
|
||||
logError(msg, res)
|
||||
res = undefined
|
||||
}
|
||||
}
|
||||
} catch (e) {
|
||||
console.log('request error', e)
|
||||
setStatus('request', 'error', 'error')
|
||||
}
|
||||
|
||||
makeImageBtn.innerHTML = 'Make Image'
|
||||
makeImageBtn.disabled = false
|
||||
|
||||
if (isSoundEnabled()) {
|
||||
playSound()
|
||||
}
|
||||
|
||||
if (!res) {
|
||||
return
|
||||
}
|
||||
|
||||
time = new Date().getTime() - time
|
||||
time /= 1000
|
||||
|
||||
outputMsg.innerHTML = 'Processed in ' + time + ' seconds. Seed: ' + seed
|
||||
|
||||
imagesContainer.innerHTML = ''
|
||||
|
||||
for (let idx in res.output) {
|
||||
let imgBody = ''
|
||||
|
||||
try {
|
||||
imgBody = res.output[idx]
|
||||
} catch (e) {
|
||||
console.log(imgBody)
|
||||
setStatus('request', 'invalid image', 'error')
|
||||
return
|
||||
}
|
||||
|
||||
let imgItem = document.createElement('div')
|
||||
imgItem.className = 'imgItem'
|
||||
|
||||
let img = document.createElement('img')
|
||||
img.width = parseInt(reqBody.width)
|
||||
img.height = parseInt(reqBody.height)
|
||||
img.src = imgBody
|
||||
|
||||
let imgUseBtn = document.createElement('button')
|
||||
imgUseBtn.className = 'imgUseBtn'
|
||||
imgUseBtn.innerHTML = 'Use as Input'
|
||||
|
||||
let imgSaveBtn = document.createElement('button')
|
||||
imgSaveBtn.className = 'imgSaveBtn'
|
||||
imgSaveBtn.innerHTML = 'Download'
|
||||
|
||||
imgItem.appendChild(img)
|
||||
imgItem.appendChild(imgUseBtn)
|
||||
imgItem.appendChild(imgSaveBtn)
|
||||
imagesContainer.appendChild(imgItem)
|
||||
|
||||
imgUseBtn.addEventListener('click', function() {
|
||||
initImageSelector.value = null
|
||||
initImagePreview.src = imgBody
|
||||
|
||||
initImagePreviewContainer.style.display = 'block'
|
||||
promptStrengthContainer.style.display = 'block'
|
||||
|
||||
maskSetting.style.display = 'block'
|
||||
|
||||
randomSeedField.checked = false
|
||||
seedField.value = seed
|
||||
seedField.disabled = false
|
||||
})
|
||||
|
||||
imgSaveBtn.addEventListener('click', function() {
|
||||
let imgDownload = document.createElement('a')
|
||||
imgDownload.download = generateUUID() + '.png'
|
||||
imgDownload.href = imgBody
|
||||
imgDownload.click()
|
||||
})
|
||||
}
|
||||
|
||||
setStatus('request', 'done', 'success')
|
||||
|
||||
if (randomSeedField.checked) {
|
||||
seedField.value = seed
|
||||
}
|
||||
}
|
||||
|
||||
function generateUUID() { // Public Domain/MIT
|
||||
var d = new Date().getTime();//Timestamp
|
||||
var d2 = ((typeof performance !== 'undefined') && performance.now && (performance.now()*1000)) || 0;//Time in microseconds since page-load or 0 if unsupported
|
||||
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
|
||||
var r = Math.random() * 16;//random number between 0 and 16
|
||||
if(d > 0){//Use timestamp until depleted
|
||||
r = (d + r)%16 | 0;
|
||||
d = Math.floor(d/16);
|
||||
} else {//Use microseconds since page-load if supported
|
||||
r = (d2 + r)%16 | 0;
|
||||
d2 = Math.floor(d2/16);
|
||||
}
|
||||
return (c === 'x' ? r : (r & 0x3 | 0x8)).toString(16);
|
||||
});
|
||||
}
|
||||
|
||||
function handleAudioEnabledChange(e) {
|
||||
localStorage.setItem(SOUND_ENABLED_KEY, e.target.checked.toString())
|
||||
}
|
||||
|
||||
soundToggle.addEventListener('click', handleAudioEnabledChange)
|
||||
soundToggle.checked = isSoundEnabled();
|
||||
|
||||
makeImageBtn.addEventListener('click', makeImage)
|
||||
|
||||
configBox.style.display = 'none'
|
||||
|
||||
showConfigToggle.addEventListener('click', function() {
|
||||
configBox.style.display = (configBox.style.display === 'none' ? 'block' : 'none')
|
||||
showConfigToggle.innerHTML = (configBox.style.display === 'none' ? 'show' : 'hide')
|
||||
return false
|
||||
})
|
||||
|
||||
function updateGuidanceScale() {
|
||||
guidanceScaleValueLabel.innerHTML = guidanceScaleField.value / 10
|
||||
}
|
||||
|
||||
guidanceScaleField.addEventListener('input', updateGuidanceScale)
|
||||
updateGuidanceScale()
|
||||
|
||||
function updatePromptStrength() {
|
||||
promptStrengthValueLabel.innerHTML = promptStrengthField.value / 10
|
||||
}
|
||||
|
||||
promptStrengthField.addEventListener('input', updatePromptStrength)
|
||||
updatePromptStrength()
|
||||
|
||||
function checkRandomSeed() {
|
||||
if (randomSeedField.checked) {
|
||||
seedField.disabled = true
|
||||
seedField.value = "random"
|
||||
} else {
|
||||
seedField.disabled = false
|
||||
}
|
||||
}
|
||||
randomSeedField.addEventListener('input', checkRandomSeed)
|
||||
checkRandomSeed()
|
||||
|
||||
function showInitImagePreview() {
|
||||
if (initImageSelector.files.length === 0) {
|
||||
initImagePreviewContainer.style.display = 'none'
|
||||
promptStrengthContainer.style.display = 'none'
|
||||
maskSetting.style.display = 'none'
|
||||
return
|
||||
}
|
||||
|
||||
let reader = new FileReader()
|
||||
let file = initImageSelector.files[0]
|
||||
|
||||
reader.addEventListener('load', function() {
|
||||
// console.log(file.name, reader.result)
|
||||
initImagePreview.src = reader.result
|
||||
initImagePreviewContainer.style.display = 'block'
|
||||
promptStrengthContainer.style.display = 'block'
|
||||
|
||||
maskSetting.style.display = 'block'
|
||||
})
|
||||
|
||||
if (file) {
|
||||
reader.readAsDataURL(file)
|
||||
}
|
||||
}
|
||||
initImageSelector.addEventListener('change', showInitImagePreview)
|
||||
showInitImagePreview()
|
||||
|
||||
initImageClearBtn.addEventListener('click', function() {
|
||||
initImageSelector.value = null
|
||||
maskImageSelector.value = null
|
||||
|
||||
initImagePreview.src = ''
|
||||
maskImagePreview.src = ''
|
||||
|
||||
initImagePreviewContainer.style.display = 'none'
|
||||
maskImagePreviewContainer.style.display = 'none'
|
||||
|
||||
maskSetting.style.display = 'none'
|
||||
|
||||
promptStrengthContainer.style.display = 'none'
|
||||
})
|
||||
|
||||
function showMaskImagePreview() {
|
||||
if (maskImageSelector.files.length === 0) {
|
||||
maskImagePreviewContainer.style.display = 'none'
|
||||
return
|
||||
}
|
||||
|
||||
let reader = new FileReader()
|
||||
let file = maskImageSelector.files[0]
|
||||
|
||||
reader.addEventListener('load', function() {
|
||||
maskImagePreview.src = reader.result
|
||||
maskImagePreviewContainer.style.display = 'block'
|
||||
})
|
||||
|
||||
if (file) {
|
||||
reader.readAsDataURL(file)
|
||||
}
|
||||
}
|
||||
maskImageSelector.addEventListener('change', showMaskImagePreview)
|
||||
showMaskImagePreview()
|
||||
|
||||
maskImageClearBtn.addEventListener('click', function() {
|
||||
maskImageSelector.value = null
|
||||
maskImagePreview.src = ''
|
||||
maskImagePreviewContainer.style.display = 'none'
|
||||
})
|
||||
|
||||
setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000)
|
||||
</script>
|
||||
|
||||
</html>
|
69  main.py
@@ -1,69 +0,0 @@
from fastapi import FastAPI, HTTPException
from starlette.responses import FileResponse
from pydantic import BaseModel

import requests

LOCAL_SERVER_URL = 'http://stability-ai:5000'
PREDICT_URL = LOCAL_SERVER_URL + '/predictions'

app = FastAPI()

# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
    prompt: str
    init_image: str = None # base64
    mask: str = None # base64
    num_outputs: str = "1"
    num_inference_steps: str = "50"
    guidance_scale: str = "7.5"
    width: str = "512"
    height: str = "512"
    seed: str = "30000"
    prompt_strength: str = "0.8"

@app.get('/')
def read_root():
    return FileResponse('index.html')

@app.get('/ping')
async def ping():
    try:
        requests.get(LOCAL_SERVER_URL)
        return {'OK'}
    except:
        return {'ERROR'}

@app.post('/image')
async def image(req : ImageRequest):
    data = {
        "input": {
            "prompt": req.prompt,
            "num_outputs": req.num_outputs,
            "num_inference_steps": req.num_inference_steps,
            "width": req.width,
            "height": req.height,
            "seed": req.seed,
            "guidance_scale": req.guidance_scale,
        }
    }

    if req.init_image is not None:
        data['input']['init_image'] = req.init_image
        data['input']['prompt_strength'] = req.prompt_strength

        if req.mask is not None:
            data['input']['mask'] = req.mask

    if req.seed == "-1":
        del data['input']['seed']

    res = requests.post(PREDICT_URL, json=data)
    if res.status_code != 200:
        raise HTTPException(status_code=500, detail=res.text)

    return res.json()

@app.get('/media/ding.mp3')
def read_root():
    return FileResponse('media/ding.mp3')
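For reference, a minimal sketch of how a client could have called the /image endpoint defined in the removed main.py above. Assumptions: the old docker-compose stack is running and the UI server is reachable on http://localhost:9000 (per the removed Dockerfile), and the response shape is the status/output structure the old index.html read.

# Hypothetical client for the removed /image endpoint shown above (v1 Docker setup).
import base64
import requests

req = {
    "prompt": "a photograph of an astronaut riding a horse",
    "num_outputs": "1",
    "num_inference_steps": "50",
    "guidance_scale": "7.5",
    "width": "512",
    "height": "512",
    "seed": "30000",
}

res = requests.post("http://localhost:9000/image", json=req)
res.raise_for_status()
out = res.json()

# The old UI treated each entry of `output` as a base64 data URL.
if out.get("status") == "succeeded":
    for i, data_url in enumerate(out["output"]):
        _, b64 = data_url.split(",", 1)
        with open(f"result_{i}.png", "wb") as f:
            f.write(base64.b64decode(b64))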
Binary file not shown. (Before: 18 KiB)

BIN  media/config-v3.jpg (Normal file)
Binary file not shown. (After: 22 KiB)

BIN  media/config-v4.jpg (Normal file)
Binary file not shown. (After: 29 KiB)

BIN  media/config-v5.jpg (Normal file)
Binary file not shown. (After: 55 KiB)

BIN  media/shot-v8.jpg (Normal file)
Binary file not shown. (After: 244 KiB)
@@ -1,38 +0,0 @@
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

@app.get('/', response_class=HTMLResponse)
def read_root():
    return '''
    <style>
    body {
        font-family: Arial;
        font-size: 11pt;
    }
    pre {
        display: inline;
        background: #aaa;
        padding: 2px;
        border: 1px solid #777;
        border-radius: 3px;
    }
    @media (prefers-color-scheme: dark) {
        body {
            background-color: rgb(32, 33, 36);
            color: #eee;
        }
        pre {
            background: #444;
        }
    }
    </style>
    <h4>The UI has moved to <a href="http://localhost:9000">http://localhost:9000</a>. The current address that you used (ending with :8000) will be removed in the future, so please use <a href="http://localhost:9000">http://localhost:9000</a> going ahead (and in any bookmarks you've saved).</h4>

    <h4>Also, please use <pre>./server</pre> instead of <pre>docker-compose up &</pre>. To stop, please use <pre>./server stop</pre>. This will help the project better manage the startup process in the future.</h4>

    <h3>Why has the address changed?</h3>
    <p>The previously used port (8000) is often used by other servers, which results in port conflicts. So the project's port number has been changed, while the project is still young. Otherwise port-conflicts with 8000 will be a common source of new-user issues in the future.</p>
    <p>Sorry about this, and apologies for the inconvenience :)</p>
    '''
@@ -1,3 +0,0 @@
requests
fastapi==0.80.0
uvicorn==0.18.2
1  scripts/Start Stable Diffusion UI.cmd (Normal file)
@@ -0,0 +1 @@
installer\Scripts\activate.bat
30  scripts/on_env_start.bat (Normal file)
@@ -0,0 +1,30 @@
@echo. & echo "Stable Diffusion UI" & echo.

@cd ..

@>nul grep -c "sd_ui_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
    @echo "Stable Diffusion UI's git repository was already installed. Updating.."

    @cd sd-ui-files

    @call git reset --hard
    @call git pull

    @cd ..
) else (
    @echo. & echo "Downloading Stable Diffusion UI.." & echo.

    @call git clone https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files && (
        @echo sd_ui_git_cloned >> scripts\install_status.txt
    ) || (
        @echo "Error downloading Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
        pause
        @exit /b
    )
)

@xcopy sd-ui-files\ui ui /s /i /Y
@xcopy sd-ui-files\scripts scripts /s /i /Y

@call scripts\on_sd_start.bat
28  scripts/on_env_start.sh (Executable file)
@@ -0,0 +1,28 @@
printf "\n\nStable Diffusion UI\n\n"

if [ -f "scripts/install_status.txt" ] && [ `grep -c sd_ui_git_cloned scripts/install_status.txt` -gt "0" ]; then
    echo "Stable Diffusion UI's git repository was already installed. Updating.."

    cd sd-ui-files

    git reset --hard
    git pull

    cd ..
else
    printf "\n\nDownloading Stable Diffusion UI..\n\n"

    if git clone https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then
        echo sd_ui_git_cloned >> scripts/install_status.txt
    else
        printf "\n\nError downloading Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
        read -p "Press any key to continue"
        exit
    fi
fi

cp -Rf sd-ui-files/ui ui
cp -Rf sd-ui-files/scripts/* scripts/
cp "scripts/start.sh" .

./scripts/on_sd_start.sh
104
scripts/on_sd_start.bat
Normal file
104
scripts/on_sd_start.bat
Normal file
@ -0,0 +1,104 @@
|
||||
@set cmd_had_error=F
|
||||
|
||||
@>nul grep -c "sd_git_cloned" scripts\install_status.txt
|
||||
@if "%ERRORLEVEL%" EQU "0" (
|
||||
@echo "Stable Diffusion's git repository was already installed. Updating.."
|
||||
|
||||
@cd stable-diffusion
|
||||
|
||||
@call git reset --hard
|
||||
@call git pull
|
||||
|
||||
@cd ..
|
||||
) else (
|
||||
@echo. & echo "Downloading Stable Diffusion.." & echo.
|
||||
|
||||
@call git clone https://github.com/basujindal/stable-diffusion.git && (
|
||||
@echo sd_git_cloned >> scripts\install_status.txt
|
||||
) || (
|
||||
@set cmd_had_error=T
|
||||
)
|
||||
|
||||
if "%ERRORLEVEL%" NEQ "0" (
|
||||
@set cmd_had_error=T
|
||||
)
|
||||
|
||||
if "%cmd_had_error%"=="T" (
|
||||
@echo "Error downloading Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
|
||||
pause
|
||||
@exit /b
|
||||
)
|
||||
)
|
||||
|
||||
@cd stable-diffusion
|
||||
|
||||
@>nul grep -c "conda_sd_env_created" ..\scripts\install_status.txt
|
||||
@if "%ERRORLEVEL%" EQU "0" (
|
||||
@echo "Packages necessary for Stable Diffusion were already installed"
|
||||
) else (
|
||||
@echo. & echo "Downloading packages necessary for Stable Diffusion.." & echo. & echo "***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** .." & echo.
|
||||
|
||||
@rmdir /s /q .\env
|
||||
|
||||
@call conda env create --prefix env -f environment.yaml && (
|
||||
@echo conda_sd_env_created >> ..\scripts\install_status.txt
|
||||
) || (
|
||||
@set cmd_had_error=T
|
||||
)
|
||||
|
||||
if "%cmd_had_error%"=="T" (
|
||||
echo "Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
|
||||
pause
|
||||
exit /b
|
||||
)
|
||||
)
|
||||
|
||||
@call conda activate .\env
|
||||
|
||||
@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
|
||||
@if "%ERRORLEVEL%" EQU "0" (
|
||||
echo "Packages necessary for Stable Diffusion UI were already installed"
|
||||
) else (
|
||||
@echo. & echo "Downloading packages necessary for Stable Diffusion UI.." & echo.
|
||||
|
||||
@call conda install -c conda-forge -y --prefix env uvicorn fastapi && (
|
||||
@echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
|
||||
) || (
|
||||
@set cmd_had_error=T
|
||||
)
|
||||
|
||||
if "%ERRORLEVEL%" NEQ "0" (
|
||||
@set cmd_had_error=T
|
||||
)
|
||||
|
||||
if "%cmd_had_error%"=="T" (
|
||||
echo "Error installing the packages necessary for Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
|
||||
pause
|
||||
exit /b
|
||||
)
|
||||
)
|
||||
|
||||
@if exist "sd-v1-4.ckpt" (
|
||||
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
|
||||
) else (
|
||||
@echo. & echo "Downloading data files (weights) for Stable Diffusion.." & echo.
|
||||
|
||||
@call curl -L https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
|
||||
|
||||
@if not exist "sd-v1-4.ckpt" (
|
||||
echo "Error downloading the data files (weights) for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
|
||||
pause
|
||||
exit /b
|
||||
)
|
||||
|
||||
@echo sd_weights_downloaded >> ..\scripts\install_status.txt
|
||||
@echo sd_install_complete >> ..\scripts\install_status.txt
|
||||
)
|
||||
|
||||
@echo. & echo "Stable Diffusion is ready!" & echo.
|
||||
|
||||
@set SD_UI_PATH=%cd%\..\ui
|
||||
|
||||
@uvicorn server:app --app-dir "%SD_UI_PATH%" --port 9000 --host 0.0.0.0
|
||||
|
||||
@pause
|
80
scripts/on_sd_start.sh
Executable file
80
scripts/on_sd_start.sh
Executable file
@ -0,0 +1,80 @@
|
||||
source installer/etc/profile.d/conda.sh
|
||||
|
||||
if [ `grep -c sd_git_cloned scripts/install_status.txt` -gt "0" ]; then
|
||||
echo "Stable Diffusion's git repository was already installed. Updating.."
|
||||
|
||||
cd stable-diffusion
|
||||
|
||||
git reset --hard
|
||||
git pull
|
||||
|
||||
cd ..
|
||||
else
|
||||
printf "\n\nDownloading Stable Diffusion..\n\n"
|
||||
|
||||
if git clone https://github.com/basujindal/stable-diffusion.git ; then
|
||||
echo sd_git_cloned >> scripts/install_status.txt
|
||||
else
|
||||
printf "\n\nError downloading Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
|
||||
read -p "Press any key to continue"
|
||||
exit
|
||||
fi
|
||||
fi
|
||||
|
||||
cd stable-diffusion
|
||||
|
||||
if [ `grep -c conda_sd_env_created ../scripts/install_status.txt` -gt "0" ]; then
|
||||
echo "Packages necessary for Stable Diffusion were already installed"
|
||||
else
|
||||
printf "\n\nDownloading packages necessary for Stable Diffusion..\n"
|
||||
printf "\n\n***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..\n\n"
|
||||
|
||||
if conda env create --prefix env --force -f environment.yaml ; then
|
||||
echo conda_sd_env_created >> ../scripts/install_status.txt
|
||||
else
|
||||
printf "\n\nError installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
|
||||
read -p "Press any key to continue"
|
||||
exit
|
||||
fi
|
||||
fi
|
||||
|
||||
conda activate ./env
|
||||
|
||||
if [ `grep -c conda_sd_ui_deps_installed ../scripts/install_status.txt` -gt "0" ]; then
|
||||
echo "Packages necessary for Stable Diffusion UI were already installed"
|
||||
else
|
||||
printf "\n\nDownloading packages necessary for Stable Diffusion UI..\n\n"
|
||||
|
||||
if conda install -c conda-forge --prefix ./env -y uvicorn fastapi ; then
|
||||
echo conda_sd_ui_deps_installed >> ../scripts/install_status.txt
|
||||
else
|
||||
printf "\n\nError installing the packages necessary for Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
|
||||
read -p "Press any key to continue"
|
||||
exit
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -f "sd-v1-4.ckpt" ]; then
|
||||
echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
|
||||
else
|
||||
echo "Downloading data files (weights) for Stable Diffusion.."
|
||||
|
||||
curl -L https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt
|
||||
|
||||
if [ ! -f "sd-v1-4.ckpt" ]; then
|
||||
printf "\n\nError downloading the data files (weights) for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
|
||||
read -p "Press any key to continue"
|
||||
exit
|
||||
fi
|
||||
|
||||
echo sd_weights_downloaded >> ../scripts/install_status.txt
|
||||
echo sd_install_complete >> ../scripts/install_status.txt
|
||||
fi
|
||||
|
||||
printf "\n\nStable Diffusion is ready!\n\n"
|
||||
|
||||
export SD_UI_PATH=`pwd`/../ui
|
||||
|
||||
uvicorn server:app --app-dir "$SD_UI_PATH" --port 9000 --host 0.0.0.0
|
||||
|
||||
read -p "Press any key to continue"
|
6  scripts/post_activate.bat (Normal file)
@@ -0,0 +1,6 @@
@call conda --version
@call git --version

cd %CONDA_PREFIX%\..\scripts

on_env_start.bat
10  scripts/post_activate.sh (Executable file)
@@ -0,0 +1,10 @@
conda-unpack

source $CONDA_PREFIX/etc/profile.d/conda.sh

conda --version
git --version

cd $CONDA_PREFIX/../scripts

./on_env_start.sh
5  scripts/start.sh (Normal file)
@@ -0,0 +1,5 @@
source installer/bin/activate

conda-unpack

scripts/on_env_start.sh
2  scripts/win_enable_long_filepaths.ps1 (Normal file)
@@ -0,0 +1,2 @@
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name LongPathsEnabled -Type DWord -Value 1
pause
26  server
@@ -1,26 +0,0 @@
#!/bin/bash

CMD="$1"
if [ -z "$1" ]; then
    CMD="start"
fi

start_server() {
    docker-compose up -d stable-diffusion-old-port-redirect # old port 8000 server, show redirect notice
    docker-compose up stability-ai stable-diffusion-ui
}

stop_server() {
    docker-compose down
}

if [ "$CMD" == "start" ]; then
    start_server
elif [ "$CMD" == "stop" ]; then
    stop_server
elif [ "$CMD" == "restart" ]; then
    stop_server
    start_server
else
    echo "Unknown option: $1 (Expected start or stop)"
fi
1014  ui/index.html (Normal file)
File diff suppressed because it is too large.

BIN  ui/media/ding.mp3 (Normal file)
Binary file not shown.
140
ui/modifiers.json
Normal file
140
ui/modifiers.json
Normal file
@ -0,0 +1,140 @@
|
||||
[
|
||||
[
|
||||
"Drawing Style",
|
||||
[
|
||||
"Cel Shading",
|
||||
"Children's Drawing",
|
||||
"Crosshatch",
|
||||
"Detailed and Intricate",
|
||||
"Doodle",
|
||||
"Dot Art",
|
||||
"Line Art",
|
||||
"Sketch"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Visual Style",
|
||||
[
|
||||
"2D",
|
||||
"8-bit",
|
||||
"16-bit",
|
||||
"Anaglyph",
|
||||
"Anime",
|
||||
"Cartoon",
|
||||
"CGI",
|
||||
"Comic Book",
|
||||
"Concept Art",
|
||||
"Digital Art",
|
||||
"Fantasy",
|
||||
"Graphic Novel",
|
||||
"Hard Edge Painting",
|
||||
"Hydrodipped",
|
||||
"Lithography",
|
||||
"Modern Art",
|
||||
"Mosaic",
|
||||
"Mural",
|
||||
"Photo",
|
||||
"Realistic",
|
||||
"Street Art",
|
||||
"Visual Novel",
|
||||
"Watercolor"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Pen",
|
||||
[
|
||||
"Chalk",
|
||||
"Colored Pencil",
|
||||
"Graphite",
|
||||
"Ink",
|
||||
"Oil Paint",
|
||||
"Pastel Art"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Carving and Etching",
|
||||
[
|
||||
"Etching",
|
||||
"Linocut",
|
||||
"Paper Model",
|
||||
"Paper-Mache",
|
||||
"Papercutting",
|
||||
"Pyrography",
|
||||
"Wood-Carving"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Camera",
|
||||
[
|
||||
"Aerial View",
|
||||
"Cinematic",
|
||||
"Color Grading",
|
||||
"Dramatic",
|
||||
"Film Grain",
|
||||
"Fisheye Lens",
|
||||
"Glamor Shot",
|
||||
"Golden Hour",
|
||||
"HD",
|
||||
"Lens Flare",
|
||||
"Macro",
|
||||
"Polaroid",
|
||||
"Vintage",
|
||||
"War Photography",
|
||||
"White Balance",
|
||||
"Wildlife Photography"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Color",
|
||||
[
|
||||
"Beautiful Lighting",
|
||||
"Colorful",
|
||||
"Electric Colors",
|
||||
"Infrared",
|
||||
"Synthwave",
|
||||
"Warm Color Palette"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Emotions",
|
||||
[
|
||||
"Angry",
|
||||
"Evil",
|
||||
"Excited",
|
||||
"Good",
|
||||
"Happy",
|
||||
"Lonely",
|
||||
"Sad"
|
||||
]
|
||||
],
|
||||
[
|
||||
"Style of an artist or community",
|
||||
[
|
||||
"Artstation",
|
||||
"by Agnes Lawrence Pelton",
|
||||
"by Akihito Yoshida",
|
||||
"by Andy Warhol",
|
||||
"by Asaf Hanuka",
|
||||
"by Aubrey Beardsley",
|
||||
"by Ben Enwonwu",
|
||||
"by Caravaggio Michelangelo Merisi",
|
||||
"by David Mann",
|
||||
"by Frida Kahlo",
|
||||
"by H.R. Giger",
|
||||
"by Hayao Mizaki",
|
||||
"by Ivan Shishkin",
|
||||
"by Johannes Vermeer",
|
||||
"by Katsushika Hokusai",
|
||||
"by Ko Young Hoon",
|
||||
"by Leonardo Da Vinci",
|
||||
"by Lisa Frank",
|
||||
"by Mahmoud Sai",
|
||||
"by Mark Brooks",
|
||||
"by Pablo Picasso",
|
||||
"by Richard Dadd",
|
||||
"by Salvador Dali",
|
||||
"by Tivadar Csontváry Kosztka",
|
||||
"by Yoshitaka Amano"
|
||||
]
|
||||
]
|
||||
]
|
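The README describes these entries as modifier tags that can be appended to a prompt to switch styles quickly. Below is a small illustrative sketch of reading ui/modifiers.json; the exact way the UI combines selected tags with the prompt is an assumption here, not taken from this diff.

# Illustrative only: loads ui/modifiers.json (shown above) and appends a few tags to a prompt.
import json

with open("ui/modifiers.json", encoding="utf-8") as f:
    categories = json.load(f)  # list of [category_name, [tag, tag, ...]] pairs

for name, tags in categories:
    print(f"{name}: {len(tags)} tags")

prompt = "a photograph of an astronaut riding a horse"
selected = ["Realistic", "Cinematic", "Artstation"]
full_prompt = ", ".join([prompt] + selected)  # assumed joining scheme
print(full_prompt)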
62
ui/sd_internal/__init__.py
Normal file
62
ui/sd_internal/__init__.py
Normal file
@ -0,0 +1,62 @@
import json

class Request:
    prompt: str = ""
    init_image: str = None # base64
    mask: str = None # base64
    num_outputs: int = 1
    num_inference_steps: int = 50
    guidance_scale: float = 7.5
    width: int = 512
    height: int = 512
    seed: int = 42
    prompt_strength: float = 0.8
    # allow_nsfw: bool = False
    precision: str = "autocast" # or "full"
    save_to_disk_path: str = None
    turbo: bool = True
    use_cpu: bool = False
    use_full_precision: bool = False

    def to_string(self):
        return f'''
    prompt: {self.prompt}
    seed: {self.seed}
    num_inference_steps: {self.num_inference_steps}
    guidance_scale: {self.guidance_scale}
    w: {self.width}
    h: {self.height}
    precision: {self.precision}
    save_to_disk_path: {self.save_to_disk_path}
    turbo: {self.turbo}
    use_cpu: {self.use_cpu}
    use_full_precision: {self.use_full_precision}'''

class Image:
    data: str # base64
    seed: int
    is_nsfw: bool

    def __init__(self, data, seed):
        self.data = data
        self.seed = seed

    def json(self):
        return {
            "data": self.data,
            "seed": self.seed,
        }

class Response:
    images: list

    def json(self):
        res = {
            "status": 'succeeded',
            "output": [],
        }

        for image in self.images:
            res["output"].append(image.json())

        return res
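As a quick illustration of how these classes fit together (a standalone sketch, not code from the diff): a Request carries the generation settings, each generated picture becomes an Image holding base64 data plus its seed, and Response.json() flattens everything into the payload the web UI reads.

# Hypothetical usage sketch of the classes above; assumes the ui folder is on sys.path.
from sd_internal import Request, Response, Image

req = Request()
req.prompt = "a photograph of an astronaut riding a horse"
req.seed = 42
req.num_inference_steps = 50
print(req.to_string())

res = Response()
res.images = [Image(data="data:image/png;base64,...", seed=42)]  # placeholder base64 string
print(res.json())  # {'status': 'succeeded', 'output': [{'data': '...', 'seed': 42}]}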
359 ui/sd_internal/runtime.py Normal file
@@ -0,0 +1,359 @@
import os, re
import torch
import numpy as np
from omegaconf import OmegaConf
from PIL import Image
from tqdm import tqdm, trange
from itertools import islice
from einops import rearrange
import time
from pytorch_lightning import seed_everything
from torch import autocast
from contextlib import nullcontext
from einops import rearrange, repeat
from ldm.util import instantiate_from_config
from optimizedSD.optimUtils import split_weighted_subprompts
from transformers import logging

import uuid

logging.set_verbosity_error()

# consts
config_yaml = "optimizedSD/v1-inference.yaml"

# api stuff
from . import Request, Response, Image as ResponseImage
import base64
from io import BytesIO

# local
session_id = str(uuid.uuid4())

ckpt = None
model = None
modelCS = None
modelFS = None
model_is_half = False
model_fs_is_half = False
device = None
unet_bs = 1
precision = 'autocast'
sampler_plms = None
sampler_ddim = None

# api
def load_model(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_use=1, precision_to_use='autocast', half_model_fs=False):
    global ckpt, model, modelCS, modelFS, model_is_half, device, unet_bs, precision, model_fs_is_half

    ckpt = ckpt_to_use
    device = device_to_use
    precision = precision_to_use
    unet_bs = unet_bs_to_use

    sd = load_model_from_config(f"{ckpt}")
    li, lo = [], []
    for key, value in sd.items():
        sp = key.split(".")
        if (sp[0]) == "model":
            if "input_blocks" in sp:
                li.append(key)
            elif "middle_block" in sp:
                li.append(key)
            elif "time_embed" in sp:
                li.append(key)
            else:
                lo.append(key)
    for key in li:
        sd["model1." + key[6:]] = sd.pop(key)
    for key in lo:
        sd["model2." + key[6:]] = sd.pop(key)

    config = OmegaConf.load(f"{config_yaml}")

    model = instantiate_from_config(config.modelUNet)
    _, _ = model.load_state_dict(sd, strict=False)
    model.eval()
    model.cdevice = device
    model.unet_bs = unet_bs
    model.turbo = turbo

    modelCS = instantiate_from_config(config.modelCondStage)
    _, _ = modelCS.load_state_dict(sd, strict=False)
    modelCS.eval()
    modelCS.cond_stage_model.device = device

    modelFS = instantiate_from_config(config.modelFirstStage)
    _, _ = modelFS.load_state_dict(sd, strict=False)
    modelFS.eval()
    del sd

    if device != "cpu" and precision == "autocast":
        model.half()
        modelCS.half()
        model_is_half = True
    else:
        model_is_half = False

    if half_model_fs:
        modelFS.half()
        model_fs_is_half = True
    else:
        model_fs_is_half = False

def mk_img(req: Request):
    global modelFS, device

    res = Response()
    res.images = []

    model.turbo = req.turbo
    if req.use_cpu:
        device = 'cpu'

        if model_is_half:
            print('reloading model for cpu')
            load_model(ckpt, device)
    else:
        device = 'cuda'

        if (precision == 'autocast' and (req.use_full_precision or not model_is_half)) or \
            (precision == 'full' and not req.use_full_precision) or \
            (req.init_image is None and model_fs_is_half) or \
            (req.init_image is not None and not model_fs_is_half):

            print('reloading model for cuda')
            load_model(ckpt, device, model.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'), half_model_fs=(req.init_image is not None and not req.use_full_precision))

    model.cdevice = device
    modelCS.cond_stage_model.device = device

    opt_prompt = req.prompt
    opt_seed = req.seed
    opt_n_samples = req.num_outputs
    opt_n_iter = 1
    opt_scale = req.guidance_scale
    opt_C = 4
    opt_H = req.height
    opt_W = req.width
    opt_f = 8
    opt_ddim_steps = req.num_inference_steps
    opt_ddim_eta = 0.0
    opt_strength = req.prompt_strength
    opt_save_to_disk_path = req.save_to_disk_path
    opt_init_img = req.init_image
    opt_format = 'png'

    print(req.to_string(), '\n device', device)

    seed_everything(opt_seed)

    batch_size = opt_n_samples
    prompt = opt_prompt
    assert prompt is not None
    data = [batch_size * [prompt]]

    if precision == "autocast" and device != "cpu":
        precision_scope = autocast
    else:
        precision_scope = nullcontext

    if req.init_image is None:
        handler = _txt2img

        init_latent = None
        t_enc = None
    else:
        handler = _img2img

        init_image = load_img(req.init_image)
        init_image = init_image.to(device)

        if device != "cpu" and precision == "autocast":
            init_image = init_image.half()

        modelFS.to(device)

        init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
        init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image))  # move to latent space

        if device != "cpu":
            mem = torch.cuda.memory_allocated() / 1e6
            modelFS.to("cpu")
            while torch.cuda.memory_allocated() / 1e6 >= mem:
                time.sleep(1)

        assert 0. <= opt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
        t_enc = int(opt_strength * opt_ddim_steps)
        print(f"target t_enc is {t_enc} steps")

    if opt_save_to_disk_path is not None:
        session_out_path = os.path.join(opt_save_to_disk_path, 'session-' + session_id)
        os.makedirs(session_out_path, exist_ok=True)
    else:
        session_out_path = None

    seeds = ""
    with torch.no_grad():
        for n in trange(opt_n_iter, desc="Sampling"):
            for prompts in tqdm(data, desc="data"):

                if opt_save_to_disk_path is not None:
                    base_count = len(os.listdir(session_out_path))

                with precision_scope("cuda"):
                    modelCS.to(device)
                    uc = None
                    if opt_scale != 1.0:
                        uc = modelCS.get_learned_conditioning(batch_size * [""])
                    if isinstance(prompts, tuple):
                        prompts = list(prompts)

                    subprompts, weights = split_weighted_subprompts(prompts[0])
                    if len(subprompts) > 1:
                        c = torch.zeros_like(uc)
                        totalWeight = sum(weights)
                        # normalize each "sub prompt" and add it
                        for i in range(len(subprompts)):
                            weight = weights[i]
                            # if not skip_normalize:
                            weight = weight / totalWeight
                            c = torch.add(c, modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
                    else:
                        c = modelCS.get_learned_conditioning(prompts)

                    # run the handler
                    if handler == _txt2img:
                        x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed)
                    else:
                        x_samples = _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed)

                    modelFS.to(device)

                    print("saving images")
                    for i in range(batch_size):

                        x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
                        x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
                        x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
                        img = Image.fromarray(x_sample.astype(np.uint8))

                        img_data = img_to_base64_str(img)
                        res.images.append(ResponseImage(data=img_data, seed=opt_seed))

                        if opt_save_to_disk_path is not None:
                            prompt_flattened = "_".join(re.split(":| ", prompts[0]))
                            prompt_flattened = prompt_flattened[:150]

                            file_path = f"sd_{prompt_flattened}_Seed-{opt_seed}_Steps-{opt_ddim_steps}_Guidance-{opt_scale}_{base_count:05}"
                            img_out_path = os.path.join(session_out_path, f"{file_path}.{opt_format}")
                            meta_out_path = os.path.join(session_out_path, f"{file_path}.txt")

                            metadata = f"{prompts[0]}\nSeed: {opt_seed}\nSteps: {opt_ddim_steps}\nGuidance Scale: {opt_scale}"
                            img.save(img_out_path)
                            with open(meta_out_path, 'w') as f:
                                f.write(metadata)

                            base_count += 1

                        seeds += str(opt_seed) + ","
                        opt_seed += 1

                    if device != "cpu":
                        mem = torch.cuda.memory_allocated() / 1e6
                        modelFS.to("cpu")
                        while torch.cuda.memory_allocated() / 1e6 >= mem:
                            time.sleep(1)
                    del x_samples
                    print("memory_final = ", torch.cuda.memory_allocated() / 1e6)

    return res

def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed):
    shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]

    if device != "cpu":
        mem = torch.cuda.memory_allocated() / 1e6
        modelCS.to("cpu")
        while torch.cuda.memory_allocated() / 1e6 >= mem:
            time.sleep(1)

    samples_ddim = model.sample(
        S=opt_ddim_steps,
        conditioning=c,
        seed=opt_seed,
        shape=shape,
        verbose=False,
        unconditional_guidance_scale=opt_scale,
        unconditional_conditioning=uc,
        eta=opt_ddim_eta,
        x_T=start_code,
        sampler = 'plms',
    )

    return samples_ddim

def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed):
    # encode (scaled latent)
    z_enc = model.stochastic_encode(
        init_latent,
        torch.tensor([t_enc] * batch_size).to(device),
        opt_seed,
        opt_ddim_eta,
        opt_ddim_steps,
    )
    # decode it
    samples_ddim = model.sample(
        t_enc,
        c,
        z_enc,
        unconditional_guidance_scale=opt_scale,
        unconditional_conditioning=uc,
        sampler = 'ddim'
    )

    return samples_ddim

# internal

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())


def load_model_from_config(ckpt, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    return sd

# utils

def load_img(img_str):
    image = base64_str_to_img(img_str).convert("RGB")
    w, h = image.size
    print(f"loaded input image of size ({w}, {h}) from base64")
    w, h = map(lambda x: x - x % 64, (w, h))  # resize to integer multiple of 64
    image = image.resize((w, h), resample=Image.LANCZOS)
    image = np.array(image).astype(np.float32) / 255.0
    image = image[None].transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return 2.*image - 1.

# https://stackoverflow.com/a/61114178
def img_to_base64_str(img):
    buffered = BytesIO()
    img.save(buffered, format="PNG")
    buffered.seek(0)
    img_byte = buffered.getvalue()
    img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode()
    return img_str

def base64_str_to_img(img_str):
    img_str = img_str[len("data:image/png;base64,"):]
    data = base64.b64decode(img_str)
    buffered = BytesIO(data)
    img = Image.open(buffered)
    return img
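The two base64 helpers at the bottom of runtime.py are inverses of each other: images travel to and from the browser as data:image/png;base64 strings. A small round-trip sketch, assuming Pillow is installed and the helpers above are in scope (the solid-colour test image is only for illustration):

# Illustrative round-trip using the two helpers defined above
# (assumes this runs inside the module, or after importing them from sd_internal.runtime).
from PIL import Image

test_img = Image.new("RGB", (64, 64), color=(120, 60, 30))  # dummy image for demonstration
encoded = img_to_base64_str(test_img)                        # "data:image/png;base64,...."
decoded = base64_str_to_img(encoded)                         # back to a PIL.Image
assert decoded.size == (64, 64)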
118 ui/server.py Normal file
@@ -0,0 +1,118 @@
import traceback

import sys
import os

SCRIPT_DIR = os.getcwd()
print('started in ', SCRIPT_DIR)

SD_UI_DIR = os.getenv('SD_UI_PATH', None)
sys.path.append(os.path.dirname(SD_UI_DIR))

OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder

from fastapi import FastAPI, HTTPException
from starlette.responses import FileResponse
from pydantic import BaseModel

from sd_internal import Request, Response

app = FastAPI()

model_loaded = False
model_is_loading = False

modifiers_cache = None
outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)

# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
    prompt: str = ""
    init_image: str = None # base64
    mask: str = None # base64
    num_outputs: int = 1
    num_inference_steps: int = 50
    guidance_scale: float = 7.5
    width: int = 512
    height: int = 512
    seed: int = 42
    prompt_strength: float = 0.8
    # allow_nsfw: bool = False
    save_to_disk: bool = False
    turbo: bool = True
    use_cpu: bool = False
    use_full_precision: bool = False

@app.get('/')
def read_root():
    return FileResponse(os.path.join(SD_UI_DIR, 'index.html'))

@app.get('/ping')
async def ping():
    global model_loaded, model_is_loading

    try:
        if model_loaded:
            return {'OK'}

        if model_is_loading:
            return {'ERROR'}

        model_is_loading = True

        from sd_internal import runtime
        runtime.load_model(ckpt_to_use="sd-v1-4.ckpt")

        model_loaded = True
        model_is_loading = False

        return {'OK'}
    except Exception as e:
        print(traceback.format_exc())
        return HTTPException(status_code=500, detail=str(e))

@app.post('/image')
async def image(req : ImageRequest):
    from sd_internal import runtime

    r = Request()
    r.prompt = req.prompt
    r.init_image = req.init_image
    r.mask = req.mask
    r.num_outputs = req.num_outputs
    r.num_inference_steps = req.num_inference_steps
    r.guidance_scale = req.guidance_scale
    r.width = req.width
    r.height = req.height
    r.seed = req.seed
    r.prompt_strength = req.prompt_strength
    # r.allow_nsfw = req.allow_nsfw
    r.turbo = req.turbo
    r.use_cpu = req.use_cpu
    r.use_full_precision = req.use_full_precision

    if req.save_to_disk:
        r.save_to_disk_path = outpath

    try:
        res: Response = runtime.mk_img(r)

        return res.json()
    except Exception as e:
        print(traceback.format_exc())
        return HTTPException(status_code=500, detail=str(e))

@app.get('/media/ding.mp3')
def read_ding():
    return FileResponse(os.path.join(SD_UI_DIR, 'media/ding.mp3'))

@app.get('/modifiers.json')
def read_modifiers():
    return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'))

@app.get('/output_dir')
def read_home_dir():
    return {outpath}

# start the browser ui
import webbrowser; webbrowser.open('http://localhost:9000')
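For completeness, a hedged example of calling this server from a script: the /image endpoint, port 9000 and the response shape follow from the code above, while the requests dependency and the output filenames are assumptions for illustration.

# Hypothetical client sketch; assumes the server above is running on http://localhost:9000
# and that the `requests` package is installed.
import base64, requests

payload = {"prompt": "a watercolor painting of a lighthouse", "seed": 42, "num_inference_steps": 50}
res = requests.post("http://localhost:9000/image", json=payload).json()

for i, image in enumerate(res["output"]):
    b64_data = image["data"].split(",", 1)[1]  # strip the "data:image/png;base64," prefix
    with open(f"output_{i}_{image['seed']}.png", "wb") as f:
        f.write(base64.b64decode(b64_data))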