Compare commits


55 Commits
v1.25 ... v2.05

Author SHA1 Message Date
94835c46a0 Update README.md 2022-09-04 23:00:22 +05:30
58b3d31526 Note about CPU 2022-09-04 19:34:40 +05:30
b023f5c0da Fix readme usage 2022-09-04 19:33:51 +05:30
78b87c6ddd Readme for v2 2022-09-04 19:32:52 +05:30
edde4dc2fa v2 moving to the main branch 2022-09-04 19:31:34 +05:30
d9a6e41265 Switch to a standardized model link, for v2 public release 2022-09-04 19:15:42 +05:30
64f0f1aa2c Fix bug with broken tag name in filename generator 2022-09-04 00:28:38 +05:30
b54c057c83 bump version 2022-09-04 00:20:14 +05:30
baa4acaf79 Use - instead of : in filename 2022-09-04 00:19:36 +05:30
3c6bb41939 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-04 00:17:16 +05:30
3f64d3729e Merge pull request #44 from caranicas/informative-filenames
updated filename
2022-09-04 00:16:52 +05:30
21dc2ece1b Store images during a session in the same folder; Store the metadata for each image as a txt file next to it 2022-09-04 00:12:48 +05:30
fcac6c4f8c fix prompt value
fix accidental underscore addition to the text display
2022-09-03 12:38:30 -04:00
e5dc932717 Allow more width and height options 2022-09-03 21:22:56 +05:30
6b65b05e2f updated filename 2022-09-03 11:06:34 -04:00
d52b973b44 Rename the linux v2 start script to start.sh 2022-09-03 20:04:55 +05:30
79d3f4ca9e Update test works for linux v2 script 2022-09-03 18:50:33 +05:30
06dd22d89a Test update of main linux v2 script 2022-09-03 18:49:18 +05:30
e79e425cf5 Typo in linux v2 script for env var 2022-09-03 18:48:11 +05:30
b4b2c351b4 Conda needs to be reactivated in the final script, linux v2
2022-09-03 18:26:14 +05:30
73acaadf70 Linux v2 newlines 2022-09-03 18:13:21 +05:30
18ef36bbc3 Init conda linux v2 2022-09-03 18:08:52 +05:30
021315f0f5 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-03 17:55:48 +05:30
a4ee103ff0 Simplified script for linux v2 2022-09-03 17:55:38 +05:30
618173c5f0 Merge pull request #40 from MrManny/feature/additional-modifiers
Additional modifiers
2022-09-03 14:52:39 +05:30
4cb906571b Add Leonardo Da Vinci as an artist modifier
How I could forget him in the first place is beyond me
2022-09-03 11:06:18 +02:00
6092b6c4cf Add a few assorted styles
mainly courtesy of https://promptomania.com/stable-diffusion-prompt-builder/
2022-09-03 10:59:45 +02:00
9bf17a1c8d v2 linux, try without prefix mode 2022-09-03 14:12:47 +05:30
542379dcf4 Add additional artist modifiers 2022-09-03 10:39:18 +02:00
a565bb5889 Sort modifiers alphabetically within their group 2022-09-03 10:35:37 +02:00
9a96ff2edc v2 Linux script update 2022-09-03 13:04:07 +05:30
9fa2e363cc Updated linux scripts 2022-09-03 12:34:23 +05:30
a9939a31cf img2img tip 2022-09-03 11:45:50 +05:30
64fffbcdec Increment version 2022-09-03 11:44:12 +05:30
a65f8f5d5c Preserve across restarts the settings for 'use cpu', 'use full precision', 'use turbo' 2022-09-03 11:43:05 +05:30
c5475fb028 Linux v2 activate script 2022-09-02 23:31:51 +05:30
f267c46595 Make linux start command executable 2022-09-02 23:11:28 +05:30
b1f67a9a65 Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-02 23:09:04 +05:30
044a7524a3 v2 start linux script 2022-09-02 23:08:53 +05:30
495b15e065 Make the new Linux v2 scripts executable 2022-09-02 22:48:43 +05:30
f4e6c399f2 Linux scripts for v2 2022-09-02 21:54:32 +05:30
4519acb77e Increment version test 2022-09-02 18:43:31 +05:30
cf1ba6d459 v2 scripts 2022-09-02 16:55:08 +05:30
c28cb67484 Script change 2022-09-02 16:31:34 +05:30
9017ee9a40 v2: Add version number 2022-09-02 16:15:36 +05:30
52086a2d39 v2 2022-09-02 16:13:03 +05:30
facec59fe8 v2: Use config.bat for host and port 2022-09-02 16:00:53 +05:30
75eb79bd55 v2 script file 2022-09-02 15:45:47 +05:30
307209945c Merge branch 'main' of github.com:cmdr2/stable-diffusion-ui 2022-09-02 15:42:08 +05:30
8db9f40001 v2 scripts, trap more errors 2022-09-02 15:41:53 +05:30
472b8d0e51 Keep v2 files in the repo, for the updater 2022-09-02 13:58:36 +05:30
dbebd32a6e Update README.md 2022-09-02 08:49:39 +05:30
ad8d2a913b Update README.md 2022-09-01 22:50:03 +05:30
38bd247d64 Update README.md 2022-09-01 22:49:37 +05:30
6a4e972de6 Update README.md 2022-09-01 22:48:35 +05:30
30 changed files with 1297 additions and 385 deletions

3
.gitignore vendored

@ -1 +1,4 @@
__pycache__
installer
installer.tar
dist

Dockerfile

@ -1,15 +0,0 @@
FROM python:3.9
RUN mkdir /app
WORKDIR /app
RUN apt update
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 9000
ENTRYPOINT ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "9000"]

How to install and run.txt Normal file

@ -0,0 +1,24 @@
Congrats on downloading Stable Diffusion UI, version 2!

If you haven't downloaded Stable Diffusion UI yet, please download it from https://github.com/cmdr2/stable-diffusion-ui

After downloading, please follow these instructions to install:

For Windows:
- Please double-click the "Start Stable Diffusion UI.cmd" file inside the "stable-diffusion-ui" folder.

For Linux:
- Please open a terminal, go to the "stable-diffusion-ui" directory, and run ./start.sh

That file will automatically install everything. After that, it will start the Stable Diffusion interface in a web browser.

To start the UI in the future, please run the same command mentioned above.

If you have any problems, please:
1. Try the troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting
2. Or, seek help from the community at https://discord.com/invite/u9yhsFmEkB
3. Or, file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues

Thanks
cmdr2 (and contributors to the project)


@ -1,15 +0,0 @@
FROM python:3.9
RUN mkdir /app
WORKDIR /app
RUN apt update
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
ENTRYPOINT ["uvicorn", "old_port_main:app", "--host", "0.0.0.0", "--port", "8000"]

README.md

@ -1,38 +1,41 @@
# Stable Diffusion UI
### A simple way to install and use [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion) on your own computer
# Stable Diffusion UI - v2 (beta)
### A simple way to install and use [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on your own computer (Win 10/11, Linux). No dependencies or technical knowledge required.
[![Discord Server](https://badgen.net/badge/icon/discord?icon=discord&label)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion)
---
# What does this do?
Two things:
1. Automatically downloads and installs Stable Diffusion on your own computer (no need to mess with conda or environments)
2. Gives you a simple browser-based UI to talk to your local Stable Diffusion. Enter text prompts and view the generated image. No API keys required.
All the processing happens locally on your computer; your prompts and images are not transmitted to, or processed on, any remote server.
# Features in the new v2 Version:
- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **New UI**: with a cleaner design
- Supports "*Text to Image*" and "*Image to Image*"
- **NSFW Setting**: A setting in the UI to control *NSFW content*
- **Use CPU setting**: Lets you run on your CPU, if you don't have a compatible graphics card.
- **Auto-updater**: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
![Screenshot](media/shot-v8.jpg?raw=true)
# System Requirements
1. Computer capable of running Stable Diffusion.
2. Linux or Windows 11 (with [WSL](https://docs.microsoft.com/en-us/windows/wsl/install)) or Windows 10 v2004+ (Build 19041+) with [WSL](https://docs.microsoft.com/en-us/windows/wsl/install).
3. Requires (a) [Docker](https://docs.docker.com/engine/install/), (b) [docker-compose v1.29](https://docs.docker.com/compose/install/), and (c) [nvidia-container-toolkit](https://stackoverflow.com/a/58432877).
1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
2. An NVIDIA graphics card, preferably with 6GB or more of VRAM. But if you don't have a compatible graphics card, you can still use it with a "Use CPU" setting. It'll be very slow, but it should still work.
**Important:** If you're using Windows, please install docker inside your [WSL](https://docs.microsoft.com/en-us/windows/wsl/install)'s Linux. Install docker for the Linux distro in your WSL. **Don't install Docker for Windows.**
You do not need anything else. You do not need WSL, Docker or Conda. The installer will take care of it.
# Installation
1. Clone this repository: `git clone https://github.com/cmdr2/stable-diffusion-ui.git` or [download the zip file](https://github.com/cmdr2/stable-diffusion-ui/archive/refs/heads/main.zip) and unzip.
2. Open your terminal, and in the project directory run: `docker-compose up &` (warning: this will take some time during the first run, since it'll download Stable Diffusion's [docker image](https://replicate.com/stability-ai/stable-diffusion), nearly 17 GiB)
3. Open http://localhost:9000 in your browser. That's it!
1. Download [for Windows](https://drive.google.com/file/d/1MY5gzsQHV_KREbYs3gw33QL4gGIlQRqj/view?usp=sharing) or [for Linux](https://drive.google.com/file/d/1Gwz1LVQUCart8HhCjrmXkS6TWKbTsLsR/view?usp=sharing) (this will be hosted on GitHub in the future).
If you're getting errors, please check the [Troubleshooting](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) page.
2. Extract:
- For Windows: After unzipping the file, please move the `stable-diffusion-ui` folder to the root of a drive, e.g. `C:\stable-diffusion-ui` (or `D:`). This avoids a common Windows problem with file-path length limits.
- For Linux: After extracting the .tar.xz file, please open a terminal, and go to the `stable-diffusion-ui` directory.
3. Run:
- For Windows: `Start Stable Diffusion UI.cmd` by double-clicking it.
- For Linux: In the terminal, run `./start.sh` (or `bash start.sh`)
This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.
To stop the server, please run `docker-compose down`
# Usage
Open http://localhost:9000 in your browser (after running `docker-compose up &` from step 2 previously).
Open http://localhost:9000 in your browser (after running step 3 previously).
## With a text description
1. Enter a text prompt, like `a photograph of an astronaut riding a horse` in the textbox.
@ -44,32 +47,32 @@ Open http://localhost:9000 in your browser (after running `docker-compose up &`
2. An optional text prompt can help you further describe the kind of image you want to generate.
3. Press `Make Image`. See the image generated using your prompt.
You can also set an `Image Mask` for telling Stable Diffusion to draw in only the black areas in your image mask. White areas in your mask will be ignored.
**Pro tip:** You can also click `Use as Input` on a generated image, to use it as the input image for your next generation. This can be useful for sequentially refining the generated image with a single click.
**Another tip:** Images with the same aspect ratio as your generated image work best. E.g. 1:1 if you're generating images sized 512x512.
## Problems?
Please [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues) if this did not work for you (after trying the common [troubleshooting](#troubleshooting) steps)!
Please ask on the new [discord server](https://discord.com/invite/u9yhsFmEkB), or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues) if this did not work for you (after trying the common [troubleshooting](#troubleshooting) steps)!
# Advanced Settings
You can also set configuration options like `seed`, `width`, `height`, `num_outputs`, `num_inference_steps` and `guidance_scale` using the 'show' button next to 'Advanced settings'.
Use the same `seed` number to get the same image for a certain prompt. This is useful for refining a prompt without losing the basic image design. Enable the `random images` checkbox to get random images.
![Screenshot of advanced settings](media/config-v4.jpg?raw=true)
![Screenshot of advanced settings](media/config-v5.jpg?raw=true)
# Troubleshooting
The [Troubleshooting wiki page](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) contains some common errors and their solutions. Please check that, and if it doesn't work, feel free to [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
The [Troubleshooting wiki page](https://github.com/cmdr2/stable-diffusion-ui/wiki/Troubleshooting) contains some common errors and their solutions. Please check that, and if it doesn't work, feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
# Behind the scenes
This project is a quick way to get started with Stable Diffusion. You do not need to have Stable Diffusion already installed, and do not need any API keys. This project will automatically download Stable Diffusion's docker image the first time it is run.
# What is this? Why no Docker?
This version is a 1-click installer. You don't need WSL or Docker or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all. Even if you don't have a compatible GPU, you can run it on your CPU (albeit very slowly).
This project runs Stable Diffusion in a docker container behind the scenes, using Stable Diffusion's [Docker image](https://replicate.com/stability-ai/stable-diffusion) on replicate.com.
It'll download the necessary files from the original [Stable Diffusion](https://github.com/CompVis/stable-diffusion) git repository, and set it up. It'll then start the browser-based interface like before.
The NSFW filter is temporarily disabled, so NSFW images are currently allowed; this helps people whose prompts were being incorrectly flagged by the filter.
# Bugs reports and code contributions welcome
If there are any problems or suggestions, please feel free to [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
If there are any problems or suggestions, please feel free to ask on the [discord server](https://discord.com/invite/u9yhsFmEkB) or [file an issue](https://github.com/cmdr2/stable-diffusion-ui/issues).
Also, please feel free to submit a pull request, if you have any code contributions in mind. Join the [discord server](https://discord.com/invite/u9yhsFmEkB) for development-related discussions, and for helping other users.
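
The Advanced Settings described above map directly onto the JSON fields the browser UI posts to the local server. As a minimal client sketch (assuming the v1-style `POST /image` endpoint defined in `main.py` further down in this diff, served on port 9000; the field names and defaults come from its `ImageRequest` model):

# Minimal client sketch for the local server (assumes the /image route from main.py).
import requests

req = {
    "prompt": "a photograph of an astronaut riding a horse",
    "num_outputs": "1",
    "num_inference_steps": "50",
    "guidance_scale": "7.5",
    "width": "512",
    "height": "512",
    "seed": "30000",  # reuse the same seed (with the same prompt) to reproduce an image
}

res = requests.post("http://localhost:9000/image", json=req)
res.raise_for_status()
print(res.json())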

39
build.bat Normal file

@ -0,0 +1,39 @@
@mkdir dist\stable-diffusion-ui
@echo "Downloading components for the installer.."
@call conda env create --prefix installer -f environment.yaml
@call conda activate .\installer
@echo "Setting up startup scripts.."
@mkdir installer\etc\conda\activate.d
@copy scripts\post_activate.bat installer\etc\conda\activate.d\
@echo "Creating a distributable package.."
@call conda install -c conda-forge -y conda-pack
@call conda pack --n-threads -1 --prefix installer --format tar
@cd dist\stable-diffusion-ui
@mkdir installer
@call tar -xf ..\..\installer.tar -C installer
@mkdir scripts
@copy ..\..\scripts\on_env_start.bat scripts\
@copy "..\..\scripts\Start Stable Diffusion UI.cmd" .
@copy ..\..\LICENSE .
@copy "..\..\CreativeML Open RAIL-M License" .
@copy "..\..\How to install and run.txt" .
@echo "Build ready. Zip the 'dist\stable-diffusion-ui' folder."
@echo "Cleaning up.."
@cd ..\..
@rmdir /s /q installer
@del installer.tar

39
build.sh Normal file

@ -0,0 +1,39 @@
#!/bin/bash
mkdir -p dist/stable-diffusion-ui
echo "Downloading components for the installer.."
source ~/miniconda3/etc/profile.d/conda.sh
conda install -c conda-forge -y conda-pack
conda env create --prefix installer -f environment.yaml
conda activate ./installer
echo "Creating a distributable package.."
conda pack --n-threads -1 --prefix installer --format tar
cd dist/stable-diffusion-ui
mkdir installer
tar -xf ../../installer.tar -C installer
mkdir scripts
cp ../../scripts/on_env_start.sh scripts/
cp "../../scripts/start.sh" .
cp ../../LICENSE .
cp "../../CreativeML Open RAIL-M License" .
cp "../../How to install and run.txt" .
echo "Build ready. Zip the 'dist/stable-diffusion-ui' folder."
echo "Cleaning up.."
cd ../..
rm -rf installer
rm installer.tar
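
Both build scripts drive the same conda-pack step. For reference, conda-pack also exposes this through its documented Python API; the following is an equivalent sketch of the packing step only, not part of the build scripts above:

# Sketch: the 'conda pack' step from build.sh, via conda-pack's Python API.
import conda_pack

# Pack the ./installer prefix into installer.tar; the build scripts then
# extract this archive into dist/stable-diffusion-ui/installer.
conda_pack.pack(prefix="installer", output="installer.tar")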

docker-compose.yml

@ -1,28 +0,0 @@
version: '3.3'

services:
  stability-ai:
    container_name: sd
    ports:
      - '5000:5000'
    image: 'r8.im/stability-ai/stable-diffusion@sha256:be04660a5b93ef2aff61e3668dedb4cbeb14941e62a3fd5998364a32d613e35e'
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]

  stable-diffusion-ui:
    container_name: sd-ui
    ports:
      - '9000:9000'
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    depends_on:
      - stability-ai

networks:
  default:

7
environment.yaml Normal file

@ -0,0 +1,7 @@
name: stable-diffusion-ui-installer

channels:
  - defaults
  - conda-forge

dependencies:
  - conda
  - git

73
main.py

@ -1,73 +0,0 @@
from fastapi import FastAPI, HTTPException
from starlette.responses import FileResponse
from pydantic import BaseModel
import requests

LOCAL_SERVER_URL = 'http://stability-ai:5000'
PREDICT_URL = LOCAL_SERVER_URL + '/predictions'

app = FastAPI()

# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
    prompt: str
    init_image: str = None # base64
    mask: str = None # base64
    num_outputs: str = "1"
    num_inference_steps: str = "50"
    guidance_scale: str = "7.5"
    width: str = "512"
    height: str = "512"
    seed: str = "30000"
    prompt_strength: str = "0.8"

@app.get('/')
def read_root():
    return FileResponse('index.html')

@app.get('/ping')
async def ping():
    try:
        requests.get(LOCAL_SERVER_URL)
        return {'OK'}
    except:
        return {'ERROR'}

@app.post('/image')
async def image(req : ImageRequest):
    data = {
        "input": {
            "prompt": req.prompt,
            "num_outputs": req.num_outputs,
            "num_inference_steps": req.num_inference_steps,
            "width": req.width,
            "height": req.height,
            "seed": req.seed,
            "guidance_scale": req.guidance_scale,
        }
    }

    if req.init_image is not None:
        data['input']['init_image'] = req.init_image
        data['input']['prompt_strength'] = req.prompt_strength

    if req.mask is not None:
        data['input']['mask'] = req.mask

    if req.seed == "-1":
        del data['input']['seed']

    res = requests.post(PREDICT_URL, json=data)
    if res.status_code != 200:
        raise HTTPException(status_code=500, detail=res.text)

    return res.json()

@app.get('/media/ding.mp3')
def read_root():
    return FileResponse('media/ding.mp3')

@app.get('/modifiers.json')
def read_modifiers():
    return FileResponse('modifiers.json')
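
The `/ping` route above is what the UI polls every few seconds to drive its server-status indicator (see `HEALTH_PING_INTERVAL` in `ui/index.html` later in this diff). A headless equivalent, as a sketch:

# Sketch: poll the /ping health endpoint the way the browser UI does.
import time
import requests

while True:
    try:
        res = requests.get("http://localhost:9000/ping", timeout=5)
        print("server online" if res.ok else "server offline")
    except requests.RequestException:
        print("server offline")
    time.sleep(5)  # the UI's HEALTH_PING_INTERVAL is 5 seconds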

media/config-v5.jpg Normal file (binary, 55 KiB; not shown)

modifiers.json

@ -1,92 +0,0 @@
[
["Drawing Style", [
"Sketch",
"Doodle",
"Children's Drawing",
"Line Art",
"Dot Art",
"Crosshatch",
"Detailed and Intricate",
"Cel Shading"
]],
["Visual Style", [
"2D",
"Cartoon",
"8-bit",
"16-bit",
"Graphic Novel",
"Visual Novel",
"Street Art",
"Fantasy",
"Realistic",
"Photo",
"Hard Edge Painting",
"Mural",
"Mosaic",
"Hydrodipped",
"Modern Art",
"Concept Art",
"Digital Art",
"CGI",
"Anaglyph",
"Comic Book",
"Lithography"
]],
["Pen", [
"Graphite",
"Colored Pencil",
"Ink",
"Chalk",
"Pastel Art",
"Oil Paint"
]],
["Carving and Etching", [
"Etching",
"Wood-Carving",
"Papercutting",
"Paper-Mache",
"Paper Model",
"Linocut",
"Pyrography"
]],
["Camera", [
"HD",
"Color Grading",
"Film Grain",
"White Balance",
"Golden Hour",
"Glamor Shot",
"War Photography",
"Lens Flare",
"Polaroid",
"Vintage"
]],
["Color", [
"Colorful",
"Electric Colors",
"Warm Color Palette",
"Infrared",
"Beautiful Lighting"
]],
["Emotions", [
"Happy",
"Excited",
"Sad",
"Lonely",
"Angry",
"Good",
"Evil"
]],
["Style of an artist or community", [
"by Andy Warhol",
"Artstation",
"by Asaf Hanuka",
"by Aubrey Beardsley",
"by H.R. Giger",
"by Hayao Mizaki",
"by Salvador Dali",
"by Tivadar Csontváry Kosztka",
"by Lisa Frank",
"by Pablo Piccaso"
]]
]

old_port_main.py

@ -1,36 +0,0 @@
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

@app.get('/', response_class=HTMLResponse)
def read_root():
    return '''
    <style>
        body {
            font-family: Arial;
            font-size: 11pt;
        }
        pre {
            display: inline;
            background: #aaa;
            padding: 2px;
            border: 1px solid #777;
            border-radius: 3px;
        }
        @media (prefers-color-scheme: dark) {
            body {
                background-color: rgb(32, 33, 36);
                color: #eee;
            }
            pre {
                background: #444;
            }
        }
    </style>

    <h4>The UI has moved to <a href="http://localhost:9000">http://localhost:9000</a>. The current address that you used (ending with :8000) will be removed in the future, so please use <a href="http://localhost:9000">http://localhost:9000</a> going ahead (and in any bookmarks you've saved).</h4>

    <h3>Why has the address changed?</h3>
    <p>The previously used port (8000) is often used by other servers, which results in port conflicts. So the project's port number has been changed, while the project is still young. Otherwise port-conflicts with 8000 will be a common source of new-user issues in the future.</p>
    <p>Sorry about this, and apologies for the inconvenience :)</p>
'''

requirements.txt

@ -1,3 +0,0 @@
requests
fastapi==0.80.0
uvicorn==0.18.2

scripts/Start Stable Diffusion UI.cmd Normal file

@ -0,0 +1 @@
installer\Scripts\activate.bat

30
scripts/on_env_start.bat Normal file

@ -0,0 +1,30 @@
@echo. & echo "Stable Diffusion UI" & echo.
@cd ..
@>nul grep -c "sd_ui_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
@echo "Stable Diffusion UI's git repository was already installed. Updating.."
@cd sd-ui-files
@call git reset --hard
@call git pull
@cd ..
) else (
@echo. & echo "Downloading Stable Diffusion UI.." & echo.
@call git clone https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files && (
@echo sd_ui_git_cloned >> scripts\install_status.txt
) || (
@echo "Error downloading Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
pause
@exit /b
)
)
@xcopy sd-ui-files\ui ui /s /i /Y
@xcopy sd-ui-files\scripts scripts /s /i /Y
@call scripts\on_sd_start.bat

28
scripts/on_env_start.sh Executable file

@ -0,0 +1,28 @@
printf "\n\nStable Diffusion UI\n\n"
if [ -f "scripts/install_status.txt" ] && [ `grep -c sd_ui_git_cloned scripts/install_status.txt` -gt "0" ]; then
echo "Stable Diffusion UI's git repository was already installed. Updating.."
cd sd-ui-files
git reset --hard
git pull
cd ..
else
printf "\n\nDownloading Stable Diffusion UI..\n\n"
if git clone https://github.com/cmdr2/stable-diffusion-ui.git sd-ui-files ; then
echo sd_ui_git_cloned >> scripts/install_status.txt
else
printf "\n\nError downloading Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
read -p "Press any key to continue"
exit
fi
fi
cp -Rf sd-ui-files/ui ui
cp -Rf sd-ui-files/scripts/* scripts/
cp "scripts/start.sh" .
./scripts/on_sd_start.sh

104
scripts/on_sd_start.bat Normal file

@ -0,0 +1,104 @@
@set cmd_had_error=F

@>nul grep -c "sd_git_cloned" scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
    @echo "Stable Diffusion's git repository was already installed. Updating.."
    @cd stable-diffusion
    @call git reset --hard
    @call git pull
    @cd ..
) else (
    @echo. & echo "Downloading Stable Diffusion.." & echo.

    @call git clone https://github.com/basujindal/stable-diffusion.git && (
        @echo sd_git_cloned >> scripts\install_status.txt
    ) || (
        @set cmd_had_error=T
    )

    if "%ERRORLEVEL%" NEQ "0" (
        @set cmd_had_error=T
    )

    if "%cmd_had_error%"=="T" (
        @echo "Error downloading Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
        pause
        @exit /b
    )
)

@cd stable-diffusion

@>nul grep -c "conda_sd_env_created" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
    @echo "Packages necessary for Stable Diffusion were already installed"
) else (
    @echo. & echo "Downloading packages necessary for Stable Diffusion.." & echo. & echo "***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** .." & echo.

    @rmdir /s /q .\env

    @call conda env create --prefix env -f environment.yaml && (
        @echo conda_sd_env_created >> ..\scripts\install_status.txt
    ) || (
        @set cmd_had_error=T
    )

    if "%cmd_had_error%"=="T" (
        echo "Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
        pause
        exit /b
    )
)

@call conda activate .\env

@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt
@if "%ERRORLEVEL%" EQU "0" (
    echo "Packages necessary for Stable Diffusion UI were already installed"
) else (
    @echo. & echo "Downloading packages necessary for Stable Diffusion UI.." & echo.

    @call conda install -c conda-forge -y --prefix env uvicorn fastapi && (
        @echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt
    ) || (
        @set cmd_had_error=T
    )

    if "%ERRORLEVEL%" NEQ "0" (
        @set cmd_had_error=T
    )

    if "%cmd_had_error%"=="T" (
        echo "Error installing the packages necessary for Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
        pause
        exit /b
    )
)

@if exist "sd-v1-4.ckpt" (
    echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
) else (
    @echo. & echo "Downloading data files (weights) for Stable Diffusion.." & echo.

    @call curl -L https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt

    @if not exist "sd-v1-4.ckpt" (
        echo "Error downloading the data files (weights) for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues"
        pause
        exit /b
    )

    @echo sd_weights_downloaded >> ..\scripts\install_status.txt
    @echo sd_install_complete >> ..\scripts\install_status.txt
)

@echo. & echo "Stable Diffusion is ready!" & echo.

@set SD_UI_PATH=%cd%\..\ui
@uvicorn server:app --app-dir "%SD_UI_PATH%" --port 9000 --host 0.0.0.0

@pause

80
scripts/on_sd_start.sh Executable file

@ -0,0 +1,80 @@
source installer/etc/profile.d/conda.sh
if [ `grep -c sd_git_cloned scripts/install_status.txt` -gt "0" ]; then
    echo "Stable Diffusion's git repository was already installed. Updating.."
    cd stable-diffusion
    git reset --hard
    git pull
    cd ..
else
    printf "\n\nDownloading Stable Diffusion..\n\n"

    if git clone https://github.com/basujindal/stable-diffusion.git ; then
        echo sd_git_cloned >> scripts/install_status.txt
    else
        printf "\n\nError downloading Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
        read -p "Press any key to continue"
        exit
    fi
fi

cd stable-diffusion

if [ `grep -c conda_sd_env_created ../scripts/install_status.txt` -gt "0" ]; then
    echo "Packages necessary for Stable Diffusion were already installed"
else
    printf "\n\nDownloading packages necessary for Stable Diffusion..\n"
    printf "\n\n***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..\n\n"

    if conda env create --prefix env --force -f environment.yaml ; then
        echo conda_sd_env_created >> ../scripts/install_status.txt
    else
        printf "\n\nError installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
        read -p "Press any key to continue"
        exit
    fi
fi

conda activate ./env

if [ `grep -c conda_sd_ui_deps_installed ../scripts/install_status.txt` -gt "0" ]; then
    echo "Packages necessary for Stable Diffusion UI were already installed"
else
    printf "\n\nDownloading packages necessary for Stable Diffusion UI..\n\n"

    if conda install -c conda-forge --prefix ./env -y uvicorn fastapi ; then
        echo conda_sd_ui_deps_installed >> ../scripts/install_status.txt
    else
        printf "\n\nError installing the packages necessary for Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
        read -p "Press any key to continue"
        exit
    fi
fi

if [ -f "sd-v1-4.ckpt" ]; then
    echo "Data files (weights) necessary for Stable Diffusion were already downloaded"
else
    echo "Downloading data files (weights) for Stable Diffusion.."

    curl -L https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt

    if [ ! -f "sd-v1-4.ckpt" ]; then
        printf "\n\nError downloading the data files (weights) for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\n\n"
        read -p "Press any key to continue"
        exit
    fi

    echo sd_weights_downloaded >> ../scripts/install_status.txt
    echo sd_install_complete >> ../scripts/install_status.txt
fi

printf "\n\nStable Diffusion is ready!\n\n"

export SD_UI_PATH=`pwd`/../ui
uvicorn server:app --app-dir "$SD_UI_PATH" --port 9000 --host 0.0.0.0

read -p "Press any key to continue"

scripts/post_activate.bat Normal file

@ -0,0 +1,6 @@
@call conda --version
@call git --version
cd %CONDA_PREFIX%\..\scripts
on_env_start.bat

10
scripts/post_activate.sh Executable file

@ -0,0 +1,10 @@
conda-unpack
source $CONDA_PREFIX/etc/profile.d/conda.sh
conda --version
git --version
cd $CONDA_PREFIX/../scripts
./on_env_start.sh

5
scripts/start.sh Normal file

@ -0,0 +1,5 @@
source installer/bin/activate
conda-unpack
scripts/on_env_start.sh


@ -0,0 +1,2 @@
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name LongPathsEnabled -Type DWord -Value 1
pause

3
server

@ -1,3 +0,0 @@
#!/bin/bash
echo "Please use 'docker-compose up &' to start the server, and 'docker-compose down' to stop the server"

ui/index.html

@ -237,9 +237,9 @@
    float: right;
}

#server-status-color {
    width: 10pt;
    height: 10pt;
    border-radius: 5pt;
    width: 8pt;
    height: 8pt;
    border-radius: 4pt;
    background-color: rgb(128, 87, 0);
    /* background-color: rgb(197, 1, 1); */
    float: left;
@ -248,6 +248,7 @@
#server-status-msg {
    color: rgb(128, 87, 0);
    padding-left: 2pt;
    font-size: 10pt;
}

#preview-prompt {
    font-size: 16pt;
@ -262,9 +263,9 @@
<div id="meta">
<div id="server-status">
<div id="server-status-color">&nbsp;</div>
<span id="server-status-msg">server starting..</span>
<span id="server-status-msg">Stable Diffusion is starting..</span>
</div>
<h1>Stable Diffusion UI <small>v1</small></h1>
<h1>Stable Diffusion UI <small>v2.05 (beta)</small></h1>
</div>
<div id="editor-inputs">
<div id="editor-inputs-prompt" class="row">
@ -278,14 +279,6 @@
<img id="init_image_preview" src="" width="100" height="100" />
<button id="init_image_clear" class="image_clear_btn">X</button>
</div>
<div id="mask_setting">
<label for="mask"><b>Image Mask:</b> (optional) </label> <input id="mask" name="mask" type="file" /> </button><br/>
<div id="mask_preview_container" class="image_preview_container">
<img id="mask_preview" src="" width="100" height="100" />
<button id="mask_clear" class="image_clear_btn">X</button>
</div>
</div>
</div>
<div id="editor-inputs-tags-container" class="row">
@ -303,14 +296,55 @@
<h4 class="collapsible">Advanced Settings</h4>
<ul id="editor-settings-entries" class="collapsible-content">
<li><label for="seed">Seed:</label> <input id="seed" name="seed" size="10" value="30000"> <input id="random_seed" name="random_seed" type="checkbox" checked> <label for="random_seed">Random Image</label></li>
<li><label for="num_outputs_total">Number of outputs:</label> <input id="num_outputs_total" name="num_outputs_total" value="1" size="4"> <label for="num_outputs_parallel">Generate in parallel:</label> <select id="num_outputs_parallel" name="num_outputs_parallel" value="1"><option value="1" selected>1 image at a time</option><option value="4">4 images at a time</option></select></li>
<li><label for="width">Width:</label> <select id="width" name="width" value="512"><option value="128">128</option><option value="256">256</option><option value="512" selected>512</option><option value="768">768</option><option value="1024">1024</option></select></li>
<li><label for="height">Height:</label> <select id="height" name="height" value="512"><option value="128">128</option><option value="256">256</option><option value="512" selected>512</option><option value="768">768</option></select></li>
<li><label for="num_outputs_total">Number of images to make:</label> <input id="num_outputs_total" name="num_outputs_total" value="1" size="4"> <label for="num_outputs_parallel">Generate in parallel:</label> <input id="num_outputs_parallel" name="num_outputs_parallel" value="1" size="4"> (images at once)</li>
<li><label for="width">Width:</label>
<select id="width" name="width" value="512">
<option value="128">128 (*)</option>
<option value="192">192</option>
<option value="256">256 (*)</option>
<option value="320">320</option>
<option value="384">384</option>
<option value="448">448</option>
<option value="512" selected>512 (*)</option>
<option value="576">576</option>
<option value="640">640</option>
<option value="704">704</option>
<option value="768">768 (*)</option>
<option value="832">832</option>
<option value="896">896</option>
<option value="960">960</option>
<option value="1024">1024 (*)</option>
</select>
</li>
<li><label for="height">Height:</label>
<select id="height" name="height" value="512">
<option value="128">128 (*)</option>
<option value="192">192</option>
<option value="256">256 (*)</option>
<option value="320">320</option>
<option value="384">384</option>
<option value="448">448</option>
<option value="512" selected>512 (*)</option>
<option value="576">576</option>
<option value="640">640</option>
<option value="704">704</option>
<option value="768">768 (*)</option>
<option value="832">832</option>
<option value="896">896</option>
<option value="960">960</option>
<option value="1024">1024 (*)</option>
</select>
</li>
<li><label for="num_inference_steps">Number of inference steps:</label> <input id="num_inference_steps" name="num_inference_steps" size="4" value="50"></li>
<li><label for="guidance_scale">Guidance Scale:</label> <input id="guidance_scale" name="guidance_scale" value="75" type="range" min="10" max="200"> <span id="guidance_scale_value"></span></li>
<li><span id="prompt_strength_container"><label for="prompt_strength">Prompt Strength:</label> <input id="prompt_strength" name="prompt_strength" value="8" type="range" min="0" max="10"> <span id="prompt_strength_value"></span><br/></span></li>
<li>&nbsp;</li>
<li><input id="save_to_disk" name="save_to_disk" type="checkbox"> <label for="save_to_disk">Automatically save to disk <span id="diskPath"></span></label></li>
<li><input id="sound_toggle" name="sound_toggle" type="checkbox" checked> <label for="sound_toggle">Play sound on task completion</label></li>
<li><input id="turbo" name="turbo" type="checkbox" checked> <label for="turbo">Turbo mode (generates images faster, but uses an additional 1 GB of GPU memory)</label></li>
<li><input id="use_cpu" name="use_cpu" type="checkbox"> <label for="use_cpu">Use CPU instead of GPU (warning: this will be *very* slow)</label></li>
<li><input id="use_full_precision" name="use_full_precision" type="checkbox"> <label for="use_full_precision">Use full precision (for GPU-only. warning: this will consume more VRAM. Use this for NVIDIA 1650 and 1660)</label></li>
<!-- <li><input id="allow_nsfw" name="allow_nsfw" type="checkbox"> <label for="allow_nsfw">Allow NSFW Content (You confirm you are above 18 years of age)</label></li> -->
</ul>
</div>
@ -322,7 +356,7 @@
</div>

<div id="preview" class="col-50">
    <div id="preview-prompt">Type a prompt and press the "Make Image" button.<br/><br/>You can also add modifiers like "Realistic", "Pencil Sketch", "ArtStation" etc by browsing through the "Image Modifiers" section and selecting the desired modifiers.<br/><br/>Click "Advanced Settings" for additional settings like seed, image size, number of images to generate etc.<br/><br/>Enjoy! :)</div>
    <div id="preview-prompt">Type a prompt and press the "Make Image" button.<br/><br/>You can set an "Initial Image" if you want to guide the AI.<br/><br/>You can also add modifiers like "Realistic", "Pencil Sketch", "ArtStation" etc by browsing through the "Image Modifiers" section and selecting the desired modifiers.<br/><br/>Click "Advanced Settings" for additional settings like seed, image size, number of images to generate etc.<br/><br/>Enjoy! :)</div>

    <div id="outputMsg"></div>

    <div id="current-images" class="img-preview">
@ -345,6 +379,9 @@
<script>
const SOUND_ENABLED_KEY = "soundEnabled"
const USE_CPU_KEY = "useCPU"
const USE_FULL_PRECISION_KEY = "useFullPrecision"
const USE_TURBO_MODE_KEY = "useTurboMode"
const HEALTH_PING_INTERVAL = 5 // seconds
let promptField = document.querySelector('#prompt')
@ -359,8 +396,13 @@ let widthField = document.querySelector('#width')
let heightField = document.querySelector('#height')
let initImageSelector = document.querySelector("#init_image")
let initImagePreview = document.querySelector("#init_image_preview")
let maskImageSelector = document.querySelector("#mask")
let maskImagePreview = document.querySelector("#mask_preview")
// let maskImageSelector = document.querySelector("#mask")
// let maskImagePreview = document.querySelector("#mask_preview")
let turboField = document.querySelector('#turbo')
let useCPUField = document.querySelector('#use_cpu')
let useFullPrecisionField = document.querySelector('#use_full_precision')
let saveToDiskField = document.querySelector('#save_to_disk')
// let allowNSFWField = document.querySelector("#allow_nsfw")
let promptStrengthField = document.querySelector('#prompt_strength')
let promptStrengthValueLabel = document.querySelector('#prompt_strength_value')
@ -371,9 +413,9 @@ let initImagePreviewContainer = document.querySelector('#init_image_preview_cont
let initImageClearBtn = document.querySelector('#init_image_clear')
let promptStrengthContainer = document.querySelector('#prompt_strength_container')
let maskSetting = document.querySelector('#mask_setting')
let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
let maskImageClearBtn = document.querySelector('#mask_clear')
// let maskSetting = document.querySelector('#mask_setting')
// let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
// let maskImageClearBtn = document.querySelector('#mask_clear')
let editorModifierEntries = document.querySelector('#editor-modifiers-entries')
let editorModifierTagsList = document.querySelector('#editor-inputs-tags-list')
@ -392,12 +434,46 @@ let serverStatusMsg = document.querySelector('#server-status-msg')
let serverStatus = 'offline'
let activeTags = []
let lastPromptUsed = ''
function getLocalStorageItem(key, fallback) {
    let item = localStorage.getItem(key)
    if (item === null) {
        return fallback
    }

    return item
}

function getLocalStorageBoolItem(key, fallback) {
    let item = localStorage.getItem(key)
    if (item === null) {
        return fallback
    }

    return (item === 'true' ? true : false)
}

function handleBoolSettingChange(key) {
    return function(e) {
        localStorage.setItem(key, e.target.checked.toString())
    }
}

function isSoundEnabled() {
    if (localStorage.getItem(SOUND_ENABLED_KEY) === 'false') {
        return false
    }
    return true
    return getLocalStorageBoolItem(SOUND_ENABLED_KEY, true)
}

function isUseCPUEnabled() {
    return getLocalStorageBoolItem(USE_CPU_KEY, false)
}

function isUseFullPrecisionEnabled() {
    return getLocalStorageBoolItem(USE_FULL_PRECISION_KEY, false)
}

function isUseTurboModeEnabled() {
    return getLocalStorageBoolItem(USE_TURBO_MODE_KEY, true)
}
function setStatus(statusType, msg, msgType) {
@ -409,12 +485,12 @@ function setStatus(statusType, msg, msgType) {
// msg = '<span style="color: red">' + msg + '<span>'
serverStatusColor.style.backgroundColor = 'red'
serverStatusMsg.style.color = 'red'
serverStatusMsg.innerHTML = 'server offline'
serverStatusMsg.innerHTML = 'Stable Diffusion has stopped'
} else if (msgType == 'success') {
// msg = '<span style="color: green">' + msg + '<span>'
serverStatusColor.style.backgroundColor = 'green'
serverStatusMsg.style.color = 'green'
serverStatusMsg.innerHTML = 'server online'
serverStatusMsg.innerHTML = 'Stable Diffusion is ready'
serverStatus = 'online'
}
}
@ -473,7 +549,7 @@ async function doMakeImage(reqBody) {
    if (res.status !== 'succeeded') {
        let msg = ''
        if (res.detail !== undefined) {
            msg = res.detail[0].msg + " in " + JSON.stringify(res.detail[0].loc)
            msg = res.detail
        } else {
            msg = res
        }
@ -490,11 +566,14 @@ async function doMakeImage(reqBody) {
        return false
    }

    lastPromptUsed = reqBody['prompt']

    for (let idx in res.output) {
        let imgBody = ''

        try {
            imgBody = res.output[idx]
            let imgData = res.output[idx]
            imgBody = imgData.data
        } catch (e) {
            console.log(imgBody)
            setStatus('request', 'invalid image', 'error')
@ -538,7 +617,7 @@ async function doMakeImage(reqBody) {
        initImagePreviewContainer.style.display = 'block'
        promptStrengthContainer.style.display = 'block'
        maskSetting.style.display = 'block'
        // maskSetting.style.display = 'block'

        randomSeedField.checked = false
        seedField.value = seed
@ -547,7 +626,7 @@ async function doMakeImage(reqBody) {
        imgSaveBtn.addEventListener('click', function() {
            let imgDownload = document.createElement('a')
            imgDownload.download = generateUUID() + '.png'
            imgDownload.download = createFileName();
            imgDownload.href = imgBody
            imgDownload.click()
        })
@ -598,16 +677,21 @@ async function makeImage() {
        num_inference_steps: numInferenceStepsField.value,
        guidance_scale: parseInt(guidanceScaleField.value) / 10,
        width: widthField.value,
        height: heightField.value
        height: heightField.value,
        // allow_nsfw: allowNSFWField.checked,
        save_to_disk: saveToDiskField.checked,
        turbo: turboField.checked,
        use_cpu: useCPUField.checked,
        use_full_precision: useFullPrecisionField.checked
    }

    if (imageRegex.test(initImagePreview.src)) {
        reqBody['init_image'] = initImagePreview.src
        reqBody['prompt_strength'] = parseInt(promptStrengthField.value) / 10

        if (imageRegex.test(maskImagePreview.src)) {
            reqBody['mask'] = maskImagePreview.src
        }
        // if (imageRegex.test(maskImagePreview.src)) {
        //     reqBody['mask'] = maskImagePreview.src
        // }
    }

    let time = new Date().getTime()
@ -647,38 +731,56 @@ async function makeImage() {
    }
}
function generateUUID() { // Public Domain/MIT
    var d = new Date().getTime();//Timestamp
    var d2 = ((typeof performance !== 'undefined') && performance.now && (performance.now()*1000)) || 0;//Time in microseconds since page-load or 0 if unsupported
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
        var r = Math.random() * 16;//random number between 0 and 16
        if(d > 0){//Use timestamp until depleted
            r = (d + r)%16 | 0;
            d = Math.floor(d/16);
        } else {//Use microseconds since page-load if supported
            r = (d2 + r)%16 | 0;
            d2 = Math.floor(d2/16);
        }
        return (c === 'x' ? r : (r & 0x3 | 0x8)).toString(16);
    });

// create a file name with embedded prompt and metadata
// for easier cataloging and comparison
function createFileName() {
    // Most important information is the prompt
    const underscoreName = lastPromptUsed.replace(/[^a-zA-Z0-9]/g, '_');
    const seed = seedField.value;
    const steps = numInferenceStepsField.value;
    const guidance = guidanceScaleField.value;

    // name and the top-level metadata
    let fileName = `sd_${underscoreName}_Seed-${seed}_Steps-${steps}_Guidance-${guidance}`;

    // add the tags
    // let tags = [];
    // let tagString = '';
    // document.querySelectorAll(modifyTagsSelector).forEach(function(tag) {
    //     tags.push(tag.innerHTML);
    // })

    // join the tags with a pipe
    // if (activeTags.length > 0) {
    //     tagString = '_Tags-';
    //     tagString += tags.join('|');
    // }

    // // append empty or populated tags
    // fileName += `${tagString}`;

    // add the file extension
    fileName += `.png`;

    return fileName;
}
function handleAudioEnabledChange(e) {
    localStorage.setItem(SOUND_ENABLED_KEY, e.target.checked.toString())
}

soundToggle.addEventListener('click', handleAudioEnabledChange)
soundToggle.checked = isSoundEnabled();
soundToggle.addEventListener('click', handleBoolSettingChange(SOUND_ENABLED_KEY))
soundToggle.checked = isSoundEnabled()

useCPUField.addEventListener('click', handleBoolSettingChange(USE_CPU_KEY))
useCPUField.checked = isUseCPUEnabled()

useFullPrecisionField.addEventListener('click', handleBoolSettingChange(USE_FULL_PRECISION_KEY))
useFullPrecisionField.checked = isUseFullPrecisionEnabled()

turboField.addEventListener('click', handleBoolSettingChange(USE_TURBO_MODE_KEY))
turboField.checked = isUseTurboModeEnabled()
makeImageBtn.addEventListener('click', makeImage)
// configBox.style.display = 'none'
// showConfigToggle.addEventListener('click', function() {
//     configBox.style.display = (configBox.style.display === 'none' ? 'block' : 'none')
//     showConfigToggle.innerHTML = (configBox.style.display === 'none' ? 'show' : 'hide')
//     return false
// })
function updateGuidanceScale() {
    guidanceScaleValueLabel.innerHTML = guidanceScaleField.value / 10
@ -709,7 +811,7 @@ function showInitImagePreview() {
    if (initImageSelector.files.length === 0) {
        initImagePreviewContainer.style.display = 'none'
        promptStrengthContainer.style.display = 'none'
        maskSetting.style.display = 'none'
        // maskSetting.style.display = 'none'
        return
    }
@ -722,7 +824,7 @@ function showInitImagePreview() {
        initImagePreviewContainer.style.display = 'block'
        promptStrengthContainer.style.display = 'block'
        maskSetting.style.display = 'block'
        // maskSetting.style.display = 'block'
    })
if (file) {
@ -734,45 +836,45 @@ showInitImagePreview()
initImageClearBtn.addEventListener('click', function() {
    initImageSelector.value = null
    maskImageSelector.value = null
    // maskImageSelector.value = null

    initImagePreview.src = ''
    maskImagePreview.src = ''
    // maskImagePreview.src = ''

    initImagePreviewContainer.style.display = 'none'
    maskImagePreviewContainer.style.display = 'none'
    // maskImagePreviewContainer.style.display = 'none'

    maskSetting.style.display = 'none'
    // maskSetting.style.display = 'none'

    promptStrengthContainer.style.display = 'none'
})
function showMaskImagePreview() {
    if (maskImageSelector.files.length === 0) {
        maskImagePreviewContainer.style.display = 'none'
        return
    }
// function showMaskImagePreview() {
//     if (maskImageSelector.files.length === 0) {
//         maskImagePreviewContainer.style.display = 'none'
//         return
//     }

    let reader = new FileReader()
    let file = maskImageSelector.files[0]
//     let reader = new FileReader()
//     let file = maskImageSelector.files[0]

    reader.addEventListener('load', function() {
        maskImagePreview.src = reader.result
        maskImagePreviewContainer.style.display = 'block'
    })
//     reader.addEventListener('load', function() {
//         maskImagePreview.src = reader.result
//         maskImagePreviewContainer.style.display = 'block'
//     })

    if (file) {
        reader.readAsDataURL(file)
    }
}

maskImageSelector.addEventListener('change', showMaskImagePreview)
showMaskImagePreview()
//     if (file) {
//         reader.readAsDataURL(file)
//     }
// }
// maskImageSelector.addEventListener('change', showMaskImagePreview)
// showMaskImagePreview()

maskImageClearBtn.addEventListener('click', function() {
    maskImageSelector.value = null
    maskImagePreview.src = ''
    maskImagePreviewContainer.style.display = 'none'
})
// maskImageClearBtn.addEventListener('click', function() {
//     maskImageSelector.value = null
//     maskImagePreview.src = ''
//     maskImagePreviewContainer.style.display = 'none'
// })
</script>
<script>
function createCollapsibles(node) {
@ -833,6 +935,20 @@ function refreshTagsList() {
    editorModifierTagsList.appendChild(brk)
}

async function getDiskPath() {
    try {
        let res = await fetch('/output_dir')
        if (res.status === 200) {
            res = await res.json()
            res = res[0]

            document.querySelector('#diskPath').innerHTML = '(to ' + res + ')'
        }
    } catch (e) {
        console.log('error fetching output dir path', e)
    }
}
async function loadModifiers() {
    try {
        let res = await fetch('/modifiers.json')
@ -886,6 +1002,7 @@ async function loadModifiers() {
async function init() {
    await loadModifiers()
    await getDiskPath()

    setInterval(healthCheck, HEALTH_PING_INTERVAL * 1000)
    healthCheck()
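
`createFileName()` above embeds the same metadata that `ui/sd_internal/runtime.py` (below) writes when saving to disk: `sd_<prompt>_Seed-<seed>_Steps-<steps>_Guidance-<guidance>`. A sketch of recovering that metadata from a filename (the regex is an assumption derived from the two generators in this diff):

# Sketch: parse metadata back out of the generated filenames.
import re

PATTERN = re.compile(
    r"^sd_(?P<prompt>.+)_Seed-(?P<seed>\d+)_Steps-(?P<steps>\d+)_Guidance-(?P<guidance>\d+(?:\.\d+)?)"
)

def parse_filename(name):
    m = PATTERN.match(name)
    return m.groupdict() if m else {}

print(parse_filename("sd_an_astronaut_Seed-30000_Steps-50_Guidance-7.5.png"))
# -> {'prompt': 'an_astronaut', 'seed': '30000', 'steps': '50', 'guidance': '7.5'}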

ui/media/ding.mp3 Normal file (binary; not shown)

140
ui/modifiers.json Normal file

@ -0,0 +1,140 @@
[
[
"Drawing Style",
[
"Cel Shading",
"Children's Drawing",
"Crosshatch",
"Detailed and Intricate",
"Doodle",
"Dot Art",
"Line Art",
"Sketch"
]
],
[
"Visual Style",
[
"2D",
"8-bit",
"16-bit",
"Anaglyph",
"Anime",
"Cartoon",
"CGI",
"Comic Book",
"Concept Art",
"Digital Art",
"Fantasy",
"Graphic Novel",
"Hard Edge Painting",
"Hydrodipped",
"Lithography",
"Modern Art",
"Mosaic",
"Mural",
"Photo",
"Realistic",
"Street Art",
"Visual Novel",
"Watercolor"
]
],
[
"Pen",
[
"Chalk",
"Colored Pencil",
"Graphite",
"Ink",
"Oil Paint",
"Pastel Art"
]
],
[
"Carving and Etching",
[
"Etching",
"Linocut",
"Paper Model",
"Paper-Mache",
"Papercutting",
"Pyrography",
"Wood-Carving"
]
],
[
"Camera",
[
"Aerial View",
"Cinematic",
"Color Grading",
"Dramatic",
"Film Grain",
"Fisheye Lens",
"Glamor Shot",
"Golden Hour",
"HD",
"Lens Flare",
"Macro",
"Polaroid",
"Vintage",
"War Photography",
"White Balance",
"Wildlife Photography"
]
],
[
"Color",
[
"Beautiful Lighting",
"Colorful",
"Electric Colors",
"Infrared",
"Synthwave",
"Warm Color Palette"
]
],
[
"Emotions",
[
"Angry",
"Evil",
"Excited",
"Good",
"Happy",
"Lonely",
"Sad"
]
],
[
"Style of an artist or community",
[
"Artstation",
"by Agnes Lawrence Pelton",
"by Akihito Yoshida",
"by Andy Warhol",
"by Asaf Hanuka",
"by Aubrey Beardsley",
"by Ben Enwonwu",
"by Caravaggio Michelangelo Merisi",
"by David Mann",
"by Frida Kahlo",
"by H.R. Giger",
"by Hayao Mizaki",
"by Ivan Shishkin",
"by Johannes Vermeer",
"by Katsushika Hokusai",
"by Ko Young Hoon",
"by Leonardo Da Vinci",
"by Lisa Frank",
"by Mahmoud Sai",
"by Mark Brooks",
"by Pablo Picasso",
"by Richard Dadd",
"by Salvador Dali",
"by Tivadar Csontváry Kosztka",
"by Yoshitaka Amano"
]
]
]
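
The server exposes this file at `/modifiers.json`, and the UI appends whichever tags the user selects to the prompt before submitting it. The same idea outside the browser, as a sketch (the exact way the UI joins prompt and tags is in its JS; the comma-join here is an assumption):

# Sketch: combine a base prompt with modifier tags from ui/modifiers.json.
import json

with open("ui/modifiers.json") as f:
    groups = json.load(f)  # a list of [group_name, [tag, tag, ...]] pairs

tags_by_group = dict((name, tags) for name, tags in groups)

prompt = "a photograph of an astronaut riding a horse"
chosen = ["Realistic", "Golden Hour", "by Katsushika Hokusai"]

print(", ".join([prompt] + chosen))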

ui/sd_internal/__init__.py Normal file

@ -0,0 +1,62 @@
import json

class Request:
    prompt: str = ""
    init_image: str = None # base64
    mask: str = None # base64
    num_outputs: int = 1
    num_inference_steps: int = 50
    guidance_scale: float = 7.5
    width: int = 512
    height: int = 512
    seed: int = 42
    prompt_strength: float = 0.8
    # allow_nsfw: bool = False
    precision: str = "autocast" # or "full"
    save_to_disk_path: str = None
    turbo: bool = True
    use_cpu: bool = False
    use_full_precision: bool = False

    def to_string(self):
        return f'''
    prompt: {self.prompt}
    seed: {self.seed}
    num_inference_steps: {self.num_inference_steps}
    guidance_scale: {self.guidance_scale}
    w: {self.width}
    h: {self.height}
    precision: {self.precision}
    save_to_disk_path: {self.save_to_disk_path}
    turbo: {self.turbo}
    use_cpu: {self.use_cpu}
    use_full_precision: {self.use_full_precision}'''

class Image:
    data: str # base64
    seed: int
    is_nsfw: bool

    def __init__(self, data, seed):
        self.data = data
        self.seed = seed

    def json(self):
        return {
            "data": self.data,
            "seed": self.seed,
        }

class Response:
    images: list

    def json(self):
        res = {
            "status": 'succeeded',
            "output": [],
        }

        for image in self.images:
            res["output"].append(image.json())

        return res
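
For orientation, `runtime.py` (next in this diff) fills a `Request`, generates images, and returns a `Response` whose `json()` payload is what the UI reads its `data` and `seed` fields from. A usage sketch of the classes above (the import path assumes the `ui/sd_internal` package layout shown in this diff):

# Sketch: how the Request/Response/Image classes above fit together.
from sd_internal import Request, Response, Image  # assumes ui/ is on the path

req = Request()
req.prompt = "a photograph of an astronaut riding a horse"
req.seed = 42

res = Response()
res.images = [Image(data="data:image/png;base64,...", seed=req.seed)]

print(res.json())  # {'status': 'succeeded', 'output': [{'data': '...', 'seed': 42}]}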

359
ui/sd_internal/runtime.py Normal file

@ -0,0 +1,359 @@
import os, re
import torch
import numpy as np
from omegaconf import OmegaConf
from PIL import Image
from tqdm import tqdm, trange
from itertools import islice
from einops import rearrange
import time
from pytorch_lightning import seed_everything
from torch import autocast
from contextlib import nullcontext
from einops import rearrange, repeat
from ldm.util import instantiate_from_config
from optimizedSD.optimUtils import split_weighted_subprompts
from transformers import logging
import uuid
logging.set_verbosity_error()
# consts
config_yaml = "optimizedSD/v1-inference.yaml"
# api stuff
from . import Request, Response, Image as ResponseImage
import base64
from io import BytesIO
# local
session_id = str(uuid.uuid4())
ckpt = None
model = None
modelCS = None
modelFS = None
model_is_half = False
model_fs_is_half = False
device = None
unet_bs = 1
precision = 'autocast'
sampler_plms = None
sampler_ddim = None
# api
def load_model(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_use=1, precision_to_use='autocast', half_model_fs=False):
global ckpt, model, modelCS, modelFS, model_is_half, device, unet_bs, precision, model_fs_is_half
ckpt = ckpt_to_use
device = device_to_use
precision = precision_to_use
unet_bs = unet_bs_to_use
sd = load_model_from_config(f"{ckpt}")
li, lo = [], []
for key, value in sd.items():
sp = key.split(".")
if (sp[0]) == "model":
if "input_blocks" in sp:
li.append(key)
elif "middle_block" in sp:
li.append(key)
elif "time_embed" in sp:
li.append(key)
else:
lo.append(key)
for key in li:
sd["model1." + key[6:]] = sd.pop(key)
for key in lo:
sd["model2." + key[6:]] = sd.pop(key)
config = OmegaConf.load(f"{config_yaml}")
model = instantiate_from_config(config.modelUNet)
_, _ = model.load_state_dict(sd, strict=False)
model.eval()
model.cdevice = device
model.unet_bs = unet_bs
model.turbo = turbo
modelCS = instantiate_from_config(config.modelCondStage)
_, _ = modelCS.load_state_dict(sd, strict=False)
modelCS.eval()
modelCS.cond_stage_model.device = device
modelFS = instantiate_from_config(config.modelFirstStage)
_, _ = modelFS.load_state_dict(sd, strict=False)
modelFS.eval()
del sd
if device != "cpu" and precision == "autocast":
model.half()
modelCS.half()
model_is_half = True
else:
model_is_half = False
if half_model_fs:
modelFS.half()
model_fs_is_half = True
else:
model_fs_is_half = False
def mk_img(req: Request):
global modelFS, device
res = Response()
res.images = []
model.turbo = req.turbo
if req.use_cpu:
device = 'cpu'
if model_is_half:
print('reloading model for cpu')
load_model(ckpt, device)
else:
device = 'cuda'
if (precision == 'autocast' and (req.use_full_precision or not model_is_half)) or \
(precision == 'full' and not req.use_full_precision) or \
(req.init_image is None and model_fs_is_half) or \
(req.init_image is not None and not model_fs_is_half):
print('reloading model for cuda')
load_model(ckpt, device, model.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'), half_model_fs=(req.init_image is not None and not req.use_full_precision))
model.cdevice = device
modelCS.cond_stage_model.device = device
opt_prompt = req.prompt
opt_seed = req.seed
opt_n_samples = req.num_outputs
opt_n_iter = 1
opt_scale = req.guidance_scale
opt_C = 4
opt_H = req.height
opt_W = req.width
opt_f = 8
opt_ddim_steps = req.num_inference_steps
opt_ddim_eta = 0.0
opt_strength = req.prompt_strength
opt_save_to_disk_path = req.save_to_disk_path
opt_init_img = req.init_image
opt_format = 'png'
print(req.to_string(), '\n device', device)
seed_everything(opt_seed)
batch_size = opt_n_samples
prompt = opt_prompt
assert prompt is not None
data = [batch_size * [prompt]]
if precision == "autocast" and device != "cpu":
precision_scope = autocast
else:
precision_scope = nullcontext
    if req.init_image is None:
        handler = _txt2img

        init_latent = None
        t_enc = None
    else:
        handler = _img2img

        init_image = load_img(req.init_image)
        init_image = init_image.to(device)

        if device != "cpu" and precision == "autocast":
            init_image = init_image.half()

        modelFS.to(device)

        init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
        init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image))  # move to latent space

        if device != "cpu":
            mem = torch.cuda.memory_allocated() / 1e6
            modelFS.to("cpu")
            while torch.cuda.memory_allocated() / 1e6 >= mem:
                time.sleep(1)

        assert 0. <= opt_strength <= 1., 'can only work with strength in [0.0, 1.0]'
        t_enc = int(opt_strength * opt_ddim_steps)
        print(f"target t_enc is {t_enc} steps")

    if opt_save_to_disk_path is not None:
        session_out_path = os.path.join(opt_save_to_disk_path, 'session-' + session_id)
        os.makedirs(session_out_path, exist_ok=True)
    else:
        session_out_path = None
seeds = ""
with torch.no_grad():
for n in trange(opt_n_iter, desc="Sampling"):
for prompts in tqdm(data, desc="data"):
if opt_save_to_disk_path is not None:
base_count = len(os.listdir(session_out_path))
with precision_scope("cuda"):
modelCS.to(device)
uc = None
if opt_scale != 1.0:
uc = modelCS.get_learned_conditioning(batch_size * [""])
if isinstance(prompts, tuple):
prompts = list(prompts)
subprompts, weights = split_weighted_subprompts(prompts[0])
if len(subprompts) > 1:
c = torch.zeros_like(uc)
totalWeight = sum(weights)
# normalize each "sub prompt" and add it
for i in range(len(subprompts)):
weight = weights[i]
# if not skip_normalize:
weight = weight / totalWeight
c = torch.add(c, modelCS.get_learned_conditioning(subprompts[i]), alpha=weight)
else:
c = modelCS.get_learned_conditioning(prompts)
# run the handler
if handler == _txt2img:
x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed)
else:
x_samples = _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed)
modelFS.to(device)
print("saving images")
for i in range(batch_size):
x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
img = Image.fromarray(x_sample.astype(np.uint8))
img_data = img_to_base64_str(img)
res.images.append(ResponseImage(data=img_data, seed=opt_seed))
if opt_save_to_disk_path is not None:
prompt_flattened = "_".join(re.split(":| ", prompts[0]))
prompt_flattened = prompt_flattened[:150]
file_path = f"sd_{prompt_flattened}_Seed-{opt_seed}_Steps-{opt_ddim_steps}_Guidance-{opt_scale}_{base_count:05}"
img_out_path = os.path.join(session_out_path, f"{file_path}.{opt_format}")
meta_out_path = os.path.join(session_out_path, f"{file_path}.txt")
metadata = f"{prompts[0]}\nSeed: {opt_seed}\nSteps: {opt_ddim_steps}\nGuidance Scale: {opt_scale}"
img.save(img_out_path)
with open(meta_out_path, 'w') as f:
f.write(metadata)
base_count += 1
seeds += str(opt_seed) + ","
opt_seed += 1
if device != "cpu":
mem = torch.cuda.memory_allocated() / 1e6
modelFS.to("cpu")
while torch.cuda.memory_allocated() / 1e6 >= mem:
time.sleep(1)
del x_samples
print("memory_final = ", torch.cuda.memory_allocated() / 1e6)
return res
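# Sketch of a direct call, bypassing HTTP (illustrative; assumes the Request
# class from sd_internal provides usable defaults for the fields not set here):
#
#   req = Request()
#   req.prompt = 'a photograph of an astronaut riding a horse'
#   req.seed = 42
#   res = mk_img(req)
#   png_b64 = res.images[0].data   # base64 PNG; its seed is res.images[0].seed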
def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed):
    shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]

    if device != "cpu":
        mem = torch.cuda.memory_allocated() / 1e6
        modelCS.to("cpu")
        while torch.cuda.memory_allocated() / 1e6 >= mem:
            time.sleep(1)

    samples_ddim = model.sample(
        S=opt_ddim_steps,
        conditioning=c,
        seed=opt_seed,
        shape=shape,
        verbose=False,
        unconditional_guidance_scale=opt_scale,
        unconditional_conditioning=uc,
        eta=opt_ddim_eta,
        x_T=start_code,
        sampler='plms',
    )

    return samples_ddim
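# The latent shape above is [batch, C, H/f, W/f]: with the values mk_img uses
# (opt_C=4, opt_f=8), a single 512x512 request is sampled as a 1x4x64x64 latent.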
def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed):
    # encode (scaled latent)
    z_enc = model.stochastic_encode(
        init_latent,
        torch.tensor([t_enc] * batch_size).to(device),
        opt_seed,
        opt_ddim_eta,
        opt_ddim_steps,
    )

    # decode it
    samples_ddim = model.sample(
        t_enc,
        c,
        z_enc,
        unconditional_guidance_scale=opt_scale,
        unconditional_conditioning=uc,
        sampler='ddim'
    )

    return samples_ddim
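# img2img noises the init latent for t_enc of the opt_ddim_steps steps, then
# denoises from there. With the server defaults (prompt_strength 0.8, 50 steps),
# t_enc = int(0.8 * 50) = 40, so 40 of 50 steps are re-run on the input image.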
# internal
def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
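# e.g. list(chunk([1, 2, 3, 4, 5], 2)) == [(1, 2), (3, 4), (5,)]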
def load_model_from_config(ckpt, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    return sd
# utils
def load_img(img_str):
    image = base64_str_to_img(img_str).convert("RGB")
    w, h = image.size
    print(f"loaded input image of size ({w}, {h}) from base64")
    w, h = map(lambda x: x - x % 64, (w, h))  # resize to integer multiple of 64
    image = image.resize((w, h), resample=Image.LANCZOS)
    image = np.array(image).astype(np.float32) / 255.0
    image = image[None].transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return 2. * image - 1.
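# Dimensions are rounded *down* to multiples of 64, and pixel values are mapped
# from [0, 1] to the [-1, 1] range the first-stage model expects: e.g. a
# 513x769 input becomes a tensor of shape (1, 3, 768, 512).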
# https://stackoverflow.com/a/61114178
def img_to_base64_str(img):
    buffered = BytesIO()
    img.save(buffered, format="PNG")
    buffered.seek(0)
    img_byte = buffered.getvalue()
    img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode()
    return img_str

def base64_str_to_img(img_str):
    img_str = img_str[len("data:image/png;base64,"):]
    data = base64.b64decode(img_str)
    buffered = BytesIO(data)
    img = Image.open(buffered)
    return img
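# These two helpers are inverses of each other, modulo PNG re-encoding:
# base64_str_to_img(img_to_base64_str(img)) yields an equivalent PIL image,
# and the "data:image/png;base64," prefix lets the string be used directly
# as an <img> src in the browser.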

ui/server.py (new file, 118 lines)
@@ -0,0 +1,118 @@
import traceback
import sys
import os
SCRIPT_DIR = os.getcwd()
print('started in ', SCRIPT_DIR)
SD_UI_DIR = os.getenv('SD_UI_PATH', None)
sys.path.append(os.path.dirname(SD_UI_DIR))
OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
from fastapi import FastAPI, HTTPException
from starlette.responses import FileResponse
from pydantic import BaseModel
from sd_internal import Request, Response
app = FastAPI()
model_loaded = False
model_is_loading = False
modifiers_cache = None
outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)
# defaults from https://huggingface.co/blog/stable_diffusion
class ImageRequest(BaseModel):
    prompt: str = ""
    init_image: str = None  # base64
    mask: str = None  # base64
    num_outputs: int = 1
    num_inference_steps: int = 50
    guidance_scale: float = 7.5
    width: int = 512
    height: int = 512
    seed: int = 42
    prompt_strength: float = 0.8
    # allow_nsfw: bool = False
    save_to_disk: bool = False
    turbo: bool = True
    use_cpu: bool = False
    use_full_precision: bool = False
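# A minimal request body; every field not supplied falls back to the defaults
# above, e.g.:
#   {"prompt": "an oil painting of a lighthouse", "seed": 123, "save_to_disk": true}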
@app.get('/')
def read_root():
    return FileResponse(os.path.join(SD_UI_DIR, 'index.html'))
@app.get('/ping')
async def ping():
    global model_loaded, model_is_loading

    try:
        if model_loaded:
            return {'OK'}

        if model_is_loading:
            return {'ERROR'}

        model_is_loading = True

        from sd_internal import runtime
        runtime.load_model(ckpt_to_use="sd-v1-4.ckpt")

        model_loaded = True
        model_is_loading = False

        return {'OK'}
    except Exception as e:
        print(traceback.format_exc())
        # HTTPException must be raised, not returned, for FastAPI to send a 500
        raise HTTPException(status_code=500, detail=str(e))
@app.post('/image')
async def image(req: ImageRequest):
    from sd_internal import runtime

    r = Request()
    r.prompt = req.prompt
    r.init_image = req.init_image
    r.mask = req.mask
    r.num_outputs = req.num_outputs
    r.num_inference_steps = req.num_inference_steps
    r.guidance_scale = req.guidance_scale
    r.width = req.width
    r.height = req.height
    r.seed = req.seed
    r.prompt_strength = req.prompt_strength
    # r.allow_nsfw = req.allow_nsfw
    r.turbo = req.turbo
    r.use_cpu = req.use_cpu
    r.use_full_precision = req.use_full_precision

    if req.save_to_disk:
        r.save_to_disk_path = outpath

    try:
        res: Response = runtime.mk_img(r)
        return res.json()
    except Exception as e:
        print(traceback.format_exc())
        # HTTPException must be raised, not returned, for FastAPI to send a 500
        raise HTTPException(status_code=500, detail=str(e))
@app.get('/media/ding.mp3')
def read_ding():
    return FileResponse(os.path.join(SD_UI_DIR, 'media/ding.mp3'))

@app.get('/modifiers.json')
def read_modifiers():
    return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'))

@app.get('/output_dir')
def read_home_dir():
    return {outpath}
# start the browser ui
import webbrowser; webbrowser.open('http://localhost:9000')
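# Example client (an illustrative sketch, not part of this file; assumes the
# `requests` package and the server running on its default port 9000):
#
#   import requests
#   payload = {'prompt': 'an astronaut riding a horse', 'seed': 42}
#   res = requests.post('http://localhost:9000/image', json=payload)
#   # the JSON body carries the generated images as base64 PNG strings
#   # (see Response / ResponseImage in sd_internal)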