
Stable Diffusion UI

A simple way to install and use Stable Diffusion on your own computer


What does this do?

Two things:

  1. Automatically downloads and installs Stable Diffusion on your own computer (no need to mess with conda or environments)
  2. Gives you a simple browser-based UI to talk to your local Stable Diffusion. Enter text prompts and view the generated image. No API keys required.

All the processing happens locally on your own computer; your prompts and images are not transmitted to, or processed on, any remote server.

System Requirements

  1. Computer capable of running Stable Diffusion.
  2. Linux, or Windows 11 / Windows 10 v2004+ (Build 19041+) with WSL.
  3. Requires (a) Docker, (b) docker-compose v1.29, and (c) nvidia-container-toolkit. (A quick way to verify these is sketched below.)

Important: If you're using Windows, install Docker inside your WSL Linux distro, i.e. Docker for the Linux distribution running in WSL. Don't install Docker for Windows.
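
A quick way to sanity-check requirement 3 from a terminal is sketched below. The nvidia/cuda image tag is only an example base image for testing nvidia-container-toolkit, not something this project uses directly:

    # Check that Docker and docker-compose are installed
    docker --version
    docker-compose --version

    # Check that nvidia-container-toolkit can expose your GPU to containers
    # (the CUDA image tag is just an example for this test)
    docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi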

Installation

  1. Clone this repository: git clone https://github.com/cmdr2/stable-diffusion-ui.git (or download and unzip the zip file).
  2. Open a terminal in the project directory and run: docker-compose up & (warning: the first run will take some time, since it downloads Stable Diffusion's docker image, which is nearly 17 GiB)
  3. Open http://localhost:9000 in your browser. That's it!
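
Put together, a typical first run looks roughly like this (stable-diffusion-ui is simply git's default clone directory name):

    # 1. Clone the repository (or download and unzip the zip file)
    git clone https://github.com/cmdr2/stable-diffusion-ui.git
    cd stable-diffusion-ui

    # 2. Start the containers; the first run downloads a docker image of nearly 17 GiB
    docker-compose up &

    # 3. Once the containers are up, open http://localhost:9000 in your browser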

If you're getting errors, please check the Troubleshooting page.

To stop the server, please run docker-compose down
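
If something isn't working, the standard docker-compose commands are handy for checking on or stopping the server; these are generic docker-compose commands, not specific to this project:

    # See whether the containers are running
    docker-compose ps

    # Follow the server logs (useful when troubleshooting errors)
    docker-compose logs -f

    # Stop the server
    docker-compose down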

Usage

Open http://localhost:9000 in your browser (after running docker-compose up & from step 2 previously).

With a text description

  1. Enter a text prompt, like a photograph of an astronaut riding a horse in the textbox.
  2. Press Make Image. This will take some time, depending on your system's processing power.
  3. See the image generated using your prompt.

With an image

  1. Click Browse.. next to Initial Image. Select your desired image.
  2. An optional text prompt can help you further describe the kind of image you want to generate.
  3. Press Make Image. See the image generated using your prompt.

You can also set an Image Mask to tell Stable Diffusion to draw only in the black areas of your image mask. White areas in your mask will be ignored.

Pro tip: You can also click Use as Input on a generated image, to use it as the input image for your next generation. This can be useful for sequentially refining the generated image with a single click.

Another tip: Input images with the same aspect ratio as your generated image work best, e.g. 1:1 if you're generating 512x512 images.
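
If your input image has a different aspect ratio, you can crop or resize it beforehand in any image editor. As one option (ImageMagick is not part of this project, just a convenient tool), this command resizes an image to fill 512x512 and then center-crops it:

    # Resize to fill 512x512, then center-crop the overflow (requires ImageMagick)
    convert input.jpg -resize 512x512^ -gravity center -extent 512x512 input-512.jpg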

Problems?

Please file an issue if this did not work for you (after trying the common troubleshooting steps)!

Advanced Settings

You can also set configuration options like seed, width, height, num_outputs, num_inference_steps and guidance_scale using the 'show' button next to 'Advanced settings'.

Use the same seed number to get the same image for a given prompt. This is useful for refining a prompt without losing the overall composition. Enable the random images checkbox to generate a different image each time.

Screenshot of advanced settings

Troubleshooting

The Troubleshooting wiki page contains some common errors and their solutions. Please check that page first, and if it doesn't solve your problem, feel free to file an issue.

Behind the scenes

This project is a quick way to get started with Stable Diffusion. You do not need to have Stable Diffusion already installed, and do not need any API keys. This project will automatically download Stable Diffusion's docker image, the first time it is run.

This project runs Stable Diffusion in a docker container behind the scenes, using Stable Diffusion's Docker image on replicate.com.
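
Conceptually, this is a two-service docker-compose setup: a Stable Diffusion container with GPU access, and a small web server container that serves the browser UI on port 9000 and passes your generation requests to it. The sketch below is illustrative only; the service and image names are placeholders, and the docker-compose.yml in this repository is the source of truth:

    # Illustrative sketch only -- not this project's actual docker-compose.yml
    version: "3.8"
    services:
      stable-diffusion:                  # placeholder name for the AI container
        image: <stable-diffusion-image>  # placeholder for the replicate.com docker image
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia         # requires nvidia-container-toolkit
                  count: all
                  capabilities: [gpu]
      ui:                                # placeholder name for the browser UI container
        build: .
        ports:
          - "9000:9000"                  # the UI is served at http://localhost:9000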

Bug reports and code contributions welcome

If there are any problems or suggestions, please feel free to file an issue.

Also, please feel free to submit a pull request, if you have any code contributions in mind.

Disclaimer

The authors of this project are not responsible for any content generated using this interface.

The license of this software forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information intended to cause harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions, please read the license.