Merge branch 'main' into beta

cmdr2 2022-09-12 11:12:41 +05:30
commit 246ceebe0e
3 changed files with 22 additions and 1 deletion


@@ -18,7 +18,21 @@ This is in-flux, but one way to get a development environment running for editin
3) `cd /projects/stable-diffusion-ui-archive` and run the script to set up and start the project, e.g. `start.sh`
4) Check you can view and generate images on `localhost:9000`
5) Close the server, and edit `/projects/stable-diffusion-ui-archive/scripts/on_env_start.sh`
6) Comment out the line near the bottom that copies the `files/ui` folder, e.g. `cp -Rf sd-ui-files/ui ui` for `.sh` or `@xcopy sd-ui-files\ui ui /s /i /Y` for `.bat`
6) Comment out the lines near the bottom that copy the `files/ui` folder, e.g.:
for `.sh`
```
# rm -rf ui
# cp -Rf sd-ui-files/ui .
# cp sd-ui-files/scripts/on_sd_start.sh scripts/
# cp sd-ui-files/scripts/start.sh .
```
for `.bat`
```
REM @xcopy sd-ui-files\ui ui /s /i /Y
REM @copy sd-ui-files\scripts\on_sd_start.bat scripts\ /Y
REM @copy "sd-ui-files\scripts\Start Stable Diffusion UI.cmd" . /Y
```
7) Comment out the line at the top of `/projects/stable-diffusion-ui-archive/scripts/on_sd_start.sh` that copies `on_env_start`, e.g. `@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y` for `.bat`
8) Delete the current `ui` folder at `/projects/stable-diffusion-ui-archive/ui`
9) Now make a symlink between the repository clone (where you will be making changes) and this archive (where you will be running stable diffusion):
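A minimal sketch of that symlink, assuming a hypothetical clone path of `/projects/stable-diffusion-ui-repo` (use wherever you actually cloned the repository) and that the UI sources live in its `ui` folder:
```
# Hypothetical clone path -- adjust to wherever you cloned the repository.
# Points the archive's ui folder at the repo, so the running server picks up your edits.
ln -s /projects/stable-diffusion-ui-repo/ui /projects/stable-diffusion-ui-archive/ui
```
On Windows, the rough equivalent is `mklink /D` from an elevated command prompt, with the link path first and the target path second.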


@@ -8,8 +8,11 @@
[![Discord Server](https://badgen.net/badge/icon/discord?icon=discord&label)](https://discord.com/invite/u9yhsFmEkB) (for support, and development discussion) | [Troubleshooting guide for common problems](Troubleshooting.md)
🔥🎉 **New!** Face Correction (GFPGAN) and Upscaling (RealESRGAN) have been added!
# Features in the new v2 Version:
- **No Dependencies or Technical Knowledge Required**: 1-click install for Windows 10/11 and Linux. *No dependencies*, no need for WSL or Docker or Conda or technical setup. Just download and run!
- **Face Correction (GFPGAN) and Upscaling (RealESRGAN)**
- **Image Modifiers**: A library of *modifier tags* like *"Realistic"*, *"Pencil Sketch"*, *"ArtStation"* etc. Experiment with various styles quickly.
- **New UI**: with a cleaner design
- Supports "*Text to Image*" and "*Image to Image*"
@@ -54,6 +57,8 @@ Open http://localhost:9000 in your browser (after running step 3 previously).
2. An optional text prompt can help you further describe the kind of image you want to generate.
3. Press `Make Image`. See the image generated using your prompt.
You can use Face Correction or Upscaling to improve the image further.
**Pro tip:** You can also click `Use as Input` on a generated image, to use it as the input image for your next generation. This can be useful for sequentially refining the generated image with a single click.
**Another tip:** Input images with the same aspect ratio as your generated image work best, e.g. 1:1 if you're generating images sized 512x512.


@@ -59,6 +59,8 @@ force_full_precision = False
try:
    gpu = torch.cuda.current_device()
    gpu_name = torch.cuda.get_device_name(gpu)
    print('GPU detected: ', gpu_name)
    force_full_precision = ('nvidia' in gpu_name.lower() or 'geforce' in gpu_name.lower()) and (' 1660' in gpu_name or ' 1650' in gpu_name) # otherwise these NVIDIA cards create green images
    if force_full_precision:
        print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', gpu_name)