Merge branch 'react' into beta-react

commit e2545b3d34
Author: cmdr2
Date: 2022-09-21 23:17:33 +05:30 (committed by GitHub)

17 changed files with 356 additions and 49 deletions

.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file)

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS:
- Browser:
- Version:
**Smartphone (please complete the following information):**
- Device:
- OS:
- Browser:
- Version:
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md (vendored, new file)

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.gitignore (vendored)

@@ -7,4 +7,5 @@ dist
 !/ui/frontend/dist
 ui/frontend/.idea/*
 ui/frontend/build_src/.idea/*
 .idea/*

README.md

@@ -73,7 +73,7 @@ You can also set the configuration like `seed`, `width`, `height`, `num_outputs`
 Use the same `seed` number to get the same image for a certain prompt. This is useful for refining a prompt without losing the basic image design. Enable the `random images` checkbox to get random images.
-![Screenshot of advanced settings](media/config-v5.jpg?raw=true)
+![Screenshot of advanced settings](media/config-v6.png?raw=true)
 # What is this? Why no Docker?
 This version is a 1-click installer. You don't need WSL or Docker or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all. Even if you don't have a compatible GPU, you can run it on your CPU (albeit very slowly).
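The fixed-seed behaviour described in the README excerpt above can be sketched as follows. The field names mirror the request body the UI builds elsewhere in this commit (`prompt`, `seed`, `width`, `height`, `num_outputs`, `num_inference_steps`); the helper function itself is hypothetical, for illustration only.

```python
# Sketch: why a fixed seed makes image generation reproducible.
# Field names are taken from the UI's request body in this commit;
# build_request_body() is a hypothetical helper, not part of the project.

def build_request_body(prompt, seed, width=512, height=512, num_outputs=1):
    """Build an image-generation request; a fixed seed yields an identical request."""
    return {
        'prompt': prompt,
        'seed': seed,
        'width': width,
        'height': height,
        'num_outputs': num_outputs,
        'num_inference_steps': 50,
    }

# Same prompt + same seed -> byte-identical request, hence the same image,
# which is what makes iterative prompt refinement practical.
a = build_request_body('a photo of a cat', seed=42)
b = build_request_body('a photo of a cat', seed=42)
assert a == b
```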

Troubleshooting.md

@@ -16,6 +16,9 @@ This error can also be caused if you already have conda/miniconda/anaconda insta
 If nothing works, this could be due to a corrupted installation. Please try reinstalling this, by deleting the installed folder, and unzipping from the downloaded zip file.
+## Killed uvicorn server:app --app-dir ... --port 9000 --host 0.0.0.0
+This happens if your PC ran out of RAM. Stable Diffusion needs a lot of RAM, at least 10 GB, to work well. You can also try closing all other applications before running Stable Diffusion UI.
 ## Green image generated
 This usually happens if you're running an NVIDIA 1650 or 1660 Super. To solve this, please close and run the Stable Diffusion command on your computer. If you're using the older Docker-based solution (v1), please upgrade to v2: https://github.com/cmdr2/stable-diffusion-ui/tree/v2#installation
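For the 1650/1660-class cards above, a commonly cited workaround (an assumption on my part; the troubleshooting text does not state it) is to run the model at full fp32 precision, which this UI exposes as the `use_full_precision` checkbox and request field. A minimal sketch, with a hypothetical helper:

```python
# Assumption: green/black output on GTX 1650/1660-class GPUs is typically
# worked around by enabling full (fp32) precision. The UI in this commit
# exposes this as `use_full_precision`; workaround_for_gpu() is hypothetical.

def workaround_for_gpu(gpu_name, body):
    """Return a copy of the request body, forcing full precision on 16xx cards."""
    body = dict(body)  # don't mutate the caller's request
    if any(model in gpu_name for model in ('1650', '1660')):
        body['use_full_precision'] = True
    return body

body = workaround_for_gpu('NVIDIA GeForce GTX 1660 Super', {'prompt': 'a cat'})
assert body['use_full_precision'] is True
```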

media/config-v6.png (new binary file, 45 KiB)

@@ -5,7 +5,7 @@
 @REM Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
 @REM Note to self: Please rewrite this in Python. For the sake of your own sanity.
-@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst); print('Hotfixed broken JSON file from OpenAI');"
+@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
 @>nul grep -c "sd_git_cloned" scripts\install_status.txt
 @if "%ERRORLEVEL%" EQU "0" (
@@ -62,6 +62,12 @@
 @call conda activate .\env
+@call conda install -c conda-forge -y --prefix env antlr4-python3-runtime=4.8 || (
+@echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
+pause
+exit /b
+)
 for /f "tokens=*" %%a in ('python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"') do if "%%a" NEQ "42" (
 @echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo.
 pause

@@ -4,7 +4,7 @@ cp sd-ui-files/scripts/on_env_start.sh scripts/
 source installer/etc/profile.d/conda.sh
-python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst); print('Hotfixed broken JSON file from OpenAI');"
+python -c "import os; import shutil; frm = 'sd-ui-files/ui/hotfix/9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');"
 # Caution, this file will make your eyes and brain bleed. It's such an unholy mess.
 # Note to self: Please rewrite this in Python. For the sake of your own sanity.
@@ -63,6 +63,14 @@ else
 conda activate ./env
+if conda install -c conda-forge --prefix ./env -y antlr4-python3-runtime=4.8 ; then
+echo "Installed. Testing.."
+else
+printf "\n\nError installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"
+read -p "Press any key to continue"
+exit
+fi
 out_test=`python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"`
 if [ "$out_test" != "42" ]; then
 printf "\n\nDependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:\n 1. Run this installer again.\n 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md\n 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB\n 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues\nThanks!\n\n"

@@ -5,6 +5,8 @@
 <!DOCTYPE html>
 <html>
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
+<link rel="icon" type="image/png" href="/media/favicon-16x16.png" sizes="16x16">
+<link rel="icon" type="image/png" href="/media/favicon-32x32.png" sizes="32x32">
 <style>
 body {
 font-family: Arial, Helvetica, sans-serif;
@@ -31,7 +33,7 @@
 }
 }
 .image_preview_container {
-display: none;
+/* display: none; */
 margin-top: 10pt;
 }
 .image_clear_btn {
@@ -278,7 +280,24 @@
 height: 23px;
 transform: translateY(25%);
 }
+#inpaintingEditor {
+width: 300pt;
+height: 300pt;
+margin-top: 5pt;
+}
+.drawing-board-canvas-wrapper {
+background-size: 100% 100%;
+}
+#inpaintingEditor canvas {
+opacity: 0.6;
+}
+#enable_mask {
+margin-top: 8pt;
+}
 </style>
+<link rel="stylesheet" href="/media/drawingboard.min.css">
+<script src="/media/jquery-3.6.1.min.js"></script>
+<script src="/media/drawingboard.min.js"></script>
 </html>
 <body>
 <div id="container">
@@ -289,7 +308,7 @@
 <div id="server-status-color">&nbsp;</div>
 <span id="server-status-msg">Stable Diffusion is starting..</span>
 </div>
-<h1>Stable Diffusion UI <small>v2.1 <span id="updateBranchLabel"></span></small></h1>
+<h1>Stable Diffusion UI <small>v2.13 <span id="updateBranchLabel"></span></small></h1>
 </div>
 <div id="editor-inputs">
 <div id="editor-inputs-prompt" class="row">
@@ -299,9 +318,14 @@
 <div id="editor-inputs-init-image" class="row">
 <label for="init_image"><b>Initial Image:</b> (optional) </label> <input id="init_image" name="init_image" type="file" /><br/>
 <div id="init_image_preview_container" class="image_preview_container">
 <img id="init_image_preview" src="" width="100" height="100" />
-<button id="init_image_clear" class="image_clear_btn">X</button>
+<button class="init_image_clear image_clear_btn">X</button>
+<br/>
+<input id="enable_mask" name="enable_mask" type="checkbox"> <label for="enable_mask">In-Painting (select the area which the AI will paint into)</label>
+<div id="inpaintingEditor"></div>
 </div>
 </div>
@@ -320,6 +344,7 @@
 <div id="editor-settings" class="panel-box">
 <h4 class="collapsible">Advanced Settings</h4>
 <ul id="editor-settings-entries" class="collapsible-content">
+<li><input id="stream_image_progress" name="stream_image_progress" type="checkbox"> <label for="stream_image_progress">Show a live preview of the image (disable this for faster image generation)</label></li>
 <li><input id="use_face_correction" name="use_face_correction" type="checkbox" checked> <label for="use_face_correction">Fix incorrect faces and eyes (uses GFPGAN)</label></li>
 <li>
 <input id="use_upscale" name="use_upscale" type="checkbox"> <label for="use_upscale">Upscale the image to 4x resolution using </label>
@@ -436,11 +461,14 @@ const MODIFIERS_PANEL_OPEN_KEY = "modifiersPanelOpen"
 const USE_FACE_CORRECTION_KEY = "useFaceCorrection"
 const USE_UPSCALING_KEY = "useUpscaling"
 const SHOW_ONLY_FILTERED_IMAGE_KEY = "showOnlyFilteredImage"
+const STREAM_IMAGE_PROGRESS_KEY = "streamImageProgress"
 const HEALTH_PING_INTERVAL = 5 // seconds
 const MAX_INIT_IMAGE_DIMENSION = 768
 const IMAGE_REGEX = new RegExp('data:image/[A-Za-z]+;base64')
+let sessionId = new Date().getTime()
 let promptField = document.querySelector('#prompt')
 let numOutputsTotalField = document.querySelector('#num_outputs_total')
 let numOutputsParallelField = document.querySelector('#num_outputs_parallel')
@@ -453,8 +481,8 @@ let widthField = document.querySelector('#width')
 let heightField = document.querySelector('#height')
 let initImageSelector = document.querySelector("#init_image")
 let initImagePreview = document.querySelector("#init_image_preview")
-// let maskImageSelector = document.querySelector("#mask")
-// let maskImagePreview = document.querySelector("#mask_preview")
+let maskImageSelector = document.querySelector("#mask")
+let maskImagePreview = document.querySelector("#mask_preview")
 let turboField = document.querySelector('#turbo')
 let useCPUField = document.querySelector('#use_cpu')
 let useFullPrecisionField = document.querySelector('#use_full_precision')
@@ -469,18 +497,20 @@ let useUpscalingField = document.querySelector("#use_upscale")
 let upscaleModelField = document.querySelector("#upscale_model")
 let showOnlyFilteredImageField = document.querySelector("#show_only_filtered_image")
 let updateBranchLabel = document.querySelector("#updateBranchLabel")
+let streamImageProgressField = document.querySelector("#stream_image_progress")
 let makeImageBtn = document.querySelector('#makeImage')
 let stopImageBtn = document.querySelector('#stopImage')
 let imagesContainer = document.querySelector('#current-images')
 let initImagePreviewContainer = document.querySelector('#init_image_preview_container')
-let initImageClearBtn = document.querySelector('#init_image_clear')
+let initImageClearBtn = document.querySelector('.init_image_clear')
 let promptStrengthContainer = document.querySelector('#prompt_strength_container')
-// let maskSetting = document.querySelector('#mask_setting')
+// let maskSetting = document.querySelector('#editor-inputs-mask_setting')
 // let maskImagePreviewContainer = document.querySelector('#mask_preview_container')
 // let maskImageClearBtn = document.querySelector('#mask_clear')
+let maskSetting = document.querySelector('#enable_mask')
 let editorModifierEntries = document.querySelector('#editor-modifiers-entries')
 let editorModifierTagsList = document.querySelector('#editor-inputs-tags-list')
@@ -500,6 +530,29 @@ let serverStatusMsg = document.querySelector('#server-status-msg')
 let advancedPanelHandle = document.querySelector("#editor-settings .collapsible")
 let modifiersPanelHandle = document.querySelector("#editor-modifiers .collapsible")
+let inpaintingEditorContainer = document.querySelector('#inpaintingEditor')
+let inpaintingEditor = new DrawingBoard.Board('inpaintingEditor', {
+    color: "#ffffff",
+    background: false,
+    size: 30,
+    webStorage: false,
+    controls: [{'DrawingMode': {'filler': false}}, 'Size', 'Navigation']
+})
+let inpaintingEditorCanvasBackground = document.querySelector('.drawing-board-canvas-wrapper')
+// let inpaintingEditorControls = document.querySelector('.drawing-board-controls')
+// let inpaintingEditorMetaControl = document.createElement('div')
+// inpaintingEditorMetaControl.className = 'drawing-board-control'
+// let initImageClearBtnToolbar = document.createElement('button')
+// initImageClearBtnToolbar.className = 'init_image_clear'
+// initImageClearBtnToolbar.innerHTML = 'Remove Image'
+// inpaintingEditorMetaControl.appendChild(initImageClearBtnToolbar)
+// inpaintingEditorControls.appendChild(inpaintingEditorMetaControl)
+let maskResetButton = document.querySelector('.drawing-board-control-navigation-reset')
+maskResetButton.innerHTML = 'Clear'
+maskResetButton.style.fontWeight = 'normal'
+maskResetButton.style.fontSize = '10pt'
 let serverStatus = 'offline'
 let activeTags = []
@@ -581,6 +634,10 @@ function isModifiersPanelOpenEnabled() {
 return getLocalStorageBoolItem(MODIFIERS_PANEL_OPEN_KEY, false)
 }
+function isStreamImageProgressEnabled() {
+    return getLocalStorageBoolItem(STREAM_IMAGE_PROGRESS_KEY, false)
+}
 function setStatus(statusType, msg, msgType) {
 if (statusType !== 'server') {
 return;
@@ -640,6 +697,20 @@ async function healthCheck() {
 }
 }
+function makeImageElement(width, height) {
+    let imgItem = document.createElement('div')
+    imgItem.className = 'imgItem'
+    let img = document.createElement('img')
+    img.width = parseInt(width)
+    img.height = parseInt(height)
+    imgItem.appendChild(img)
+    imagesContainer.appendChild(imgItem)
+    return imgItem
+}
 // makes a single image. don't call this directly, use makeImage() instead
 async function doMakeImage(reqBody, batchCount) {
 if (taskStopped) {
@@ -648,6 +719,15 @@ async function doMakeImage(reqBody, batchCount) {
 let res = ''
 let seed = reqBody['seed']
+let numOutputs = parseInt(reqBody['num_outputs'])
+let images = []
+function makeImageContainers(numImages) {
+    for (let i = images.length; i < numImages; i++) {
+        images.push(makeImageElement(reqBody.width, reqBody.height))
+    }
+}
 try {
 res = await fetch('/image', {
@@ -685,9 +765,11 @@
 let overallStepCount = stepUpdate.step + batchesDone * batchSize
 let totalSteps = batchCount * batchSize
 let percent = 100 * (overallStepCount / totalSteps)
+percent = (percent > 100 ? 100 : percent)
 percent = percent.toFixed(0)
 stepsRemaining = totalSteps - overallStepCount
+stepsRemaining = (stepsRemaining < 0 ? 0 : stepsRemaining)
 timeRemaining = (timeTaken === -1 ? '' : stepsRemaining * timeTaken) // ms
 outputMsg.innerHTML = `Batch ${batchesDone+1} of ${batchCount}`
@@ -697,6 +779,17 @@
 progressBar.innerHTML += `<br>Time remaining (approx): ${millisecondsToStr(timeRemaining)}`
 }
 progressBar.style.display = 'block'
+if (stepUpdate.output !== undefined) {
+    makeImageContainers(numOutputs)
+    for (idx in stepUpdate.output) {
+        let imgItem = images[idx]
+        let img = imgItem.firstChild
+        let tmpImageData = stepUpdate.output[idx]
+        img.src = tmpImageData['path'] + '?t=' + new Date().getTime()
+    }
+}
 }
 } catch (e) {
 finalJSON += jsonStr
@@ -755,6 +848,8 @@ async function doMakeImage(reqBody, batchCount) {
 lastPromptUsed = reqBody['prompt']
+makeImageContainers(res.output.length)
 for (let idx in res.output) {
 let imgBody = ''
 let seed = 0
@@ -769,12 +864,9 @@
 continue
 }
-let imgItem = document.createElement('div')
-imgItem.className = 'imgItem'
-let img = document.createElement('img')
-img.width = parseInt(reqBody.width)
-img.height = parseInt(reqBody.height)
+let imgItem = images[idx]
+let img = imgItem.firstChild
 img.src = imgBody
 let imgItemInfo = document.createElement('span')
@@ -792,19 +884,19 @@
 imgSaveBtn.className = 'imgSaveBtn'
 imgSaveBtn.innerHTML = 'Download'
-imgItem.appendChild(img)
 imgItem.appendChild(imgItemInfo)
 imgItemInfo.appendChild(imgSeedLabel)
 imgItemInfo.appendChild(imgUseBtn)
 imgItemInfo.appendChild(imgSaveBtn)
-imagesContainer.appendChild(imgItem)
 imgUseBtn.addEventListener('click', function() {
 initImageSelector.value = null
 initImagePreview.src = imgBody
 initImagePreviewContainer.style.display = 'block'
+inpaintingEditorContainer.style.display = 'none'
 promptStrengthContainer.style.display = 'block'
+maskSetting.checked = false
 // maskSetting.style.display = 'block'
@@ -868,6 +960,7 @@ async function makeImage() {
 setStatus('request', 'fetching..')
 makeImageBtn.innerHTML = 'Processing..'
+makeImageBtn.disabled = true
 makeImageBtn.style.display = 'none'
 stopImageBtn.style.display = 'block'
@@ -880,6 +973,8 @@
 let batchCount = Math.ceil(numOutputsTotal / numOutputsParallel)
 let batchSize = numOutputsParallel
+let streamImageProgress = (numOutputsTotal > 50 ? false : streamImageProgressField.checked)
 let prompt = promptField.value
 if (activeTags.length > 0) {
 let promptTags = activeTags.join(", ")
@@ -889,6 +984,7 @@
 previewPrompt.innerHTML = prompt
 let reqBody = {
+session_id: sessionId,
 prompt: prompt,
 num_outputs: batchSize,
 num_inference_steps: numInferenceStepsField.value,
@@ -899,7 +995,9 @@
 turbo: turboField.checked,
 use_cpu: useCPUField.checked,
 use_full_precision: useFullPrecisionField.checked,
-stream_progress_updates: true
+stream_progress_updates: true,
+stream_image_progress: streamImageProgress,
+show_only_filtered_image: showOnlyFilteredImageField.checked
 }
 if (IMAGE_REGEX.test(initImagePreview.src)) {
@@ -909,6 +1007,9 @@
 // if (IMAGE_REGEX.test(maskImagePreview.src)) {
 // reqBody['mask'] = maskImagePreview.src
 // }
+if (maskSetting.checked) {
+    reqBody['mask'] = inpaintingEditor.getImg()
+}
 }
 if (saveToDiskField.checked && diskPathField.value.trim() !== '') {
@@ -923,10 +1024,6 @@
 reqBody['use_upscale'] = upscaleModelField.value
 }
-if (showOnlyFilteredImageField.checked && (useUpscalingField.checked || useFaceCorrectionField.checked)) {
-reqBody['show_only_filtered_image'] = showOnlyFilteredImageField.checked
-}
 let time = new Date().getTime()
 imagesContainer.innerHTML = ''
@@ -1040,6 +1137,9 @@ useFullPrecisionField.checked = isUseFullPrecisionEnabled()
 turboField.addEventListener('click', handleBoolSettingChange(USE_TURBO_MODE_KEY))
 turboField.checked = isUseTurboModeEnabled()
+streamImageProgressField.addEventListener('click', handleBoolSettingChange(STREAM_IMAGE_PROGRESS_KEY))
+streamImageProgressField.checked = isStreamImageProgressEnabled()
 diskPathField.addEventListener('change', handleStringSettingChange(DISK_PATH_KEY))
 saveToDiskField.addEventListener('click', function(e) {
@@ -1163,6 +1263,7 @@ checkRandomSeed()
 function showInitImagePreview() {
 if (initImageSelector.files.length === 0) {
 initImagePreviewContainer.style.display = 'none'
+// inpaintingEditorContainer.style.display = 'none'
 promptStrengthContainer.style.display = 'none'
 // maskSetting.style.display = 'none'
 return
@@ -1174,7 +1275,9 @@ function showInitImagePreview() {
 reader.addEventListener('load', function() {
 initImagePreview.src = reader.result
 initImagePreviewContainer.style.display = 'block'
+inpaintingEditorContainer.style.display = 'none'
 promptStrengthContainer.style.display = 'block'
+// maskSetting.checked = false
 })
 if (file) {
@@ -1184,14 +1287,22 @@ function showInitImagePreview() {
 initImageSelector.addEventListener('change', showInitImagePreview)
 showInitImagePreview()
+initImagePreview.addEventListener('load', function() {
+    inpaintingEditorCanvasBackground.style.backgroundImage = "url('" + this.src + "')"
+    // maskSetting.style.display = 'block'
+    // inpaintingEditorContainer.style.display = 'block'
+})
 initImageClearBtn.addEventListener('click', function() {
 initImageSelector.value = null
 // maskImageSelector.value = null
 initImagePreview.src = ''
 // maskImagePreview.src = ''
+maskSetting.checked = false
 initImagePreviewContainer.style.display = 'none'
+// inpaintingEditorContainer.style.display = 'none'
 // maskImagePreviewContainer.style.display = 'none'
 // maskSetting.style.display = 'none'
@ -1199,9 +1310,13 @@ initImageClearBtn.addEventListener('click', function() {
promptStrengthContainer.style.display = 'none' promptStrengthContainer.style.display = 'none'
}) })
maskSetting.addEventListener('click', function() {
inpaintingEditorContainer.style.display = (this.checked ? 'block' : 'none')
})
 // function showMaskImagePreview() {
 //     if (maskImageSelector.files.length === 0) {
-//         maskImagePreviewContainer.style.display = 'none'
+//         // maskImagePreviewContainer.style.display = 'none'
 //         return
 //     }
@@ -1209,8 +1324,8 @@ initImageClearBtn.addEventListener('click', function() {
 //     let file = maskImageSelector.files[0]
 //     reader.addEventListener('load', function() {
-//         maskImagePreview.src = reader.result
-//         maskImagePreviewContainer.style.display = 'block'
+//         // maskImagePreview.src = reader.result
+//         // maskImagePreviewContainer.style.display = 'block'
 //     })
 //     if (file) {
@@ -1223,7 +1338,7 @@ initImageClearBtn.addEventListener('click', function() {
 // maskImageClearBtn.addEventListener('click', function() {
 //     maskImageSelector.value = null
 //     maskImagePreview.src = ''
-//     maskImagePreviewContainer.style.display = 'none'
+//     // maskImagePreviewContainer.style.display = 'none'
 // })
 // https://stackoverflow.com/a/8212878
@@ -1407,5 +1522,4 @@ async function init() {
 init()
 </script>
 </html>

New files added in this commit:

ui/media/drawingboard.min.css (vendored; diff suppressed because one or more lines are too long)
ui/media/drawingboard.min.js (vendored; diff suppressed because one or more lines are too long)
ui/media/favicon-16x16.png (binary file, 466 B, not shown)
ui/media/favicon-32x32.png (binary file, 973 B, not shown)
ui/media/jquery-3.6.1.min.js (vendored; diff suppressed because one or more lines are too long)
@@ -1,6 +1,7 @@
 import json
 class Request:
+    session_id: str = "session"
     prompt: str = ""
     init_image: str = None # base64
     mask: str = None # base64
@@ -22,9 +23,11 @@ class Request:
     show_only_filtered_image: bool = False
     stream_progress_updates: bool = False
+    stream_image_progress: bool = False
     def json(self):
         return {
+            "session_id": self.session_id,
             "prompt": self.prompt,
             "num_outputs": self.num_outputs,
             "num_inference_steps": self.num_inference_steps,
@@ -39,6 +42,7 @@ class Request:
     def to_string(self):
         return f'''
+    session_id: {self.session_id}
     prompt: {self.prompt}
     seed: {self.seed}
     num_inference_steps: {self.num_inference_steps}
@@ -54,7 +58,8 @@ class Request:
     use_upscale: {self.use_upscale}
     show_only_filtered_image: {self.show_only_filtered_image}
-    stream_progress_updates: {self.stream_progress_updates}'''
+    stream_progress_updates: {self.stream_progress_updates}
+    stream_image_progress: {self.stream_image_progress}'''
 class Image:
     data: str # base64
@@ -75,13 +80,11 @@ class Image:
 class Response:
     request: Request
-    session_id: str
     images: list
     def json(self):
         res = {
             "status": 'succeeded',
-            "session_id": self.session_id,
             "request": self.request.json(),
             "output": [],
         }
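With this change, the session id travels with every `Request` and the `Response` no longer carries its own copy. A trimmed, hypothetical sketch of the resulting serialization shape (only a few of the real fields are shown; the rest are elided):

```python
# Minimal stand-in for the patched Request class: session_id is now
# a per-request field and appears in the JSON payload.
class Request:
    session_id: str = "session"
    prompt: str = ""
    num_outputs: int = 1

    def json(self):
        return {
            "session_id": self.session_id,
            "prompt": self.prompt,
            "num_outputs": self.num_outputs,
        }
```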


@@ -4,7 +4,7 @@ import traceback
 import torch
 import numpy as np
 from omegaconf import OmegaConf
-from PIL import Image
+from PIL import Image, ImageOps
 from tqdm import tqdm, trange
 from itertools import islice
 from einops import rearrange
@@ -33,10 +33,11 @@ filename_regex = re.compile('[^a-zA-Z0-9]')
 from . import Request, Response, Image as ResponseImage
 import base64
 from io import BytesIO
+#from colorama import Fore
 # local
-session_id = str(uuid.uuid4())[-8:]
 stop_processing = False
+temp_images = {}
 ckpt_file = None
 gfpgan_file = None
@@ -185,23 +186,32 @@ def load_model_real_esrgan(real_esrgan_to_use):
     print('loaded ', real_esrgan_to_use, 'to', device, 'precision', precision)
 def mk_img(req: Request):
-    global modelFS, device
+    try:
+        yield from do_mk_img(req)
+    except Exception as e:
+        gc()
+        raise e
+
+def do_mk_img(req: Request):
+    global model, modelCS, modelFS, device
     global model_gfpgan, model_real_esrgan
     global stop_processing
     stop_processing = False
     res = Response()
-    res.session_id = session_id
     res.request = req
     res.images = []
+    temp_images.clear()
     model.turbo = req.turbo
     if req.use_cpu:
         if device != 'cpu':
             device = 'cpu'
             if model_is_half:
+                del model, modelCS, modelFS
                 load_model_ckpt(ckpt_file, device)
             load_model_gfpgan(gfpgan_file)
@@ -216,7 +226,8 @@ def mk_img(req: Request):
            (req.init_image is None and model_fs_is_half) or \
            (req.init_image is not None and not model_fs_is_half and not force_full_precision):
-        load_model_ckpt(ckpt_file, device, model.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'), half_model_fs=(req.init_image is not None and not req.use_full_precision))
+        del model, modelCS, modelFS
+        load_model_ckpt(ckpt_file, device, req.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast'), half_model_fs=(req.init_image is not None and not req.use_full_precision))
     if prev_device != device:
         load_model_gfpgan(gfpgan_file)
@@ -266,6 +277,8 @@ def mk_img(req: Request):
     else:
         precision_scope = nullcontext
+    mask = None
     if req.init_image is None:
         handler = _txt2img
@@ -285,6 +298,14 @@ def mk_img(req: Request):
         init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
         init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image)) # move to latent space
+        if req.mask is not None:
+            mask = load_mask(req.mask, opt_W, opt_H, init_latent.shape[2], init_latent.shape[3], True).to(device)
+            mask = mask[0][0].unsqueeze(0).repeat(4, 1, 1).unsqueeze(0)
+            mask = repeat(mask, '1 ... -> b ...', b=batch_size)
+            if device != "cpu" and precision == "autocast":
+                mask = mask.half()
     if device != "cpu":
         mem = torch.cuda.memory_allocated() / 1e6
         modelFS.to("cpu")
@@ -296,7 +317,7 @@ def mk_img(req: Request):
     print(f"target t_enc is {t_enc} steps")
     if opt_save_to_disk_path is not None:
-        session_out_path = os.path.join(opt_save_to_disk_path, session_id)
+        session_out_path = os.path.join(opt_save_to_disk_path, req.session_id)
         os.makedirs(session_out_path, exist_ok=True)
     else:
         session_out_path = None
@@ -327,6 +348,8 @@ def mk_img(req: Request):
                 else:
                     c = modelCS.get_learned_conditioning(prompts)
+                modelFS.to(device)
                 partial_x_samples = None
                 def img_callback(x_samples, i):
                     nonlocal partial_x_samples
@@ -334,7 +357,30 @@ def mk_img(req: Request):
                     partial_x_samples = x_samples
                     if req.stream_progress_updates:
-                        yield json.dumps({"step": i, "total_steps": opt_ddim_steps})
+                        progress = {"step": i, "total_steps": opt_ddim_steps}
+                        if req.stream_image_progress:
+                            partial_images = []
+                            for i in range(batch_size):
+                                x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0))
+                                x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
+                                x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c")
+                                x_sample = x_sample.astype(np.uint8)
+                                img = Image.fromarray(x_sample)
+                                buf = BytesIO()
+                                img.save(buf, format='JPEG')
+                                buf.seek(0)
+                                del img, x_sample, x_samples_ddim
+                                # don't delete x_samples, it is used in the code that called this callback
+                                temp_images[str(req.session_id) + '/' + str(i)] = buf
+                                partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'})
+                            progress['output'] = partial_images
+                        yield json.dumps(progress)
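The callback above keeps each partial JPEG in an in-memory dict keyed by `<session_id>/<index>` and hands the client a matching `/image/tmp/...` path to poll. A minimal stdlib-only sketch of that store (function and variable names here are illustrative, not the module's actual API):

```python
from io import BytesIO

# Hypothetical stand-in for runtime.temp_images: partial frames live
# in a plain dict, keyed the same way the diff keys them.
temp_images = {}

def store_partial_image(session_id, index, jpeg_bytes):
    # Keep the encoded frame in memory and return the URL path that
    # the /image/tmp/{session_id}/{img_id} route would serve.
    buf = BytesIO(jpeg_bytes)
    temp_images[f"{session_id}/{index}"] = buf
    return f"/image/tmp/{session_id}/{index}"
```

Keeping the frames in a dict (rather than on disk) means they are cheap to overwrite on every callback tick, at the cost of being lost on restart, which is acceptable for transient progress previews.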
                     if stop_processing:
                         raise UserInitiatedStop("User requested that we stop processing")
@@ -342,9 +388,9 @@ def mk_img(req: Request):
                 # run the handler
                 try:
                     if handler == _txt2img:
-                        x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, req.stream_progress_updates)
+                        x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, req.stream_progress_updates, mask)
                     else:
-                        x_samples = _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, req.stream_progress_updates)
+                        x_samples = _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, req.stream_progress_updates, mask)
                     if req.stream_progress_updates:
                         yield from x_samples
@@ -356,8 +402,6 @@ def mk_img(req: Request):
                         x_samples = partial_x_samples
-                modelFS.to(device)
                 print("saving images")
                 for i in range(batch_size):
@@ -367,6 +411,14 @@ def mk_img(req: Request):
                     x_sample = x_sample.astype(np.uint8)
                     img = Image.fromarray(x_sample)
+                    has_filters = (opt_use_face_correction is not None and opt_use_face_correction.startswith('GFPGAN')) or \
+                                  (opt_use_upscale is not None and opt_use_upscale.startswith('RealESRGAN'))
+                    return_orig_img = not has_filters or not opt_show_only_filtered
+                    if stop_processing:
+                        return_orig_img = True
                     if opt_save_to_disk_path is not None:
                         prompt_flattened = filename_regex.sub('_', prompts[0])
                         prompt_flattened = prompt_flattened[:50]
@@ -377,12 +429,12 @@ def mk_img(req: Request):
                         img_out_path = os.path.join(session_out_path, f"{file_path}.{opt_format}")
                         meta_out_path = os.path.join(session_out_path, f"{file_path}.txt")
-                        if not opt_show_only_filtered:
+                        if return_orig_img:
                             save_image(img, img_out_path)
                         save_metadata(meta_out_path, prompts, opt_seed, opt_W, opt_H, opt_ddim_steps, opt_scale, opt_strength, opt_use_face_correction, opt_use_upscale)
-                    if not opt_show_only_filtered:
+                    if return_orig_img:
                         img_data = img_to_base64_str(img)
                         res_image_orig = ResponseImage(data=img_data, seed=opt_seed)
                         res.images.append(res_image_orig)
@@ -390,8 +442,10 @@ def mk_img(req: Request):
                         if opt_save_to_disk_path is not None:
                             res_image_orig.path_abs = img_out_path
-                    if (opt_use_face_correction is not None and opt_use_face_correction.startswith('GFPGAN')) or \
-                       (opt_use_upscale is not None and opt_use_upscale.startswith('RealESRGAN')):
+                    del img
+                    if has_filters and not stop_processing:
+                        print('Applying filters..')
                         gc()
                         filters_applied = []
@@ -419,17 +473,22 @@ def mk_img(req: Request):
                             save_image(filtered_image, filtered_img_out_path)
                             res_image_filtered.path_abs = filtered_img_out_path
+                        del filtered_image
                     seeds += str(opt_seed) + ","
                     opt_seed += 1
+                gc()
                 if device != "cpu":
                     mem = torch.cuda.memory_allocated() / 1e6
                     modelFS.to("cpu")
                     while torch.cuda.memory_allocated() / 1e6 >= mem:
                         time.sleep(1)
-    del x_samples
+    del x_samples, x_samples_ddim, x_sample
     print("memory_final = ", torch.cuda.memory_allocated() / 1e6)
+    print('Task completed')
     if req.stream_progress_updates:
         yield json.dumps(res.json())
     else:
@@ -450,7 +509,7 @@ def save_metadata(meta_out_path, prompts, opt_seed, opt_W, opt_H, opt_ddim_steps
     except:
         print('could not save the file', traceback.format_exc())
-def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, streaming_callbacks):
+def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, streaming_callbacks, mask):
     shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f]
     if device != "cpu":
@@ -471,6 +530,7 @@ def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code,
         x_T=start_code,
         img_callback=img_callback,
         streaming_callbacks=streaming_callbacks,
+        mask=mask,
         sampler = 'plms',
     )
@@ -479,7 +539,7 @@ def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code,
     else:
         return samples_ddim
-def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, streaming_callbacks):
+def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, streaming_callbacks, mask):
     # encode (scaled latent)
     z_enc = model.stochastic_encode(
         init_latent,
@@ -488,6 +548,8 @@ def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, o
         opt_ddim_eta,
         opt_ddim_steps,
     )
+    x_T = None if mask is None else init_latent
     # decode it
     samples_ddim = model.sample(
         t_enc,
@@ -497,6 +559,8 @@ def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, o
         unconditional_conditioning=uc,
         img_callback=img_callback,
         streaming_callbacks=streaming_callbacks,
+        mask=mask,
+        x_T=x_T,
         sampler = 'ddim'
     )
@@ -545,6 +609,31 @@ def load_img(img_str, w0, h0):
     image = torch.from_numpy(image)
     return 2.*image - 1.
+def load_mask(mask_str, h0, w0, newH, newW, invert=False):
+    image = base64_str_to_img(mask_str).convert("RGB")
+    w, h = image.size
+    print(f"loaded input mask of size ({w}, {h})")
+    if invert:
+        print("inverted")
+        image = ImageOps.invert(image)
+        # where_0, where_1 = np.where(image == 0), np.where(image == 255)
+        # image[where_0], image[where_1] = 255, 0
+    if h0 is not None and w0 is not None:
+        h, w = h0, w0
+    w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64
+    print(f"New mask size ({w}, {h})")
+    image = image.resize((newW, newH), resample=Image.Resampling.LANCZOS)
+    image = np.array(image)
+    image = image.astype(np.float32) / 255.0
+    image = image[None].transpose(0, 3, 1, 2)
+    image = torch.from_numpy(image)
+    return image
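`load_mask` snaps the requested width and height down to a multiple of 64 before resizing, since the model works on a downsampled latent grid. That rounding step in isolation, as a pure-Python sketch (the function name is ours, not the module's):

```python
def snap_to_multiple_of_64(w, h):
    # Mirrors `w, h = map(lambda x: x - x % 64, (w, h))` in load_mask:
    # round each dimension down to the nearest multiple of 64.
    return w - w % 64, h - h % 64
```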
 # https://stackoverflow.com/a/61114178
 def img_to_base64_str(img):
     buffered = BytesIO()


@@ -15,6 +15,7 @@ CONFIG_DIR = os.path.join(SD_UI_DIR, '..', 'scripts')
 OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder
 from fastapi import FastAPI, HTTPException
+from fastapi.staticfiles import StaticFiles
 from starlette.responses import FileResponse, StreamingResponse
 from pydantic import BaseModel
 # this is needed for development.
@@ -45,6 +46,7 @@ outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME)
 # defaults from https://huggingface.co/blog/stable_diffusion
 class ImageRequest(BaseModel):
+    session_id: str = "session"
     prompt: str = ""
     init_image: str = None # base64
     mask: str = None # base64
@@ -65,10 +67,13 @@ class ImageRequest(BaseModel):
     show_only_filtered_image: bool = False
     stream_progress_updates: bool = False
+    stream_image_progress: bool = False
 class SetAppConfigRequest(BaseModel):
     update_branch: str = "main"
+app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media/')), name="media")
 @app.get('/')
 def read_root():
     headers = {"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"}
@@ -114,6 +119,7 @@ def image(req : ImageRequest):
     from sd_internal import runtime
     r = Request()
+    r.session_id = req.session_id
     r.prompt = req.prompt
     r.init_image = req.init_image
     r.mask = req.mask
@@ -134,6 +140,7 @@ def image(req : ImageRequest):
     r.show_only_filtered_image = req.show_only_filtered_image
     r.stream_progress_updates = req.stream_progress_updates
+    r.stream_image_progress = req.stream_image_progress
     try:
         res = runtime.mk_img(r)
@@ -160,6 +167,13 @@ def stop():
         print(traceback.format_exc())
         return HTTPException(status_code=500, detail=str(e))
+@app.get('/image/tmp/{session_id}/{img_id}')
+def get_image(session_id, img_id):
+    from sd_internal import runtime
+    buf = runtime.temp_images[session_id + '/' + img_id]
+    buf.seek(0)
+    return StreamingResponse(buf, media_type='image/jpeg')
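A client polls this route repeatedly while generation runs. The real handler wraps the buffer in a `StreamingResponse`; a hedged, framework-free sketch of just the lookup-and-rewind it performs (the store contents below are illustrative):

```python
from io import BytesIO

# Illustrative stand-in for runtime.temp_images.
temp_images = {"session/0": BytesIO(b"jpeg-bytes")}

def get_temp_image_bytes(session_id, img_id):
    # The route rewinds the buffer before streaming, so repeated
    # polls for the same partial image return the full payload
    # rather than an exhausted stream.
    buf = temp_images[session_id + '/' + img_id]
    buf.seek(0)
    return buf.read()
```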
 @app.post('/app_config')
 async def setAppConfig(req : SetAppConfigRequest):
     try:
@@ -212,7 +226,7 @@ def read_ding():
     return FileResponse(os.path.join(SD_UI_DIR, 'frontend/assets/ding.mp3'))
 @app.get('/kofi.png')
-def read_modifiers():
+def read_kofi():
     return FileResponse(os.path.join(SD_UI_DIR, 'frontend/assets/kofi.png'))
 @app.get('/modifiers.json')