SDXL
SDXL models are trained to create larger images with better image quality. They can also make good images at 512x512 resolution, so they're often a good replacement for SD 1 or 2 models, in terms of image quality.
They do, however, consume more GPU and system RAM, so please experiment with your image sizes and VRAM usage level setting (in the Settings tab) while using SDXL models.
How to use SDXL:
- Download whichever SDXL model you want:
  - Base model (works for text-to-image, image-to-image, and inpainting): base model download link
  - Refiner model (image-to-image only): refiner model download link
- Save the model to the `models/stable-diffusion` folder.
- Finally, in the Easy Diffusion UI, click the "refresh" icon next to the Models dropdown, select the downloaded model, and generate the image.
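
If you'd like to sanity-check a downloaded checkpoint outside the UI, here is a minimal sketch using the Hugging Face diffusers library (separate from Easy Diffusion). The checkpoint filename below is only an assumption; point it at whichever SDXL file you actually saved in `models/stable-diffusion`:

```python
# Minimal sketch, assuming the `diffusers` and `torch` packages are installed
# and an NVIDIA GPU is available. The checkpoint path is illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "models/stable-diffusion/sd_xl_base_1.0.safetensors",  # hypothetical filename
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# SDXL is trained for larger images, e.g. 1024x1024.
image = pipe(prompt="a photo of an astronaut riding a horse",
             width=1024, height=1024).images[0]
image.save("sdxl_base.png")
```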
Usage tips:
- If you get "Out of Memory" errors, try setting the `VRAM Usage Level` setting to `low` in the Settings tab, and press `Save`.
- Use small values for `Prompt Strength` when using the "Refiner" model, e.g. 0.2 (see the sketch below).
- The "Refiner" model supports a text prompt along with an image. You can use the text to describe how your image should be refined.
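
To illustrate the last two tips, here is a hedged sketch of a base-then-refiner pass, again using the diffusers library rather than Easy Diffusion itself. The model IDs are the Stability AI repositories on Hugging Face, and `strength=0.2` mirrors the low `Prompt Strength` suggested above:

```python
# Sketch of a base -> refiner pass with diffusers. The refiner is
# image-to-image only: it takes the draft image plus a text prompt
# describing how the image should be refined.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

prompt = "a detailed oil painting of a lighthouse at dusk"
draft = base(prompt=prompt, width=1024, height=1024).images[0]

# Low strength keeps the refiner close to the draft image.
refined = refiner(prompt=prompt, image=draft, strength=0.2).images[0]
refined.save("sdxl_refined.png")
```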