Updated ControlNet (markdown)

cmdr2
2023-08-01 19:01:31 +05:30
parent 63d634f1da
commit d10a2e637a

@ -4,7 +4,7 @@ While this sounds similar to image-to-image, ControlNets allow the AI to extract
# Quick guide to using ControlNets in Easy Diffusion
1. Enable beta and diffusers: https://github.com/easydiffusion/easydiffusion/wiki/The-beta-channel#test-diffusers-in-beta
2. In the `Image Settings` panel, set a `Control Image`. This can be any image that you want the AI to follow. For example, download [this painting](https://user-images.githubusercontent.com/844287/257520525-517c43a6-2253-4f92-a75b-b7f18a1e8581.png) and set that as the control image.
3. Then set `Filter to apply` to `Canny`. This will automatically select `Canny` as the ControlNet model as well.
4. Type `Emma Watson` in the prompt box (at the top), use `1808629740` as the seed, and use `euler_a` with 25 steps and the `SD 1.4` model (or any other Stable Diffusion model).
5. Make an image.