Updated ControlNet (markdown)

cmdr2
2023-08-01 18:28:11 +05:30
parent 9b2f197eea
commit 63d634f1da

ControlNets allow you to select an image to guide the AI, to make it follow your
While this sounds similar to image-to-image, ControlNets allow the AI to extract meaningful information from the image and make completely different images in the same style. For example, it can follow the same body posture as your initial image, or the same color style or image composition.
# Quick guide to using ControlNets in Easy Diffusion
1. Enable beta and diffusers: https://github.com/easydiffusion/easydiffusion/wiki/The-beta-channel#test-diffusers-in-beta
2. In the `Image Settings` panel, set a `Control Image`. This can be any image that you want the AI to follow. For example, download the attached painting (of a girl) and set that as the control image.
3. Then set `Filter to apply` to `Canny`. This will automatically select `Canny` as the controlnet model as well.
The first image will take a while, since about 1.3 GB of model files for the Canny ControlNet need to be downloaded.
## Pose from images:
You can also set a photo of a person in a particular pose, and make the AI follow that pose. For example:
1. Download the attached photo of a man, and set that as the control image.
2. Set `Filter to apply` to `Open Pose` (the first one). This will automatically select `OpenPose` as the controlnet model.
3. Type `Knight in black armor` in the prompt box (at the top), use `1873330527` as the seed, `euler_a` with 25 steps, and the `SD 1.4` model (or any other SD model).
4. Make an image.
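The steps above can be sketched directly with the `diffusers` and `controlnet_aux` libraries, which is roughly what happens behind the UI. The checkpoint IDs below are common community ones and an assumption of this sketch; building the pipeline downloads several GB, so it only runs when the function is called explicitly:

```python
def generate_from_pose(control_image_path: str, prompt: str = "Knight in black armor"):
    """Sketch of the `Open Pose` filter + OpenPose ControlNet combination.

    Assumed checkpoint IDs (not taken from Easy Diffusion's source):
    lllyasviel/ControlNet, lllyasviel/sd-controlnet-openpose,
    runwayml/stable-diffusion-v1-5.
    """
    import torch
    from controlnet_aux import OpenposeDetector
    from diffusers import (ControlNetModel, EulerAncestralDiscreteScheduler,
                           StableDiffusionControlNetPipeline)
    from PIL import Image

    # Filter step: extract a stick-figure pose map from the photo.
    detector = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
    pose = detector(Image.open(control_image_path))

    # ControlNet matching the OpenPose format, attached to a base SD 1.x model.
    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose")
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet
    )
    # `euler_a` sampler, as in the guide.
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

    generator = torch.Generator().manual_seed(1873330527)  # seed from the guide
    return pipe(prompt, image=pose, num_inference_steps=25,
                generator=generator).images[0]
```

Calling `generate_from_pose("man.jpg")` would return a PIL image of a knight in the same pose as the photo.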
## Custom pose images:
You can also use images already generated in the OpenPose format, e.g. from the civitai poses category. For example:
1. Download the attached openpose photo (stick figure), and set that as the control image.
2. Set `Filter to apply` to `None`.
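With `Filter to apply` set to `None`, the control image is already a stick-figure pose map, so no preprocessing step runs and the image is handed to the ControlNet as-is. A sketch under the same assumed checkpoint IDs as before (these are common community checkpoints, not confirmed Easy Diffusion internals):

```python
def generate_from_pose_map(pose_image_path: str, prompt: str):
    """Use a pre-made OpenPose stick-figure image directly, with no detector step.

    Assumed checkpoint IDs: lllyasviel/sd-controlnet-openpose,
    runwayml/stable-diffusion-v1-5.
    """
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from PIL import Image

    # The control image is already in the OpenPose format; just load it.
    pose = Image.open(pose_image_path).convert("RGB")

    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose")
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet
    )
    return pipe(prompt, image=pose).images[0]
```

This mirrors the UI flow: skipping the filter only works when the control image already matches the format the selected ControlNet model expects.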