ControlNets allow you to select an image to guide the AI, to make it follow your control image.
While this sounds similar to image-to-image, ControlNets allow the AI to extract meaningful information from the image and make completely different images in the same style. For example, the new image can follow the same body posture as your initial image, or the same color style or image composition.
![image](https://github.com/easydiffusion/easydiffusion/assets/844287/517c43a6-2253-4f92-a75b-b7f18a1e8581)
![image](https://github.com/easydiffusion/easydiffusion/assets/844287/d871616f-a3fc-470e-84b6-219443494db2)
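
To make "extract meaningful information" concrete, here is a minimal sketch (not Easy Diffusion's own code) of one common ControlNet preprocessing step: turning an ordinary image into an edge map that keeps the outlines and composition but none of the original colors. The file name and Canny thresholds are illustrative assumptions.

```python
# Minimal sketch of a ControlNet-style preprocessor: turn a photo into an
# edge map that captures composition/outlines but none of the original colors.
# "painting.png" and the Canny thresholds are illustrative assumptions.
import cv2
import numpy as np
from PIL import Image

image = np.array(Image.open("painting.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)        # single-channel edge map
edges = np.stack([edges] * 3, axis=-1)    # replicate to 3 channels for the model
control_image = Image.fromarray(edges)
control_image.save("control_edges.png")   # this structural map is what guides the ControlNet
```

Other preprocessors work the same way, just extracting different information (a depth map, a body-pose skeleton, and so on) instead of edges.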
# Quick guide to using ControlNets in Easy Diffusion
1. Enable the beta channel and the diffusers backend: https://github.com/easydiffusion/easydiffusion/wiki/The-beta-channel#test-diffusers-in-beta
2. In the `Image Settings` panel, set a `Control Image`. This can be any image that you want the AI to follow. For example, download [this painting](https://user-images.githubusercontent.com/844287/257520525-517c43a6-2253-4f92-a75b-b7f18a1e8581.png) and set that as the control image; the sketch below these steps shows roughly how a control image is used by the diffusers backend.
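
Easy Diffusion handles all of this through the UI. If you are curious what happens with the control image behind the scenes, the sketch below shows a plain diffusers ControlNet pipeline; the model IDs, prompt, and file names are illustrative assumptions, not Easy Diffusion's actual configuration.

```python
# Rough sketch of how a control image conditions generation with diffusers.
# Model IDs, prompt, and file names are assumptions for illustration only.
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

control_image = Image.open("control_edges.png")  # e.g. the edge map from the earlier sketch

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt decides the content and style; the control image constrains the layout.
result = pipe("a watercolor landscape, soft morning light", image=control_image).images[0]
result.save("output.png")
```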