From 243c8eadd22f9d1f9aacff84263620abbae11fa1 Mon Sep 17 00:00:00 2001
From: cmdr2
Date: Tue, 1 Aug 2023 19:05:29 +0530
Subject: [PATCH] Updated ControlNet (markdown)

---
 ControlNet.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/ControlNet.md b/ControlNet.md
index 24c228b..6c51af8 100644
--- a/ControlNet.md
+++ b/ControlNet.md
@@ -2,6 +2,9 @@ ControlNets allow you to select a image to guide the AI, to make it follow your
 While this sounds similar to image-to-image, ControlNets allow the AI to extract meaningful information from the image and make completely different images in the same style. For e.g. following the same body posture as your initial image, or the same color style or image composition.
 
+![image](https://github.com/easydiffusion/easydiffusion/assets/844287/517c43a6-2253-4f92-a75b-b7f18a1e8581)
+![image](https://github.com/easydiffusion/easydiffusion/assets/844287/d871616f-a3fc-470e-84b6-219443494db2)
+
 # Quick guide to using ControlNets in Easy Diffusion
 1. Enable beta and diffusers: https://github.com/easydiffusion/easydiffusion/wiki/The-beta-channel#test-diffusers-in-beta
 2. In the `Image Settings` panel, set a `Control Image`. This can be any image that you want the AI to follow. For e.g. download [this painting](https://user-images.githubusercontent.com/844287/257520525-517c43a6-2253-4f92-a75b-b7f18a1e8581.png) and set that as the control image.