From d10a2e637ae8dfcca59e15e838e58ec9098aff25 Mon Sep 17 00:00:00 2001
From: cmdr2
Date: Tue, 1 Aug 2023 19:01:31 +0530
Subject: [PATCH] Updated ControlNet (markdown)

---
 ControlNet.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ControlNet.md b/ControlNet.md
index 1cdd235..c5480ef 100644
--- a/ControlNet.md
+++ b/ControlNet.md
@@ -4,7 +4,7 @@ While this sounds similar to image-to-image, ControlNets allow the AI to extract
 # Quick guide to using ControlNets in Easy Diffusion
 
 1. Enable beta and diffusers: https://github.com/easydiffusion/easydiffusion/wiki/The-beta-channel#test-diffusers-in-beta
-2. In the `Image Settings` panel, set a `Control Image`. This can be any image that you want the AI to follow. For e.g. download the attached painting (of a girl) and set that as the control image.
+2. In the `Image Settings` panel, set a `Control Image`. This can be any image that you want the AI to follow. For e.g. download [this painting](https://user-images.githubusercontent.com/844287/257520525-517c43a6-2253-4f92-a75b-b7f18a1e8581.png) and set that as the control image.
 3. Then set `Filter to apply` to `Canny`. This will automatically select `Canny` as the controlnet model as well.
 4. Type `Emma Watson` in the prompt box (at the top), and use `1808629740` as the seed, and `euler_a` with 25 steps and `SD 1.4` model (or any other Stable Diffusion model).
 5. Make an image.
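Step 3 of the patched guide applies a `Canny` filter to the control image, turning it into a binary edge map that the Canny ControlNet then conditions generation on. As a rough illustration of what such an edge filter does, here is a minimal numpy sketch: it is a simplified stand-in (Sobel gradient magnitude plus a threshold), not the full Canny algorithm, which additionally does Gaussian smoothing, non-maximum suppression, and hysteresis. The function name and threshold value are illustrative choices, not anything from Easy Diffusion's code.

```python
import numpy as np

def sobel_edge_map(gray, threshold=0.25):
    """Simplified stand-in for a Canny-style filter: Sobel gradient
    magnitude, normalised and thresholded to a binary edge map."""
    gray = gray.astype(np.float64)
    # 3x3 Sobel kernels for horizontal and vertical intensity gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    # Accumulate the 3x3 neighbourhood products (plain cross-correlation)
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12  # normalise to [0, 1]
    return (mag > threshold).astype(np.uint8) * 255

# Synthetic test image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edge_map(img)  # white pixels only along the brightness boundary
```

The resulting black-and-white edge map is what the ControlNet "sees": it preserves the outlines of the control image while discarding colour and texture, which is why the generated image follows the painting's composition without copying it pixel-for-pixel.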