ComfyDeploy: How does ComfyUI-layerdiffuse (layerdiffusion) work in ComfyUI?
What is ComfyUI-layerdiffuse (layerdiffusion)?
ComfyUI implementation of [LayerDiffusion](https://github.com/layerdiffusion/LayerDiffusion).
Check out the examples!
How to install it in ComfyDeploy?
Head over to the machine page:
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for "ComfyUI-layerdiffuse (layerdiffusion)" and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
ComfyUI-layerdiffuse
ComfyUI implementation of https://github.com/layerdiffusion/LayerDiffuse.
Installation
Download the repository and unpack it into the `custom_nodes` folder in the ComfyUI installation directory.
Or clone via git, starting from the ComfyUI installation directory:

```
cd custom_nodes
git clone git@github.com:huchenlei/ComfyUI-layerdiffuse.git
```
Run `pip install -r requirements.txt` to install the Python dependencies. You might experience a version conflict on diffusers if other extensions depend on a different diffusers version. In that case, it is recommended to set up a separate Python venv.
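If you suspect such a conflict, one quick check (plain Python, nothing repo-specific) is to print the diffusers version importable from the environment ComfyUI runs in and compare it against what `requirements.txt` pins:

```python
# Print the diffusers version visible to the current Python environment,
# to compare against the version required by this extension.
import diffusers
print(diffusers.__version__)
```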
Workflows
Generate foreground
Generate foreground (RGB + alpha)
If you want more control of getting RGB images and alpha channel mask separately, you can use this workflow.
Blending (FG/BG)
Blending given FG
Blending given BG
Extract FG from Blended + BG
Extract BG from Blended + FG
The Forge impl's sanity check sets `Stop at` to 0.5 to get a better quality BG.
This workflow might be inferior compared to other object removal workflows.
Extract BG from Blended + FG (Stop at 0.5)
In the SD Forge impl, there is a `Stop at` param that determines when layer diffuse should stop in the denoising process. Under the hood, this param unapplies the LoRA and the c_concat cond after a certain step threshold. This is hard/risky to implement directly in ComfyUI, as it would require manually loading a model that has every change except the layer diffusion change applied. A workaround in ComfyUI is to run another img2img pass on the layer diffuse result to simulate the effect of the `Stop at` param.
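As a rough sketch of the idea (not ComfyUI or Forge code; the callables and names below are hypothetical), the `Stop at` behavior amounts to switching from the patched model back to the base model after a step threshold:

```python
from typing import Any, Callable

def sample_with_stop_at(
    patched_step: Callable[[Any, int], Any],  # denoise step with layer-diffuse LoRA/c_concat applied
    base_step: Callable[[Any, int], Any],     # denoise step with the plain base model
    latents: Any,
    num_steps: int,
    stop_at: float = 0.5,
) -> Any:
    # Run the patched model up to stop_at * num_steps, then continue with the
    # base model, mimicking Forge's "unapply after a step threshold".
    threshold = int(num_steps * stop_at)
    for step in range(num_steps):
        step_fn = patched_step if step < threshold else base_step
        latents = step_fn(latents, step)
    return latents
```

The img2img workaround approximates the same split by finishing the remaining denoising in a second pass that does not have the layer diffuse patch applied.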
Generate FG from BG combined
Combines the previous workflows to generate the blended image and FG given BG. We found that there are some color variations in the extracted FG; we need to confirm with the layer diffusion authors whether this is expected.
[2024-3-9] Generate FG + Blended given BG
Need batch size = 2N. Currently only for SD15.
[2024-3-9] Generate BG + Blended given FG
Need batch size = 2N. Currently only for SD15.
[2024-3-9] Generate BG + FG + Blended together
Need batch size = 3N. Currently only for SD15.
Note
- Currently only SDXL/SD15 are supported. See https://github.com/layerdiffuse/sd-forge-layerdiffuse#model-notes for more details.
- To decode the RGBA result, the generation dimensions must be multiples of 64. Otherwise, you will get a decode error.
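For example, a small helper (illustrative, not part of this repo) can round target dimensions up to the nearest multiple of 64 before generation:

```python
def round_up_to_64(x: int) -> int:
    """Round a width/height up to the nearest multiple of 64."""
    return ((x + 63) // 64) * 64

# e.g. a 1000x600 target becomes 1024x640, which decodes cleanly to RGBA
print(round_up_to_64(1000), round_up_to_64(600))  # -> 1024 640
```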