ComfyDeploy: How does AnimateDiff work in ComfyUI?
What is AnimateDiff?
AnimateDiff integration for ComfyUI, adapted from sd-webui-animatediff. You only need to download one of [mm_sd_v14.ckpt](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt) or [mm_sd_v15.ckpt](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt). Put the model weights under `ComfyUI/custom_nodes/comfyui-animatediff/models`. Do NOT change the model filename.
How to install it in ComfyDeploy?
Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for `AnimateDiff` and select it
- Close the build step dialog, then click on the "Save" button to rebuild the machine
AnimateDiff for ComfyUI
AnimateDiff integration for ComfyUI, adapted from sd-webui-animatediff. Please read the original repo README for more information.
How to Use
- Clone this repo into the `custom_nodes` folder.
- Download motion modules and put them under `comfyui-animatediff/models/`.
  - Original modules: Google Drive | HuggingFace | CivitAI | Baidu NetDisk
  - Community modules: manshoety/AD_Stabilized_Motion | CiaraRowles/TemporalDiff
  - AnimateDiff v2: mm_sd_v15_v2.ckpt
Update 2023/09/25
Motion LoRA is now supported! Download motion LoRAs and put them under the `comfyui-animatediff/loras/` folder.

Note: LoRAs only work with the AnimateDiff v2 `mm_sd_v15_v2.ckpt` module.
New node: AnimateDiffLoraLoader
<img width="370" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/7a9f62f7-702e-48a4-934c-bbfe1e23aff2">
Example workflow:

<img width="1280" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/93e7550f-4648-4482-9961-6cece5132dc9">
Workflow: lora.json
Samples:
<table> <tr> <td> <img width="512" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/2c5aa25e-0682-481f-8842-066c5b988864"> </td> </tr> <tr> <td> <img width="512" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/adfbad45-3ba5-42e3-9bee-d2b83f43989c"> </td> </tr> <tr> <td> <img width="512" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/8e484c74-c691-4d1c-9514-719dbfe3a0b5"> </td> </tr> <tr> <td> <img width="512" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/4921a335-9207-4a7b-9d66-61a5d76e3179"> </td> </tr> </table>

Update 2023/09/21
Sliding Window is now available!
The sliding window feature enables you to generate GIFs without a frame length limit. It divides frames into smaller batches with a slight overlap. This feature is activated automatically when generating more than 16 frames. To modify the trigger number and other settings, use the `SlidingWindowOptions` node. See the sample workflow below.
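The batching idea can be sketched as follows. This is a simplified illustration, not the extension's actual implementation; the parameter names `context_length` and `overlap` are chosen to mirror the sliding-window settings described here:

```python
def sliding_windows(total_frames, context_length=16, overlap=4):
    """Split a frame range into overlapping windows (illustrative sketch)."""
    if total_frames <= context_length:
        return [list(range(total_frames))]
    step = context_length - overlap
    windows = []
    start = 0
    # Emit windows until the next full one would run past the end...
    while start + context_length < total_frames:
        windows.append(list(range(start, start + context_length)))
        start += step
    # ...then clamp the final window so it ends exactly at the last frame.
    windows.append(list(range(total_frames - context_length, total_frames)))
    return windows

print(sliding_windows(24))  # two overlapping 16-frame windows
```

Each window is sampled separately, and the overlapping frames keep motion consistent across window boundaries.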
Nodes
AnimateDiffLoader
<img width="370" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/9d756d01-ea45-4d1c-8e48-56f2725c7ca1">

AnimateDiffSampler
- Mostly the same as `KSampler`
- `motion_module`: use `AnimateDiffLoader` to load the motion module
- `inject_method`: should be left at the default
- `frame_number`: animation length
- `latent_image`: you can pass an `EmptyLatentImage`
- `sliding_window_opts`: custom sliding window options
AnimateDiffCombine
- Combine GIF frames and produce the GIF image
- `frame_rate`: number of frames per second
- `loop_count`: use 0 for an infinite loop
- `save_image`: whether the GIF should be saved to disk
- `format`: supports `image/gif`, `image/webp` (better compression), `video/webm`, `video/h264-mp4`, and `video/h265-mp4`. To use video formats, you'll need ffmpeg installed and available in `PATH`.
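Since the video formats depend on an external ffmpeg binary, a quick sanity check before queuing a long render is a `PATH` lookup. This is a plain stdlib sketch, not part of the node itself:

```python
import shutil

def ffmpeg_available():
    """Return True if an `ffmpeg` executable can be found on PATH."""
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("ffmpeg not found: only image/gif and image/webp formats will work")
```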
SlidingWindowOptions
Custom sliding window options.
- `context_length`: number of frames per window. Use 16 to get the best results. Reduce it if you have low VRAM.
- `context_stride`:
  - 1: sample every frame
  - 2: sample every frame, then every second frame
  - 3: sample every frame, then every second frame, then every third frame
  - ...
- `context_overlap`: number of overlapping frames between window slices
- `closed_loop`: make the GIF a closed loop; this adds more sampling steps
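The `context_stride` levels read as repeated passes at increasing frame spacing. A hypothetical helper (illustration only, not the extension's code) makes the pattern concrete:

```python
def stride_passes(num_frames, context_stride):
    """Frame indices visited at each stride level: pass s samples every s-th frame."""
    return [list(range(0, num_frames, s)) for s in range(1, context_stride + 1)]

# pass 1: every frame; pass 2: every second frame; pass 3: every third frame
print(stride_passes(8, 3))
```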
LoadVideo
Load a GIF or video as images. Useful for loading a GIF as ControlNet input.
- `frame_start`: skip some beginning frames and start at `frame_start`
- `frame_limit`: only take `frame_limit` frames
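The two parameters behave like a plain slice over the decoded frame list. In this sketch, integers stand in for the image frames the node emits:

```python
def select_frames(frames, frame_start=0, frame_limit=16):
    """Skip the first `frame_start` frames, then keep at most `frame_limit`."""
    return frames[frame_start:frame_start + frame_limit]

frames = list(range(100))  # pretend these are decoded video frames
print(select_frames(frames, frame_start=10, frame_limit=4))  # [10, 11, 12, 13]
```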
Workflows
Simple txt2gif
<img width="1280" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/b7164539-bc58-4ef9-b178-d914e833805e">

Workflow: simple.json
Samples:
Long duration with sliding window
<img width="1280" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/0f8bfb87-83cb-4119-9777-e3948ec0cb5c">

Workflow: sliding-window.json
Samples:
<table> <tr> <td> <img width="512" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/e1da7a66-e615-475d-9400-41eff484ad49"> </td> </tr> <tr> <td> <img width="768" alt="image" src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/4faa7e5e-cdaa-49da-8759-46d779c0e0b6"> </td> </tr> </table>

Latent upscale
Upscale the latent output using `LatentUpscale`, then do a 2nd pass with `AnimateDiffSampler`.
Workflow: latent-upscale.json
Samples:
Using with ControlNet
You will need following additional nodes:
- Kosinkadink/ComfyUI-Advanced-ControlNet: Apply different weight for each latent in batch
- Fannovel16/comfyui_controlnet_aux: ControlNet preprocessors
Animate with starting and ending images
- Use `LatentKeyframe` and `TimestepKeyframe` from ComfyUI-Advanced-ControlNet to apply different weights for each latent index.
- Use 2 ControlNet modules for the two images, with the weights reversed.
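One way to picture the two-ControlNet weighting: the starting image's weight ramps down across the latent indices while the ending image's ramps up. This is a hand-rolled sketch of such a crossfade, not the actual output of the keyframe nodes:

```python
def crossfade_weights(num_frames):
    """Per-latent-index weights for two ControlNets: one fades out, the other fades in."""
    ramp = [i / (num_frames - 1) for i in range(num_frames)]
    start_weights = [1.0 - w for w in ramp]  # first ControlNet (starting image)
    end_weights = ramp                       # second ControlNet (ending image)
    return start_weights, end_weights

start_w, end_w = crossfade_weights(5)
print(start_w)  # [1.0, 0.75, 0.5, 0.25, 0.0]
print(end_w)    # [0.0, 0.25, 0.5, 0.75, 1.0]
```

At every index the two weights sum to 1, so the animation hands off smoothly from the starting image to the ending image.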
Workflow: cn-2images.json
Samples:
<table> <tr> <td> <img src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/e73fc3cd-a590-40a9-8b33-11358b54f0cd"> </td> <td> <img src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/96c2ee92-d457-4862-94d3-d675b7fa2d1f"> </td> </tr> <tr> <td> <img src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/46338853-1ae0-433e-925c-2a41e0382e68"> </td> <td> <img src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/707e4ce3-3594-4ff5-9a5f-f9596eb2bcf4"> </td> </tr> </table>

Using GIF as ControlNet input
Using a GIF (or video, or a list of images) as ControlNet input.
Workflow: cn-vid2vid.json
Samples:
<table> <tr> <td> <img src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/bf926f52-da97-4fb4-b86a-8b26ef5fab04"> </td> <td> <img src="https://github.com/ArtVentureX/comfyui-animatediff/assets/133728487/f6472c8c-9b92-47c2-8f28-638726f21be7"> </td> </tr> </table>

Known Issues
CUDA error: invalid configuration argument
It's an `xformers` bug accidentally triggered by the way the original AnimateDiff CrossAttention is passed in. The current workaround is to disable xformers with `--disable-xformers` when booting ComfyUI.
GIF split into multiple scenes
Workarounds:
- Shorten your prompt and negative prompt
- Reduce the resolution. AnimateDiff is trained on 512x512 images, so it works best with 512x512 output.
- Disable xformers with `--disable-xformers`
GIF has a watermark (especially when using mm_sd_v15)
See: https://github.com/continue-revolution/sd-webui-animatediff/issues/31
Training data used by the authors of the AnimateDiff paper contained Shutterstock watermarks. Since mm_sd_v15 was finetuned on finer, less drastic movement, the motion module attempts to replicate the transparency of that watermark, and it does not get blurred away as it does with mm_sd_v14. Try other community-finetuned modules.