Nodes Browser
ComfyDeploy: How does ComfyUI_StableAnimator work in ComfyUI?
What is ComfyUI_StableAnimator?
ComfyUI nodes for StableAnimator
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for ComfyUI_StableAnimator and select it
- Close the build step dialog, then click on the "Save" button to rebuild the machine
ComfyUI_StableAnimator
Custom nodes for ComfyUI of StableAnimator.
Visit the original project at https://github.com/Francis-Rings/StableAnimator.
Features
- The model-loading node is independent, so it works with ComfyUI's model-caching mechanism.
- A node has been added that exports StableAnimator bone maps from video frames. You can also use the DWPose Estimator from comfyui_controlnet_aux to generate bone maps.
- A node has been added that reads bone maps from a directory.
- All nodes are now fully functional.
- A StableAnimator directory is preset in the root directory, and an `__init__.py` file has been added to it. Do not remove it; it ensures that sub-packages are referenced correctly.
- Workflow examples will be provided later...
Suggestions
- It is recommended to use ComfyUI - VideoHelperSuite to export video frames and synthesize videos. Refer to: https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.
- On personal devices (devices with smaller video memory), it is recommended to run the processes of exporting bone maps and generating action videos separately.
Installation
- Clone this project into ComfyUI/custom_nodes.
- Clone StableAnimator into ComfyUI/custom_nodes/ComfyUI_StableAnimator/StableAnimator.
- Install the dependencies by following the steps in the StableAnimator project's README. Refer to: https://github.com/Francis-Rings/StableAnimator.
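Assuming a git-based setup, the three steps above might look like the following sketch. The URL of this node pack is left as a placeholder (substitute the actual repository URL), and the `requirements.txt` install assumes the upstream StableAnimator repo ships one; otherwise follow its README exactly.

```shell
# Sketch of the installation steps above; run from your ComfyUI root.
cd custom_nodes

# 1. Clone this project into ComfyUI/custom_nodes
#    (<ComfyUI_StableAnimator-repo-URL> is a placeholder for this project's URL)
git clone <ComfyUI_StableAnimator-repo-URL> ComfyUI_StableAnimator

# 2. Clone the upstream StableAnimator into the node pack
git clone https://github.com/Francis-Rings/StableAnimator.git \
    ComfyUI_StableAnimator/StableAnimator

# 3. Install dependencies as described in the StableAnimator README
#    (assumes the repo provides a requirements.txt)
pip install -r ComfyUI_StableAnimator/StableAnimator/requirements.txt
```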
Models Local Folder
```
stable_animator
  /Animation
    face_encoder.pth
    pose_net.pth
    unet.pth
  /antelopev2
    1k3d68.onnx
    2d106det.onnx
    genderage.onnx
    glintr100.onnx
    scrfd_10g_bnkps.onnx
  /DWPose    (optional: skip if you use the ControlNet DW Preprocessor node from comfyui_controlnet_aux)
    dw-ll_ucoco_384.onnx
    yolox_l.onnx
  /stable-video-diffusion-img2vid-xt
    svd_xt_image_decoder.safetensors
    svd_xt.safetensors
    model_index.json
    /feature_extractor
      preprocessor_config.json
    /image_encoder
      config.json
      model.fp16.safetensors
      model.safetensors
    /scheduler
      scheduler_config.json
    /unet
      config.json
      diffusion_pytorch_model.fp16.safetensors
      diffusion_pytorch_model.safetensors
    /vae
      config.json
      diffusion_pytorch_model.fp16.safetensors
      diffusion_pytorch_model.safetensors
```
Add a `stable_animator: <your stable_animator model storage root directory>` entry to ComfyUI's `extra_model_paths.yaml`.
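For example, the entry in `extra_model_paths.yaml` might look like the fragment below. The path is a placeholder for your actual model root directory, not a required location.

```yaml
# ComfyUI/extra_model_paths.yaml
# The path below is an example placeholder -- point it at your own
# stable_animator model storage root (the folder shown in the tree above).
stable_animator: /path/to/your/stable_animator
```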
Reward
Our team's reward code:
<img src="images/20250219-203952.png" alt="Our team's reward code" width="300">