Nodes Browser

ComfyDeploy: How does ComfyUI-AnimateAnyone-Evolved work in ComfyUI?

What is ComfyUI-AnimateAnyone-Evolved?

Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate stylized video. The current goal of this project is to achieve a desired pose2video result at 1+ FPS on GPUs equal to or better than an RTX 3080! 🚀 [w/The torch environment may be compromised due to version issues, as some torch-related packages are reinstalled.]

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select the Edit build steps
  3. Add a new step -> Custom Node
  4. Search for ComfyUI-AnimateAnyone-Evolved and select it
  5. Close the build step dialog and then click on the "Save" button to rebuild the machine

ComfyUI-AnimateAnyone-Evolved

Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate stylized video. The current goal of this project is to achieve a desired pose2video result at 1+ FPS on GPUs equal to or better than an RTX 3080! 🚀

<video controls autoplay loop src="https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/assets/62230687/572eaa8d-6011-42dc-9ac5-9bbd86e4ac9d" muted="false"></video>

Currently Support

Roadmap

  • ✅ Implement the components (Residual CFG) proposed in StreamDiffusion (Estimated speed up: 2X)
    • Result:
      The generated result is not good enough when using the DDIM Scheduler together with RCFG, even though it speeds up the generation process by about 4X. In StreamDiffusion, RCFG works with LCM, which could also be the case here, so it is kept in another branch for now (a conceptual sketch of the RCFG idea follows this list).
  • ⬜ Incorporate the implementation & pre-trained models from Open-AnimateAnyone & AnimateAnyone once they are released
  • ⬜ Convert the model using stable-fast (Estimated speed up: 2X)
  • ⬜ Train an LCM LoRA for the denoising UNet (Estimated speed up: 5X)
  • ⬜ Train a new model using a better dataset to improve result quality (Optional, we'll see if there is any need for me to do it ;)
  • Continuous research, always moving towards something better & faster 🚀
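
The gist of Residual CFG, and why it can roughly halve per-step cost, can be sketched as below. This is a conceptual Python illustration only, not this repo's implementation: the unet callable, guidance_scale, and the cached eps_residual term are placeholders, and the variant shown simply reuses a cached noise estimate in place of the unconditional UNet pass that standard CFG pays for at every step.

    def standard_cfg(unet, x_t, t, cond, uncond, guidance_scale):
        # Standard classifier-free guidance: two UNet evaluations per step.
        eps_cond = unet(x_t, t, cond)
        eps_uncond = unet(x_t, t, uncond)
        return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    def residual_cfg(unet, x_t, t, cond, guidance_scale, eps_residual):
        # Residual CFG (conceptual sketch): one UNet evaluation per step.
        # The unconditional prediction is replaced by a cached "virtual
        # residual" noise (eps_residual), e.g. derived once from the input
        # latent, so the negative branch no longer costs a forward pass.
        eps_cond = unet(x_t, t, cond)
        return eps_residual + guidance_scale * (eps_cond - eps_residual)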

Install (You can also use ComfyUI Manager)

  1. Clone this repo into Your_ComfyUI_root_directory\ComfyUI\custom_nodes\ and install the required Python packages:
    cd Your_ComfyUI_root_directory\ComfyUI\custom_nodes\
    
    git clone https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved.git
    
    pip install -r requirements.txt
    
    # If you get an error regarding diffusers, run:
    pip install --force-reinstall "diffusers>=0.26.1"
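
    # Optional (not part of the original instructions): verify which
    # diffusers version is now installed; this only prints the version.
    python -c "import diffusers; print(diffusers.__version__)"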
    
  2. Download pre-trained models:
    ./pretrained_weights/
    |-- denoising_unet.pth
    |-- motion_module.pth
    |-- pose_guider.pth
    |-- reference_unet.pth
    `-- stable-diffusion-v1-5
        |-- feature_extractor
        |   `-- preprocessor_config.json
        |-- model_index.json
        |-- unet
        |   |-- config.json
        |   `-- diffusion_pytorch_model.bin
        `-- v1-inference.yaml
    
    • Download a CLIP image encoder (e.g. sd-image-variations-diffusers) and put it under Your_ComfyUI_root_directory\ComfyUI\models\clip_vision
    • Download a VAE (e.g. sd-vae-ft-mse) and put it under Your_ComfyUI_root_directory\ComfyUI\models\vae (a scripted download sketch follows this list)
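
If you prefer to script the two downloads above, here is a minimal Python sketch using huggingface_hub. The repo IDs (lambdalabs/sd-image-variations-diffusers, stabilityai/sd-vae-ft-mse), the image_encoder pattern, and the target paths are assumptions based on the model names mentioned above, so adjust them to your setup:

    # Minimal sketch; assumes `pip install huggingface_hub` has been run.
    from pathlib import Path
    from huggingface_hub import snapshot_download

    comfy_root = Path("Your_ComfyUI_root_directory/ComfyUI")  # adjust to your install

    # CLIP image encoder -> models/clip_vision
    snapshot_download(
        repo_id="lambdalabs/sd-image-variations-diffusers",  # assumed repo ID
        allow_patterns=["image_encoder/*"],
        local_dir=comfy_root / "models" / "clip_vision" / "sd-image-variations-diffusers",
    )

    # VAE -> models/vae
    snapshot_download(
        repo_id="stabilityai/sd-vae-ft-mse",  # assumed repo ID
        local_dir=comfy_root / "models" / "vae" / "sd-vae-ft-mse",
    )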