Nodes Browser

ComfyDeploy: How does ComfyUI_HelloMeme work in ComfyUI?

What is ComfyUI_HelloMeme?

This repository is the official implementation of the [HelloMeme](https://arxiv.org/pdf/2410.22901) ComfyUI interface ('HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models'), featuring both image and video generation functionalities. Example workflow files can be found in the ComfyUI_HelloMeme/workflows directory, and test images and videos are saved in the ComfyUI_HelloMeme/examples directory. Screenshots of the image and video generation interfaces appear further below.

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select the "Edit build steps" option
  3. Add a new step -> Custom Node
  4. Search for ComfyUI_HelloMeme and select it
  5. Close the build step dialog, then click on the "Save" button to rebuild the machine (for a local, non-ComfyDeploy install, see the sketch below)
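The steps above apply to ComfyDeploy's build UI. On a plain local ComfyUI setup, this node installs like any other custom node: clone it into ComfyUI/custom_nodes and install its Python dependencies. The following is a minimal sketch only; the repository URL and the presence of a requirements.txt are assumptions based on the standard custom-node layout, not details confirmed on this page.

```python
# Minimal sketch of a local (non-ComfyDeploy) install of ComfyUI_HelloMeme.
# Assumptions: the node lives at the GitHub URL below and ships a
# requirements.txt, as most ComfyUI custom nodes do.
import subprocess
import sys
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # adjust to your ComfyUI checkout
REPO_URL = "https://github.com/HelloVision/ComfyUI_HelloMeme"  # assumed URL

node_dir = COMFYUI_ROOT / "custom_nodes" / "ComfyUI_HelloMeme"
if not node_dir.exists():
    subprocess.run(["git", "clone", REPO_URL, str(node_dir)], check=True)

requirements = node_dir / "requirements.txt"
if requirements.exists():
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", str(requirements)],
        check=True,
    )
# Restart ComfyUI afterwards so the new nodes are picked up.
```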
<h1 align='center'>HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models</h1>
<div align='center'>
  <a href='https://github.com/songkey' target='_blank'>Shengkai Zhang</a>,
  <a href='https://github.com/RhythmJnh' target='_blank'>Nianhong Jiao</a>,
  <a href='https://github.com/Shelton0215' target='_blank'>Tian Li</a>,
  <a href='https://github.com/chaojie12131243' target='_blank'>Chaojie Yang</a>,
  <a href='https://github.com/xchgit' target='_blank'>Chenhui Xue</a><sup>*</sup>,
  <a href='https://github.com/boya34' target='_blank'>Boya Niu</a><sup>*</sup>,
  <a href='https://github.com/HelloVision/HelloMeme' target='_blank'>Jun Gao</a>
</div>
<div align='center'>HelloVision | HelloGroup Inc.</div>
<div align='center'><small><sup>*</sup> Intern</small></div>
<br>
<div align='center'>
  <a href='https://songkey.github.io/hellomeme/'><img src='https://img.shields.io/badge/Project-HomePage-Green'></a>
  <a href='https://arxiv.org/pdf/2410.22901'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
  <a href='https://huggingface.co/songkey'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
  <a href='https://github.com/HelloVision/HelloMeme'><img src='https://img.shields.io/badge/GitHub-Code-blue'></a>
</div>

🔆 New Features/Updates

  • ✅ 11/14/2024 Added the HMControlNet2 module, which uses the PD-FGC motion module to extract facial expression information (drive_exp2); restructured the ComfyUI interface; and optimized VRAM usage.

    YouTube Demo

  • ✅ 11/12/2024 Added a newly fine-tuned version of Animatediff with a patch size of 12, which uses less VRAM (Tested on 2080Ti).

  • ✅ 11/11/2024 ~~Optimized VRAM usage and added HMVideoSimplePipeline (workflows/hellomeme_video_simple_workflow.json), which doesn't use Animatediff and can run on machines with less than 12G VRAM.~~

  • ✅ 11/6/2024 The face proportion in the reference image significantly affects the generation quality. We have encapsulated the recommended image cropping method used during training into a CropReferenceImage Node. Refer to the workflows in the ComfyUI_HelloMeme/workflows directory: hellomeme_video_cropref_workflow.json and hellomeme_image_cropref_workflow.json. A rough sketch of the cropping idea follows this list.
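The CropReferenceImage Node encapsulates the exact crop used during training, whose parameters are not documented here. Purely as an illustration of the underlying idea (keeping the face at a fixed proportion of the frame), here is a hypothetical helper; the target ratio, the square crop, and the face box you pass in are all assumptions, not the node's actual logic.

```python
# Hypothetical illustration of face-proportion cropping, NOT the actual
# CropReferenceImage implementation. Given a face bounding box, expand it so
# the face occupies roughly `face_ratio` of a square crop, then clamp the
# crop to the image bounds.
from PIL import Image


def crop_around_face(img: Image.Image,
                     face_box: tuple[int, int, int, int],
                     face_ratio: float = 0.5) -> Image.Image:
    x0, y0, x1, y1 = face_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    face_size = max(x1 - x0, y1 - y0)
    crop_size = face_size / face_ratio  # square crop side length
    half = crop_size / 2
    left = max(0, int(cx - half))
    top = max(0, int(cy - half))
    right = min(img.width, int(cx + half))
    bottom = min(img.height, int(cy + half))
    return img.crop((left, top, right, bottom))

# Usage: feed the cropped image to the reference-image input of the workflow;
# face_box would come from whatever face detector you already use.
```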

Introduction

This repository is the official implementation of the HelloMeme ComfyUI interface, featuring both image and video generation functionalities. Example workflow files can be found in the ComfyUI_HelloMeme/workflows directory. Test images and videos are saved in the ComfyUI_HelloMeme/examples directory. Below are screenshots of the interfaces for image and video generation.

> [!NOTE]
> Custom models should be placed in the directories listed below.

Checkpoints under: ComfyUI/models/checkpoints

Loras under: ComfyUI/models/loras
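Before loading the example workflows, it can help to confirm that your models are actually in those folders. A minimal sketch, assuming the default ComfyUI folder layout (adjust the root path to your install):

```python
# Quick sanity check that custom models are where ComfyUI expects them.
# Assumes the default folder layout; COMFYUI_ROOT is yours to adjust.
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")

for kind, subdir in [("checkpoints", "models/checkpoints"),
                     ("loras", "models/loras")]:
    folder = COMFYUI_ROOT / subdir
    files = (sorted(p.name for p in folder.iterdir() if p.is_file())
             if folder.exists() else [])
    print(f"{kind}: {folder} -> {files or 'missing or empty'}")
```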

Image Generation Interface

<p align="center"> <img src="workflows/hellomeme_image_example.png" alt="image_generation_interface"> </p>

Video Generation Interface

<p align="center"> <img src="workflows/hellomeme_video_example.png" alt="video_generation_interface"> </p>