ComfyDeploy: How does ComfyUI_HelloMeme work in ComfyUI?
What is ComfyUI_HelloMeme?
This repository is the official implementation of the [HelloMeme](https://arxiv.org/pdf/2410.22901) ComfyUI interface, featuring both image and video generation functionality. Example workflow files can be found in the ComfyUI_HelloMeme/workflows directory, and test images and videos are saved in the ComfyUI_HelloMeme/examples directory. Below are screenshots of the interfaces for image and video generation. NOTE: The underlying method is described in the paper "HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models".
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit" build steps option
- Add a new step -> Custom Node
- Search for `ComfyUI_HelloMeme` and select it
- Close the build step dialog, then click on the "Save" button to rebuild the machine
New Features/Updates
- 12/17/2024: Support ModelScope (ModelScope Demo).
- 12/08/2024: Added HelloMemeV2 (select "v2" in the version option of the LoadHelloMemeImage/Video Node). Its features include: a. Improved expression consistency between the generated video and the driving video. b. Better compatibility with third-party checkpoints (we will continuously collect compatible free third-party checkpoints and share them on this page; if you'd like to recommend one, please post it on the issue page or email us at songkey@pku.edu.cn). c. Reduced VRAM usage.
- 11/29/2024: a. Optimized the algorithm; b. Added VAE selection functionality; c. Introduced a super-resolution feature. YouTube Demo
- 11/14/2024: Added the `HMControlNet2` module, which uses the `PD-FGC` motion module to extract facial expression information (`drive_exp2`); restructured the ComfyUI interface; and optimized VRAM usage. YouTube Demo
- 11/12/2024: Added a newly fine-tuned version of `Animatediff` with a patch size of 12, which uses less VRAM (tested on a 2080 Ti).
- ~~11/11/2024: Optimized VRAM usage and added `HMVideoSimplePipeline` (`workflows/hellomeme_video_simple_workflow.json`), which doesn't use Animatediff and can run on machines with less than 12 GB of VRAM.~~
- 11/6/2024: The face proportion in the reference image significantly affects generation quality. We have encapsulated the recommended image cropping method used during training into a `CropReferenceImage` node (the general idea is sketched below). Refer to the workflows in the `ComfyUI_HelloMeme/workflows` directory: `hellomeme_video_cropref_workflow.json` and `hellomeme_image_cropref_workflow.json`.
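The exact cropping logic lives inside the `CropReferenceImage` node itself; the snippet below is only a rough, hypothetical illustration of the underlying idea (keeping the face at a consistent proportion of the reference image). The face box is assumed to come from whatever face detector you already use, and the 0.35 ratio is an arbitrary placeholder, not the value used in training.

```python
# Hypothetical sketch only -- NOT the CropReferenceImage node's actual logic.
# Illustrates the general idea: crop the reference image so the detected face
# occupies a consistent fraction of the frame. The face box (x0, y0, x1, y1)
# is assumed to come from an external face detector; face_ratio is a placeholder.
from PIL import Image

def crop_around_face(img: Image.Image, box, face_ratio: float = 0.35) -> Image.Image:
    """Return a crop in which the face box spans roughly `face_ratio` of the side."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0      # face centre
    side = max(x1 - x0, y1 - y0) / face_ratio      # desired crop side length
    left = max(0, int(cx - side / 2))
    top = max(0, int(cy - side / 2))
    right = min(img.width, int(cx + side / 2))
    bottom = min(img.height, int(cy + side / 2))
    return img.crop((left, top, right, bottom))

# Usage (paths and box values are placeholders):
# ref = Image.open("reference.png")
# cropped = crop_around_face(ref, box=(220, 180, 420, 400))
# cropped.save("reference_cropped.png")
```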
[!Note] Custom models should be placed in the directories listed below.
Checkpoints under: `ComfyUI/models/checkpoints`
Loras under: `ComfyUI/models/loras`
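Assuming a default ComfyUI installation, the resulting layout looks like this (the file names are placeholders for whatever checkpoints and LoRAs you use):

```
ComfyUI/
└── models/
    ├── checkpoints/
    │   └── realisticVisionV60B1_v51VAE.safetensors   # example third-party checkpoint
    └── loras/
        └── your_lora.safetensors                     # example LoRA
```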
Recommended Third-party Checkpoints/Loras
| Name | Checkpoints/Loras | Recommenders |
|------|-------------------|--------------|
| realisticVisionV60B1_v51VAE | Checkpoints | |
| disneyPixarCartoon_v10 | Checkpoints | |
Workflows
| workflow file | Video Generation | Image Generation | HMControlNet | HMControlNet2 | Super-Resolution |
|---------------|------------------|------------------|--------------|---------------|------------------|
| hellomeme_image_workflow.json | | ✔ | ✔ | | |
| hellomeme_video_workflow.json | ✔ | | ✔ | | |
| hellomeme_image_sr_workflow.json | | ✔ | ✔ | | ✔ |
| hellomeme_video_sr_workflow.json | ✔ | | ✔ | | ✔ |
| hellomeme_image_v2_workflow.json | | ✔ | | ✔ | |
| hellomeme_video_v2_workflow.json | ✔ | | | ✔ | |
| hellomeme_image_v2_sr_workflow.json | | ✔ | | ✔ | ✔ |
| hellomeme_video_v2_sr_workflow.json | ✔ | | | ✔ | ✔ |
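These workflows are normally loaded by dragging the JSON file onto the ComfyUI canvas. For programmatic use, a workflow can also be queued against a running ComfyUI instance over its HTTP API. The sketch below is a minimal example, assuming ComfyUI is listening on the default 127.0.0.1:8188 and that the workflow has been re-exported in API format (e.g. via "Save (API Format)" in the ComfyUI menu); the files shipped in ComfyUI_HelloMeme/workflows are UI-format graphs and are not sent to the API as-is.

```python
# Minimal sketch: queue a HelloMeme workflow on a local ComfyUI server.
# Assumes the JSON at WORKFLOW has been re-exported in API format and that
# ComfyUI is running at the default address 127.0.0.1:8188.
import json
import urllib.request

WORKFLOW = "ComfyUI_HelloMeme/workflows/hellomeme_image_workflow.json"  # API-format export

with open(WORKFLOW, "r", encoding="utf-8") as f:
    graph = json.load(f)

payload = json.dumps({"prompt": graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # On success, ComfyUI returns a JSON body containing the queued prompt_id.
    print(resp.read().decode("utf-8"))
```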