
ComfyDeploy: How does ComfyUI_MagicClothing work in ComfyUI?

What is ComfyUI_MagicClothing?

An implementation of MagicClothing in ComfyUI, driven by a garment image and a text prompt

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select the Edit build steps
  3. Add a new step -> Custom Node
  4. Search for ComfyUI_MagicClothing and select it
  5. Close the build step dialog, then click the "Save" button to rebuild the machine

Updates:

  • ✅ [2024/04/17] Cloth-image-plus-prompt generation only
  • ✅ [2024/04/18] IPAdapter FaceID with human face detection, synthesized with cloth image generation
  • ✅ [2024/04/18] IPAdapter FaceID with ControlNet openpose, synthesized with cloth image generation
  • ✅ [2024/04/19] Lower-body and full-body models for preliminary experiments
  • ✅ [2024/04/26] AnimateDiff and cloth inpainting are now supported

You can contact me on Twitter or WeChat (Weixin: GalaticKing)

the main workflow


IPAdapter FaceID workflow


IPAdapter FaceID chained with ControlNet openpose workflow


lower-body and full-body workflow


full-body workflow with IPAdapter FaceID


cloth inpainting workflow


AnimateDiff workflow

<div align="left"> <img src="https://github.com/frankchieng/ComfyUI_MagicClothing/assets/130369523/680f55a0-d4b3-4e85-9c07-86a81e2e5fc9" width="15%"> <img src="https://github.com/frankchieng/ComfyUI_MagicClothing/assets/130369523/2b5580bc-afe9-40e8-8a08-43a2629fbf2d" width="15%"> </div>

Run the following under the custom_nodes directory of ComfyUI:

```shell
git clone https://github.com/frankchieng/ComfyUI_MagicClothing.git
```

then install the Python dependencies:

```shell
pip install -r requirements.txt
```

Download cloth_segm.pth, magic_clothing_768_vitonhd_joint.safetensors (upper-body model), and OMS_1024_VTHD+DressCode_200000.safetensors (lower-body and full-body model) from 🤗 Huggingface and place them in the checkpoints directory. If you want to run AnimateDiff, you should also place garment_extractor.safetensors and ip_layer.pth in the checkpoints/stable_ckpt directory.
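The expected checkpoint directories can be created up front before downloading; a minimal sketch (directory names are taken from this README, the base path is an assumption you should adjust to your install):

```python
import os

# Base path of the custom node inside ComfyUI (assumed; adjust to your install)
base = "ComfyUI/custom_nodes/ComfyUI_MagicClothing"

# Directories this README expects the model files to live in
for sub in ("checkpoints", "checkpoints/stable_ckpt"):
    os.makedirs(os.path.join(base, sub), exist_ok=True)
```

After running this, drop the downloaded .pth and .safetensors files into the matching directories.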

Experiment with different combinations of hyperparameters, especially when running inference with the lower-body and full-body models; these are still experimental.

If you want to try IPAdapter FaceID, install the ComfyUI_IPAdapter_plus custom node first, then download the IPAdapter FaceID models from IP-Adapter-FaceID and place them according to the placement structure below. For cloth inpainting, I used the Segment Anything node, but you can use any other SOTA segmentation model to separate the cloth from the background.

Tip: to run the ControlNet openpose part, you also have to install the comfyui_controlnet_aux custom node, then download body_pose_model.pth, facenet.pth and hand_pose_model.pth from the openpose models and place them in custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators.
```
ComfyUI
|-- models
|   |-- ipadapter
|   |   |-- ip-adapter-faceid-plus_sd15.bin
|   |   |-- ip-adapter-faceid-plusv2_sd15.bin
|   |   |-- ip-adapter-faceid_sd15.bin
|   |-- loras
|   |   |-- ip-adapter-faceid-plus_sd15_lora.safetensors
|   |   |-- ip-adapter-faceid-plusv2_sd15_lora.safetensors
|   |   |-- ip-adapter-faceid_sd15_lora.safetensors
|-- custom_nodes
|   |-- ComfyUI_MagicClothing
|   |   |-- checkpoints
|   |   |   |-- cloth_segm.pth
|   |   |   |-- magic_clothing_768_vitonhd_joint.safetensors
|   |   |   |-- OMS_1024_VTHD+DressCode_200000.safetensors
|   |   |   |-- stable_ckpt
|   |   |   |   |-- garment_extractor.safetensors
|   |   |   |   |-- ip_layer.pth
```
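The placement above can be checked before launching ComfyUI; a small sketch, with the file list copied from the tree (the ComfyUI root path passed in is an assumption):

```python
import os

# Model files this node expects, relative to the ComfyUI root (paths from the tree above)
EXPECTED = [
    "models/ipadapter/ip-adapter-faceid-plus_sd15.bin",
    "models/ipadapter/ip-adapter-faceid-plusv2_sd15.bin",
    "models/ipadapter/ip-adapter-faceid_sd15.bin",
    "models/loras/ip-adapter-faceid-plus_sd15_lora.safetensors",
    "models/loras/ip-adapter-faceid-plusv2_sd15_lora.safetensors",
    "models/loras/ip-adapter-faceid_sd15_lora.safetensors",
    "custom_nodes/ComfyUI_MagicClothing/checkpoints/cloth_segm.pth",
    "custom_nodes/ComfyUI_MagicClothing/checkpoints/magic_clothing_768_vitonhd_joint.safetensors",
    "custom_nodes/ComfyUI_MagicClothing/checkpoints/OMS_1024_VTHD+DressCode_200000.safetensors",
    "custom_nodes/ComfyUI_MagicClothing/checkpoints/stable_ckpt/garment_extractor.safetensors",
    "custom_nodes/ComfyUI_MagicClothing/checkpoints/stable_ckpt/ip_layer.pth",
]

def missing_files(comfyui_root):
    """Return the expected model files that are not present under comfyui_root."""
    return [p for p in EXPECTED if not os.path.isfile(os.path.join(comfyui_root, p))]

if __name__ == "__main__":
    for path in missing_files("ComfyUI"):  # assumed root; adjust to your install
        print("missing:", path)
```

Anything the script prints still needs to be downloaded and placed as described above.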