ComfyDeploy: How does ComfyUI_Pic2Story work in ComfyUI?
What is ComfyUI_Pic2Story?
You can use Pic2Story in ComfyUI.
How to install it in ComfyDeploy?
Head over to the machine page:
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for ComfyUI_Pic2Story and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
ComfyUI_Pic2Story
A simple ComfyUI node based on the BLIP method, with an image-to-text ("Image to Txt") function.
Original model: link
Model used: link
1. Installation
1.1 In the .\ComfyUI\custom_nodes directory, run the following:
git clone https://github.com/smthemex/ComfyUI_Pic2Story.git
1.2 Use the repo_id (online) or an offline download:
repo_id: abhijit2111/Pic2Story
offline download: link
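For reference, loading this model outside of ComfyUI looks roughly like the sketch below, assuming the repo_id hosts standard BLIP weights loadable with the Hugging Face transformers classes; the local folder path is a hypothetical example, not something the node requires.

```python
# Minimal sketch of loading the captioning model with Hugging Face transformers.
# Assumption: the repo_id serves standard BLIP weights; the node's actual loading
# code may differ. The local folder path below is purely illustrative.
from transformers import BlipProcessor, BlipForConditionalGeneration

repo_or_path = "abhijit2111/Pic2Story"        # online: download by repo_id
# repo_or_path = "./models/Pic2Story"         # offline: a locally downloaded copy

processor = BlipProcessor.from_pretrained(repo_or_path)
model = BlipForConditionalGeneration.from_pretrained(repo_or_path)
```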
2. Example
A prompt is not necessary and can be omitted.
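Continuing the sketch above, this is roughly what captioning with and without a prompt looks like at the BLIP level, which is why the prompt input is optional; the image path and prompt text are placeholders.

```python
from PIL import Image

image = Image.open("example.jpg").convert("RGB")   # placeholder input image

# Without a prompt: plain image-to-text captioning.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(out[0], skip_special_tokens=True))

# With an optional prompt: the text is used as a prefix to steer the caption.
inputs = processor(images=image, text="a story about", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(out[0], skip_special_tokens=True))
```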
3. My ComfyUI node list:
1. ParlerTTS node: ComfyUI_ParlerTTS
2. Llama3_8B node: ComfyUI_Llama3_8B
3. HiDiffusion node: ComfyUI_HiDiffusion_Pro
4. ID_Animator node: ComfyUI_ID_Animator
5. StoryDiffusion node: ComfyUI_StoryDiffusion
6. Pops node: ComfyUI_Pops
7. stable-audio-open-1.0 node: ComfyUI_StableAudio_Open
8. GLM4 node: ComfyUI_ChatGLM_API
9. CustomNet node: ComfyUI_CustomNet
10. Pipeline_Tool node: ComfyUI_Pipeline_Tool
11. Pic2Story node: ComfyUI_Pic2Story
12. PBR_Maker node: ComfyUI_PBR_Maker
13. ComfyUI_Streamv2v_Plus node: ComfyUI_Streamv2v_Plus
14. ComfyUI_MS_Diffusion node: ComfyUI_MS_Diffusion
15. ComfyUI_AnyDoor node: ComfyUI_AnyDoor
16. ComfyUI_Stable_Makeup node: ComfyUI_Stable_Makeup
17. ComfyUI_EchoMimic node: ComfyUI_EchoMimic
18. ComfyUI_FollowYourEmoji node: ComfyUI_FollowYourEmoji
19. ComfyUI_Diffree node: ComfyUI_Diffree
20. ComfyUI_FoleyCrafter node: ComfyUI_FoleyCrafter
4. Citation
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}