ComfyDeploy: How does ComfyUI-Prompt-MZ work in ComfyUI?
What is ComfyUI-Prompt-MZ?
A set of llama.cpp-powered nodes for prompt-related work, including prompt beautification and image interrogation.
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for `ComfyUI-Prompt-MZ` and select it
- Close the build step dialog, then click on the "Save" button to rebuild the machine
ComfyUI-Prompt-MZ
Prompt-related nodes based on llama.cpp, currently including prompt beautification and clip-interrogator-style image interrogation.
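At its core, the project wraps llama.cpp (via llama-cpp-python) so that a local GGUF model can rewrite a short idea into a richer prompt. The snippet below is only a minimal sketch of that idea, not the node's actual implementation; the GGUF path and system instruction are placeholders:

```python
# Minimal sketch of llama.cpp-based prompt "beautification" with llama-cpp-python.
# The model path and system instruction are illustrative placeholders, not the
# defaults used by ComfyUI-Prompt-MZ.
from llama_cpp import Llama

llm = Llama(model_path="models/Phi-3-mini-4k-instruct-q4.gguf", n_ctx=2048, verbose=False)

result = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Expand the user's short idea into a detailed Stable Diffusion prompt."},
        {"role": "user", "content": "a cat in the rain"},
    ],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```

Inside ComfyUI, the CLIPTextEncode nodes listed under Nodes below run this kind of generation and feed the resulting text into CLIP for conditioning.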
Recent changes
- [2024-06-22] Added Florence-2-large image interrogation model node
- [2024-06-20] Added a node to select local ollama models
- [2024-06-05] Added Qwen 2.0 preset model
- [2024-06-05] Optional chat_format; post-processing for image interrogation
- [2024-06-04] Added some preset models
- [2024-06-04] Added a universal node with support for manual model selection
- [2024-05-30] Added ImageCaptionerConfig node to support batch prompt generation
- [2024-05-24] Display the generated prompt on the node after running
- [2024-05-24] Compatible with the Zhipu API
- [2024-05-24] Use A1111 weight scaling, thanks to ComfyUI_ADV_CLIP_emb
- [2024-05-13] Added OpenAI API node
- [2024-04-30] Support for custom instructions
- [2024-04-30] Added llava-v1.6-vicuna-13b
- [2024-04-30] Added translation
- [2024-04-28] Added Phi-3-mini node
Installation
- Clone this repo into the `custom_nodes` folder.
- Restart ComfyUI.
Nodes
- MZ_Florence2CLIPTextEncode
- ModelConfigManualSelect (Ollama)
- CLIPTextEncode (LLamaCPP Universal)
- ModelConfigManualSelect (LLamaCPP)
- ModelConfigDownloaderSelect (LLamaCPP)
- CLIPTextEncode (ImageInterrogator)
- ModelConfigManualSelect (ImageInterrogator)
- ModelConfigDownloaderSelect (ImageInterrogator)
- CLIPTextEncode (OpenAI API)
- CLIPTextEncode (Phi-3)
- CLIPTextEncode (LLama3)
- ImageInterrogator (LLava)
  - Enable the sd_format parameter
- ImageCaptionerConfig
- LLamaCPPOptions
- CustomizeInstruct
- BaseLLamaCPPCLIPTextEncode (you can directly pass in the model path)
- BaseLLavaImageInterrogator (you can directly pass in the model path)
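For the ImageInterrogator family, the underlying mechanism is a llama.cpp multimodal (LLaVA) call. A rough sketch with llama-cpp-python follows; the model and mmproj file names and the instruction text are placeholders, not what the nodes ship with:

```python
# Rough sketch of clip-interrogator-style image interrogation with a LLaVA GGUF
# model via llama-cpp-python. File names and the instruction are placeholders.
import base64
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

def image_to_data_uri(path: str) -> str:
    """Encode a local image as a base64 data URI for the chat API."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

llm = Llama(
    model_path="models/llava-v1.5-7b.Q4_K_M.gguf",  # placeholder GGUF
    chat_handler=Llava15ChatHandler(clip_model_path="models/mmproj-model-f16.gguf"),
    n_ctx=4096,   # enlarged context to leave room for the image embedding
    verbose=False,
)

result = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_to_data_uri("input.png")}},
            {"type": "text", "text": "Describe this image as a Stable Diffusion prompt."},
        ],
    }],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```

The nodes wrap this kind of call behind the graph interface, so model selection and options (LLamaCPPOptions, CustomizeInstruct) can be wired up in the workflow instead of in code.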
FAQ
module 'llama_cpp' has no attribute 'LLAMA_SPLIT_MODE_LAYER'
Upgrade llama_cpp_python to the latest version; go to https://github.com/abetlen/llama-cpp-python/releases to download and install it.
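To confirm the upgrade actually landed in the Python environment ComfyUI uses, a quick check like this (a sketch; run it with that environment's interpreter) prints the installed version and whether the missing attribute is now present:

```python
# Sanity check after upgrading llama_cpp_python: print the version and verify
# the attribute the error complains about is available.
import llama_cpp

print(llama_cpp.__version__)
print(hasattr(llama_cpp, "LLAMA_SPLIT_MODE_LAYER"))  # expected True on recent versions
```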
Failed to load shared library LLama.dll
Switch the CUDA version to 12.1. If you use the 秋叶 (Aki) launcher: Advanced Settings -> Environment Maintenance -> Install PyTorch -> select the CUDA 12.1 version.
...llama_cpp_python-0.2.63-cp310-cp310-win_amd64.whl returned non-zero exit status
Make sure your network connection is stable (use a proxy if necessary), or install llama_cpp_python manually.
Credits
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/ggerganov/llama.cpp
- https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
Star History
[![Star History Chart](https://api.star-history.com/svg?repos=MinusZoneAI/ComfyUI-Prompt-MZ&type=Date)](https://star-history.com/#MinusZoneAI/ComfyUI-Prompt-MZ&Date)
Contact
- WeChat (绿泡泡): minrszone
- Bilibili: minus_zone
- Xiaohongshu (小红书): MinusZoneAI
- Afdian (爱发电): MinusZoneAI