ComfyDeploy: How does ComfyUI Ollama work in ComfyUI?
What is ComfyUI Ollama?
Custom ComfyUI Nodes for interacting with [Ollama](https://ollama.com/) using the [ollama python client](https://github.com/ollama/ollama-python). Integrate the power of LLMs into ComfyUI workflows easily.
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for "ComfyUI Ollama" and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
ComfyUI Ollama
Custom ComfyUI Nodes for interacting with Ollama using the ollama python client.
Integrate the power of LLMs into ComfyUI workflows easily or just experiment with LLM inference.
To use this properly, you need a running Ollama server reachable from the host that is running ComfyUI.
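Before wiring up any nodes, it helps to confirm the Ollama server is actually reachable from the ComfyUI host. A minimal sketch, assuming the default Ollama port 11434 (the host name and helper function here are illustrative, not part of this repo):

```python
import socket

def ollama_reachable(host: str = "127.0.0.1", port: int = 11434,
                     timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Ollama port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: ollama_reachable("192.168.1.10") for a server on another machine.
```

If this returns False, check that the server is running (`ollama serve`) and that no firewall blocks the port.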
<a href="https://www.buymeacoffee.com/stavsapq" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="40" width="174"></a>
Installation
Install ollama server on the desired host
<a href="https://ollama.com/" target="_blank"><img src="https://img.shields.io/badge/v0.4.2-green.svg?style=for-the-badge&labelColor=gray&label=Ollama&color=blue" alt="Ollama"/></a>
<a href="https://ollama.com/download/Ollama-darwin.zip" target="_blank">Download for macOS</a>
<a href="https://ollama.com/download/OllamaSetup.exe" target="_blank">Download for Windows</a>
Install on Linux
curl -fsSL https://ollama.com/install.sh | sh
<a href="https://hub.docker.com/r/ollama/ollama" target="_blank">Docker Installation</a>
CPU only
docker run -d -p 11434:11434 -v ollama:/root/.ollama --name ollama ollama/ollama
NVIDIA GPU
docker run -d -p 11434:11434 --gpus=all -v ollama:/root/.ollama --name ollama ollama/ollama
Use the ComfyUI Manager "Custom Node Manager":
Search for "ollama" and select the one by stavsap.
Or:
- git clone into the `custom_nodes` folder inside your ComfyUI installation, or download as zip and unzip the contents to `custom_nodes/comfyui-ollama`
- pip install -r requirements.txt
- Start/restart ComfyUI
Nodes
OllamaVision
A node that can query input images.
The model name should be a model with vision capabilities, for example: https://ollama.com/library/llava.
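Under the hood, Ollama's `/api/generate` endpoint accepts base64-encoded images for vision models. A sketch of how such a request body can be assembled (the helper name is hypothetical; `llava` is the example model from above):

```python
import base64

def build_vision_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build a non-streaming /api/generate body with one attached image."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        # Ollama expects images as base64-encoded strings.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

# With a running server, the ollama python client sends an equivalent request:
#   from ollama import Client
#   Client(host="http://127.0.0.1:11434").generate(
#       model="llava", prompt="Describe this image.", images=[image_bytes])
```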
OllamaGenerate
A node that queries an LLM with a given prompt.
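In terms of the Ollama REST API, this corresponds to a plain `/api/generate` call. A minimal stdlib-only sketch, assuming a local server; the model name `llama3` is an example, not a requirement:

```python
import json
import urllib.request

def build_generate_body(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3",
             host: str = "http://127.0.0.1:11434") -> str:
    """Send the request and return the model's completion text."""
    data = json.dumps(build_generate_body(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # requires a running server
        return json.load(resp)["response"]
```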
OllamaGenerateAdvance
A node that queries an LLM with a given prompt, fine-tuning parameters, and the ability to preserve context for chained generation.
Check the ollama API docs for information on the parameters.
More params info
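For orientation, a sketch of what an "advanced" request body can look like: sampling options go in an `options` object, and the `context` token list returned by a previous generate call can be passed back to chain generations. The helper name and the specific option values are illustrative:

```python
def build_advanced_body(model: str, prompt: str, context=None) -> dict:
    """Build a /api/generate body with sampling options and optional context."""
    body = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0.8,  # sampling randomness
            "top_k": 40,         # restrict sampling to the k most likely tokens
            "top_p": 0.9,        # nucleus sampling threshold
            "num_predict": 256,  # maximum number of tokens to generate
        },
    }
    if context is not None:
        # Token context returned by a previous response; enables chaining.
        body["context"] = context
    return body
```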
Usage Example
Consider the following workflow: run vision on an image, then perform additional text processing with the desired LLM. In the OllamaGenerate node, set the prompt as input.
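The chaining step above amounts to feeding the vision node's text output into the generate node's prompt. A sketch of that glue in plain Python (function name and template are illustrative; model names in the comment are examples):

```python
def chain_prompt(vision_output: str,
                 template: str = "Rewrite as a vivid image prompt: {}") -> str:
    """Build the OllamaGenerate prompt from the OllamaVision output."""
    return template.format(vision_output.strip())

# With a running server and the ollama package, the full chain would be:
#   from ollama import Client
#   client = Client(host="http://127.0.0.1:11434")
#   caption = client.generate(model="llava", prompt="Describe this image.",
#                             images=[image_bytes])["response"]
#   result = client.generate(model="llama3",
#                            prompt=chain_prompt(caption))["response"]
```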
The custom Text Nodes in the examples can be found here: https://github.com/pythongosssss/ComfyUI-Custom-Scripts