Nodes Browser

ComfyDeploy: How does Custom nodes for llm chat with optional image input work in ComfyUI?

What is Custom nodes for llm chat with optional image input?

A custom node for ComfyUI that enables Large Language Model (LLM) chat interactions with optional image input support.

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select "Edit build steps"
  3. Add a new step -> Custom Node
  4. Search for Custom nodes for llm chat with optional image input and select it
  5. Close the build step dialog, then click the "Save" button to rebuild the machine

ComfyUI LLM Chat Node

A custom node for ComfyUI that enables Large Language Model (LLM) chat interactions with optional image input support. This node allows you to interact with various text and vision language models directly within your ComfyUI workflows.
<img src="assets/node.jpg" width="400"/>

Features

  • Support for both text-only and vision-enabled language models
  • Configurable model parameters (temperature, max tokens)
  • Optional image input support
  • Adjustable random seed for consistent outputs
  • Easy integration with existing ComfyUI workflows
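
For orientation, the features above map onto the standard ComfyUI custom-node interface: configurable inputs are declared in INPUT_TYPES, and the image is listed under "optional" so text-only models still work. The sketch below is illustrative only; the class name, defaults, and the stubbed LLM call are assumptions, not taken from this repository's source.

```python
# Minimal sketch of a ComfyUI chat node with these features (illustrative names).

class LLMChatNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True, "default": ""}),
                "model": ("STRING", {"default": "gpt-4o-mini"}),
                "temperature": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 2.0, "step": 0.05}),
                "max_tokens": ("INT", {"default": 512, "min": 1, "max": 8192}),
                "seed": ("INT", {"default": 0, "min": 0, "max": 2**31 - 1}),
            },
            "optional": {
                # The image input is optional, so the node also works text-only.
                "image": ("IMAGE",),
            },
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "chat"
    CATEGORY = "LLM"

    def chat(self, prompt, model, temperature, max_tokens, seed, image=None):
        # The real node would call the configured LLM backend here,
        # attaching the image (if connected) for vision-capable models.
        response_text = f"[stub] model={model}, has_image={image is not None}"
        return (response_text,)


# ComfyUI discovers nodes through these mappings in the package's __init__.py.
NODE_CLASS_MAPPINGS = {"LLMChatNode": LLMChatNode}
NODE_DISPLAY_NAME_MAPPINGS = {"LLMChatNode": "LLM Chat (optional image)"}
```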

Installation and Usage

  1. Navigate to the ComfyUI/custom_nodes folder in a terminal or command prompt.
  2. Clone the repo using the following command: git clone https://github.com/ComfyUI-Workflow/ComfyUI-OpenAI
  3. Go to custom_nodes/COMFYUI-LLM-API and install dependencies by running pip install -r requirements.txt
  4. Restart ComfyUI

Example use cases:

[node image]
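
A common use case is sending a prompt together with a generated or loaded image to a vision-capable model. The optional image input typically works by encoding the image and attaching it to the chat request only when it is connected. The sketch below shows that pattern against an OpenAI-compatible chat endpoint; the helper names, the use of the openai client, and the OPENAI_API_KEY environment variable are assumptions for illustration, not confirmed details of this node's implementation.

```python
# Hedged sketch: how a chat node commonly attaches an optional ComfyUI IMAGE
# tensor to an OpenAI-compatible vision request (illustrative, not this repo's code).
import base64
import io

import numpy as np
from PIL import Image
from openai import OpenAI


def image_tensor_to_data_url(image_tensor):
    """Convert a ComfyUI IMAGE tensor (B, H, W, C floats in 0..1) to a PNG data URL."""
    array = (image_tensor[0].cpu().numpy() * 255.0).clip(0, 255).astype(np.uint8)
    buffer = io.BytesIO()
    Image.fromarray(array).save(buffer, format="PNG")
    encoded = base64.b64encode(buffer.getvalue()).decode("utf-8")
    return f"data:image/png;base64,{encoded}"


def chat(prompt, model, temperature, max_tokens, image_tensor=None):
    # The text part is always sent; the image part is appended only when provided.
    content = [{"type": "text", "text": prompt}]
    if image_tensor is not None:
        content.append({
            "type": "image_url",
            "image_url": {"url": image_tensor_to_data_url(image_tensor)},
        })

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content
```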