ComfyDeploy: How Customizable API Call Nodes by BillBum Works in ComfyUI

What is Customizable API Call Nodes by BillBum?

API call nodes for third-party platforms, both official and local. Supports VLMs, LLMs, DALL-E 3, Flux Pro, SD3, etc., plus some small utility tools: image to base64 URL, base64 URL to image, base64 URL to base64 data, regex-reduce LLM text to words and ',' only, etc.
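
The "regex-reduce text to words and ',' only" utility presumably strips an LLM reply (punctuation, line breaks, filler) down to a clean comma-separated word list, which is handy for prompt tags. A minimal sketch of that idea, assuming a simple word-extraction regex (the actual node's pattern may differ):

```python
import re

def text_to_words_and_commas(text: str) -> str:
    """Reduce free-form LLM output to words separated by commas only.

    Hypothetical re-implementation of the pack's regex-cleanup helper;
    the real node's exact regex and separator are assumptions here.
    """
    # Keep only alphanumeric "words" (apostrophes allowed), drop punctuation.
    words = re.findall(r"[A-Za-z0-9']+", text)
    return ", ".join(words)

print(text_to_words_and_commas("A cat; sitting on a mat!"))
# → A, cat, sitting, on, a, mat
```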

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select the Edit build steps
  3. Add a new step -> Custom Node
  4. Search for Customizable API Call Nodes by BillBum and select it
  5. Close the build step dialog and then click on the "Save" button to rebuild the machine

Introduction

BillBum Modified ComfyUI Nodes is a set of nodes the author built for calling APIs from ComfyUI, including DALL-E, OpenAI's LLMs, other LLM API platforms, and other image-generation APIs.

Features

  • Text Generation: call an LLM through an API for text generation; structured responses are not working yet.
  • Image Generation: generate images through an API; supports DALL-E, Flux 1.1 Pro, etc.
  • Vision LM: caption or describe an image through an API; requires a vision-capable model.
  • Little tools: base64 URL to base64 data, base64 URL to IMAGE, IMAGE to base64 URL, reduce LLM text to words and "," only, etc.
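
The base64 helpers above shuttle image data between the node graph and HTTP APIs. A rough sketch of the three conversions using only the standard library (the real nodes operate on ComfyUI IMAGE tensors, so these function names and signatures are illustrative assumptions):

```python
import base64

def image_bytes_to_b64_url(data: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes in a base64 data URL ('IMAGE to b64 url' idea)."""
    return f"data:{mime};base64," + base64.b64encode(data).decode("ascii")

def b64_url_to_b64_data(url: str) -> str:
    """Strip the 'data:<mime>;base64,' prefix, leaving bare base64 data."""
    return url.split("base64,", 1)[1]

def b64_url_to_image_bytes(url: str) -> bytes:
    """Decode a base64 data URL back into raw image bytes."""
    return base64.b64decode(b64_url_to_b64_data(url))

# Round-trip sanity check with placeholder bytes standing in for a PNG.
raw = b"\x89PNG fake bytes"
url = image_bytes_to_b64_url(raw)
assert b64_url_to_image_bytes(url) == raw
```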

Update

  • Added a use_jailbreak option for the Vision LM API node. If your caption task is rejected due to NSFW content, try enabling use_jailbreak. Tested models:
    • llama-3.2-11b
    • llama-3.2-90b
    • gemini-1.5-flash
    • gemini-1.5-pro
    • pixtral-12b-latest
  • Added an Image API Call Node. Theoretically you can request and test any t2i model API with this node.
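
Because the Image API Call Node aims to work against arbitrary t2i endpoints, it presumably just assembles a JSON POST with the provider's base URL, API key, model name, and prompt. A hedged sketch of that request-building step, assuming an OpenAI-style endpoint and field names (the URL, key, and payload fields below are placeholders, not the node's actual defaults):

```python
import json
import urllib.request

def build_t2i_request(base_url: str, api_key: str, model: str, prompt: str,
                      **extra) -> urllib.request.Request:
    """Build a POST request for a generic OpenAI-style t2i endpoint.

    Field names (model, prompt, size, ...) are assumptions about a typical
    API; each provider may expect a different payload shape.
    """
    payload = {"model": model, "prompt": prompt, **extra}
    return urllib.request.Request(
        base_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Build (but don't send) a request against a hypothetical endpoint.
req = build_t2i_request("https://api.example.com/v1/images/generations",
                        "sk-placeholder", "flux-1.1-pro", "a cat",
                        size="1024x1024")
```

Sending it would be `urllib.request.urlopen(req)`; keeping payload assembly separate makes it easy to swap providers by changing only the URL and field names.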

Installation

In ComfyUI Manager Menu "Install via Git URL"

https://github.com/AhBumm/ComfyUI_BillBum_Nodes.git

Or search "billbum" in ComfyUI Manager

Install the requirements with ComfyUI's embedded Python:

pip install -r requirements.txt