ComfyDeploy: How does Comfyui-calbenodes work in ComfyUI?
What is Comfyui-calbenodes?
Nodes: CharacterManagerNode, FilmGrain, FlipFlopperSameArch
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for Comfyui-calbenodes and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
CalbeNodes
A collection of custom nodes created for personal use and convenience.
Table of Contents
- Installation
- Usage
- Nodes
- Contributing
- License

Installation
Git clone this repository into your ComfyUI `custom_nodes` folder.
I tried to make it so all requirements come bundled with ComfyUI, so hopefully no extra installs are needed.
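In practice that means cloning from inside ComfyUI's `custom_nodes` directory; a typical session looks like this (paths assume a default ComfyUI layout, and the repository URL is a placeholder — substitute the real one):

```shell
# Run from your ComfyUI installation root; substitute the real repository URL.
cd custom_nodes
git clone https://github.com/<user>/comfyui-calbenodes.git
# Restart ComfyUI so the new nodes are picked up.
```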
Usage
The nodes will appear under a calbenodes heading and can also be found via search.
Nodes
Character Manager
The Character Manager node is a versatile tool for managing and applying character-specific attributes in your image generation pipeline. It allows you to create, select, and apply character settings, including LoRA models, face images, and textual descriptions.
Features:
- Create and manage multiple characters
- Apply character-specific LoRA models
- Select preferred face images for characters
- Generate random face selections
- Create face image grids
- Apply character-specific activation text and descriptions
Inputs:
- `model`: The base model to apply character settings to
- `clip`: The CLIP model for text processing
- `character`: Select from existing characters, create a new one, or choose randomly
- `lora_strength`: Strength of the LoRA application (-10.0 to 10.0)
- `seed`: Random seed for consistent results
- `new_name`: Name for creating a new character
- `lora_path`: Path to the character's LoRA file
- `face_images_dir`: Directory containing character face images
- `preferred_face_image`: Path to the preferred face image
- `activation_text`: Text to activate the character in prompts
- `description`: Character description
- `negative_prompt`: Negative prompt for the character
Outputs:
- `model`: Updated model with applied LoRA
- `clip`: Updated CLIP model
- `lora_activation`: Character activation text
- `description`: Character description
- `negative_prompt`: Character-specific negative prompt
- `preferred_face`: Preferred face image (as tensor)
- `random_face`: Randomly selected face image (as tensor)
- `face_grid`: Grid of all character face images (as tensor)
- `character_name`: Name of the selected or created character
- `seed`: The seed used for this execution
Usage:
- Select an existing character or choose "New Character" to create one.
- If creating a new character, provide necessary information like name, LoRA path, and face images directory.
- Adjust the LoRA strength as needed.
- The node will apply the character settings and return the updated model along with character-specific information and images.
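As one concrete piece of this, the `face_grid` output can be built by tiling the character's face images into a near-square grid. A minimal NumPy sketch of that idea (the `make_face_grid` helper is illustrative, not the node's actual code, and assumes equally sized float images):

```python
import math
import numpy as np

def make_face_grid(faces):
    """Tile equally sized (H, W, C) face images into a near-square grid.

    Empty cells in the last row are left black.
    """
    cols = math.ceil(math.sqrt(len(faces)))
    rows = math.ceil(len(faces) / cols)
    h, w, c = faces[0].shape
    grid = np.zeros((rows * h, cols * w, c), dtype=faces[0].dtype)
    for i, face in enumerate(faces):
        r, col = divmod(i, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = face
    return grid

# Five 32x32 RGB "faces" (flat gray levels) -> a 2-row x 3-column grid.
faces = [np.full((32, 32, 3), i / 4, dtype=np.float32) for i in range(5)]
grid = make_face_grid(faces)  # shape (64, 96, 3)
```

The same layout logic works for any image count; the real node additionally loads the images from `face_images_dir` and returns the grid as a tensor.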
Film Grain
The Film Grain node adds a realistic film grain effect to images, simulating the appearance of traditional photographic film.
Features:
- Adds customizable film grain to images
- Supports batch processing of multiple images
- Adjustable grain intensity
Inputs:
- `image`: The input image or batch of images (IMAGE type)
- `intensity`: The strength of the film grain effect (FLOAT, range 0.01 to 1.0, default 0.07)
Outputs:
- `IMAGE`: The processed image(s) with added film grain
Usage:
- Connect an image or batch of images to the "image" input.
- Adjust the "intensity" parameter to control the strength of the film grain effect.
- The node will output the processed image(s) with the film grain applied.
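The README doesn't show the implementation, but additive Gaussian noise scaled by `intensity` is a common way to model film grain. A minimal NumPy sketch of that idea (the `add_film_grain` helper is illustrative, not the node's actual code):

```python
import numpy as np

def add_film_grain(image, intensity=0.07, seed=None):
    """Add monochrome Gaussian grain to a float image in [0, 1].

    image: array of shape (H, W, C) or (B, H, W, C), values in [0, 1].
    intensity: grain standard deviation, matching the node's 0.01-1.0 range.
    """
    rng = np.random.default_rng(seed)
    # One grain value per pixel, broadcast across channels, so the
    # result is luminance noise rather than colored speckle.
    grain = rng.normal(0.0, intensity, size=image.shape[:-1] + (1,))
    return np.clip(image + grain, 0.0, 1.0)

# Example: apply grain to a mid-gray 64x64 RGB image.
img = np.full((64, 64, 3), 0.5, dtype=np.float32)
grainy = add_film_grain(img, intensity=0.07, seed=42)
```

Because the grain array broadcasts over the leading dimensions, the same function handles a single image or a whole batch, matching the node's batch support.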
Flip Flopper
The Flip Flopper node (Same Architecture) is an advanced sampling node that alternates between two models during the sampling process, allowing for unique and creative image generation.
Features:
- Alternates between two models during sampling
- Supports different VAEs for each model
- Customizable sampling parameters for each model
- Option to invert the order of model application
Inputs:
- `model1` and `model2`: The two models to alternate between
- `vae1` and `vae2`: VAEs corresponding to each model
- `add_noise`: Enable or disable noise addition
- `noise_seed`: Seed for noise generation
- `steps`: Total number of sampling steps
- `cfg1` and `cfg2`: CFG scales for each model
- `sampler_name1` and `sampler_name2`: Sampler types for each model
- `scheduler1` and `scheduler2`: Scheduler types for each model
- `positive1`, `negative1`, `positive2`, `negative2`: Conditioning for each model
- `latent_image`: Input latent image
- `denoise`: Denoising strength
- `chunks`: Number of steps per chunk
- `invert`: Option to invert the order of model application
Outputs:
- `LATENT`: The resulting latent image after sampling
- `FINAL_VAE`: The VAE used in the final iteration
Usage:
- Connect two models, their corresponding VAEs, and other required inputs.
- Set the sampling parameters for each model (CFG, sampler, scheduler, etc.).
- Adjust the number of steps and chunks as needed.
- The node will alternate between the two models during sampling, producing a unique result.
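The chunked alternation can be sketched as a step schedule: split the total steps into contiguous chunks and hand them to the two models in turn, flipping which model goes first when `invert` is set. The helper below is a sketch of that scheduling logic under those assumptions, not the node's actual code:

```python
def flip_flop_schedule(steps, chunks, invert=False):
    """Split `steps` sampling steps into `chunks` contiguous ranges and
    alternate which model handles each range.

    Returns (model_index, start_step, end_step) tuples, where model_index
    is 0 for model1 and 1 for model2 (swapped when invert=True).
    """
    # Distribute steps as evenly as possible across chunks.
    base, extra = divmod(steps, chunks)
    schedule = []
    start = 0
    for i in range(chunks):
        size = base + (1 if i < extra else 0)
        model_index = (i + (1 if invert else 0)) % 2
        schedule.append((model_index, start, start + size))
        start += size
    return schedule

# 20 steps in 4 chunks: the models alternate over 5-step ranges.
print(flip_flop_schedule(20, 4))
# → [(0, 0, 5), (1, 5, 10), (0, 10, 15), (1, 15, 20)]
```

Each range would then be sampled with that model's own CFG, sampler, scheduler, and conditioning, decoding with the matching VAE on the final chunk.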
Contributing
This project is primarily for personal use, but if you have any suggestions or improvements, feel free to open an issue or submit a pull request.
License
MIT