Nodes Browser

Bjornulf_AudioVideoSync
Bjornulf_CharacterDescriptionGenerator
Bjornulf_CombineBackgroundOverlay
Bjornulf_CombineImages
Bjornulf_CombineTexts
Bjornulf_CombineTextsByLines
Bjornulf_CombineVideoAudio
Bjornulf_ConcatVideos
Bjornulf_FreeVRAM
Bjornulf_GrayscaleTransform
Bjornulf_GreenScreenToTransparency
Bjornulf_IfElse
Bjornulf_ImageDetails
Bjornulf_ImageMaskCutter
Bjornulf_ImagesListToVideo
Bjornulf_LoadImageWithTransparency
Bjornulf_LoadImagesFromSelectedFolder
Bjornulf_LoopAllLines
Bjornulf_LoopBasicBatch
Bjornulf_LoopCombosSamplersSchedulers
Bjornulf_LoopFloat
Bjornulf_LoopImages
Bjornulf_LoopInteger
Bjornulf_LoopIntegerSequential
Bjornulf_LoopLinesSequential
Bjornulf_LoopLoraSelector
Bjornulf_LoopModelClipVae
Bjornulf_LoopModelSelector
Bjornulf_LoopSamplers
Bjornulf_LoopSchedulers
Bjornulf_LoopTexts
Bjornulf_LoopWriteText
Bjornulf_MergeImagesHorizontally
Bjornulf_MergeImagesVertically
Bjornulf_PassPreviewImage
Bjornulf_PauseResume
Bjornulf_PickInput
Bjornulf_RandomImage
Bjornulf_RandomLineFromInput
Bjornulf_RandomLoraSelector
Bjornulf_RandomModelClipVae
Bjornulf_RandomModelSelector
Bjornulf_RandomTexts
Bjornulf_RemoveTransparency
Bjornulf_ResizeImage
Bjornulf_SaveBjornulfLobeChat
Bjornulf_SaveImagePath
Bjornulf_SaveImageToFolder
Bjornulf_SaveText
Bjornulf_SaveTmpImage
Bjornulf_ScramblerCharacter
Bjornulf_SelectImageFromList
Bjornulf_ShowText
Bjornulf_TextToSpeech
Bjornulf_TextToStringAndSeed
Bjornulf_VideoPingPong
Bjornulf_VideoPreview
Bjornulf_VideoToImagesList
Bjornulf_WriteText
Bjornulf_WriteTextAdvanced
Bjornulf_imagesToVideo
Bjornulf_ollamaLoader

ComfyDeploy: How Bjornulf_custom_nodes works in ComfyUI?

What is Bjornulf_custom_nodes?

Nodes: Ollama, Green Screen to Transparency, Save image for Bjornulf LobeChat, Text with random Seed, Random line from input, Combine images (Background+Overlay alpha), Image to grayscale (black & white), Remove image Transparency (alpha), Resize Image, ...

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select the Edit build steps
  3. Add a new step -> Custom Node
  4. Search for Bjornulf_custom_nodes and select it
  5. Close the build step dialog and then click on the "Save" button to rebuild the machine

πŸ”— Comfyui : Bjornulf_custom_nodes v0.56 πŸ”—

A list of 61 custom nodes for Comfyui : Display, manipulate, and edit text, images, videos, loras and more.
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.

Coffee : β˜•β˜•β˜•β˜•β˜• 5/5

❀️❀️❀️ https://ko-fi.com/bjornulf ❀️❀️❀️

☘ This project is part of my AI trio. ☘

1 - πŸ“ Text/Chat AI generation : Bjornulf Lobe Chat Fork
2 - πŸ”Š Speech AI generation : Bjornulf Text To Speech
3 - 🎨 Image AI generation : Bjornulf Comfyui custom nodes (you are here)

πŸ“‹ Nodes menu by category

πŸ‘ Display and Show πŸ‘

1. πŸ‘ Show (Text, Int, Float)
49. πŸ“ΉπŸ‘ Video Preview

βœ’ Text βœ’

2. βœ’ Write Text
3. βœ’πŸ—” Advanced Write Text (+ 🎲 random selection and πŸ…°οΈ variables)
4. πŸ”— Combine Texts
15. πŸ’Ύ Save Text
26. 🎲 Random line from input
28. πŸ”’πŸŽ² Text with random Seed
32. πŸ§‘πŸ“ Character Description Generator
48. πŸ”€πŸŽ² Text scrambler (πŸ§‘ Character)

β™» Loop β™»

6. β™» Loop
7. β™» Loop Texts
8. β™» Loop Integer
9. β™» Loop Float
10. β™» Loop All Samplers
11. β™» Loop All Schedulers
12. β™» Loop Combos
27. β™» Loop (All Lines from input)
33. β™» Loop (All Lines from input πŸ”— combine by lines)
38. β™»πŸ–Ό Loop (Images)
39. β™» Loop (βœ’πŸ—” Advanced Write Text + πŸ…°οΈ variables)
42. β™» Loop (Model+Clip+Vae) - aka Checkpoint / Model
53. β™» Loop Load checkpoint (Model Selector)
54. β™» Loop Lora Selector
56. β™»πŸ“ Loop Sequential (Integer)
57. β™»πŸ“ Loop Sequential (input Lines)

🎲 Randomization 🎲

3. βœ’πŸ—” Advanced Write Text (+ 🎲 random selection and πŸ…°οΈ variables)
5. 🎲 Random (Texts)
26. 🎲 Random line from input
28. πŸ”’πŸŽ² Text with random Seed
37. πŸŽ²πŸ–Ό Random Image
40. 🎲 Random (Model+Clip+Vae) - aka Checkpoint / Model
41. 🎲 Random Load checkpoint (Model Selector)
48. πŸ”€πŸŽ² Text scrambler (πŸ§‘ Character)
55. 🎲 Random Lora Selector

πŸ–ΌπŸ’Ύ Image Save πŸ’ΎπŸ–Ό

16. πŸ’ΎπŸ–ΌπŸ’¬ Save image for Bjornulf LobeChat
17. πŸ’ΎπŸ–Ό Save image as tmp_api.png Temporary API
18. πŸ’ΎπŸ–ΌπŸ“ Save image to a chosen folder name
14. πŸ’ΎπŸ–Ό Save Exact name

πŸ–ΌπŸ“₯ Image Load πŸ“₯πŸ–Ό

29. πŸ“₯πŸ–Ό Load Image with Transparency β–’
43. πŸ“₯πŸ–ΌπŸ“‚ Load Images from output folder

πŸ–Ό Image - others πŸ–Ό

13. πŸ“ Resize Image
22. πŸ”² Remove image Transparency (alpha)
23. πŸ”² Image to grayscale (black & white)
24. πŸ–Ό+πŸ–Ό Stack two images (Background + Overlay)
25. πŸŸ©βžœβ–’ Green Screen to Transparency
29. β¬‡οΈπŸ–Ό Load Image with Transparency β–’
30. πŸ–Όβœ‚ Cut image with a mask
37. πŸŽ²πŸ–Ό Random Image
38. β™»πŸ–Ό Loop (Images)
43. β¬‡οΈπŸ“‚πŸ–Ό Load Images from output folder
44. πŸ–ΌπŸ‘ˆ Select an Image, Pick
46. πŸ–ΌπŸ” Image Details
47. πŸ–Ό Combine Images
60. πŸ–ΌπŸ–Ό Merge Images/Videos πŸ“ΉπŸ“Ή (Horizontally)
61. πŸ–ΌπŸ–Ό Merge Images/Videos πŸ“ΉπŸ“Ή (Vertically)

πŸš€ Load checkpoints πŸš€

40. 🎲 Random (Model+Clip+Vae) - aka Checkpoint / Model
41. 🎲 Random Load checkpoint (Model Selector)
42. β™» Loop (Model+Clip+Vae) - aka Checkpoint / Model
53. β™» Loop Load checkpoint (Model Selector)

πŸš€ Load loras πŸš€

54. β™» Loop Lora Selector
55. 🎲 Random Lora Selector

πŸ“Ή Video πŸ“Ή

20. πŸ“Ή Video Ping Pong
21. πŸ“Ή Images to Video (FFmpeg)
49. πŸ“ΉπŸ‘ Video Preview
50. πŸ–ΌβžœπŸ“Ή Images to Video path (tmp video)
51. πŸ“ΉβžœπŸ–Ό Video Path to Images
52. πŸ”ŠπŸ“Ή Audio Video Sync
58. πŸ“ΉπŸ”— Concat Videos
59. πŸ“ΉπŸ”Š Combine Video + Audio
60. πŸ–ΌπŸ–Ό Merge Images/Videos πŸ“ΉπŸ“Ή (Horizontally)
61. πŸ–ΌπŸ–Ό Merge Images/Videos πŸ“ΉπŸ“Ή (Vertically)

πŸ€– AI πŸ€–

19. πŸ¦™ Ollama
31. πŸ”Š TTS - Text to Speech

πŸ”Š Audio πŸ”Š

31. πŸ”Š TTS - Text to Speech
52. πŸ”ŠπŸ“Ή Audio Video Sync
59. πŸ“ΉπŸ”Š Combine Video + Audio

πŸ’» System πŸ’»

34. 🧹 Free VRAM hack

🧍 Manual user Control 🧍

35. ⏸️ Paused. Resume or Stop, Pick πŸ‘‡
36. ⏸️ Paused. Select input, Pick πŸ‘‡

🧠 Logic / Conditional Operations 🧠

45. πŸ”€ If-Else (input / compare_with)

☁ Usage in cloud :

Comfyui is great for local usage, but I sometimes need more power than what I have...
I have a computer with a 4070 Super with 12GB, and a simple Flux fp8 workflow takes about ~40 seconds. With a 4090 in the cloud I can run Flux fp16 in ~12 seconds. (There are of course also some workflows that I can't even run locally.)

My referral link for Runpod : https://runpod.io?ref=tkowk7g5 (If you use it, I will get a commission, at no extra cost to you.)
If you want to use my nodes and comfyui in the cloud (and can install more stuff), I'm managing an optimized ready-to-use template on runpod : https://runpod.io/console/deploy?template=r32dtr35u1&ref=tkowk7g5
Template name : bjornulf-comfyui-allin-workspace, it can be operational in ~3 minutes. (Depending on your pod; setup and download of extra models or whatever else is not included.)
You need to create and select a network volume before using it. The size is up to you; I have 50GB of storage because I use the cloud only for Flux or Lora training on a 4090. (~0.7$/hour)
⚠️ When pod is ready, you need to open a terminal in browser (After clicking on connect from your pod) and use this to launch ComfyUI manually : cd /workspace/ComfyUI && python main.py --listen 0.0.0.0 --port 3000 (Much better to control it with a terminal, check logs, etc...)
After that you can just click on the Connect to port 3000 button.
As file manager, you can use the included JupyterLab on port 8888.
If you have any issues with it, please let me know.
It will manage everything in Runpod network storage (/workspace/ComfyUI), so you can stop and start the cloud GPU without losing anything, change GPU or whatever.
Zone : I recommend EU-RO-1, but up to you.
Top up your Runpod account with a minimum of 10$ to start.
⚠️ Warning, you will pay by the minute, so not recommended for testing or learning comfyui. Do that locally !!!
Run cloud GPU only when you already have your workflow ready to run.
Advice : take a cheap GPU for testing, downloading models or setting things up.
To download a checkpoint or anything else, you need to use the terminal.
For downloading from Huggingface, get a token here : https://huggingface.co/settings/tokens.
Here is an example of everything you need for Flux dev :

huggingface-cli login --token hf_YOUR_HUGGINGFACE_TOKEN
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors --local-dir /workspace/ComfyUI/models/unet
huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir /workspace/ComfyUI/models/clip
huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir /workspace/ComfyUI/models/clip
huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors --local-dir /workspace/ComfyUI/models/vae

To use Flux, you can just drag and drop the .json from my github repo into the ComfyUI interface in your browser : workflows/FLUX_dev_troll.json, direct link : https://github.com/justUmen/ComfyUI-BjornulfNodes/blob/main/workflows/FLUX_dev_troll.json.

For downloading from civitai (get a token here https://civitai.com/user/account), just copy/paste the link of the checkpoint you want to download and use something like this, with your token in the URL :

CIVITAI="YOUR_CIVITAI_TOKEN"
wget --content-disposition -P /workspace/ComfyUI/models/checkpoints "https://civitai.com/api/download/models/272376?type=Model&format=SafeTensor&size=pruned&fp=fp16&token=$CIVITAI"

If you have any issues with this template from Runpod, please let me know, I'm here to help. 😊

πŸ— Dependencies (nothing to do for runpod ☁)

πŸͺŸπŸ Windows : Install dependencies on windows with embedded python (portable version)

First you need to find this python_embeded folder's python.exe, then you can right click or shift + right click inside the folder in your file manager to open a terminal there.

This is where I have it, with the command you need :
H:\ComfyUI_windows_portable\python_embeded> .\python.exe -m pip install pydub ollama opencv-python

When you have to install something else, you can reuse the same command and install the dependency you want :
.\python.exe -m pip install whateveryouwant

You can then run comfyui.

🐧🐍 Linux : Install dependencies (without venv, not recommended)

  • pip install ollama (you can also install the Ollama application itself if you want : https://ollama.com/download) - You don't really need the application if you don't want to use my Ollama node, BUT you do need to run pip install ollama either way.
  • pip install pydub (for TTS node)
  • pip install opencv-python

🐧🐍 Linux : Install dependencies with python virtual environment (venv)

If you want to use a python virtual environment just for ComfyUI, which I recommend, you can do it for example like this (this also pre-installs pip) :

sudo apt-get install python3-venv python3-pip
python3 -m venv /the/path/you/want/venv/bjornulf_comfyui

Once you have your environment in this new folder, you can activate it and install the dependencies inside :

source /the/path/you/want/venv/bjornulf_comfyui/bin/activate
pip install ollama pydub opencv-python

Then you can start comfyui with this environment (notice that you need to re-activate it each time you want to launch comfyui) :

cd /where/you/installed/ComfyUI && python main.py

πŸ“ Changelog

  • v0.2: Improve ollama node with system prompt + model selection.
  • v0.3: Add a new node : Save image to a chosen folder.
  • v0.3: Add comfyui Metadata / workflow to all my image-related nodes.
  • v0.4: Support transparency option with webm format, options encoders. As well as input for audio stream.
  • v0.5: New node : Remove image transparency (alpha) - Fill alpha channel with solid color.
  • v0.5: New node : Image to grayscale (black & white) - Convert an image to grayscale.
  • v0.6: New node : Combine images (Background + Overlay) - Combine two images into a single image.
  • v0.7: Replace Save API node with Save Bjornulf Lobechat node. (For my custom lobe-chat)
  • v0.8: Combine images : add an option to put image top, bottom or center.
  • v0.8: Combine texts : add option for slashes /
  • v0.8: Add a basic node to transform a greenscreen into transparency.
  • v0.9: Add a new node : Return one random line from input.
  • v0.10: Add a new node : Loop (All Lines from input) - Iterate over all lines from an input text.
  • v0.11: Add a new node : Text with random Seed - Generate a random seed, along with text.
  • v0.12: Combine images : Add option to move vertically and horizontally. (from -50% to 150%)
  • v0.13: Add a new node: Load image with transparency (alpha) - Load an image with transparency.
  • v0.14: Add a new node: Cut image from a mask
  • v0.15: Add two new nodes: TTS - Text to Speech and Character Description Generator
  • v0.16: Big changes on Character Description Generator
  • v0.17: New loop node, combine by lines.
  • v0.18: New loop node, Free VRAM hack
  • v0.19: Changes for save to folder node : ignore missing images filenames, will use the highest number found + 1.
  • v0.20: Changes for lobechat save image : include the code of free VRAM hack + ignore missing images filenames
  • v0.21: Add a new write text node that also displays the text in the comfyui console (good for debugging)
  • v0.22: Allow write text node to use random selection like this {hood|helmet} will randomly choose between hood or helmet.
  • v0.23: Add a new node: Pause, resume or stop workflow.
  • v0.24: Add a new node: Pause, select input, pick one.
  • v0.25: Two new nodes: Loop Images and Random image.
  • v0.26: New node : Loop write Text. Also increase the number of inputs allowed for most nodes. (+ update some breaking changes)
  • v0.27: Two new nodes : Loop (Model+Clip+Vae) and Random (Model+Clip+Vae) - aka Checkpoint / Model
  • v0.28: Fix random texts and add a lot of screenshots examples for several nodes.
  • v0.29: Fix floating points issues with loop float node.
  • v0.30: Update the basic Loop node with optional input.
  • v0.31: ❗Sorry, Breaking changes for Write/Show text nodes, cleaner system : 1 simple write text and the other is 1 advanced with console and special syntax. Also Show can now manage INT, FLOAT, TEXT.
  • v0.32: Quick rename to avoid breaking loop_text node.
  • v0.33: Control random on paused nodes, fix pydub sound bug permissions on Windows.
  • v0.34: Two new nodes : Load Images from output folder and Select an Image, Pick.
  • v0.35: Great improvements of the TTS node 31. It will also save the audio file in the "ComfyUI/Bjornulf_TTS/" folder. - Not tested on windows yet -
  • v0.36: Fix random model.
  • v0.37: New node : Random Load checkpoint (Model Selector). Alternative to the random checkpoint node. (Not preloading all checkpoints in memory, slower to switch between checkpoints, but more outputs to decide where to store your results.)
  • v0.38: New node : If-Else logic. (input == compare_with), examples with different latent space size. +fix some deserialization issues.
  • v0.39: Add variables management to Advanced Write Text node.
  • v0.40: Add variables management to Loop Advanced Write Text node. Add menu for all nodes to the README.
  • v0.41: Two new nodes : image details and combine images. Also ❗ Big changes to the If-Else node. (+many minor changes)
  • v0.42: Better README with category nodes, changes some node titles
  • v0.43: Add control_after_generate to Ollama and allow to keep in VRAM for 1 minute if needed. (For chaining quick generations.) Add fallback to 0.0.0.0
  • v0.44: Allow ollama to have a custom url in the file ollama_ip.txt in the comfyui custom nodes folder. Minor changes, add details/updates to README.
  • v0.45: Add a new node : Text scrambler (Character), change text randomly using the file scrambler/scrambler_character.json in the comfyui custom nodes folder.
  • v0.46: ❗ A lot of changes to Video nodes. Save to video is now using FLOAT for fps, not INT. (A lot of other custom nodes do that as well...) Add node to preview video, add node to convert a video path to a list of images. add node to convert a list of images to a temporary video + video_path. add node to synchronize duration of audio with video. (useful for MuseTalk) change TTS node with many new outputs ("audio_path", "full_path", "duration") to reuse with other nodes like MuseTalk, also TTS rename input to "connect_to_workflow", to avoid mistakes sending text to it.
  • v0.47: New node : Loop Load checkpoint (Model Selector).
  • v0.48: Two new nodes for loras : Random Lora Selector and Loop Lora Selector.
  • v0.49: New node : Loop Sequential (Integer) - Loop through a range of integer values. (But once per workflow run), audio sync is smarter and adapt the video duration to the audio duration.
  • v0.50: Allow audio in Images to Video path (tmp video). Add three new nodes : Concat Videos, Combine video/audio and Loop Sequential (input Lines). The Save Text node now writes inside the ComfyUI folder. Fix Random line from input outputting a LIST. ❗ Breaking change to the audio/video sync node, allowing different types as input.
  • v0.51: Fix some issues with the audio/video sync node. Add two new nodes : Merge images/videos vertically and horizontally. Add requirements.txt and ollama_ip.txt.
  • v0.52-53: Revert the git name to Bjornulf_custom_nodes, to match the comfy registry.
  • v0.54-55: Add opencv-python to requirements.txt.
  • v0.56: ❗Breaking changes : Ollama node simplified, no ollama_ip.txt needed, waiting for the collection of ollama nodes to be ready.

πŸ“ Nodes descriptions

1 - πŸ‘ Show (Text, Int, Float)

Description:
The show node will only display text, or a list of several texts. (read only node)
3 types are managed : Green is for the STRING type, Orange is for the FLOAT type and Blue is for the INT type. I put colors so I/you don't try to edit them. 🀣

Show Text

2 - βœ’ Write Text

Description:
Simple node to write text.

write Text

3 - βœ’πŸ—” Advanced Write Text (+ 🎲 random selection and πŸ…°οΈ variables)

Description:
The Advanced Write Text node allows a special syntax for random variants : {hood|helmet} will randomly choose between hood and helmet.
You also have seed and control_after_generate to manage the randomness.
It also displays the text in the comfyui console. (Useful for debugging)
Example of console logs :

Raw text: photo of a {green|blue|red|orange|yellow} {cat|rat|house}
Picked text: photo of a green house

write Text Advanced

You can also create and reuse variables with this syntax : <name>. Usage example :

variables

4 - πŸ”— Combine Texts

Description:
Combine multiple text inputs into a single output. (the separator can be a comma, a space, a new line or nothing.)

Combine Texts

5 - 🎲 Random (Texts)

Description:
Generate and display random text from a predefined list. Great for creating random prompts.
You also have control_after_generate to manage the randomness.

Random Text

6 - β™» Loop

Description:
General-purpose loop node, you can connect that in between anything.

Loop

It has an optional input; if no input is given, it will loop over the value of the STRING "if_no_input" (that you can edit).
❗ Careful, this node accepts everything as input and output, so you can use it with texts, integers, images, masks, segs etc... but be consistent with your inputs/outputs.
Do not use this Loop if you can do otherwise.

This is an example together with my node 28, to force a different seed for each iteration :
Loop

7 - β™» Loop Texts

Description:
Cycle through a list of text inputs.

Loop Texts

Here is an example of usage with combine texts and flux :
Loop Texts example

8 - β™» Loop Integer

Description:
Iterate through a range of integer values, good for steps in ksampler, etc...

Loop Integer Loop Int + Show Text

❗ Don't forget that you can convert ksampler widgets to input by right-clicking the ksampler node :
Widget to Input

Here is an example of usage with ksampler (Notice that with "steps" this node isn't optimized, but good enough for quick testing.) :
Widget to Input

9 - β™» Loop Float

Description:
Loop through a range of floating-point numbers, good for cfg, denoise, etc...

Loop Float + Show Text Loop Float

Here is an example with controlnet, trying to make a red cat based on a blue rabbit :
Loop All Samplers

10 - β™» Loop All Samplers

Description:
Iterate over all available samplers to apply them sequentially. Ideal for testing.

Loop All Samplers

Here is an example of looping over all the samplers with the normal scheduler :
Loop All Samplers

11 - β™» Loop All Schedulers

Description:
Iterate over all available schedulers to apply them sequentially. Ideal for testing. (same idea as sampler above, but for schedulers)

Loop All Schedulers

12 - β™» Loop Combos

Description:
Generate a loop from a list of my own custom combinations (scheduler+sampler), or select one combo manually.
Good for testing.

Loop Combos

Example of usage to see the differences between different combinations :
example combos

13/14 - πŸ“ + πŸ–Ό Resize and Save Exact name βš οΈπŸ’£

Description:
Resize an image to exact dimensions. The other node will save the image to the exact path.
βš οΈπŸ’£ Warning : The image will be overwritten if it already exists.

Resize and Save Exact

15 - πŸ’Ύ Save Text

Description:
Save the given text input to a file. Useful for logging and storing text data.
If the file already exists, it will append the text at the end of the file.

Save Text

16 - πŸ’ΎπŸ–ΌπŸ’¬ Save image for Bjornulf LobeChat (❗For my custom lobe-chat❗)

Description:
❓ I made that node for my custom lobe-chat to send+receive images from Comfyui API : lobe-chat
It will save the image in the folder output/BJORNULF_LOBECHAT/. The name will start at api_00001.png, then api_00002.png, etc...
It will also create a link to the last generated image at the location output/BJORNULF_API_LAST_IMAGE.png.
This link will be used by my custom lobe-chat to copy the image inside the lobe-chat project.

Save Bjornulf Lobechat

17 - πŸ’ΎπŸ–Ό Save image as tmp_api.png Temporary API βš οΈπŸ’£

Description:
Save image for short-term use : ./output/tmp_api.png βš οΈπŸ’£

Save Temporary API

18 - πŸ’ΎπŸ–ΌπŸ“ Save image to a chosen folder name

Description:
Save image in a specific folder : my_folder/00001.png, my_folder/00002.png, etc...
Also allow multiple nested folders, like for example : animal/dog/small.

Save Temporary API

19 - πŸ¦™ Ollama

Description:
Will generate detailed text based on what you give it.

Ollama

I recommend using mistral-nemo if you can run it, but it's up to you. (Might have to tweak the system prompt a bit)

You also have control_after_generate to force the node to rerun for every workflow run. (Even if there is no modification of the node or its inputs.)

You have the option to keep it in your VRAM for a minute with keep_1min_in_vram. (If you plan to generate many times with the same prompt)
Each run will be significantly faster, but it will not free your VRAM for something else.

Ollama

⚠️ Warning : Using keep_1min_in_vram might be a bit heavy on your VRAM. Think about whether you really need it or not. Most of the time, when using keep_1min_in_vram, you don't want to also have an image generation or anything else running at the same time.

⚠️ You can create a file called ollama_ip.txt in my comfyui custom node folder if you have a special IP for your ollama server, mine is : http://192.168.1.37:11434
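
For example, from the ComfyUI root folder, creating that file could look like this (replace the IP with your own; note that since v0.56 the simplified Ollama node no longer needs this file, see the changelog) :

echo "http://192.168.1.37:11434" > custom_nodes/Bjornulf_custom_nodes/ollama_ip.txt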

20 - πŸ“Ή Video Ping Pong

Description:
Create a ping-pong effect from a list of images (from a video) by reversing the playback direction when reaching the last frame. Good for an "infinity loop" effect.

Video Ping Pong
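
For reference, the same ping-pong idea can be reproduced with a standalone FFmpeg command (file names are placeholders; this is not the node's actual command) :

# play the clip forward, then reversed, in a single output file
ffmpeg -i input.mp4 -filter_complex "[0:v]split[a][b];[b]reverse[r];[a][r]concat=n=2:v=1:a=0[out]" -map "[out]" pingpong.mp4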

21 - πŸ“Ή Images to Video

Description:
Combine a sequence of images into a video file.

Images to Video

❓ I made this node because it supports transparency with webm format. (Needed for rembg)
Temporary images are stored in the folder ComfyUI/temp_images_imgs2video/ as well as the wav audio file.
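
For reference, a standalone FFmpeg command doing a similar images-to-webm conversion while keeping the alpha channel could look like this (frame pattern and fps are placeholders, not the node's exact call) :

# VP9 with the yuva420p pixel format keeps the transparency in the .webm
ffmpeg -framerate 30 -i frame_%05d.png -c:v libvpx-vp9 -pix_fmt yuva420p output.webm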

22 - πŸ”² Remove image Transparency (alpha)

Description:
Remove transparency from an image by filling the alpha channel with a solid color. (black, white or greenscreen)
Of course it takes in an image with transparency, like from rembg nodes.
Necessary for some nodes that don't support transparency.

Remove Alpha

23 - πŸ”² Image to grayscale (black & white)

Description:
Convert an image to grayscale (black & white)

Image to Grayscale

Example : I sometimes use it with Ipadapter to disable color influence.
But sometimes you may also simply want a black and white image...

24 - πŸ–Ό+πŸ–Ό Stack two images (Background + Overlay)

Description:
Stack two images into a single image : a background and one (or several) transparent overlays. (This also allows having a video there : just send all the frames and recombine them afterwards.)

Superpose Images

Update 0.11 : Add option to move vertically and horizontally. (from -50% to 150%)
❗ Warning : For now, background is a static image. (I will allow video there later too.)
⚠️ Warning : If you want to directly load the image with transparency, use my node πŸ–Ό Load Image with Transparency β–’ instead of the Load Image node.

25 - πŸŸ©βžœβ–’ Green Screen to Transparency

Description:
Transform a greenscreen into transparency.
It needs a clean greenscreen of course. (You can adjust the threshold, but it is a very basic node.)

Greenscreen to Transparency
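
For comparison, the same idea as a standalone FFmpeg command, where the second value plays the role of the threshold (file names and values are placeholders, not the node's internal code) :

# chromakey=color:similarity:blend, keyed pixels become transparent in the output PNG
ffmpeg -i greenscreen.png -vf "chromakey=0x00FF00:0.15:0.05" transparent.png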

26 - 🎲 Random line from input

Description:
Take a random line from an input text. (For example, when using multiple "Write Text" nodes is annoying, you can use this instead and just copy/paste a list from outside.)
You can change fixed/randomize for control_after_generate to have a different text each time you run the workflow. (or not)

Random line from input

27 - β™» Loop (All Lines from input)

Description:
Iterate over all lines from an input text. (Good for testing multiple lines of text.)

Loop input

28 - πŸ”’ Text with random Seed

Description:
❗ This node is used to force the generation of a random seed, along with the text.
But what does that mean ???
When you use a loop (β™»), the loop will use the same seed for each iteration. (That is the point : it keeps the same seed to compare results.)
Even with randomize for control_after_generate, it still uses the same seed for every iteration of the loop; it will only change when the workflow is done.
Simple example without using random seed node : (Both images have different prompt, but same seed)

Text with random Seed 1

So if you want to force using another seed for each iteration, you can use this node in the middle. For example, if you want to generate a different image every time. (aka : You use loop nodes not to compare or test results but to generate multiple images.)
Use it like that for example : (Both images have different prompt AND different seed)

Text with random Seed 2

Here is an example of the similarities that you want to avoid with FLUX with different prompt (hood/helmet) but same seed :

Text with random Seed 3

Here is an example of the similarities that you want to avoid with SDXL with different prompt (blue/red) but same seed :

Text with random Seed 4

FLUX : Here is an example of 4 images without Random Seed node on the left, and on the right 4 images with Random Seed node :

Text with random Seed 5

29 - πŸ–Ό Load Image with Transparency β–’

Description:
Load an image with transparency.
The default Load Image node will not load the transparency.

Load image Alpha

30 - πŸ–Όβœ‚ Cut image with a mask

Description:
Cut an image from a mask.

Cut image

31 - πŸ”Š TTS - Text to Speech (100% local, any voice you want, any language)

Description:
Use my TTS server to generate high quality speech from text, with any voice you want, any language.
Listen to the audio example

TTS

❗ Node never tested on windows, only on linux for now. ❗

Use my TTS server to generate speech from text, based on XTTS v2.
❗ Of course to use this comfyui node (frontend) you need to use my TTS server (backend) : https://github.com/justUmen/Bjornulf_XTTS
I made this backend for https://github.com/justUmen/Bjornulf_lobe-chat, but you can use it with comfyui too with this node.
After having Bjornulf_XTTS installed, you NEED to create a link in my Comfyui custom node folder called speakers : ComfyUI/custom_nodes/Bjornulf_custom_nodes/speakers
That link must be a link to the folder where you installed/stored the voice samples you use for my TTS, like default.wav.
If my TTS server is running on port 8020 (You can test in browser with the link http://localhost:8020/tts_stream?language=en&speaker_wav=default&text=Hello) and voice samples are good, you can use this node to generate speech from text.
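
For example, on Linux you could create the speakers link and test the server like this (the voice sample folder path and the output file name are placeholders) :

# link your voice sample folder, then ask the server for a test sentence
ln -s /path/to/your/voice_samples ComfyUI/custom_nodes/Bjornulf_custom_nodes/speakers
curl "http://localhost:8020/tts_stream?language=en&speaker_wav=default&text=Hello" -o test_hello.wav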

Details
This node should always be connected to a core node : Preview audio.

My node will generate and save the audio files in the ComfyUI/Bjornulf_TTS/ folder, followed by the language selected, the name of the voice sample, and the text.
Example of audio file from the screenshot above : ComfyUI/Bjornulf_TTS/Chinese/default.wav/你吃了吗.wav
Notice that you don't NEED to select a Chinese voice to speak Chinese. Yes, it will work : you can record yourself and make yourself speak whatever language you want.
Also, when you select a voice with the format fr/fake_Bjornulf.wav, it will of course create an extra folder fr : ComfyUI/Bjornulf_TTS/English/fr/fake_Bjornulf.wav/hello_im_me.wav. Easy to see that you are using a French voice sample for an English recording.

control_after_generate as usual, it is used to force the node to rerun for every workflow run. (Even if there is no modification of the node or its inputs.)
overwrite is used to overwrite the audio file if it already exists. (For example, if you don't like the generation, just set overwrite to True and run the workflow again until you have a good result. After that you can set it back to False.) (Paraphrasing : without overwrite set to True, it won't generate the audio file again if it already exists in the Bjornulf_TTS folder.)
autoplay is used to play the audio file inside the node when it is executed. (Manual replay or save is done in the preview audio node.)

So... note that if you know you have an audio file ready to play, you can still use my node, but you do NOT need my TTS server to be running. My node will just play the audio file if it can find it, and won't try to connect to the backend TTS server.
Let's say you already used this node to create an audio file saying "workflow is done" with the Attenborough voice :

TTS

As long as you keep exactly the same settings, it will not use my server to play the audio file! You can safely turn the TTS server off, so it won't use your precious VRAM. (The TTS server should be using ~3GB of VRAM.)

Also connect_to_workflow is optional, it means that you can make a workflow with ONLY my TTS node to pre-generate the audio files with the sentences you want to use later, example :
TTS

If you want to run my TTS nodes alongside image generation, I recommend using my PAUSE node so you can manually stop the TTS server after my TTS node. When the VRAM is freed, you can then click on the RESUME button to continue the workflow.
If you can afford to run both at the same time, good for you, but locally I can't run my TTS server and FLUX at the same time, so I use this trick :

TTS

32 - πŸ§‘πŸ“ Character Description Generator

Description:
Generate a character description based on a json file in the folder characters : ComfyUI/custom_nodes/Bjornulf_custom_nodes/characters
Make your own json file with your own characters, and use this node to generate a description.

characters characters

❗ For now it's a very basic node, a lot of things are going to be added and changed !!!
Some details are unusable for some checkpoints, very much a work in progress, the json structure isn't set in stone either.
Some characters are included.

33 - β™» Loop (All Lines from input πŸ”— combine by lines)

Description:
Sometimes you want to loop over several inputs but you also want to separate different lines of your output.
So with this node, you can have the number of inputs and outputs you want. See example for usage.

loop combined

34 - 🧹 Free VRAM hack

Description:
So this is my attempt at freeing up VRAM after usage, I will try to improve that.

free vram free vram

For me, ComfyUI is using 180MB of VRAM on launch, and after my clean up VRAM node it can go back down to 376MB.
I don't think there is a clean way to do that, so I'm using a hacky way.
So, not perfect but better than being stuck at 6GB of VRAM used if I know I won't be using it again...
Just connect this node with your workflow; it takes anything as input and returns it as output. You can therefore put it anywhere you want.
❗ Comfyui is using cache to run faster (like not reloading checkpoints), so only use this free VRAM node when you need it.
❗ For this node to work properly, you need to enable the dev/api mode in ComfyUI. (You can do that in the settings)
It is also running an "empty/dummy" workflow to free up the VRAM, so it might take a few seconds to take effect after the end of the workflow.

35 - ⏸️ Paused. Resume or Stop ?

Description:
Automatically pauses the workflow, and rings a bell when it does. (plays the provided audio file bell.m4a)

pause resume stop pause resume stop pause resume stop

You can then manually resume or stop the workflow by clicking on the node's buttons.
I do that, for example, if I have a very long upscaling process : I can check if the input is good before continuing. Sometimes I might stop the workflow and restart it with another seed.
You can connect any type of node to the pause node. Above is an example with text, but you can send an IMAGE or whatever else; in the node, input = output. (Of course you need to send the output to something that has the correct format...)

36 - βΈοΈπŸ” Paused. Select input, Pick one

Description:
Automatically pauses the workflow, and rings a bell when it does. (plays the provided audio file bell.m4a)

pick input

You can then manually select the input you want to use, and resume the workflow with it.
You can connect this node to anything you want; above is an example with IMAGE. But you can pick whatever you want; in the node, input = output.

37 - πŸŽ²πŸ–Ό Random Image

Description:
Just take a random image from a list of images.

random image

38 - β™»πŸ–Ό Loop (Images)

Description:
Loop over a list of images.

loop images

Usage example : You have a list of images, and you want to apply the same process to all of them.
Above is an example of the loop images node sending them to an Ipadapter workflow. (Same seed of course.)

39 - β™» Loop (βœ’πŸ—” Advanced Write Text)

Description:
If you need a quick loop but you don't want something too complex with a loop node, you can use this combined write text + loop.

loop write text

It will take the same special syntax as the Advanced write text node {blue|red}, but it will loop over ALL the possibilities instead of taking one at random.
0.40 : You can also use variables <name> in the loop.

40 - 🎲 Random (Model+Clip+Vae) - aka Checkpoint / Model

Description:
Simply take one trio at random from a Load Checkpoint node.

random checkpoint

Notice that it is using the core Load Checkpoint node. It means that all checkpoints will be preloaded in memory.

Details :

  • It will take more VRAM, but it will be faster to switch between checkpoints.
  • It can't give you the currently loaded checkpoint's name.

Check node number 41 before deciding which one to use.

41 - 🎲 Random Load checkpoint (Model Selector)

Description:
This is another way to select a load checkpoint node randomly.

pick input

It will not preload all the checkpoints in memory, so it will be slower to switch between checkpoints.
But you can use more outputs to decide where to store your results. (model_folder returns the last folder name of the checkpoint.)
I always store my checkpoints in a folder with the type of the model like SD1.5, SDXL, etc... So it's a good way for me to recover that information quickly.

Details :

  • Note that compared to node 40, you can't have a separate configuration depending on the selected checkpoint. (For example a CLIP Set Last Layer node set at -2 for a specific model, or a separate VAE or CLIP.) Aka : all models are going to share the exact same workflow.

Check node number 40 before deciding which one to use.
Node 53 is the loop version of this node.

NOTE : If you want to load a single checkpoint but want to extract its folder name (To use the checkpoint name as a folder name for example, or with if/else node), you can use my node 41 with only one checkpoint. (It will take one at random, so... always the same one.)

42 - β™» Loop (Model+Clip+Vae) - aka Checkpoint / Model

pick input

Description:
Loop over all the trios from several checkpoints.

43 - πŸ“₯πŸ–ΌπŸ“‚ Load Images from output folder

Description:
Quickly select all images from a folder inside the output folder. (Not recursively.)

pick input

So... As you can see from the screenshot the images are split based on their resolution.
It's also not possible to dynamically edit the number of outputs, so I just picked a number : 4.
The node will separate the images based on their resolution, so with this node you can have 4 different resolutions per folder. (If you have more than that, maybe you should have another folder...)
To avoid errors or crashes if you have fewer than 4 resolutions in a folder, the node will just output white tensors. (white square images.)
So this node is a little hacky for now, but I can select my different characters in less than a second.
If you want to know how I personally save my images for a specific character, here is part of my workflow (Notice that I personally use / for folders because I'm on Linux) :
pick input
In this example I put "character/" as a string and then combine with "nothing". But it's the same if you do "character" and then combine with "/". (I just like having a / at the end of my folder's name...)

If you are satisfied with this logic, you can then select all these nodes, right click and Convert to Group Node; you then have your own customized "save character node" :
pick input

Here is another example of the same thing but excluding the save folder node :
pick input

⚠️ If you really want to regroup all the images in one flow, you can use my node 47 Combine images to put them all together.

44 - πŸ–ΌπŸ‘ˆ Select an Image, Pick

Description:
Select an image from a list of images.

pick input

Useful in combination with my Load images from folder and preview image nodes.

You can also of course make a group node, like this one, which is the same as the screenshot above :
pick input

45 - πŸ”€ If-Else (input / compare_with)

Description:
Complex logic node : an if/else system.

if else

If the given input is equal to the compare_with value given in the widget, it will forward send_if_true, otherwise it will forward send_if_false. (If there is no send_if_false, it will return None.)
You can forward anything; below is an example of forwarding a different latent space size depending on whether it's SDXL or not.

if else

Here is an example of the node with all outputs displayed with Show text nodes :

if else

send_if_false is optional, if not connected, it will be replaced by None.

if else

If-Else nodes are chainable : just connect the output to send_if_false.
⚠️ Always simply test input with compare_with, and connect the desired value to send_if_true. ⚠️
Here is a simple example with 2 If-Else nodes (choosing between 3 different resolutions).
❗ Notice that the same write text node is connected to both If-Else nodes' input :

if else

Let's take a similar example but let's use my Write loop text node to display all 3 types once :

if else

If you understood the previous examples, here is a complete example that will create 3 images, landscape, portrait and square :

if else

(The workflow is hidden for simplicity, but it is very basic : just connect the latent to the KSampler, nothing special.)
You can also connect the same advanced loop write text node with my save folder node to save the images (landscape/portrait/square) in separate folders, but you do you...

46 - πŸ–ΌπŸ” Image Details

Description:
Display the details of an image. (width, height, has_transparency, orientation, type)
RGBA is considered as having transparency, RGB is not.
orientation can be landscape, portrait or square.

image details

47 - πŸ–ΌπŸ”— Combine Images

Description:
Combine multiple images. (Each input can be a single image or a list of images.)
If you want to merge several images into a single image, check node 60 or 61.

There are two types of logic to "combine images". With "all_in_one" enabled, it will combine all the images into one tensor.
Otherwise it will send the images one by one. (check examples below) :

This is an example of the "all_in_one" option disabled (Note that there are 2 images, these are NOT side by side, they are combined in a list.) :

combine images

But for example, if you want to use my node select an image, pick, you need to enable all_in_one and the images must all have the same resolution.

combine images

You can notice that there is no visible difference when you use all_in_one with the preview image node. (This is why I added the show text node; note that show text will make it blue, because it's an image/tensor.)

When you use the combine images node, you can actually also send many images at once; it will combine them all.
Here is an example with Load images from folder node, Image details node and Combine images node. (Of course it can't have all_in_one set to True in this situation because the images have different resolutions) :

combine images

Here is another simple example taking a few selected images from a folder and combining them (for later processing for example) :

combine images

48 - πŸ”€πŸŽ² Text scrambler (πŸ§‘ Character)

Description:
Take text as input and scramble (randomize) the text by using the file scrambler/character_scrambler.json in the comfyui custom nodes folder.

scrambler character

49 - πŸ“ΉπŸ‘ Video Preview

Description:
This node takes a video path as input and displays the video.

video preview

50 - πŸ–ΌβžœπŸ“Ή Images to Video path (tmp video)

Description:
This node will take a list of images and convert them to a temporary video file.
❗ Update 0.50 : You can now send audio to the video. (audio_path OR audio TYPE)

image to video path

51 - πŸ“ΉβžœπŸ–Ό Video Path to Images

Description:
This node will take a video path as input and convert it to a list of images.

video path to image

In the above example, I also take half of the frames by setting frame_interval to 2.
Note that I had 16 frames; in the top right preview you can see 8 images.
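
If you ever need the same thing outside of ComfyUI, grabbing every 2nd frame with FFmpeg looks like this (placeholder file names, not the node's actual call) :

# framestep=2 keeps one frame out of two
ffmpeg -i input.mp4 -vf "framestep=2" frame_%05d.png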

52 - πŸ”ŠπŸ“Ή Audio Video Sync

Description:

This is an overengineered node that will try to synchronize the duration of an audio file with a video file.
❗ The video ideally needs to be a loop, check my ping pong video node if needed. The main goal of this synchronization is to have a clean transition between the end and the beginning of the video. (same frame)
You can then chain several videos and they will transition smoothly.

In more detail, this node will do the following (a rough manual equivalent is sketched after the list) :

  • If video slightly too long : add silence to the audio file.
  • If video way too long : will slow down the video up to 0.50x the speed + add silence to the audio. (now editable)
  • If audio slightly too long : will speed up video up to 1.5x the speed. (now editable)
  • If audio way too long : will speed up video up to 1.5x the speed + add silence to the audio.
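
Roughly, the manual FFmpeg equivalents of those adjustments look like this (placeholder file names and durations, not the node's internal commands) :

# slow the video down to 0.5x speed (setpts=PTS/1.5 would instead speed it up 1.5x)
ffmpeg -i loop.mp4 -filter:v "setpts=2.0*PTS" -an loop_slow.mp4
# pad the audio with 1.5 seconds of trailing silence
ffmpeg -i voice.wav -af "apad=pad_dur=1.5" voice_padded.wav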

It is good for example with MuseTalk https://github.com/chaojie/ComfyUI-MuseTalk

Here is an example of the Audio Video Sync node, notice that it is also convenient to recover the frames per second of the video, and send that to other nodes. (Spaghettis..., deal with it. 😎 If you don't understand it, you can test it.) :

audio sync video

❗ Update 0.50 : audio_duration is now optional, if not connected it will take it from the audio.
❗ Update 0.50 : You can now send the video with a list of images OR a video_path, same for audio : AUDIO or audio_path.

New v0.50 layout, same logic :

audio sync video

53 - β™» Loop Load checkpoint (Model Selector)

Description:
This is the loop version of node 41. (check there for similar details)
It will loop over all the selected checkpoints.

❗ The big difference with 41 is that the checkpoints are preloaded in memory, so you can run through all of them faster in one go.
It is a good way to test multiple checkpoints quickly.

loop model selector

54 - β™» Loop Lora Selector

Description:
Loop over all the selected Loras.

loop lora selector

Above is an example with Pony and several styles of Lora.

Below is another example, here with flux, to test if your Lora training was undertrained, overtrained or just right :

loop lora selector

55 - 🎲 Random Lora Selector

Description:
Just take a single Lora at random from a list of Loras.

random lora selector

56 - β™»πŸ“ Loop Sequential (Integer)

Description:
This loop works like a normal loop, BUT it is sequential : It will run only once for each workflow run !!!
The first time it will output the first integer, the second time the second integer, etc...
When the last one is reached, the node will STOP the workflow, preventing anything else from running after it.
Under the hood it is using the file counter_integer.txt in the ComfyUI/Bjornulf folder.

loop sequential integer
loop sequential integer
loop sequential integer
loop sequential integer
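
If you need to restart the sequence from the first value, deleting that counter file should do it (not a documented feature, so test it before relying on it) :

rm ComfyUI/Bjornulf/counter_integer.txt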

57 - β™»πŸ“ Loop Sequential (input Lines)

Description:
This loop works like a normal loop, BUT it is sequential : It will run only once for each workflow run !!!
The first time it will output the first line, the second time the second line, etc...
You also have control of the line with +1 / -1 buttons.
When the last one is reached, the node will STOP the workflow, preventing anything else from running after it.
Under the hood it is using the file counter_lines.txt in the ComfyUI/Bjornulf folder.

Here is an example of usage with my TTS node : when I have a list of sentences to process, if I don't like a version, I can just click on the -1 button, tick "overwrite" on the TTS node and it will generate the same sentence again; repeat until it's good.

loop sequential line

58 - πŸ“ΉπŸ”— Concat Videos

Description:
Take two videos and concatenate them. (One after the other in the same video.)

concat video
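
For reference, the classic FFmpeg way of concatenating two videos of the same format looks like this (placeholder file names, not the node's exact call) :

# list the clips to join, then concatenate them without re-encoding
printf "file '%s'\n" part1.mp4 part2.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy concatenated.mp4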

59 - πŸ“ΉπŸ”Š Combine Video + Audio

Description:
Simply combine video and audio together.
Video : Use list of images or video path.
Audio : Use audio path or audio type.

combine video audio
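
A rough standalone FFmpeg equivalent, if you want to do the same muxing outside of ComfyUI (placeholder file names) :

ffmpeg -i video.mp4 -i voice.wav -c:v copy -c:a aac -shortest combined.mp4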

60 - πŸ–ΌπŸ–Ό Merge Images/Videos πŸ“ΉπŸ“Ή (Horizontally)

Description:
Merge images or videos horizontally.

merge images

Here is one possible example for videos with node 60 and 61 :

merge videos
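
If you only need the merging itself, FFmpeg's hstack and vstack filters do the same kind of thing; a manual equivalent for nodes 60 and 61 could look like this (placeholder file names; hstack expects inputs of the same height, vstack of the same width) :

ffmpeg -i left.mp4 -i right.mp4 -filter_complex "hstack=inputs=2" merged_horizontal.mp4
ffmpeg -i top.mp4 -i bottom.mp4 -filter_complex "vstack=inputs=2" merged_vertical.mp4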

61 - πŸ–ΌπŸ–Ό Merge Images/Videos πŸ“ΉπŸ“Ή (Vertically)

Description:
Merge images or videos vertically.

merge images

Here is one possible example for videos with node 60 and 61 :

merge videos