ComfyDeploy: How do SAMURAI Nodes for ComfyUI work in ComfyUI?
What is SAMURAI Nodes for ComfyUI?
ComfyUI nodes for video object segmentation using the [SAMURAI](https://github.com/yangchris11/samurai) model.
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for "SAMURAI Nodes for ComfyUI" and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
SAMURAI Nodes for ComfyUI
ComfyUI nodes for video object segmentation using SAMURAI model.
Installation
Note: It is recommended to use a Conda environment for installing and running the nodes. Make sure to use the same Conda environment for both the ComfyUI and SAMURAI installations. It is also highly recommended to use the console version of ComfyUI.
Requirements
- NVIDIA GPU with CUDA support
- Python 3.10 or higher
- ComfyUI
- Conda (recommended) or pip
1. Follow the SAMURAI installation guide to install the base model.
2. Clone this repository into your ComfyUI custom nodes directory:
   cd ComfyUI/custom_nodes
   git clone https://github.com/takemetosiberia/ComfyUI-SAMURAI--SAM2-.git samurai_nodes
3. Copy the SAMURAI installation folder into ComfyUI/custom_nodes/samurai_nodes/
4. Download the model weights as described in the SAMURAI guide.
Project Structure
After installation, your directory structure should look like this:
ComfyUI/
└── custom_nodes/
    └── samurai_nodes/
        ├── samurai/         # SAMURAI model installation
        ├── __init__.py      # Module initialization
        ├── samurai_node.py
        └── utils.py
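If you want to double-check the layout programmatically, a small sketch like the following works; the helper name is hypothetical and simply mirrors the tree shown above:

```python
from pathlib import Path

# Entries expected directly under samurai_nodes/, per the tree above.
EXPECTED = ["samurai", "__init__.py", "samurai_node.py", "utils.py"]

def missing_entries(root):
    """Hypothetical helper: list expected entries absent from a samurai_nodes checkout."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).exists()]
```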
Additional Dependencies
Most dependencies are included with the SAMURAI installation. Additional required packages:
pip install hydra-core omegaconf loguru
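One quick way to confirm these packages resolved in the active environment is to probe their import names (note that hydra-core imports as `hydra`); this checker is a convenience sketch, not part of the nodes:

```python
import importlib.util

def find_missing(import_names):
    """Return the import names that cannot be resolved in this environment."""
    return [m for m in import_names if importlib.util.find_spec(m) is None]

# hydra-core installs as "hydra"; the other two keep their pip names.
missing = find_missing(["hydra", "omegaconf", "loguru"])
print("Missing:", missing if missing else "none")
```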
Usage
The workflow consists of three main nodes:
SAMURAI Box Input
Allows selecting a region of interest (box) in the first frame of a video sequence.
- Input: video frames
- Output: box coordinates and start frame number
SAMURAI Points Input
Enables point-based object selection in the first frame.
- Input: video frames
- Output: point coordinates, labels, and start frame number
SAMURAI Refine
Performs video object segmentation using the selected area.
- Input: video frames, box/points from input nodes
- Output: segmentation masks
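For orientation, ComfyUI custom nodes generally follow the pattern sketched below. This is illustrative only, not the actual node source; the class name, socket types, and parameters are all hypothetical:

```python
# Illustrative sketch of the ComfyUI custom-node pattern (hypothetical names).
class SamuraiBoxInputSketch:
    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI reads this to render the node's input sockets and widgets.
        return {"required": {
            "frames": ("IMAGE",),
            "start_frame": ("INT", {"default": 0, "min": 0}),
        }}

    RETURN_TYPES = ("BBOX", "INT")      # socket types of the outputs
    RETURN_NAMES = ("box", "start_frame")
    FUNCTION = "select_box"             # method ComfyUI calls on execution
    CATEGORY = "SAMURAI"

    def select_box(self, frames, start_frame):
        # The real node gets its box from an interactive widget; this
        # placeholder returns a fixed box as (x0, y0, x1, y1).
        return ((0, 0, 512, 512), start_frame)

# ComfyUI discovers node classes through this mapping in the package's __init__.py.
NODE_CLASS_MAPPINGS = {"SAMURAI Box Input (sketch)": SamuraiBoxInputSketch}
```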
Example Workflow
- Connect Load Video to SAMURAI Box/Points Input
- Draw a box or place points around the object of interest
- Connect to SAMURAI Refine
- Convert masks to images and save/combine as needed
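The final step, converting masks to images, amounts to expanding a single-channel mask to three channels. A NumPy sketch of the idea (ComfyUI itself passes torch tensors between nodes, so this is illustrative only):

```python
import numpy as np

def mask_to_image(mask):
    """Expand a [frames, H, W] mask in [0, 1] to a [frames, H, W, 3] grayscale image."""
    return np.repeat(mask[..., None], 3, axis=-1)

masks = np.zeros((2, 4, 4), dtype=np.float32)  # two blank 4x4 masks
images = mask_to_image(masks)
print(images.shape)  # (2, 4, 4, 3)
```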
For more examples and details, see the SAMURAI documentation.
Troubleshooting
If you encounter any issues:
- Make sure you're using the correct Conda environment
- Verify that all dependencies are installed in your Conda environment
- Check that the SAMURAI models are properly installed in the samurai/sam2/checkpoints directory
For CUDA-related issues, ensure your Conda environment has the correct PyTorch version with CUDA support.
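A quick check of the PyTorch/CUDA pairing in the active environment (the helper name is our own; it reports rather than fixes):

```python
def cuda_report():
    """Describe whether this environment has a CUDA-enabled PyTorch build."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    if not torch.cuda.is_available():
        return f"PyTorch {torch.__version__} found, but CUDA is not available"
    return (f"PyTorch {torch.__version__} with CUDA {torch.version.cuda}, "
            f"device: {torch.cuda.get_device_name(0)}")

print(cuda_report())
```

Run it inside the same Conda environment that ComfyUI uses; if it reports a CPU-only build, reinstall PyTorch with the CUDA variant matching your driver.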