ComfyDeploy: How does IG-Motion-Search work in ComfyUI?
What is IG-Motion-Search?
Nodes for searching videos by motion
How to install it in ComfyDeploy?
Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for IG-Motion-Search and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
MotionVideoSearch
MotionVideoSearch is a Python project for extracting motion-based embeddings from videos and storing them in a vector database (FAISS). You can then quickly search those embeddings with an input image, retrieving matching segments of the indexed videos. It offers:
- A command-line interface (CLI) built with Typer.
- Integration with PyTorch for embedding extraction.
- Video sampling, embedding, and database storage with SQLite + FAISS.
- Optional watermark detection and removal (via an external repo).
- Utility nodes for ComfyUI to create and search motion images within ComfyUI’s node graph environment.
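Conceptually, the pipeline embeds short video segments, stores the vectors in a FAISS index, and answers queries by nearest-neighbour search. The sketch below is a minimal illustration of that flow using FAISS directly; the embedding function and dimensions are hypothetical stand-ins, not the project's actual API.

```python
# Minimal sketch of the embed -> index -> search flow.
# embed_segment and DIM are hypothetical, not the project's code.
import numpy as np
import faiss

DIM = 512  # assumed embedding dimension

def embed_segment(frames) -> np.ndarray:
    """Placeholder for a motion-embedding model; returns one DIM-d vector."""
    return np.random.rand(DIM).astype("float32")

# Build the index from a handful of segments.
index = faiss.IndexFlatL2(DIM)
for seg_id in range(10):                 # stand-in for real video segments
    index.add(embed_segment(None).reshape(1, -1))

# Query with a motion image embedded the same way.
query = embed_segment(None).reshape(1, -1)
distances, rows = index.search(query, 5)
print(rows[0], distances[0])             # nearest segment rows and their distances
```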
Features
- Video Ingestion: Slice videos into small segments, generate motion-based embeddings for each segment, and store them in FAISS for fast similarity search.
- Search: Query the index with a single motion image to find relevant video segments, returning top matches ranked by distance in vector space.
- Concurrency Support: Organize indexing jobs in different subdirectories, then merge (combine) them later into one big index.
- Watermark Removal (Optional): If you clone l-comm/WatermarkRemoval, you can automatically detect and remove watermarks from video frames during ingestion.
- ComfyUI Nodes: Includes two custom nodes:
  - IG_MotionVideoSearch: Takes an image input and returns the top 5 ranked URLs from the FAISS index.
  - IG_MotionVideoFrame: Converts 24 consecutive frames into a single “dot frame,” which encodes motion features in a color-coded image.
ComfyUI
We have some ComfyUI nodes ready to be used with a pre-existing video database consisting of roughly 100,000 videos. You can get started using the following workflow: MotionSearch
For example, this video:
https://github.com/user-attachments/assets/3195c170-e080-498f-be09-a06db53d1a73
will give the following search results:
https://github.com/user-attachments/assets/b793af91-882c-44ea-8dc7-c051a73f74b2
https://github.com/user-attachments/assets/ca9008aa-281e-47c5-aff9-05ebce8fe43a
https://github.com/user-attachments/assets/dc7fd9da-1e62-49e1-98cf-9acd64f025bf
Installation
1. Set Up Environment
Clone the repository:
git clone https://github.com/IDGallagher/MotionVideoSearch
cd MotionVideoSearch
(Optional) Create a conda environment:
conda create -n mvs python=3.9
conda activate mvs
2. Install Dependencies
Install PyTorch and TorchVision compatible with your CUDA setup (example below uses CUDA 11.8):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Install ffmpeg 6.x:
conda install -c conda-forge ffmpeg=6.*
Install other dependencies:
pip install tqdm matplotlib einops einshape scipy timm lmdb av mediapy typer imageio imageio-ffmpeg requests opencv-python
Install FAISS (the example below pins version 1.9.0):
conda install -c pytorch faiss-cpu=1.9.0
(Optional) Download the watermark removal repo if you want that functionality:
git clone https://github.com/l-comm/WatermarkRemoval.git
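After installing, a quick sanity check (not part of the repo, just a convenience) can confirm that PyTorch, FAISS, and ffmpeg are all visible from the environment:

```python
# Quick environment check (not part of the repo): verify the key dependencies.
import subprocess
import torch
import faiss

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("faiss flat index OK:", faiss.IndexFlatL2(4).d == 4)
subprocess.run(["ffmpeg", "-version"], check=True)  # should report ffmpeg 6.x
```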
Usage
After installing, you can run the CLI commands via `python main.py [COMMAND] [OPTIONS]`.
1. Storing Embeddings
Use the `store` command to process videos and store their embeddings in the FAISS index:
python main.py store \
--dir /path/to/videos \
--max-time 1.0 \
--debug
- `--dir`: Directory with video files (e.g., `.mp4`, `.avi`, etc.).
- `--max-time`: How many seconds to process from each video (default 1 second).
- `--debug`: Enable debug mode to save intermediate frames and videos in a `debug` folder.
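Under the hood, the store step pairs FAISS (for the vectors) with SQLite (for segment metadata such as the source URL). The snippet below is only an illustration of that pairing; the table layout and helper function are assumptions, not the repo's actual schema.

```python
# Illustrative only: one way vectors and metadata could be kept in sync.
# The table layout and helper below are assumptions, not the repo's schema.
import os
import sqlite3
import numpy as np
import faiss

DIM = 512  # assumed embedding dimension
os.makedirs("data", exist_ok=True)
index = faiss.IndexFlatL2(DIM)
db = sqlite3.connect("data/metadata.db")
db.execute("CREATE TABLE IF NOT EXISTS segments (faiss_row INTEGER, url TEXT, start REAL)")

def store_segment(embedding: np.ndarray, url: str, start: float) -> None:
    """Vector goes to FAISS, metadata to SQLite, linked by the FAISS row number."""
    row = index.ntotal                        # FAISS assigns rows sequentially
    index.add(embedding.reshape(1, -1).astype("float32"))
    db.execute("INSERT INTO segments VALUES (?, ?, ?)", (row, url, start))
    db.commit()

store_segment(np.random.rand(DIM), "https://example.com/clip.mp4", 0.0)
faiss.write_index(index, "data/index.faiss")
```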
You can also process a CSV file of URLs:
python main.py store \
--csv path/to/videos.csv \
--max-time 1.0 \
--start-entry 1
- `--csv`: CSV with columns including `contentUrl`, `duration`, and `name/description`.
- `--start-entry`: Start ingesting from a particular row number in the CSV.
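The exact CSV layout is not spelled out beyond those column names; as a rough illustration, a reader along these lines (column names taken from the description above, everything else hypothetical) would walk the rows the way the ingester presumably does:

```python
# Rough illustration of iterating a URL CSV with the columns described above.
# Column names follow the description; the row-skipping mirrors --start-entry
# and is an assumption about how the ingester treats it.
import csv

start_entry = 1  # corresponds to --start-entry

with open("videos.csv", newline="") as f:
    for row_number, row in enumerate(csv.DictReader(f), start=1):
        if row_number < start_entry:
            continue
        url = row["contentUrl"]                  # where the video is fetched from
        duration = row["duration"]               # clip length, as given in the CSV
        name = row.get("name") or row.get("description", "")
        print(row_number, url, duration, name)
```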
2. Searching
Once you have stored some embeddings, you can search with:
python main.py search \
--image ./query.jpg \
--top_k 5
This compares the motion image `query.jpg` against the FAISS index and returns the top 5 matches with their URLs, metadata, and distance.
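In vector terms, the search embeds `query.jpg` the same way the stored segments were embedded, asks FAISS for the nearest rows, and maps those rows back to their metadata. A hedged sketch, reusing the hypothetical SQLite layout from the ingestion example above:

```python
# Illustrative search: nearest rows in FAISS, then a metadata lookup in SQLite.
# Reuses the hypothetical schema from the ingestion sketch; not the repo's code.
import sqlite3
import numpy as np
import faiss

index = faiss.read_index("data/index.faiss")
db = sqlite3.connect("data/metadata.db")

query = np.random.rand(1, index.d).astype("float32")  # stand-in for the embedded query.jpg
distances, rows = index.search(query, 5)

for rank, (row, dist) in enumerate(zip(rows[0], distances[0]), start=1):
    url, start = db.execute(
        "SELECT url, start FROM segments WHERE faiss_row = ?", (int(row),)
    ).fetchone()
    print(f"#{rank}: {url} (segment start {start}s, distance {dist:.3f})")
```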
3. Combining Multiple Indices
If you used the concurrency feature (`--concurrent-store`) to generate multiple subindexes, you can combine them all:
python main.py combine
This searches for numbered directories under `./data` that each contain `index.faiss`, merges them, and writes a final combined `index.faiss` in `./data`.
Concurrency Mode
To create multiple indexes in subdirectories, use:
python main.py store --concurrent-store
Each run will create (or use) a new numbered subdirectory under `./data`. You can specify a subdirectory with `--concurrent-index 3` to store in `./data/3/`, for example. Later, use `combine` to merge them.
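Merging flat FAISS indexes amounts to reading each subindex and appending its vectors to a combined one (the row-to-metadata mapping has to be offset the same way, which is omitted here). A minimal sketch, assuming `IndexFlat`-style indexes under `./data/<n>/index.faiss`:

```python
# Minimal sketch of merging numbered subindexes into one flat index.
# Assumes IndexFlat-style indexes; metadata row offsets are not handled here.
import glob
import faiss

combined = None
for path in sorted(glob.glob("data/[0-9]*/index.faiss")):
    sub = faiss.read_index(path)
    if combined is None:
        combined = faiss.IndexFlatL2(sub.d)
    combined.add(sub.reconstruct_n(0, sub.ntotal))   # copy all vectors out of the subindex
    print(f"merged {sub.ntotal} vectors from {path}")

if combined is not None:
    faiss.write_index(combined, "data/index.faiss")
```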
ComfyUI Integration
If you want to use the motion search in ComfyUI, simply:
- Place or symlink this repo in ComfyUI’s `custom_nodes` folder.
- Restart ComfyUI.
- You should see two new nodes:
- IG_MotionVideoSearch: Takes an image and returns URLs + ranks.
- IG_MotionVideoFrame: Takes a stack of 24 frames and returns a single “dot frame.”
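For orientation, ComfyUI custom nodes are plain Python classes registered through `NODE_CLASS_MAPPINGS` in the package's `__init__.py`. The skeleton below shows that general shape only; the inputs, outputs, and placeholder logic are assumptions for illustration, not the actual IG_MotionVideoSearch implementation.

```python
# Stripped-down sketch of the ComfyUI node shape (inputs/outputs here are
# assumptions, not the actual IG_MotionVideoSearch definition).
class IG_MotionVideoSearchSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"image": ("IMAGE",), "top_k": ("INT", {"default": 5, "min": 1})}}

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("urls",)
    FUNCTION = "search"
    CATEGORY = "IG/MotionSearch"

    def search(self, image, top_k):
        # A real implementation would embed `image`, query the FAISS index,
        # and return the matching URLs; this placeholder just returns a stub.
        return ("\n".join(["https://example.com/match.mp4"] * top_k),)

# ComfyUI discovers nodes through these mappings in the package's __init__.py.
NODE_CLASS_MAPPINGS = {"IG_MotionVideoSearchSketch": IG_MotionVideoSearchSketch}
NODE_DISPLAY_NAME_MAPPINGS = {"IG_MotionVideoSearchSketch": "IG Motion Video Search (sketch)"}
```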
Contributing
Contributions and suggestions are welcome! Feel free to open issues or pull requests for enhancements or bug fixes.
License
This project is licensed under the MIT License. Please see the LICENSE file for details.