ComfyDeploy: How ComfyUI_pyannote works in ComfyUI?
What is ComfyUI_pyannote?
This repository provides custom nodes for ComfyUI designed to process audio files, performing speaker diarization and integrating speaker data into whisper-transcribed segments. These nodes utilize the PyAnnote library for speaker identification and pandas for efficient data handling.
How to install it in ComfyDeploy?
Head over to the machine page
- Click on the "Create a new machine" button
- Select the `Edit` build steps
- Add a new step -> Custom Node
- Search for `ComfyUI_pyannote` and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
ComfyUI Custom Nodes: Speaker Diarization and Whisper Integration
Overview
This repository provides custom nodes for ComfyUI designed to process audio files, performing speaker diarization and integrating speaker data into whisper-transcribed segments. These nodes utilize the PyAnnote library for speaker identification and pandas for efficient data handling.
Custom Nodes
1. Speaker Diarization Node
Description:
Performs speaker diarization on an input audio file, identifying and segmenting speech by different speakers.
- Category: Audio Processing
- Node Name: `Speaker Diarization`
Inputs:
- audio: (`AUDIO`) The audio data to process.
- hf_token: (`STRING`) A Hugging Face authentication token for accessing the PyAnnote model.
Outputs:
- speaker_segments: (`List[Dict]`) A list of dictionaries containing speaker segments with `start`, `end`, and `speaker` information.
Usage Example:
After connecting an audio input, this node identifies where each speaker starts and ends in the audio file.
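For orientation, the diarization step can be approximated with pyannote.audio directly. The snippet below is a minimal sketch rather than the node's actual code; the pipeline name `pyannote/speaker-diarization-3.1` and the `audio.wav` path are assumptions.

```python
# Minimal sketch of a diarization pass with pyannote.audio.
# The pipeline version and file path are assumptions; the node's
# internals may differ.
from pyannote.audio import Pipeline

hf_token = "hf_..."  # your Hugging Face access token
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token=hf_token,
)

diarization = pipeline("audio.wav")

# Collect segments in the same shape as the node's speaker_segments output:
# a list of dicts with start, end, and speaker keys.
speaker_segments = [
    {"start": turn.start, "end": turn.end, "speaker": speaker}
    for turn, _, speaker in diarization.itertracks(yield_label=True)
]
print(speaker_segments[:3])
```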
2. Whisper Segments to Speaker Node
Description:
Adds speaker labels to segments generated by a Whisper speech-to-text model, aligning them based on time overlaps with diarized segments.
- Category: Audio Processing
- Node Name: `Whisper Segments to Speaker`
Inputs:
- whisper_segments: (`whisper_alignment`) Transcription segments generated by Whisper, including `start` and `end` times.
- speaker_segments: (`speaker_segments`) Output from the `Speaker Diarization` node, providing time-aligned speaker information.
Outputs:
- segments_alignment: (`whisper_alignment`) The original Whisper segments enriched with speaker labels.
Usage Example:
Connect the output of the `Speaker Diarization` node and the Whisper-transcribed segments. This node aligns the speaker data with the transcription, producing a detailed transcript with speaker differentiation.
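The alignment itself amounts to finding, for each Whisper segment, the diarized speaker whose segment overlaps it the most in time. The sketch below illustrates the idea in plain Python with illustrative variable names; the node may implement this with pandas, as noted under Dependencies.

```python
# Sketch: label each Whisper segment with the speaker whose diarized
# segment has the largest time overlap with it. Names are illustrative.

def assign_speakers(whisper_segments, speaker_segments):
    labeled = []
    for seg in whisper_segments:
        best_speaker, best_overlap = "UNKNOWN", 0.0
        for spk in speaker_segments:
            # Overlap between [seg.start, seg.end] and [spk.start, spk.end].
            overlap = min(seg["end"], spk["end"]) - max(seg["start"], spk["start"])
            if overlap > best_overlap:
                best_overlap, best_speaker = overlap, spk["speaker"]
        labeled.append({**seg, "speaker": best_speaker})
    return labeled

# Example:
whisper_segments = [{"start": 0.0, "end": 2.5, "text": "Hello there."}]
speaker_segments = [{"start": 0.0, "end": 3.0, "speaker": "SPEAKER_00"}]
print(assign_speakers(whisper_segments, speaker_segments))
```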
Dependencies
Ensure the following libraries are installed:
- PyAnnote Audio: For speaker diarization.
- pandas: For data manipulation.
- numpy: For numeric operations.
- torchaudio: For audio processing.
- pydub: For audio format handling.
Install dependencies using:
```
pip install pyannote.audio pandas numpy torchaudio pydub
```
Integration with ComfyUI
- Place Node Files:
  Add the provided Python file to the `custom_nodes` directory in your ComfyUI setup.
- Register the Nodes:
  ComfyUI automatically detects nodes from the `NODE_CLASS_MAPPINGS`. Ensure the structure includes:

  ```python
  NODE_CLASS_MAPPINGS = {
      "Speaker Diarization": SpeakerDiarizationNode,
      "Whisper Segments to Speaker": WhisperDiarizationNode
  }
  ```
- Node Interface:
  In the ComfyUI interface, locate the nodes under the Audio Processing category. Connect them as needed to process audio inputs and transcriptions.
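For reference, a class listed in `NODE_CLASS_MAPPINGS` typically follows the usual ComfyUI node conventions shown below. This is a generic sketch, not the repository's actual implementation; the input and output names simply mirror the documentation above.

```python
# Generic sketch of a ComfyUI node class following the common conventions
# (INPUT_TYPES / RETURN_TYPES / FUNCTION / CATEGORY). The real class in
# this repository may differ in details.
class SpeakerDiarizationNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "audio": ("AUDIO",),
                "hf_token": ("STRING", {"default": ""}),
            }
        }

    RETURN_TYPES = ("speaker_segments",)
    FUNCTION = "diarize"
    CATEGORY = "Audio Processing"

    def diarize(self, audio, hf_token):
        # Run PyAnnote here and build [{"start": ..., "end": ..., "speaker": ...}].
        segments = []
        return (segments,)
```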
Configuration
Hugging Face Token
To access the PyAnnote model, you need a Hugging Face token. Sign up or log in to Hugging Face and generate an access token from your account settings.
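As a quick sanity check before wiring the token into the node, you can verify it with `huggingface_hub` (installed alongside pyannote.audio); this is an optional step, not part of the node itself.

```python
# Verify a Hugging Face token before using it in the hf_token input.
from huggingface_hub import whoami

hf_token = "hf_..."  # token generated in your Hugging Face account settings
print(whoami(token=hf_token))  # raises an error if the token is invalid
```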
License
This project is open-source. Refer to the LICENSE file for details.
Contact
For issues or feature requests, please submit them via the repository's issue tracker.
Happy audio processing! 🎧