ComfyDeploy: How does ComfyUI-NuA-BIRD work in ComfyUI?
What is ComfyUI-NuA-BIRD?
ComfyUI implementation of [Blind Image Restoration via Fast Diffusion Inversion](https://github.com/hamadichihaoui/BIRD). Original [article](https://arxiv.org/abs/2405.19572).
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for ComfyUI-NuA-BIRD and select it
- Close the build step dialog, then click on the "Save" button to rebuild the machine
ComfyUI-NuA-BIRD
ComfyUI implementation of "Blind Image Restoration via Fast Diffusion Inversion"<br> Original [article](https://arxiv.org/abs/2405.19572)
Features
- Blind Deblurring
- Non-uniform Deblurring
- Inpainting
- Denoising
- Super Resolution
Installation
1. Clone the repository into the `ComfyUI/custom_nodes` directory

   ```
   cd ComfyUI/custom_nodes
   git clone https://github.com/nuanarchy/ComfyUI-NuA-BIRD.git
   ```

2. Install the required modules

   ```
   pip install -r ComfyUI-NuA-BIRD/requirements.txt
   ```

3. Copy the model weights into the appropriate folder: `ComfyUI/models/checkpoints`
Examples
In the `examples` folder, you will find the workflow diagrams, the JSON files with the configurations, and the resulting images.
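Before loading one of the bundled JSON files, it can be handy to inspect which node types a workflow uses, e.g. to confirm the custom node is installed. A minimal sketch, assuming ComfyUI's API-format JSON (node IDs mapping to a `class_type` and an `inputs` dict); the concrete nodes below are illustrative, not taken from the bundled files:

```python
import json

# Illustrative API-format workflow; the real files live in examples/.
workflow_json = """
{
  "1": {"class_type": "LoadImage", "inputs": {"image": "degraded.png"}},
  "2": {"class_type": "KSampler", "inputs": {"seed": 42, "steps": 20}}
}
"""

workflow = json.loads(workflow_json)

# Map each node ID to its class_type to see which node packs the workflow needs.
node_types = {node_id: node["class_type"] for node_id, node in workflow.items()}
print(node_types)
```

Running the same loop over a file from `examples` would list the ComfyUI-NuA-BIRD node types it depends on.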
Workflow Diagrams
Blind Deblurring
<img src="examples/deblurring.png" alt="Blind Deblurring" width=auto height=auto>

Non-uniform Deblurring

<img src="examples/deblurring_non_uniform.png" alt="Non-uniform Deblurring" width=auto height=auto>

Inpainting

<img src="examples/inpainting.png" alt="Inpainting" width=auto height=auto>

Denoising

<img src="examples/denoising.png" alt="Denoising" width=auto height=auto>

Super Resolution

<img src="examples/super_resolution.png" alt="Super Resolution" width=auto height=auto>

Important
The results depend primarily on the pretrained model and its training dataset.<br> Limitations:
- The model only works with square images at a resolution of 256x256 pixels
- Faces must be cropped and centered in the images
- For Super Resolution tasks, the input image resolution can be any size smaller than 256x256 pixels
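For the deblurring, inpainting, and denoising tasks, input images therefore need to be square 256x256 crops with the face centered. A minimal Pillow preprocessing sketch (the helper name and the demo image are illustrative, not part of this repository):

```python
from PIL import Image


def prepare_for_bird(img: Image.Image, size: int = 256) -> Image.Image:
    """Center-crop an image to a square, then resize it to size x size,
    matching the model's expected 256x256 input."""
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    square = img.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.LANCZOS)


# Demo on a synthetic 640x480 image; in practice, pass your own photo
# with the face already roughly centered.
demo = Image.new("RGB", (640, 480), "gray")
out = prepare_for_bird(demo)
print(out.size)  # (256, 256)
```

Note that this crop-and-resize step is not appropriate for Super Resolution inputs, which should simply stay smaller than 256x256.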
If you want to overcome these limitations, you can train your own diffusion model on custom datasets.<br> You can use the OpenAI repository: [improved-diffusion](https://github.com/openai/improved-diffusion)