Faster workflows for ComfyUI users on Mac with Apple silicon

Hello everyone,

I’ve done a quick port of DiffusionKit to ComfyUI to enable faster image generation on Macs with Apple silicon chips.

For now, I’ve only built basic nodes for txt2img workflows using Flux models.

I believe this is a good investment for the future, as Apple Silicon chips and MLX will continue to improve over time.

The MLX custom nodes are available in the Custom Nodes Manager (you just need to install diffusionkit before running them).
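If you prefer installing manually instead of through the Manager, the steps are roughly as follows (a sketch, assuming a working Python environment with pip and a standard ComfyUI layout; the package name and repo URL come from the links at the end of this post):

```shell
# Install DiffusionKit first (required by the MLX nodes)
pip install diffusionkit

# Then clone the nodes into ComfyUI's custom_nodes folder
cd ComfyUI/custom_nodes
git clone https://github.com/thoddnn/ComfyUI-MLX.git

# Restart ComfyUI so the new nodes are picked up
```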

Feel free to test it, contribute, or submit feature requests in the comment section below! 🙏

My test setup:

Device: MacBook M2 Max, 96 GB
Model: Flux 1.0 dev (not quantized)
Size: 512x512
Prompt: Photo of a cat
Steps: 10

I get:

  • 70% faster when the model still needs to be loaded
  • 35% faster once the model is already loaded
  • 30% lower memory usage
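For clarity on how percentages like these are typically computed, here is a quick sketch of the "X% faster" arithmetic as a reduction in wall-clock time relative to the baseline. The timing values below are made up purely for illustration; they are not the actual measurements behind the numbers above:

```python
def pct_faster(t_baseline: float, t_new: float) -> float:
    """Percent reduction in wall-clock time relative to the baseline run."""
    return (t_baseline - t_new) / t_baseline * 100.0

# Hypothetical timings (seconds), for illustration only
t_base_cold, t_mlx_cold = 100.0, 30.0  # model load + generation
t_base_warm, t_mlx_warm = 20.0, 13.0   # model already loaded

print(f"cold start: {pct_faster(t_base_cold, t_mlx_cold):.0f}% faster")  # 70% faster
print(f"warm run:   {pct_faster(t_base_warm, t_mlx_warm):.0f}% faster")  # 35% faster
```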

Basic txt2img workflow (Flux): https://github.com/thoddnn/ComfyUI-MLX/tree/main/workflows

ComfyUI MLX nodes: https://github.com/thoddnn/ComfyUI-MLX

DiffusionKit: https://github.com/argmaxinc/DiffusionKit