4-step distillation of Flux.2 now available
https://preview.redd.it/onnlnaws8v7g1.png?width=1024&format=png&auto=webp&s=bb8d210a6af676a317c8b9933446000d28cd74e7
Custom nodes: https://github.com/Lakonik/ComfyUI-piFlow?tab=readme-ov-file#pi-flux2
Model: https://huggingface.co/Lakonik/pi-FLUX.2
Demo: https://huggingface.co/spaces/Lakonik/pi-FLUX.2
Not sure if people are still interested in Flux.2, but here it is. It supports both text-to-image generation and multi-image editing in 4 or more steps.
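If you'd rather script the weight download than grab files by hand, here is a minimal sketch using `huggingface_hub`. The destination folder is an assumption based on ComfyUI's default layout; check the ComfyUI-piFlow README for where the nodes actually expect the weights.

```python
# Minimal sketch: fetch the pi-FLUX.2 weights from Hugging Face for use with
# the ComfyUI-piFlow custom nodes. The target folder is an assumption based on
# the default ComfyUI layout -- consult the ComfyUI-piFlow README for the exact
# location and file names the nodes expect.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Lakonik/pi-FLUX.2",  # model repo linked in the post
    local_dir="ComfyUI/models/diffusion_models/pi-FLUX.2",  # assumed destination
)
```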
https://redd.it/1ppedq9
@rStableDiffusion
LM Studio with Qwen3 VL 8B and Z Image Turbo is the best combination
Load an already existing image into LM Studio with Qwen3 VL running and an enlarged context window, and use the prompt:
"From what you see in the image, write me a detailed prompt for the AI image generator, segment the prompt into subject, scene, style, ..."
Then use the resulting prompt in Z Image Turbo; 10-20 steps and CFG 1-2 give the best results, depending on what you need.
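For anyone who wants to script the captioning step instead of using the LM Studio chat UI, here is a minimal sketch against LM Studio's OpenAI-compatible local server. The port (1234) is LM Studio's default, and the model identifier is whatever name LM Studio shows for your loaded Qwen3 VL 8B build, so treat both as assumptions; the returned text is what you then paste into your Z Image Turbo workflow.

```python
# Minimal sketch: ask a Qwen3 VL model served by LM Studio to turn an existing
# image into a structured prompt for an image generator. Assumes LM Studio's
# local server is running on its default port (1234) and that the model
# identifier below matches the name LM Studio reports for your loaded model.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode the reference image so it can be sent inline as a data URL.
with open("reference.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="qwen3-vl-8b",  # assumed identifier; use the name LM Studio shows
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "From what you see in the image, write me a detailed prompt "
                "for the AI image generator, segment the prompt into "
                "subject, scene, style, ..."
            )},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

# Paste this output into your Z Image Turbo workflow (10-20 steps, CFG 1-2).
print(response.choices[0].message.content)
```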
https://redd.it/1pppf34
@rStableDiffusion
PSA: It is pronounced "oiler"
Too many videos online mispronounce the word when talking about the Euler scheduler. If you didn't know, now you do: "oiler." I made the same mistake when I first read his name, but please, from now on, get it right!
https://redd.it/1ppsn77
@rStableDiffusion
ComfyUI Tutorial Series Ep 73: Final Episode & Z-Image ControlNet 2.0
https://www.youtube.com/watch?v=DMbXB6g17IU
https://redd.it/1ppu9hi
@rStableDiffusion
In this final episode of the ComfyUI tutorial series, you’ll learn how to use Z-Image ControlNet 2.0 (Turbo Fun ControlNet Union v2) for Pose, Depth, and Canny control, plus hear what’s next after episode 73 and why this series is ending.
Get the workflows…