Qwen-Image-2512 inpaint, anyone got it working?
https://github.com/Comfy-Org/ComfyUI/pull/12359
The PR says it should be in ComfyUI, but when I try the inpainting setup with the "ControlNetInpaintingAliMamaApply" node, nothing errors and yet no edits are made to the image.
I'm using the latest ControlNet Union model from here. I just want to mask an area and inpaint it.
https://huggingface.co/alibaba-pai/Qwen-Image-2512-Fun-Controlnet-Union/tree/main
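A "no error, no edit" result in ComfyUI is usually a wiring problem rather than a model problem: a disconnected or all-black mask, or a ControlNet strength of 0, will run cleanly and change nothing. Below is a hedged sketch of the relevant API-format wiring; the node input names (`positive`, `negative`, `control_net`, `vae`, `image`, `mask`, `strength`, `start_percent`, `end_percent`) are assumptions based on the usual ControlNet apply nodes, and the referenced node IDs and checkpoint filename are hypothetical — check them against your local install.

```python
import json

# Hedged sketch of ComfyUI API-format wiring for the inpaint apply node.
# Input names and node IDs are assumptions, not the confirmed schema.
workflow = {
    "10": {  # load the Fun ControlNet Union checkpoint (filename is illustrative)
        "class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "Qwen-Image-2512-Fun-Controlnet-Union.safetensors"},
    },
    "11": {  # apply it with image + mask; strength 0.0 would silently do nothing
        "class_type": "ControlNetInpaintingAliMamaApply",
        "inputs": {
            "positive": ["6", 0],
            "negative": ["7", 0],
            "control_net": ["10", 0],
            "vae": ["4", 2],
            "image": ["12", 0],
            "mask": ["13", 0],   # an empty (all-black) mask also yields no edits
            "strength": 1.0,
            "start_percent": 0.0,
            "end_percent": 1.0,
        },
    },
}

payload = json.dumps({"prompt": workflow})
print("strength =", workflow["11"]["inputs"]["strength"])
```

If the graph looks right, sanity-check the mask itself: preview it in the graph and confirm the masked region is actually white (nonzero) before blaming the model.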
https://redd.it/1r54x7q
@rStableDiffusion
Add working Qwen 2512 ControlNet (Fun ControlNet) support by krigeta · Pull Request #12359 · Comfy-Org/ComfyUI
Summary
This PR adds full support for Qwen 2.5 Fun ControlNet format, enabling ControlNet functionality for Qwen image generation models.
Related Issues
Closes Please add support Qwen-Image-2512-F...
Tried Z-Image Turbo on 32GB RAM + RTX 3050 via ForgeUI — consistently ~6–10s per 1080p image
Hey folks, been tinkering with SD setups and wanted to share some real-world performance numbers in case it helps others in the same hardware bracket.
Hardware:
• RTX 3050 (laptop GPU)
• 32 GB RAM
• Running everything through ForgeUI + Z-Image Turbo
Workflow:
• 1080p outputs
• Default-ish Turbo settings (sped up sampling + optimized caching)
• No crazy overclocking, just stable system config
Results:
I’m getting pretty consistent ~6–10 seconds per image at 1080p depending on the prompt complexity and sampler choice. Even with denser prompts and CFG bumped up, the RTX 3050 still holds its own surprisingly well with Turbo processing.
Before this I was bracing for 20–30s renders, but the combined ForgeUI + Z-Image Turbo setup feels like a legit game changer for this class of GPU.
Curious to hear from folks with similar rigs:
• Is that ~6–10s/1080p what you’re seeing?
• Any specific Turbo settings that squeeze out more performance without quality loss?
• How do your artifacting/noise results compare at faster speeds?
• Anyone paired this with other UIs like Automatic1111 or NMKD and seen big diffs?
Appreciate any tips or shared benchmarks!
https://redd.it/1r58tz2
@rStableDiffusion
I got tired of guessing if my Character LoRA trainings were actually good, so I built a local tool to measure them scientifically. Here is MirrorMetric (Open Source and totally local)
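The usual way tools in this space score character consistency is to embed a reference image and the LoRA's outputs with some feature extractor, then compare embeddings by cosine similarity. To be clear, this is a generic sketch of that idea, not MirrorMetric's actual method, and the toy vectors below stand in for real embeddings (which would come from a face or CLIP embedding model).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: a reference character vs. two LoRA outputs.
# The first output is close to the reference, the second drifts.
reference = [0.2, 0.8, 0.1]
outputs = [[0.25, 0.75, 0.05], [0.9, 0.1, 0.4]]
scores = [cosine_similarity(reference, e) for e in outputs]
print([round(s, 3) for s in scores])
```

Averaging such scores over a batch of generations gives a single identity-consistency number you can track across training checkpoints instead of guessing by eye.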
https://redd.it/1r5j8a8
@rStableDiffusion