r/StableDiffusion – Telegram
For restoring very low-resolution videos, e.g. upscaling 256px to 1024px, SeedVR2 is better than FlashVSR+.

https://redd.it/1rgovde
@rStableDiffusion
Z-Image-Turbo Controlnet Union 2.1 version 2602 just released

https://preview.redd.it/je2zyojhf9mg1.png?width=917&format=png&auto=webp&s=7eb32d6dca2a129acde4b1137275aabf116c7505

[2026.02.26] Update to version 2602, with support for Gray Control.

Personally, I had much better results with the Lite versions, BTW (for some reason the full versions produced very poor-quality outputs).

Download: https://huggingface.co/alibaba-pai/Z-Image-Turbo-Fun-Controlnet-Union-2.1/tree/main

https://redd.it/1rh6nwr
ACE-Step 1.5 M2M best practices - do we have them?

Love ACE-Step 1.5. Amazing and fast for text-to-music. But for music-to-music, it's terrible. At medium denoise it changes the song completely, essentially giving t2m output at lower quality. At low denoise it just degrades the audio quality.

Has anyone managed to get decent results out of music-to-music? E.g. tweaking the genre, replacing some words in the lyrics, or similar?

https://redd.it/1rh6lmz
ELI5 why the finetuning community is much less active for Z image turbo and base than for SDXL

SDXL has just about every imaginable LoRA and checkpoint on Civitai, including the weirdest niche things beyond imagination, but the only ones for ZiT and ZiB are some slight style LoRAs for realism and, of course, some stuff for nudity and sex, which, surprisingly, is worse than the SDXL equivalents, even though SDXL is an infinitely worse model.

Were ZiB and ZiT overhyped? For all the hype, I thought people would have created the coolest LoRAs and checkpoints by now, just like they did for SDXL, even taking into account that SDXL is three years old and Z-Image is only a few weeks to months old. But STILL.

Isn't it as great as people thought?

https://redd.it/1rhftq8
Free ComfyUI Colab Pack for popular models (T4-friendly, GGUF-first, auto quant by VRAM)

Hey everyone,



I just open-sourced my Free ComfyUI Colab Pack for popular models.

Main goal: make testing and using strong models easier on Colab Free T4, without painful setup.



What is inside:

- model-specific Colab notebooks
- ready workflows per model
- GGUF-first approach for lower VRAM pressure
- auto quant selection by VRAM budget
- HF + Civitai token prompts
- stable Cloudflare tunnel launch logic
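The "auto quant selection by VRAM budget" idea can be sketched like this. This is a minimal illustration, not the pack's actual code; the function name, the thresholds, and the mapping to GGUF quant tags are my assumptions:

```python
# Hypothetical sketch: pick a GGUF quantization level from the available
# VRAM budget. Thresholds are illustrative, not the pack's real logic.

def pick_quant(vram_gb: float) -> str:
    """Return a GGUF quant tag for the given VRAM budget (in GB)."""
    if vram_gb >= 16:
        return "Q8_0"    # near-lossless, largest file
    if vram_gb >= 12:
        return "Q6_K"
    if vram_gb >= 8:
        return "Q4_K_M"  # common quality/size trade-off
    return "Q3_K_S"      # tightest budgets

print(pick_quant(15))  # → Q6_K (a Colab T4 exposes roughly 15 GB)
```

In practice the budget would come from something like `torch.cuda.mem_get_info()` or parsing `nvidia-smi`, with some headroom reserved for activations.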



I spent a lot of time building and maintaining these notebooks as open source.

If this project helps you, stars and PRs are very welcome.



If you want to support development, even $1 helps a lot and goes to GPU server costs and food.

Donate info is in the repo.



Repo:

https://github.com/ekkonwork/free-comfyui-colab-pack



Issues welcome <3

https://preview.redd.it/e1tin2r9eamg1.png?width=1408&format=png&auto=webp&s=3ff874c75efa9696ef94f6409c55dc6c30fb3ef7



https://redd.it/1rhbkaz
Gemini is already smarter about censorship than its creators.
https://redd.it/1rhkhi3
Using controlnets in 2026

Hey guys, I'm pretty new to Comfy (2 months) and I was wondering if anyone still uses controlnets, and in what ways? Especially with newer models like ZiT and Flux, I'd love to know how they contribute, or whether they're obsolete now.

https://redd.it/1rhlytk