r/StableDiffusion – Telegram
How would you go about generating video with a character ref sheet?

I've generated a character sheet for a character I want to use in a series of videos, but I'm struggling to figure out how to use it properly when creating them. Specifically, I'm after a Titmouse-style D&D animation of a fight sequence that happened in game.

Would appreciate any workflow examples you can point to, or tutorial videos for making my own.

https://preview.redd.it/kpallbyckxkg1.png?width=1024&format=png&auto=webp&s=d0fe33baeabeee6d356020ea81c0bae707cad638

https://preview.redd.it/805h1eyckxkg1.png?width=1024&format=png&auto=webp&s=42ef42bde1edee800e25210bf471831c93290726

https://redd.it/1rb5n9h
@rStableDiffusion
A single diffusion pass is enough to fool SynthID

I've been digging into invisible watermarks (SynthID, StableSignature, TreeRing), the kind baked into pixels by Gemini, DALL-E, etc. You can't see them, can't Photoshop them out, and they survive screenshots. I got curious how robust they actually are, so I threw together noai-watermark over a weekend. It runs a watermarked image through a diffusion model; the output looks the same, but the watermark is gone. A single pass at low strength fools SynthID. There's also a CtrlRegen mode for higher quality, and it strips all AI metadata too.

Mostly built this for research and education, wanted to understand how these systems work under the hood. Open source if anyone wants to poke around.

github: https://github.com/mertizci/noai-watermark
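The "single pass at low strength" idea maps onto how diffusers-style img2img truncates the denoising schedule. Below is a minimal sketch of that truncation; the function name and the schematic integer timesteps are my own for illustration, not code from the noai-watermark repo:

```python
# Sketch: how img2img "strength" controls a regeneration pass.
# In a diffusers-style img2img pipeline, strength in [0, 1] sets how far
# the input image is pushed into the noise schedule before being denoised
# back out. This models the timestep truncation, not the real pipeline.

def img2img_steps(num_inference_steps: int, strength: float) -> list[int]:
    """Return the (schematic) denoising timesteps run for a given strength.

    The input image is noised only up to `init_timestep`, so the first
    (1 - strength) fraction of the schedule is skipped entirely.
    """
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    # Remaining timesteps, counting down to 0 (fully denoised).
    return list(range(num_inference_steps - t_start - 1, -1, -1))

# At low strength only a few steps run: enough noise is injected to
# overwrite pixel-level signals (like a watermark) while the denoiser
# preserves the image's global structure.
print(len(img2img_steps(50, 0.1)))  # 5 steps
print(len(img2img_steps(50, 0.8)))  # 40 steps
```

This is why a low-strength pass can look near-identical to the input: most of the schedule is skipped, and the model only re-renders fine detail.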

https://redd.it/1rbb24f
I Combined Wan Animate 2.2 Complete Ecosystem Workflow | SCAIL + SteadyDancer + One-to-All Workflows Into ONE Ultimate Multi-Character Animation Setup (Now on CivitAI)
https://redd.it/1rbftee
For restoring very low-resolution videos (e.g. upscaling 256px to 1024px), SeedVR2 is better than FlashVSR+.

https://redd.it/1rgovde
Z-Image-Turbo Controlnet Union 2.1 version 2602 just released

https://preview.redd.it/je2zyojhf9mg1.png?width=917&format=png&auto=webp&s=7eb32d6dca2a129acde4b1137275aabf116c7505

[2026.02.26] Update to version 2602, with support for Gray Control.

Personally, I had much better results with the Lite versions, BTW (for some reason, the full versions produced very poor quality outputs).

Download: https://huggingface.co/alibaba-pai/Z-Image-Turbo-Fun-Controlnet-Union-2.1/tree/main

https://redd.it/1rh6nwr
ACE-Step 1.5 M2M best practices - do we have them?

I love ACE-Step 1.5. Amazing and fast for text-to-music. But for music-to-music, it's terrible. At medium noise, it changes the song completely, essentially the same as t2m but at lower quality. At low denoise, it just degrades the audio quality.

Has anyone managed to get decent results out of music-to-music? E.g. tweaking the genre, replacing some words in the lyrics, or similar?

https://redd.it/1rh6lmz
ELI5 why the finetuning community is much less active for Z-Image Turbo and Base than for SDXL

SDXL has practically every imaginable LoRA and checkpoint on Civitai, including the weirdest niche things beyond imagination. But the only ones for ZiT and ZiB are some slight style ones for realism and, of course, some stuff for nudity and sex, which, surprisingly, is worse than the equivalents for SDXL, an infinitely weaker model.

Were ZiB and ZiT overhyped? For all the hype, I thought people would have created the coolest LoRAs and checkpoints by now, just like they did for SDXL, even accounting for SDXL being three years old and Z-Image only a few weeks to months, but STILL.

Isn't it as great as people thought?

https://redd.it/1rhftq8