r/StableDiffusion – Telegram
Mario the crazy conspiracy theorist was too much fun not to create! LTX-2

https://redd.it/1olt8jb
@rStableDiffusion
Reporting that the RTX Pro 6000 Blackwell can handle batch size 8 while training an Illustrious LoRA.
https://redd.it/1olvxy8
@rStableDiffusion
FlashVSR_Ultra_Fast vs. Topaz Starlight
https://redd.it/1olznsq
@rStableDiffusion
What Illustrious models is everyone using?

I have experimented with many Illustrious models, with WAI, Prefect and JANKU being my favorites, but I am curious what you guys are using! I'd love to find a daily driver as opposed to swapping between models so often.

https://redd.it/1om1e9a
@rStableDiffusion
How can I face swap and regenerate these paintings?
https://redd.it/1oly7mh
@rStableDiffusion
Movie night with my fav lil slasher~ 🍿💖
https://redd.it/1om3jrl
@rStableDiffusion
Got Wan2.2 I2V running 2.5x faster on 8xH100 using Sequence Parallelism + Magcache

https://preview.redd.it/07lwyvl5zryf1.png?width=1200&format=png&auto=webp&s=ad22c52c861c18c94c54f27bbe71a6e120a8f3e7

Hey everyone,

I was curious how much faster we can get with Magcache on 8xH100 instead of 1xH100 for Wan 2.2 I2V. Currently, the original Magcache and Teacache repositories only support single-GPU inference for Wan2.2 because of FSDP, as shown in this GitHub issue.

I managed to scale Magcache to 8xH100 with FSDP and sequence parallelism, and also experimented with several techniques: Flash-Attention-3, TF32 tensor cores, int8 quantization, Magcache, and torch.compile.
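For anyone unfamiliar with the term, here is a minimal sketch of the shard-and-gather skeleton behind sequence parallelism, not the actual code from our repo. The function name and the even-split assumption are illustrative, and a real implementation (Ulysses- or ring-attention style) also exchanges K/V across ranks so attention stays global rather than per-shard.

```python
# Minimal sketch of sequence parallelism for a video diffusion transformer.
# Each rank processes a contiguous chunk of the token sequence, and the full
# output is reassembled with all_gather. Illustrative only.
import torch
import torch.distributed as dist

def sequence_parallel_forward(block, latents: torch.Tensor) -> torch.Tensor:
    """latents: [batch, seq_len, dim], replicated on every rank.
    Assumes seq_len is divisible by the world size."""
    rank, world = dist.get_rank(), dist.get_world_size()

    # 1. Shard the sequence dimension: each GPU gets seq_len / world tokens.
    local = latents.chunk(world, dim=1)[rank].contiguous()

    # 2. Run the expensive transformer block on the local shard only.
    #    (A real implementation also exchanges K/V across ranks here so
    #    attention still spans the whole sequence.)
    local_out = block(local)

    # 3. Gather the shards so every rank ends up with the full sequence again.
    gathered = [torch.empty_like(local_out) for _ in range(world)]
    dist.all_gather(gathered, local_out)
    return torch.cat(gathered, dim=1)
```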

The fastest combo I got was FA3 + TF32 + Magcache + torch.compile, which renders a 1280x720 video (81 frames, 40 steps) in 109s, down from the 250s baseline (8xH100 sequence parallelism and FA2 only), with no noticeable loss of quality. We can also tune the Magcache parameters for a speed/quality tradeoff, for example E024K2R10 (error threshold = 0.24, skip K = 2, retention ratio = 0.1) for a 2.5x+ speed boost.
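As a rough sketch of how those knobs are turned: the TF32 and torch.compile calls below are standard PyTorch, while the MagCache hook at the end is a hypothetical placeholder (the real integration lives in the linked repo).

```python
# Speed knobs mentioned above: TF32 tensor cores + torch.compile, plus the
# MagCache settings from the "E024K2R10" preset.
import torch
import torch.nn as nn

# Allow TF32 tensor cores for matmuls/convs (Ampere+ / Hopper GPUs).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

transformer = nn.Linear(4096, 4096)        # stand-in for the Wan2.2 DiT blocks
transformer = torch.compile(transformer)   # kernel fusion after the first call

# "E024K2R10": error threshold 0.24, skip K = 2, retention ratio 0.1.
magcache_cfg = dict(error_threshold=0.24, skip_k=2, retention_ratio=0.1)
# apply_magcache(pipe, **magcache_cfg)     # hypothetical hook into the pipeline
```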

Full breakdown, commands, and comparisons are here:

👉 Blog post with full benchmarks and configs

👉 GitHub repo with code

Curious if anyone else here is exploring sequence parallelism or similar caching methods on FSDP-based video diffusion models? Would love to compare notes.

Disclosure: I worked on and co-wrote this technical breakdown as part of the Morphic team.

https://redd.it/1om8sr9
@rStableDiffusion
Dataset tool to organize images by quality (sharp / blurry, JPEG artifacts, compression, etc.)

I have rolled some of my own image-quality tools before, but I'll ask anyway: is there a tool that allows grouping / sorting / filtering images by different quality criteria like sharpness, blurriness, JPEG artifacts (even imperceptible ones), compression, out-of-focus depth of field, etc. - basically by overall quality?

I am looking to root out outliers in larger datasets that could negatively affect training quality.
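If it helps anyone, here is a minimal sketch of one common approach to the sharpness/blur part: variance of the Laplacian via OpenCV, where low variance roughly means blurry or out of focus. The cutoff and folder names are arbitrary, and artifact/compression scoring would need a separate metric (e.g. a no-reference score like BRISQUE).

```python
# Minimal sketch: rank a dataset by sharpness (variance of the Laplacian)
# and move the blurriest images into a review folder. Threshold is arbitrary.
import shutil
from pathlib import Path

import cv2

def sharpness_score(path: Path) -> float:
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    if img is None:
        return 0.0  # unreadable files sort to the bottom
    return cv2.Laplacian(img, cv2.CV_64F).var()

dataset = Path("dataset")
review = dataset / "_blurry"
review.mkdir(exist_ok=True)

scores = {p: sharpness_score(p) for p in dataset.glob("*.jpg")}
for path, score in sorted(scores.items(), key=lambda kv: kv[1]):
    if score < 100.0:  # arbitrary cutoff; tune per dataset
        shutil.move(str(path), review / path.name)
```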

https://redd.it/1omac5p
@rStableDiffusion