r/StableDiffusion – Telegram
Could I use an AI 3D scanner to make this 3D printable? I made this using SD
https://redd.it/1oy42nz
@rStableDiffusion
Some new WAN 2.2 Lightning LoRA comparisons

A comparison of all Lightning LoRA pairs, from oldest to newest.

- All models are set to 1 strength
- Using FP8_SCALED base models

T2V 432x768px - EULER / SIMPLE - shift 5 - 41 frames
T2I 1080x1920px - GRADIENT ESTIMATION / BONG TANGENT - shift 5 - 1 frame
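For reference, the two setups above can be captured as a small config sketch. This is my own illustrative notation, not from the post; the lowercase sampler/scheduler names are assumptions loosely following common ComfyUI naming.

```python
# Hypothetical config sketch of the two sampling setups from the comparison.
# Key names and string values are illustrative, not an actual tool's API.
t2v = {
    "resolution": (432, 768),        # width x height in px
    "sampler": "euler",              # EULER
    "scheduler": "simple",           # SIMPLE
    "shift": 5,
    "frames": 41,
}
t2i = {
    "resolution": (1080, 1920),
    "sampler": "gradient_estimation",  # GRADIENT ESTIMATION
    "scheduler": "bong_tangent",       # BONG TANGENT
    "shift": 5,
    "frames": 1,                       # single frame = still image
}
print(t2v["frames"], t2i["frames"])  # 41 1
```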

If you ask me, use the 250928 pair: much better colors, less of the oversaturated / bright "high CFG" look, more natural output, and more overall / fine detail.
Maybe try SEKO v2 if you are rendering more synthetic content like anime or CGI styles.

Here : https://huggingface.co/lightx2v/Wan2.2-Lightning/discussions/64

https://preview.redd.it/6g0d6jpz2i1g1.jpg?width=4352&format=pjpg&auto=webp&s=cc9489b8eee7677eced827d5e9213dabcdbaf49b

https://preview.redd.it/5g1ihe7h3i1g1.jpg?width=4352&format=pjpg&auto=webp&s=5e754bd31f16fe6a13d684a5e8e6685f67e85843

https://redd.it/1oy5hjv
Free tools for video face swap?

Are there any free tools that can do video face swaps without huge watermarks or crashing? I tried a few trial versions but none were stable. Would love something open source if possible

https://redd.it/1oykglu
Qwen and Qwen Edit 2509 - is the model like Flux? Is a small number of images (10) enough to train a LoRA?

With Flux, I got worse results when I tried to train a LoRA with 20, 30, or 50 photos (a person LoRA).

Theoretically, models with a much larger number of parameters need fewer images.

I don't know if the same logic applies to Qwen.

https://redd.it/1oykzrv