r/StableDiffusion – Telegram
Wan2.2: better results with lower resolution?

Usually I do a test by generating at a low resolution like 480x480; if I like the results, I generate at a higher resolution.

But in some cases I find the low-resolution generations to be better at prompt adherence and more natural-looking, while higher resolutions like 720x720 sometimes look weird.

Anyone else notice the same?
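
For anyone who wants to reproduce the comparison, the fairest test is to hold the prompt and seed fixed and vary only the resolution. A minimal sketch, assuming the diffusers WanPipeline API (the model id, frame count and fps below are assumptions, check the model's hub page):

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumed model id; substitute whatever Wan 2.2 checkpoint you actually use.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "a red fox trotting through fresh snow at golden hour"

for size in (480, 720):
    video = pipe(
        prompt=prompt,
        height=size,
        width=size,
        num_frames=81,  # assumed; use whatever length you normally test with
        generator=torch.Generator("cuda").manual_seed(42),  # same seed both runs
    ).frames[0]
    export_to_video(video, f"test_{size}.mp4", fps=16)
```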

https://redd.it/1q3lq5n
@rStableDiffusion
The Z-Image Turbo LoRA-Training Townhall

Okay guys, I think we all know that bringing up training on Reddit is always a total fustercluck. It's an art more than it is a science. To that end I'm proposing something slightly different...

Put your steps, dataset image count and anything else you think is relevant in a quick, clear comment. If you agree with someone else's comment, upvote them.

I'll run training for as many of the most upvoted as I can with an example dataset and we can do a science on it.
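
To kick things off, here's the shape of comment I mean (the numbers are placeholders to show the format, not a recommendation):

```
steps: 2500
dataset: 30 images, hand-captioned
learning rate: 1e-4
LoRA rank / alpha: 16 / 16
resolution: 1024x1024
notes: bucketing on, no regularisation images
```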

https://redd.it/1q3tcae
@rStableDiffusion
Turned myself into a GTA-style character. Kinda feels illegal
https://redd.it/1q3vjp7
@rStableDiffusion
SVI: One simple change fixed my slow motion and lack of prompt adherence...
https://redd.it/1q45liy
@rStableDiffusion
LTXV2 Pull Request In Comfy, Coming Soon? (weights not released yet)

https://github.com/comfyanonymous/ComfyUI/pull/11632

Looking at the PR, it seems to support audio and to use Gemma 3 12B as the text encoder.

The previous LTX models had speed but nowhere near the quality of Wan 2.2 14B.

LTX 0.9.7 actually followed prompts quite well, and it had a good way of handling infinite-length generation in Comfy: you just put in prompts delimited by a '|' character (example below). The dev team behind LTX clearly cares; the workflows are nicely organised, they release distilled and non-distilled versions the same day, and so on.
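
For example, a whole multi-shot generation was driven by a single prompt string like this (made-up prompt, only the '|' format matters):

```
a cat sleeps on a windowsill | the cat wakes up and stretches | the cat jumps down and walks toward the camera
```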

There seems to be something about Wan 2.2 that makes it avoid body horror and keep coherence when doing more complex things. Smaller/faster models like Wan 5B, Hunyuan 1.5 and even the old Wan 1.3B CAN produce really good results, but 90% of the time you'll get weird body horror or artifacts somewhere in the video, whereas with Wan 2.2 it feels more like 20%.

On top of that, some of the models break down a lot quicker at lower resolutions, so you're forced into higher res and partially lose the speed benefit, or they have a high-quality but stupidly slow VAE (HY 1.5 and Wan 5B are like this).

I hope LTX can achieve that while being faster, or improve on Wan (more consistent, less dice-roll prompt following, similar to Qwen Image/Z-Image, which seems plausible given Gemma as the text encoder) while being the same speed.

https://redd.it/1q49ulp
@rStableDiffusion
I open-sourced a tool that turns any photo into a playable Game Boy ROM using AI
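
Not the repo's actual code, but for intuition: the "Game Boy look" half of a tool like this is mostly downscaling to the 160x144 DMG screen and quantizing to its 4-shade green palette; the AI restyling and the tilemap/ROM packing are separate steps. A sketch of just that palette step with Pillow:

```python
from PIL import Image

# The four classic DMG green shades, dark to light.
GB_PALETTE = [(15, 56, 15), (48, 98, 48), (139, 172, 15), (155, 188, 15)]

def gameboyify(path: str, out: str = "gb.png") -> None:
    # Grayscale, then force the 160x144 Game Boy resolution.
    img = Image.open(path).convert("L").resize((160, 144))
    # Bucket each 8-bit gray value into one of the 4 shades.
    lut = [GB_PALETTE[min(p // 64, 3)] for p in range(256)]
    rgb = Image.new("RGB", img.size)
    rgb.putdata([lut[p] for p in img.getdata()])
    rgb.save(out)

gameboyify("photo.jpg")
```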

https://redd.it/1q4pgaa
@rStableDiffusion
I’m the Co-founder & CEO of Lightricks. We just open-sourced LTX-2, a production-ready audio-video AI model. AMA.

Hi everyone. **I’m Zeev Farbman, Co-founder & CEO of Lightricks.**

I’ve spent the last few years working closely with our team on [LTX-2](https://ltx.io/model), a production-ready audio–video foundation model. This week, we did a full open-source release of LTX-2, including weights, code, a trainer, benchmarks, LoRAs, and documentation.

Open releases of multimodal models are rare, and when they do happen, they’re often hard to run or hard to reproduce. We built LTX-2 to be something you can actually use: it runs locally on consumer GPUs and powers real products at Lightricks.

**I’m here to answer questions about:**

* Why we decided to open-source LTX-2
* What it took to ship an open, production-ready AI model
* Tradeoffs around quality, efficiency, and control
* Where we think open multimodal models are going next
* Roadmap and plans

Ask me anything!
I’ll answer as many questions as I can, with some help from the LTX-2 team.

*Verification:*

[Lightricks CEO Zeev Farbman](https://preview.redd.it/3oo06hz2x4cg1.jpg?width=2400&format=pjpg&auto=webp&s=4c3764327c90a1af88b7e056084ed2ac8f87c60b)

https://redd.it/1q7dzq2
@rStableDiffusion
LTX-2 team literally challenging the Alibaba Wan team; this was shared on their official X account :)

https://redd.it/1q7kygr
@rStableDiffusion