r/StableDiffusion – Telegram
Trellis 2 is already getting dethroned by other open source 3D generators in 2026

Today I saw two videos that show what 2026 will hold for 3D model generation.

A few days ago UltraShape 1.0 released its model, which can create much more detailed 3D geometry than Trellis 2, though without textures; an extra pass through the texture part of Trellis 2 might be doable (rough sketch after the links below).

https://github.com/PKU-YuanGroup/UltraShape-1.0


https://youtu.be/7kPNA86G_GA?si=11_vppK38I1XLqBz
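
A rough sketch of that geometry-then-texture idea, assuming you can export an untextured mesh from UltraShape and then run a texture-only stage on it afterwards; the function names below are placeholders, not the real APIs of either repo:

```python
# Hypothetical two-stage pipeline: UltraShape for the geometry, then a
# texture-only pass (e.g. Trellis 2's appearance stage) on top of it.
# Both functions are placeholders; neither repo is guaranteed to expose
# an API that looks like this.
from pathlib import Path


def run_ultrashape(image: str, out_dir: str) -> Path:
    """Placeholder for UltraShape inference: reference image -> untextured mesh."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Real code would invoke the UltraShape inference script/model here.
    return out / "mesh.glb"


def run_texture_pass(mesh: Path, image: str, out_dir: str) -> Path:
    """Placeholder for a texture-only pass applied to an external mesh."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Real code would bake/project appearance onto `mesh` conditioned on `image`.
    return out / "mesh_textured.glb"


if __name__ == "__main__":
    ref = "inputs/reference.png"
    mesh = run_ultrashape(ref, "work/ultrashape")
    textured = run_texture_pass(mesh, ref, "work/texture")
    print("textured asset:", textured)
```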


Also, LATTICE and FaithC (the base models behind Hunyuan 3D and Sparc3D, respectively) are planned for release, along with other nice 3D goodness that is already out or coming.

https://github.com/Zeqiang-Lai/LATTICE


https://github.com/Luo-Yihao/FaithC



https://youtu.be/1qn1zFpuZoc?si=siXIz1y3pv01qDZt



A new multi-part 3D generator, MoCA, is also on the horizon:

https://github.com/lizhiqi49/MoCA



Plus, for auto-rigging and text-to-3D animation, here are some ComfyUI add-ons:

https://github.com/PozzettiAndrea/ComfyUI-UniRig


https://github.com/jtydhr88/ComfyUI-HY-Motion1



https://redd.it/1q3ijwo
@rStableDiffusion
Wan2.2: better results with lower resolution?

Usually I do a test by generating at a low resolution like 480x480; if I like the results, I generate at a higher resolution.

But in some cases I find the low-resolution generations have better prompt adherence and look more natural, while higher resolutions like 720x720 sometimes look weird.

Anyone else notice the same?
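
A minimal sketch of that two-pass test, assuming a placeholder generate_video() that stands in for whatever backend is actually used (ComfyUI API call, diffusers pipeline, etc.); keeping the seed fixed makes the 480x480 vs 720x720 comparison a bit fairer:

```python
# Minimal sketch of the low-res-first test loop described above.
import random


def generate_video(prompt: str, width: int, height: int, seed: int) -> str:
    """Placeholder: run Wan 2.2 at the given resolution, return the output path."""
    # Real code would call the inference backend here.
    return f"out/wan22_{width}x{height}_seed{seed}.mp4"


prompt = "a cat walking along a fence at sunset, handheld camera"
seed = random.randint(0, 2**31 - 1)

# Pass 1: cheap 480x480 preview to judge prompt adherence and motion.
preview = generate_video(prompt, 480, 480, seed)
print("preview:", preview)

# Pass 2: only if the preview looks good, rerun at 720x720 with the same
# prompt and seed so resolution is the main variable that changes.
final = generate_video(prompt, 720, 720, seed)
print("final:", final)
```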

https://redd.it/1q3lq5n
@rStableDiffusion
The Z-Image Turbo LoRA Training Townhall

Okay guys, I think we all know that bringing up training on Reddit is always a total fustercluck. It's an art more than it is a science. To that end I'm proposing something slightly different...

Put your steps, dataset image count and anything else you think is relevant in a quick, clear comment. If you agree with someone else's comment, upvote them.

I'll run training for as many as I can of the most upvoted with an example data set and we can do a science on it.



https://redd.it/1q3tcae
@rStableDiffusion
Turned myself into a GTA-style character. Kinda feels illegal
https://redd.it/1q3vjp7
@rStableDiffusion
SVI: One simple change fixed my slow motion and lack of prompt adherence...
https://redd.it/1q45liy
@rStableDiffusion
LTXV2 Pull Request In Comfy, Coming Soon? (weights not released yet)

https://github.com/comfyanonymous/ComfyUI/pull/11632

Looking at the PR, it seems to support audio and use Gemma 3 12B as the text encoder.

The previous LTX models had speed but nowhere near the quality of Wan 2.2 14B.

LTX 0.9.7 actually followed prompts quite well and had a good way of handling infinite-length generation in Comfy: you just put in prompts delimited by a '|' character. The dev team behind LTX clearly cares; the workflows are nicely organised, they release distilled and non-distilled versions the same day, etc.
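
For reference, a tiny example of that '|'-delimited prompt format (the prompt text is made up, and the exact node behaviour may differ):

```python
# Building a multi-segment prompt for LTX-style infinite-length generation:
# each segment describes one stretch of the video, '|' marks the boundaries.
segments = [
    "a red fox trots through fresh snow at dawn",
    "the fox stops and sniffs the air",
    "the fox bounds away between the trees",
]
prompt = " | ".join(segments)
print(prompt)
```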

There seems to be something about Wan 2.2 that makes it avoid body horror and keep coherence when doing more complex things. Smaller/faster models like Wan 5B, Hunyuan 1.5 and even the old Wan 1.3B CAN produce really good results, but 90% of the time you'll get weird body horror or artifacts somewhere in the video, whereas with Wan 2.2 it feels more like 20%.

On top of that, some of the models break down a lot quicker at lower resolution, so you're forced into higher res and partially lose the speed benefits, or they have a high-quality but stupidly slow VAE (HY 1.5 and Wan 5B are like this).

I hope LTX can achieve that while being faster, or improve on Wan (more consistent, less dice-roll prompt following, similar to Qwen Image/Z-Image, which might be likely thanks to Gemma as the text encoder) while being the same speed.

https://redd.it/1q49ulp
@rStableDiffusion