I made 3 RTX 5090s available for image upscaling online. Enjoy!
You get up to 120s of GPU compute time daily (4 upscales to 4 MPx with SUPIR).
The limit will probably increase in the future as I add more GPUs.
The direct link is banned for whatever reason, so I'm linking a random subdomain:
https://232.image-upscaling.net
https://redd.it/1q39l84
@rStableDiffusion
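A minimal sketch of the quota math implied above, assuming the 120 s is a flat daily pool and the four upscales split it evenly (an inference from the post, not a stated per-job limit):

```python
# Daily quota math for the upscaling service (numbers from the post;
# the per-upscale cost is an inferred average, not a measured figure).
DAILY_BUDGET_S = 120    # GPU compute seconds granted per day
UPSCALES_PER_DAY = 4    # 4-MPx SUPIR upscales that budget covers

seconds_per_upscale = DAILY_BUDGET_S / UPSCALES_PER_DAY
print(seconds_per_upscale)  # 30.0
```

So each 4-MPx SUPIR pass is budgeted at roughly 30 s of GPU time on one of the 5090s.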
Flux 2 dev, tested with Lora Turbo and Pi-Flow node, Quality vs. Speed (8GB VRAM)
https://redd.it/1q39fm7
@rStableDiffusion
ComfyUI Wan 2.2 SVI Pro: Perfect Long Video Workflow (No Color Shift)
https://www.youtube.com/watch?v=PJnTcVOqJCM
https://redd.it/1q3c7a5
@rStableDiffusion
In this video, I show you how to generate continuous AI videos longer than 40 seconds with perfect character consistency. Most users struggle with faces melting or motion getting stuck after the first few seconds. I break down my exact "Manual SVI" method…
[Update] I added a Speed Sorter to my free local Metadata Viewer so you can cull thousands of AI images in minutes.
https://redd.it/1q34juf
@rStableDiffusion
How do you create truly realistic facial expressions with z-image?
https://redd.it/1q36whm
@rStableDiffusion
Trellis 2 is already getting dethroned by other open source 3D generators in 2026
Today I saw two videos that show what 2026 will hold for 3D model generation.
A few days ago UltraShape 1.0 released their model, which can create much more detailed 3D geometry than Trellis 2, albeit without textures, but an extra pass with the texture part of Trellis 2 might be doable.
https://github.com/PKU-YuanGroup/UltraShape-1.0
https://youtu.be/7kPNA86G_GA?si=11_vppK38I1XLqBz
Also, the base models of Hunyuan 3D and Sparc 3D, LATTICE and FaithC respectively, are planned to release, together with other nice 3D goodness already out or coming.
https://github.com/Zeqiang-Lai/LATTICE
https://github.com/Luo-Yihao/FaithC
https://youtu.be/1qn1zFpuZoc?si=siXIz1y3pv01qDZt
A new multi-part 3D generator is also on the horizon with MoCA:
https://github.com/lizhiqi49/MoCA
Plus, for auto-rigging and text-to-3D animation, here are some ComfyUI addons:
https://github.com/PozzettiAndrea/ComfyUI-UniRig
https://github.com/jtydhr88/ComfyUI-HY-Motion1
https://redd.it/1q3ijwo
@rStableDiffusion
TwinFlow: Realizing One-step Generation on Large Models with Self-adversarial Flows
https://huggingface.co/inclusionAI/TwinFlow-Z-Image-Turbo
https://redd.it/1q3lrk6
@rStableDiffusion
Wan2.2 : better results with lower resolution?
Usually I do a test by generating at a low resolution like 480x480; if I like the results, I generate at a higher resolution.
But in some cases I find the low-resolution generations to be better in prompt adherence and more natural-looking; higher resolutions like 720x720 sometimes look weird.
Anyone else notice the same?
https://redd.it/1q3lq5n
@rStableDiffusion
Release: Invoke AI 6.10 - now supports Z-Image Turbo
The new Invoke AI v6.10.0 RC1 now supports Z-Image Turbo... https://github.com/invoke-ai/InvokeAI/releases
https://redd.it/1q3ruuo
@rStableDiffusion
Time-lapse of a character creation process using Qwen Edit 2511
https://redd.it/1q3sb0z
@rStableDiffusion
The Z-Image Turbo Lora-Training Townhall
Okay guys, I think we all know that bringing up training on Reddit is always a total fustercluck. It's an art more than it is a science. To that end I'm proposing something slightly different...
Put your steps, dataset image count and anything else you think is relevant in a quick, clear comment. If you agree with someone else's comment, upvote them.
I'll run training for as many of the most upvoted as I can with an example dataset, and we can do a science on it.
https://redd.it/1q3tcae
@rStableDiffusion
Turned myself into a GTA-style character. Kinda feels illegal
https://redd.it/1q3vjp7
@rStableDiffusion
WAN2.2 SVI v2.0 Pro Simplicity - infinite prompt, separate prompt lengths
https://redd.it/1q3wjyo
@rStableDiffusion
SVI: One simple change fixed my slow motion and lack of prompt adherence...
https://redd.it/1q45liy
@rStableDiffusion