Could I use an AI 3D scanner to make this 3D printable? I made this using SD
https://redd.it/1oy42nz
@rStableDiffusion
Some new WAN 2.2 Lightning LoRA comparisons
A comparison of all Lightning LoRA pairs, from oldest to newest.
- All models are set to strength 1
- Using FP8_SCALED base models
T2V 432x768 px - EULER / SIMPLE - shift 5 - 41 frames (see the shift sketch below)
T2I 1080x1920 px - GRADIENT ESTIMATION / BONG TANGENT - shift 5 - 1 frame
If you ask me, use the 250928 pair: much better colors, less of the oversaturated, bright "high CFG" look, more natural, and more overall fine detail.
Maybe try SEKO v2 if you're rendering more synthetic styles like anime or CGI.
Here: https://huggingface.co/lightx2v/Wan2.2-Lightning/discussions/64
https://preview.redd.it/6g0d6jpz2i1g1.jpg?width=4352&format=pjpg&auto=webp&s=cc9489b8eee7677eced827d5e9213dabcdbaf49b
https://preview.redd.it/5g1ihe7h3i1g1.jpg?width=4352&format=pjpg&auto=webp&s=5e754bd31f16fe6a13d684a5e8e6685f67e85843
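A note on the "shift 5" setting in the lines above: in flow-matching samplers, "shift" warps the sigma schedule so more steps are spent at high noise. Below is a minimal standalone sketch of the standard timestep-shift formula (as used by SD3/Wan-style samplers); it is my own illustration, not code from the comparison workflow:

```python
# Standalone illustration of the flow-matching "shift" parameter (shift = 5
# above). This is the standard timestep-shift formula used by SD3/Wan-style
# samplers, not code extracted from the comparison workflow.

def shift_sigma(sigma: float, shift: float = 5.0) -> float:
    """Warp a sigma in [0, 1] toward the high-noise end of the schedule."""
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)

steps = 8
sigmas = [1.0 - i / steps for i in range(steps)]      # simple linear schedule
print([round(shift_sigma(s), 3) for s in sigmas])
# [1.0, 0.972, 0.938, 0.893, 0.833, 0.75, 0.625, 0.417]
```

Note how the shifted values cluster near 1.0: the sampler spends most of its few steps on the high-noise region, which is where distilled Lightning LoRAs do most of their work.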
https://redd.it/1oy5hjv
@rStableDiffusion
Camera motion cloning video generation Test : ComfyUI Uni3C Workflow
https://youtu.be/myeSYeV7Hbk
https://redd.it/1oyaejs
@rStableDiffusion
YouTube
Video generation that clones camera motion exactly #comfyui #aivfx #wanvideo
Tutorial link: https://youtu.be/yB0V_Xi8VUM
Outfit Extractor/Transfer + Multi-View Relight LoRA Using Nunchaku Qwen LoRA Model Loader
https://youtu.be/qyw1L1BhopI
https://redd.it/1oygylw
@rStableDiffusion
YouTube
ComfyUI Tutorial : How To Use Nunchaku Qwen LORA Model Loader #comfyui #comfyuitutorial #qwenimage
In this tutorial I show how to do outfit transfer (virtual try-on) using a new workflow that extracts any outfit from an image, then transfers it onto a target image while keeping the target's consistency and details. The workflow…
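If you'd rather prototype the same idea outside ComfyUI, here is a rough diffusers sketch under stated assumptions: it assumes diffusers' QwenImageEditPipeline (present in recent releases) for Qwen-Image-Edit, the LoRA repo id is a hypothetical placeholder rather than the checkpoint from the video, and Nunchaku's quantized model loading is not shown.

```python
# Rough sketch only, not the video's workflow: baseline Qwen-Image-Edit plus a
# LoRA via plain diffusers. The LoRA repo id is a hypothetical placeholder, and
# Nunchaku's quantized model loading is intentionally omitted.
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")

# Hypothetical outfit-transfer LoRA; substitute the checkpoint you actually use.
pipe.load_lora_weights("your-user/outfit-extractor-lora")

source = load_image("person_in_outfit.png")
result = pipe(
    image=source,
    prompt="extract the outfit onto a plain white background",
    num_inference_steps=30,
).images[0]
result.save("extracted_outfit.png")
```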
VFX Shot Creation Process of a Broken Airplane in a Desert Using AI
https://youtu.be/gWb1PR54_YA
https://redd.it/1oyh02s
@rStableDiffusion
YouTube
[Advanced NUKE & AI Tutorial] Production process for a broken airplane in the desert
Tutorial links:
https://youtu.be/Rx5IPtE3WRU
https://youtu.be/b-Ox-EjVA2o
Free tools for video face swap?
Are there any free tools that can do video face swaps without huge watermarks or crashing? I tried a few trial versions, but none were stable. I'd love something open source if possible.
https://redd.it/1oykglu
@rStableDiffusion
Qwen and Qwen Edit 2509 - is the model like Flux? Is a small number of images (10) enough to train a LoRA?
With Flux, I got worse results when I tried to train a LoRA with 20, 30, or 50 photos (person LoRA).
Theoretically, models with a much larger number of parameters need fewer images.
I don't know if the same logic applies to Qwen.
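One practical detail behind the 10-image question, independent of model capacity: trainers usually compensate for a small dataset with more repeats or epochs so the total optimizer steps stay comparable. A generic back-of-the-envelope sketch (illustrative numbers, not Qwen-specific recommendations):

```python
# Generic LoRA step arithmetic; the numbers are illustrative, not recommendations.
import math

def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    # steps per epoch = ceil(images * repeats / batch), times the epoch count
    return math.ceil(num_images * repeats / batch_size) * epochs

# 10 images with more repeats see as many optimizer steps as 50 images:
print(total_steps(num_images=10, repeats=20, epochs=10, batch_size=2))  # 1000
print(total_steps(num_images=50, repeats=4, epochs=10, batch_size=2))   # 1000
```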
https://redd.it/1oykzrv
@rStableDiffusion
[LoRA] PanelPainter V2 — Manga Panel Coloring (Qwen Image Edit 2509)
https://redd.it/1oymz0o
@rStableDiffusion
WIP report: t5 sd1.5
Just a little attention mongering, because I'm an attention... junkie...
Still trying to retrain SD1.5 to take a T5 frontend.
Uncountable oddities. But here's a training-output progression to make it look like I'm actually progressing toward something :-}
The target was "a woman". This shows 10,000 through 18,000 steps, batch size 64.
"woman"
The sad thing is, output degrades in various ways after that, so I can't release that checkpoint.
The work continues....
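For anyone wondering what "retrain SD1.5 to take a T5 frontend" means mechanically, here's a minimal wiring sketch (my illustration, not the author's training code). It assumes t5-base, whose 768-dim hidden states happen to match SD1.5's cross-attention width, so no projection layer is needed:

```python
# Minimal wiring sketch, NOT the author's training code: feed T5 hidden states
# into an SD1.5 UNet. t5-base has d_model=768, which matches SD1.5's
# cross-attention context width, so no projection layer appears here.
import torch
from transformers import AutoTokenizer, T5EncoderModel
from diffusers import UNet2DConditionModel

tok = AutoTokenizer.from_pretrained("t5-base")
enc = T5EncoderModel.from_pretrained("t5-base")
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)

ids = tok("a woman", return_tensors="pt").input_ids
with torch.no_grad():
    context = enc(input_ids=ids).last_hidden_state    # (1, seq_len, 768)

latents = torch.randn(1, 4, 64, 64)                   # a 512x512 image's latent
timestep = torch.tensor([500])
noise_pred = unet(latents, timestep, encoder_hidden_states=context).sample
print(noise_pred.shape)                               # torch.Size([1, 4, 64, 64])
```

The hard part the post describes is exactly what this sketch skips: the UNet was trained against CLIP embeddings, so it has to be fine-tuned at length before T5 conditioning stops degrading outputs.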
https://redd.it/1oyphkb
@rStableDiffusion
Most efficient/convenient setup/tooling for a 5060 Ti 16gb on Linux?
I just upgraded from an RTX 2070 Super 8GB to an RTX 5060 Ti 16GB. A typical single-image generation went from ~20.5 seconds to ~12.5 seconds. I then used a Dockerfile to build a wheel for Sage Attention 2.2 (so I could use recent versions of Python/torch/CUDA); installing that yielded about a 6% speedup, to roughly ~11.5 seconds.
The RTX 5060 Ti is sm120 (SM 12.0) Blackwell. It's fast, but I guess there aren't a ton of optimizations (Sage/Flash) built for it yet. ChatGPT tells me I can install prebuilt wheels of Flash Attention 3 with great Blackwell support that offer far greater speeds, but I'm not sure it's right about that. Where are these wheels? I don't even see a major version 3 in the Flash Attention repo's releases yet.
IMO this is all pretty fast now. But I was interested in testing out some video (e.g. Wan 2.2), and for that any speedup really helps. I'm not up for compiling Flash Attention: I gave it a try one evening, but after two hours at 100% CPU I was about 1/8 of the way through the compilation and quit. It seems much better to download a good precompiled wheel if one is available. But (on Blackwell) would I really get a big improvement over Sage Attention 2.2?
And I've never tried Nunchaku, so I'm not sure how it compares.
Is Sage Attention 2.2 about on par with alternatives for sm120 Blackwell? What do you think the best option is for someone with an RTX 5060 Ti 16GB on Linux?
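A quick sanity check worth running first, assuming the Sage wheel you built is installed in the same Python environment your UI uses:

```python
# Environment sanity check; assumes the SageAttention wheel you built is
# installed in the same Python environment your UI runs in.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"compute capability: sm_{major}{minor}")   # a 5060 Ti should print sm_120

try:
    import sageattention
    print("sageattention version:", getattr(sageattention, "__version__", "unknown"))
except ImportError as err:
    print("sageattention not importable:", err)

# If both checks pass, ComfyUI picks it up when launched with:
#   python main.py --use-sage-attention
```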
https://redd.it/1oyomk1
@rStableDiffusion
Get rid of the halftone pattern in Qwen Image/Qwen Image Edit with this
https://redd.it/1oytasv
@rStableDiffusion
Has anyone switched fully from cloud AI to local? What surprised you most?
Hey everyone,
I’ve been thinking about moving away from cloud AI tools and running everything locally instead. I keep hearing mixed things. Some people say it feels amazing and private, others say the models feel slower or not as smart.
If you’ve actually made the switch to local AI, I would love to hear your honest experience:
What surprised you the most?
Was it the speed? The setup? Freedom?
Did you miss anything from cloud models?
And for anyone who tried switching but went back, what made you return?
I’m not trying to start a cloud vs. local fight. I am just curious how it feels to use local AI day to day. Real stories always help more than specs or benchmarks.
Thanks in advance!
https://redd.it/1oyv3zt
@rStableDiffusion
3060 12gb to 5060 Ti 16gb upgrade
So I can potentially get a 5060 Ti 16GB for about $450 (I'm not from the USA, so that may or may not be accurate :) ), brand new from a local business with warranty and all the good stuff.
Could you tell me if the upgrade is worth it, or should I keep saving until next year so I can get an even better card?
I'm pretty sure that, at least for this year, this is as good as it gets; I already tried the FB Marketplace in my city, and it's full of lemons, iffy stuff, and overpriced garbage.
The best I could find is a 3080 12GB that I can't run with the PSU I have, no used 4060 16GB, not a single decent x070 RTX series card, just nothing.
As a note, I only have a 500W Gold PSU, so I can't put anything power-hungry in my PC right now.
https://redd.it/1oyz49j
@rStableDiffusion