r/StableDiffusion – Telegram
Looking for WAN 2.2 benchmarks (RTX 3090 or similar)

I’m not a big fan of ComfyUI (sadly, since most tutorials are based on it). Right now I’m running WAN 2.2 using Wan2GP and pure terminal calls, hooked up to an API on an external server with an RTX 3060.

I’ve been searching around but couldn’t find any benchmarks or performance data for WAN 2.2 on higher-end GPUs like the RTX 3090, 4090, or other 24GB–32GB cards.

I’m considering upgrading to a used 3090, but I’m not sure it would actually make a big difference. Ideally, I’d like to generate videos without waiting 5–10 minutes per run.

Does anyone know where I can find WAN 2.2 GPU performance comparisons or benchmarks? Or maybe some real-world feedback from people using it on beefier cards?

I’d also like Flux benchmarks, if they exist anywhere.

https://redd.it/1ow0p11
@rStableDiffusion
Warning! Make sure NOT to store your ComfyUI creations in the ComfyUI folder!

So, like the dumbass I am, I just kept creating new folders inside the Output folder in ComfyUI's folder.

Then today there was a problem starting ComfyUI, it said something about having to install python dependencies (or something similar), but it wouldn't proceed due to there already being a .venv folder. I googled it quickly, and it was suggested to just delete the .venv folder, which I did.

What I didn't know was that this put ComfyUI into some kind of "reset everything to defaults" mode, whereupon ComfyUI permanently deleted everything. Several hundred gigabytes of models gone, as well as everything I've ever created with ComfyUI. The models are one thing; I don't have limited bandwidth, and I have a really fast connection. But I'm quite disappointed to lose every image and every video I've created. It's not a super big deal, it's nothing important (which is why I didn't have it backed up), but it just feels so incredibly stupid.

So yeah, either don't use the output folder at all, or make sure to create backups.
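If you do keep using the output folder, a tiny sync script makes the backup painless. This is just a sketch: the paths (`ComfyUI/output`, `/mnt/backup/comfy_output`) and the helper name `backup_outputs` are made up for illustration; adjust them to your own install.

```python
import shutil
from pathlib import Path

# Hypothetical paths -- point these at your actual ComfyUI output
# folder and a backup location OUTSIDE the ComfyUI directory tree.
comfy_output = Path("ComfyUI/output")
backup_dir = Path("/mnt/backup/comfy_output")

def backup_outputs(src: Path, dst: Path) -> int:
    """Copy any files under src that are missing from dst; return the count."""
    copied = 0
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            if not target.exists():
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # preserves timestamps/metadata
                copied += 1
    return copied

# Usage (uncomment once the paths above are real):
# backup_outputs(comfy_output, backup_dir)
```

Run it from cron / Task Scheduler after each session and a wiped output folder only costs you the files created since the last run.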

https://redd.it/1owiicy
@rStableDiffusion
Does Qwen LoRA require much higher learning rates than Qwen Edit 2509? Any tutorials or personal experience? What has been discovered about Qwen LoRA training in the last three months?

1e-4 doesn't work well with Qwen; even with over 2,000 steps, it seems undertrained. But that same value seems acceptable for Qwen Edit 2509.

I have little experience, so I don't know if it's my mistake. Batch size = 1. I generally train with 10 to 30 images.
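One quick sanity check for "undertrained at 2,000 steps": with batch size 1, steps × batch size ÷ dataset size tells you how many times each image has been seen. A sketch of that arithmetic (the function name `effective_epochs` is just for illustration):

```python
def effective_epochs(steps: int, batch_size: int, dataset_size: int) -> float:
    """How many times each training image is seen during a run."""
    return steps * batch_size / dataset_size

# With 20 images, batch size 1, and 2,000 steps, each image is seen
# ~100 times -- already a lot. If results still look undertrained at
# that point, the learning rate (or LoRA rank) is the more likely
# culprit than the step count.
```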

https://redd.it/1owkr5w
@rStableDiffusion
Which model has the best prompt adherence?

What I'm interested in is building complex scenes, like two people fighting, pointing guns/swords, etc.

https://redd.it/1owqitn
@rStableDiffusion
OpenSource Face Swapper

Can I ask what's the best face swapper currently? I'd prefer an open-source one, thanks!

https://redd.it/1owolx1
@rStableDiffusion
Slow LoRA Training on 5090
https://redd.it/1owut58
@rStableDiffusion
Eigen-Banana-Qwen-Image-Edit: Fast Image Editing with Qwen-Image-Edit LoRA
https://redd.it/1owykn7
@rStableDiffusion