r/StableDiffusion – Telegram
600k 1mp+ dataset

https://huggingface.co/datasets/opendiffusionai/cc12m-1mp_plus-realistic

I previously posted some higher-resolution datasets, but they only got up to around 200k images.
I dug deeper and included 1MP (1024x1024 or greater) images from CC12M, which brings the image count up to 600k.

Disclaimer: The quality is not as good as some of our hand-curated datasets. But... when you need large amounts of data, you have to make sacrifices sometimes. sigh.
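
For anyone who wants to poke at it programmatically, here is a minimal sketch using the `datasets` library; the metadata field names (`url`, `width`, `height`) are assumptions on my part, so check the dataset card first.

```python
# Minimal sketch: stream the dataset and look at a few entries.
# Field names ("url", "width", "height") are assumptions -- check the
# dataset card before relying on them.
from datasets import load_dataset

ds = load_dataset(
    "opendiffusionai/cc12m-1mp_plus-realistic",
    split="train",
    streaming=True,  # avoids downloading all 600k entries up front
)

for i, row in enumerate(ds):
    w, h = row.get("width"), row.get("height")  # assumed metadata fields
    print(row.get("url"), w, h)                 # assumed image URL field
    if i >= 4:                                  # just peek at 5 rows
        break
```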



https://redd.it/1p5kjuk
@rStableDiffusion
Is 16+8 GB of VRAM and 32 GB of RAM enough for Wan 2.2?

Just bought a 5060 Ti and will be using my 3060 Ti in the secondary slot.

So I have this question.

I don't really want to buy more RAM right now because I'm on AM4; I was thinking of upgrading the whole system at the end of next year.

https://redd.it/1p5h1zp
@rStableDiffusion
Hunyuan 1.5 step-distilled LoRAs are out.

https://huggingface.co/Comfy-Org/HunyuanVideo_1.5_repackaged/tree/main/split_files/loras

Seems to work with the T2V 720p model as well, though results will obviously differ from the dedicated 720p LoRA once that comes out. Using it with euler/beta, 1 strength, 1 CFG, and 4-8 steps works.

I get gen times as low as the following (non-cold start, after the model is loaded and the prompt is processed):

>6/6 [00:28<00:00, 4.81s/it]
>Prompt executed in 47.89 seconds

That's with a 3080 and the FP16 model, 49 frames at 640x480, no sage or fast accumulation, since the individual iterations are already quite fast and the VAE decoding takes up a decent % of the time.
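
As a rough sanity check on where that time goes (purely my arithmetic on the numbers above, not something measured separately):

```python
# Back-of-the-envelope split of the 47.89 s reported above.
steps, sec_per_it = 6, 4.81
total = 47.89

sampling = steps * sec_per_it  # ~28.9 s spent in the diffusion loop
other = total - sampling       # ~19.0 s for VAE decode plus other overhead
print(f"sampling:           {sampling:.1f}s ({sampling / total:.0%})")
print(f"VAE decode + other: {other:.1f}s ({other / total:.0%})")
# -> roughly a 60/40 split, which is why speeding up the iterations further
#    (sage, fast accumulation) wouldn't buy all that much here.
```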

https://redd.it/1p5ou1s
@rStableDiffusion
Switching to Nvidia for SD

So right now I have a 6950 XT (went AMD since I didn't really have AI in mind at the time) and I was wanting to swap over to an Nvidia GPU to use Stable Diffusion. But I don't really know how much of a performance bump I would get if I went budget and got something like a 3060 12GB. Right now I've been using one obsession to generate images and getting around 1.4 it/s. I was also looking at getting a 5070 but am a little hesitant because of the price (I'm broke).

https://redd.it/1p5pvu1
@rStableDiffusion
You can train an AI but you can't name a file? Oh please!

What is it with WAN LoRA creators and file names?

This is basic, guys! <LoRA Name><type[I2V/T2V/IT2V]><[High/Low]><[optional:version]><etc.>
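
As a concrete sketch of that scheme (illustration only; the underscore separators, the regex, and the .safetensors suffix are my assumptions, not an established convention):

```python
# Sketch of the naming scheme proposed above, e.g.
#   HugeBreasts_I2V_High_v1.safetensors
# Separators and suffix are illustrative assumptions.
import re

PATTERN = re.compile(
    r"^(?P<name>[A-Za-z0-9]+)"    # LoRA name first, so lists sort by it
    r"_(?P<type>I2V|T2V|IT2V)"    # model type
    r"_(?P<noise>High|Low)"       # high/low noise variant
    r"(?:_(?P<version>v\d+))?"    # optional version
    r"\.safetensors$"
)

for fname in ["HugeBreasts_I2V_High_v1.safetensors",
              "wan2.2_high_i2v_hugebreasts.safetensors"]:
    print(fname, "->", "OK" if PATTERN.match(fname) else "rename me")
```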

Ask yourself: how useful is it to have a big list of LoRAs named wan2.2-something, wan2_2_something, Wan22-something, and so on and so on and so *on*? It's mental. If I'm looking for "Huge Breasts", I'm looking under H or B, not under W alongside dozens of other LoRAs created by people lacking brains.

Because the creator couldn't take two seconds to come up with a logical name, thousands of users each need to put up with "WTF is this in my downloads?" or "Where the fuck is that LoRA in this list?", then rename it themselves (or risk insanity), and from then on it gets difficult to keep up with new versions. I mean, whatever the fuck you do, at least put the name of the LoRA at the start! Sheesh!

All previous models had logically named LoRAs. Not one of my SDXL LoRAs has a name beginning with "SDXL" (okay, one does!). Why would it need to? WAN LoRA creators, for some reason, feel the need to put WAN*** at the start of the name. WHY???

Am I missing something?

https://redd.it/1p5w6s7
@rStableDiffusion
HunyuanVideo-1.5 Torch Compile

Has anyone succeeded at getting torch compile to work with HunyuanVideo-1.5 for speeding it up?

https://redd.it/1p60d0s
@rStableDiffusion
Lightweight tool to see GPU bottlenecks during model training (feedback wanted)
https://redd.it/1p64ssq
@rStableDiffusion
Any websites where people share AI creations that aren't porn or hentai?

I was wondering if there are any websites where people can share their work. I don't care about workflows, and I'm not looking for porn or hentai, but rather something more professional.

https://redd.it/1p670ep
@rStableDiffusion