r/StableDiffusion – Telegram
RTX 3090 24GB vs RTX 5080 16GB

Hey guys, I currently own an average computer with 32GB of RAM and an RTX 3060, and I'm looking to either buy a new PC or replace my old card with an RTX 3090 24GB. The new computer I have in mind has an RTX 5080 16GB and 64GB of RAM.

I am just tired of struggling to use image models beyond XL (Flux, Qwen, Chroma), being unable to generate videos with Wan 2.2, and needing several hours to locally train a simple LoRA for SD 1.5; training for XL is out of the question. So what do you guys recommend?

How important is CPU RAM when using AI models? Is it worth discarding the 3090 24GB for a new computer with twice my current RAM, but with a 5080 16GB?

https://redd.it/1oso8md
@rStableDiffusion
Haven’t used SD in a while; is Illustrious/Pony still the go-to, or have there been better checkpoints lately?

Haven’t used SD for several months, since around when Illustrious came out, and I both do and don’t like Illustrious. I was curious what everyone is using now?

I would also like to know what video models everyone is using for local stuff.

https://redd.it/1osv278
@rStableDiffusion
Good AI video generators that have a "mid frame"?

So I've been using Pixverse to create videos because it has start, mid, and end frame options, but I'm kind of struggling to get a certain aspect down.

For simplicity's sake, say I'm trying to make a video of a character punching another character.

Start frame: Both characters in stances against each other

Mid frame: Still of one character's fist colliding with the other character

End frame: Aftermath still of the punch with character knocked back

From what I can tell, whatever happens before the midframe and whatever happens after it seem to be generated separately and spliced together without using each other for context, so there is no constant momentum carried across the mid frame. As a result, there is a short period where the fist slows down until it is barely moving as it touches the other character, and after the midframe the fist doesn't move.

Anyone figured out a way to preserve momentum before and after a frame you want to use?

https://redd.it/1ot3da3
@rStableDiffusion
UniLumos: Fast and Unified Image and Video Relighting

https://github.com/alibaba-damo-academy/Lumos-Custom?tab=readme-ov-file

So many new releases set off my 'wtf are you talking about?' klaxon, so I've tried to paraphrase their jargon. Apologies if I've misinterpreted it.

What does it do?

UniLumos is a relighting framework for both images and videos: it takes foreground objects, reinserts them into other backgrounds, and relights them to match the new background. In effect, it's an intelligent green-screen cutout that also grades the film.

iS iT fOr cOmFy? aNd wHeN?

No, and ask on GitHub, you lazy scamps.

Is it any good?

Like all AI, it's a tool for specific uses: some will work and some won't, and if you try extreme examples, prepare to eat a box of 'Disappointment Donuts'. The examples (on GitHub) are for showing the relighting, not context.

[Original vs. processed comparison images]

https://redd.it/1ota9tc
@rStableDiffusion
A little overwhelmed with all the choices
https://redd.it/1otaj4v
@rStableDiffusion
Is there a way to edit photos inside ComfyUI? like a photoshop node or something
https://redd.it/1otdzku
@rStableDiffusion
Ovi 1.1 is now 10 seconds

https://reddit.com/link/1otllcy/video/gyspbbg91h0g1/player

Ovi 1.1 now generates 10-second videos! In addition,

1. We have simplified the audio description tags from

Audio Description: <AUDCAP>Audio description here<ENDAUDCAP>

to

Audio Description: Audio: Audio description here

This makes prompt editing much easier.
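For anyone with a library of old-format prompts, the change above is a simple string rewrite. A minimal sketch of a converter (the `convert_audio_tag` helper name is mine, not from the Ovi repo; the tag strings are taken verbatim from the post):

```python
import re

def convert_audio_tag(prompt: str) -> str:
    """Rewrite the Ovi 1.0 audio tag format to the simplified 1.1 format.

    Old: <AUDCAP>Audio description here<ENDAUDCAP>
    New: Audio: Audio description here
    """
    # DOTALL lets the description span multiple lines inside the tags.
    return re.sub(r"<AUDCAP>(.*?)<ENDAUDCAP>", r"Audio: \1", prompt, flags=re.DOTALL)

old = "A dog barks at the mailman. <AUDCAP>loud barking, distant footsteps<ENDAUDCAP>"
print(convert_audio_tag(old))
# A dog barks at the mailman. Audio: loud barking, distant footsteps
```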

2. We will also release a new 5-second base model checkpoint, retrained on higher-quality 960x960 resolution videos instead of the 720x720 videos used for the original Ovi 1.0. The new 5-second base model also follows the simplified prompt format above.

3. The 10-second model was trained using full bidirectional dense attention instead of a causal or autoregressive (AR) approach, to ensure generation quality.

We will release both the 10-second and the new 5-second weights very soon on our GitHub repo: https://github.com/character-ai/Ovi


https://redd.it/1otllcy
@rStableDiffusion