Considering a beefy upgrade. How much would WAN and VACE benefit from 96 GB VRAM?
Considering buying the RTX Pro 6000 with 96 GB of VRAM to push resolution and frame count in WAN. I also train models, but would mostly use it for high-end video diffusion and VFX projects. I have heard that WAN struggles with quality above 720p, but in my experience 1-second test clips rendered at 1080p look fine. I have had good results at 1408×768 for about 121 frames, but hit OOM (out-of-memory) errors when going any higher on my current RTX 4090 24 GB.
I would love to hear any real-world experiences regarding maximum resolution and frame counts with 96 GB of VRAM before upgrading.
https://redd.it/1op2gqd
@rStableDiffusion
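For a rough sense of how resolution and frame count scale WAN's working set, here is a minimal Python sketch. The 8× spatial / 4× temporal compression, 16 latent channels, and 2×2 patchify used below are assumptions typical of current video VAEs/DiTs, not confirmed WAN internals, and real VRAM use is dominated by model weights and attention activations, so treat the numbers only as a lower bound and a scaling guide.

```python
# Back-of-envelope scaling of a WAN-style video DiT's working set.
# Assumed (not confirmed) factors: 8x spatial and 4x temporal VAE
# compression, 16 latent channels, 2x2 spatial patchify, fp16 tensors.

def wan_latent_footprint(width: int, height: int, frames: int) -> dict:
    lat_t = 1 + (frames - 1) // 4            # assumed temporal compression
    lat_h, lat_w = height // 8, width // 8   # assumed spatial compression
    latent_mib = 16 * lat_t * lat_h * lat_w * 2 / 2**20  # fp16 bytes -> MiB
    tokens = lat_t * (lat_h // 2) * (lat_w // 2)          # assumed 2x2 patchify
    return {"latent_MiB": round(latent_mib, 1), "dit_tokens": tokens}

print(wan_latent_footprint(1408, 768, 121))   # the case that still fits on 24 GB
print(wan_latent_footprint(1920, 1080, 121))  # the kind of jump 96 GB targets
```

Token count grows linearly with pixel area and with frame count, and full self-attention memory grows faster than linearly in token count, so even a modest step past 1408×768 at 121 frames is the sort of increase that tips a 24 GB card into OOM.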
Masking and Scheduling LoRA and Model Weights
As of Monday, December 2nd, ComfyUI supports masking and scheduling LoRA and model weights natively as part of its conditioning system.
https://blog.comfy.org/p/masking-and-scheduling-lora-and-model-weights
https://redd.it/1op5sdw
@rStableDiffusion
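To make the feature easier to picture, here is a minimal sketch of the underlying idea: the LoRA's low-rank delta is scaled by a per-step strength (scheduling) and its effect is blended in only inside a spatial mask (masking). The schedule shape, tensor names, and helper functions below are illustrative assumptions, not ComfyUI's actual hook or conditioning API.

```python
import torch

def lora_strength(step: int, total_steps: int) -> float:
    # Example schedule (an assumption): full strength for the first half
    # of sampling, then fade the LoRA out linearly.
    t = step / max(total_steps - 1, 1)
    return 1.0 if t < 0.5 else max(0.0, 2.0 * (1.0 - t))

def scheduled_weight(w: torch.Tensor, lora_a: torch.Tensor, lora_b: torch.Tensor,
                     step: int, total_steps: int) -> torch.Tensor:
    # Effective weight at this step: W + s(step) * (B @ A)
    return w + lora_strength(step, total_steps) * (lora_b @ lora_a)

def masked_blend(base_out: torch.Tensor, lora_out: torch.Tensor,
                 mask: torch.Tensor) -> torch.Tensor:
    # Keep the LoRA-influenced prediction only where mask == 1,
    # and the base model's prediction elsewhere.
    return mask * lora_out + (1.0 - mask) * base_out

# Toy usage: a 64x64 weight with a rank-4 LoRA, scheduled over 20 steps.
w = torch.randn(64, 64)
a, b = torch.randn(4, 64) * 0.01, torch.zeros(64, 4)
for step in range(20):
    w_eff = scheduled_weight(w, a, b, step, 20)
    # ... run the denoising step with w_eff, then blend with masked_blend(...)
```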
I still find Flux Kontext much better for image restoration once you get an intuition for prompting and preparing the images. Qwen Edit ruins the image and changes way too much.
https://redd.it/1op7wv0
@rStableDiffusion
Wild examples from a trained Qwen model, both realistic and fantastical. Full step-by-step tutorial published; training works on GPUs with as little as 6 GB of VRAM. Qwen handles ultra-complex prompts and emotions very well. Images generated with SwarmUI using our easy-to-use, 1-click presets.
https://redd.it/1opivzh
@rStableDiffusion
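The "as little as 6 GB" figure presumably comes from the standard memory savers used for adapter training. The dependency-free sketch below shows those ingredients on a toy layer: freeze the base weights, train only small low-rank adapters, and trade compute for activation memory with gradient checkpointing. It is illustrative only and not the tutorial's or SwarmUI's actual configuration.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # frozen base: no grads stored
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_a.T @ self.lora_b.T

block = LoRALinear(nn.Linear(1024, 1024))
opt = torch.optim.AdamW([p for p in block.parameters() if p.requires_grad], lr=1e-4)
# (An 8-bit optimizer such as bitsandbytes' AdamW8bit would shrink optimizer
#  state further; omitted here to keep the sketch dependency-free.)

x = torch.randn(4, 1024, requires_grad=True)
# Gradient checkpointing recomputes the forward pass during backward,
# storing far fewer activations at the cost of extra compute.
out = checkpoint(block, x, use_reentrant=False)
out.pow(2).mean().backward()
opt.step()
```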