Is an RTX 5090 necessary for the newest and most advanced AI video models? Is it normal for RTX GPUs to be so expensive in Europe? If video models continue to advance, will more GB of VRAM be needed? What will happen if GPU prices continue to rise? Is AMD behind NVIDIA?
https://redd.it/1oufag3
@rStableDiffusion
ComfyUI on a new AMD GPU - today and future
Hi, I want to get more invested in AI generation and also LoRA training. I have some experience with Comfy from work, but would like to dig deeper at home.
Since NVIDIA GPUs with 24 GB are above my budget, I am curious about the AMD Radeon AI PRO R9700.
I know that AMD was said to be no good for ComfyUI. Has this changed? I've read about PyTorch support and things like ROCm, but to be honest I don't know how that affects workflows in practical terms. Does this mean I will be able to do everything that I could do with NVIDIA? I have no engineering background whatsoever, so I would have a hard time finding workarounds. Is this still the case with the new GPUs from AMD?
Would be grateful for any help!
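On the PyTorch/ROCm point: ComfyUI does all of its GPU work through PyTorch, and a ROCm build of PyTorch exposes AMD GPUs through the same `torch.cuda` API that CUDA builds use, which is why most workflows run unchanged. A minimal sketch to check which backend an installed PyTorch build targets (assuming PyTorch was installed from the pytorch.org ROCm wheel index; the helper name here is ours, not a torch API):

```python
# Sketch: report which GPU backend the installed PyTorch build supports.
# ROCm builds set torch.version.hip; CUDA builds set torch.version.cuda.
import importlib.util


def pytorch_gpu_backend() -> str:
    """Return 'rocm', 'cuda', 'cpu-only', or 'not installed'."""
    if importlib.util.find_spec("torch") is None:
        return "not installed"
    import torch
    if getattr(torch.version, "hip", None):   # ROCm (AMD) build
        return "rocm"
    if getattr(torch.version, "cuda", None):  # CUDA (NVIDIA) build
        return "cuda"
    return "cpu-only"


print(pytorch_gpu_backend())
```

If this prints `rocm` and the card is visible, ComfyUI itself generally needs no changes; the practical gaps tend to be in CUDA-only extensions (e.g. some custom nodes), not in core workflows.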
https://redd.it/1ouhneo
@rStableDiffusion
Sharing the winners of the first Arca Gidan Prize. All made with open models + most shared the workflows and LoRAs they used. Amazing to see what a solo artist can do in a week (but we'll give more time for the next edition!)
Link here. Congrats to the prize recipients and all who participated! I'll share details on the next one here and on our Discord if you're interested.
https://redd.it/1oujqlj
@rStableDiffusion
The Arca Gidan Prize - Nov 2025 Submissions
An award for those who push open-source AI art models to their artistic limits.
What's the best Wan checkpoint/LoRA/finetune for animating cartoons and anime?
https://redd.it/1oukz3d
@rStableDiffusion
FIBO by BRIA AI: a text-to-image model trained on long structured captions; allows iterative editing of images.
https://redd.it/1oumkt0
@rStableDiffusion
InfinityStar - new model
https://huggingface.co/FoundationVision/InfinityStar
We introduce InfinityStar, a unified spacetime autoregressive framework for high-resolution image and dynamic video synthesis. Building on the recent success of autoregressive modeling in both vision and language, our purely discrete approach jointly captures spatial and temporal dependencies within a single architecture. This unified design naturally supports a variety of generation tasks such as text-to-image, text-to-video, image-to-video, and long-duration video synthesis via straightforward temporal autoregression. Through extensive experiments, InfinityStar scores 83.74 on VBench, outperforming all autoregressive models by large margins, even surpassing diffusion competitors like HunyuanVideo. Without extra optimizations, our model generates a 5s, 720p video approximately 10× faster than leading diffusion-based methods. To our knowledge, InfinityStar is the first discrete autoregressive video generator capable of producing industrial-level 720p videos. We release all code and models to foster further research in efficient, high-quality video generation.
weights on HF
https://huggingface.co/FoundationVision/InfinityStar/tree/main
InfinityStarInteract_24K_iters
infinitystar_8b_480p_weights
infinitystar_8b_720p_weights
https://redd.it/1ov05oq
@rStableDiffusion
ComfyUI Tutorial Series Ep 70: Nunchaku Qwen Loras - Relight, Camera Angle & Scene Change
https://www.youtube.com/watch?v=9sD5Ekavjgo
https://redd.it/1ov8r21
@rStableDiffusion
In this episode, we can finally use Loras with the Nunchaku Qwen model in ComfyUI. I’ll show you 9 powerful Loras that help you edit and transform images in new ways. Learn how to change camera angles, relight your subjects, remove shadows, blend two images…