AI communities, be cautious ⚠️: more scams will be popping up, specifically ones using Seedream models
This is just an awareness post,
warning newcomers to be cautious of them.
They're selling some courses on prompting, I guess.
https://redd.it/1opn965
@rStableDiffusion
Mixed Precision Quantization System in ComfyUI's most recent update
https://redd.it/1opw64u
@rStableDiffusion
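For readers unfamiliar with the idea, mixed-precision quantization generally means storing most weights in a low-precision format while keeping precision-sensitive layers (norms, embeddings) in a higher one. The snippet below is only a rough PyTorch sketch of that concept with hypothetical dtype choices; it is not ComfyUI's actual implementation.

```python
# Rough sketch only (assumed dtypes; NOT ComfyUI's actual code): store most
# weights in a low-precision dtype, keep sensitive layers in higher precision.
import torch
import torch.nn as nn

def mixed_precision_cast(model: nn.Module,
                         low=torch.float8_e4m3fn,   # hypothetical storage dtype
                         high=torch.float16):
    for module in model.modules():
        if isinstance(module, (nn.LayerNorm, nn.GroupNorm, nn.Embedding)):
            # precision-sensitive layers stay in the higher-precision dtype
            module.to(high)
        elif isinstance(module, nn.Linear):
            # low-precision storage; a real runtime would up-cast (de-quantize)
            # these weights to a compute dtype during inference
            module.weight.data = module.weight.data.to(low)
            if module.bias is not None:
                module.bias.data = module.bias.data.to(high)
    return model
```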
Infinite Length AI Videos with no Color Shift (Wan2.2 VACE-FUN)
https://youtu.be/f82CZl23OOo
https://redd.it/1oq0xgl
@rStableDiffusion
From the video description: Taking advantage of Wan-Fun VACE 2.2’s advanced video extension properties…
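As a rough illustration of why chunked "video extension" can stay seamless, the sketch below stitches clips that share a few overlapping frames and crossfades the overlap. This is a generic numpy example, not the actual VACE-FUN workflow from the video.

```python
# Generic chunk-and-crossfade stitching (illustrative only, not the VACE-FUN
# workflow): each clip is assumed to share `overlap` frames with the previous
# one, and the shared frames are linearly blended to avoid visible seams.
import numpy as np

def stitch_clips(clips, overlap=8):
    """Stitch a list of (T, H, W, C) float arrays that share `overlap` frames."""
    out = clips[0]
    for clip in clips[1:]:
        w = np.linspace(0.0, 1.0, overlap)[:, None, None, None]  # blend weights
        blended = (1.0 - w) * out[-overlap:] + w * clip[:overlap]
        out = np.concatenate([out[:-overlap], blended, clip[overlap:]], axis=0)
    return out

# toy usage: three 24-frame clips with an 8-frame overlap -> 56 frames total
clips = [np.random.rand(24, 64, 64, 3) for _ in range(3)]
print(stitch_clips(clips).shape)  # (56, 64, 64, 3)
```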
Has anyone tried the newer video model Longcat yet?
Hugging Face: https://huggingface.co/meituan-longcat/LongCat-Video
GitHub: https://github.com/meituan-longcat/LongCat-Video
Would be nice to have some more examples.
https://redd.it/1oq7egc
@rStableDiffusion
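For anyone who wants to try it locally, a minimal way to pull the weights is the standard Hugging Face Hub download below; the actual inference scripts and requirements live in the GitHub repo linked above.

```python
# Download the LongCat-Video weights from the Hub (inference scripts are in
# the GitHub repo); requires `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="meituan-longcat/LongCat-Video")
print("weights downloaded to:", local_dir)
```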
Thank you SD sub
I just really wanted to say thank you to all of you folks in here who have been so helpful and patient and amazing regardless of anyone's knowledge level.
This sub is VERY different from "big reddit" in that most everyone here is civil and does not gate-keep knowledge. In this day and age, that is rare.
Context:
I was in the middle of creating a workflow to help test a prompt with all of the different sampler and scheduler possibilities (a toy sketch of such a sweep follows this post). I was thinking through how to connect everything, and I remade the workflow a few times until I figured out how to do it while reusing as few nodes as possible, using fewer visible wires, etc.
Anyway, I paused and realized I just hit my two-month mark of using ComfyUI and AI in general, outside of ChatGPT. When I first started, ComfyUI seemed incredibly complex, and I thought, "there's no way I'm going to be able to make my own workflows, I'll just spend time searching for other people's workflows that match what I want instead". But now it's no problem, and it's far better because I understand the workflow I'm creating.
I just wanted to thank you all for helping me get here so fast.
Thanks fam.
https://redd.it/1oq9fzi
@rStableDiffusion
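The sampler/scheduler sweep mentioned in the post above is essentially a grid over two lists of names. A minimal Python sketch of that idea, with an illustrative subset of names and a placeholder generate function (not any specific ComfyUI node):

```python
# Toy sketch of a sampler x scheduler sweep (illustrative names and a
# placeholder generate() call, not tied to any specific ComfyUI node): produce
# one result per combination so they can be compared side by side.
from itertools import product

samplers = ["euler", "euler_ancestral", "dpmpp_2m", "ddim"]        # example subset
schedulers = ["normal", "karras", "exponential", "sgm_uniform"]    # example subset

def generate(prompt, sampler, scheduler):
    # placeholder for whatever backend actually renders the image
    return f"{prompt} | sampler={sampler} scheduler={scheduler}"

prompt = "a lighthouse at dusk"
for sampler, scheduler in product(samplers, schedulers):
    print(generate(prompt, sampler, scheduler))
```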
Release SDXL + IPAdapters for StreamDiffusion
The Daydream team just rolled out SDXL support for StreamDiffusion, bringing the latest Stable Diffusion model into a fully open-source, real-time video workflow.
This update enables HD video generation at 15 to 25 FPS, depending on setup, using TensorRT acceleration. Everything is open for you to extend, remix, and experiment with through the Daydream platform or our StreamDiffusion fork.
Here are some highlights we think might be interesting for this community:
SDXL Integration
3.5× larger model with richer visuals
Native 1024×1024 resolution for sharper output
Noticeably reduced flicker and artifacts for smoother frame-to-frame results
IPAdapters
Guide your video’s look and feel using a reference image
Works like a LoRA, but adjustable in real time
Two modes:
Standard: Blend or apply artistic styles dynamically
FaceID: Maintain character identity across sequences
Multi-ControlNet + Temporal Tools
Combine HED, Depth, Pose, Tile, and Canny ControlNets in one workflow
Runtime tuning for weight, composition, and spatial consistency
7+ temporal weight types, including linear, ease-in/out, and style transfer (a toy illustration of such curves follows this post)
Performance is stable around 15 to 25 FPS, even with complex multi-model setups.
We’ve also paired SD1.5 with IPAdapters for those who prefer the classic model, now running with smoother, high-framerate style transfer.
Creators are already experimenting with SDXL-powered real-time tools on Daydream, showing what’s possible when next-generation models meet live performance.
Everything is open source, so feel free to explore it, test it, and share what you build. Feedback and demos are always welcome - we are building for the community, so we rely on it!
You can give it a go and learn more here: https://docs.daydream.live/introduction
https://redd.it/1oqa38r
@rStableDiffusion
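To make the temporal weight types above concrete, here is a generic sketch of how such per-frame weight curves can be computed; the function name and modes are illustrative, not the StreamDiffusion/Daydream API.

```python
# Generic per-frame weight curves (illustrative; not the StreamDiffusion or
# Daydream API): ramp an adapter's influence across `num_frames` frames.
import numpy as np

def temporal_weights(num_frames: int, mode: str = "linear") -> np.ndarray:
    t = np.linspace(0.0, 1.0, num_frames)
    if mode == "linear":
        return t
    if mode == "ease_in":
        return t ** 2
    if mode == "ease_out":
        return 1.0 - (1.0 - t) ** 2
    if mode == "ease_in_out":
        return 0.5 - 0.5 * np.cos(np.pi * t)  # smooth S-curve from 0 to 1
    raise ValueError(f"unknown mode: {mode}")

# e.g. ramp an IPAdapter's weight up over 16 frames
print(np.round(temporal_weights(16, "ease_in_out"), 3))
```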
Denoiser 2.000000000000001 (Anti Glaze, Anti Nightshade)
https://preview.redd.it/oj12hm5r7pzf1.png?width=1133&format=png&auto=webp&s=a9aeb17a9ca5ac245546bd5e2bf400b177e32924
Hey everyone,
I’ve been thinking for a while, and I’ve decided to release the denoiser.
It’s performing much better now: averaging 39.6 dB PSNR.
Download the model + checkpoint. If you want the GUI source code, you can find it on Civitai; it’s available there as a ZIP folder.
https://redd.it/1oqak7j
@rStableDiffusion
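For reference, the PSNR figure cited above is a log-scale measure of mean squared error; a quick numpy implementation for images scaled to [0, 1]:

```python
# Quick reference implementation of PSNR for images scaled to [0, 1].
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 1.0) -> float:
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# toy check: an image perturbed by ~1% noise comes out around 40 dB
ref = np.random.rand(256, 256, 3)
out = np.clip(ref + np.random.normal(0.0, 0.01, ref.shape), 0.0, 1.0)
print(round(psnr(ref, out), 1))
```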
InfinityStar: amazing 720p, 10x faster than diffusion-based models
https://x.com/wildmindai/status/1986502031532826776
https://redd.it/1oqfcdc
@rStableDiffusion
Wildminder (@wildmindai) on X:
InfinityStar by Bytedance: A unified 8B spacetime autoregressive model for high-res image & video gen;
- 5s 720p video ~10x faster than DiT;
- scores 83.74 on VBench, topping other AR models and HunyuanVideo;
- Flan-T5-XL as text encoder.
- 480/720p,…