ComfyUI Course - Learn ComfyUI From Scratch | Full 5 Hour Course (Ep01)
https://www.youtube.com/watch?v=HkoRkNLWQzY
https://redd.it/1qdnhl5
@rStableDiffusion
This ComfyUI course for beginners teaches you how to use ComfyUI from scratch, starting with the fundamentals and building a real understanding of how AI image generation works locally.
This ComfyUI course is approximately 5 hours long and is designed for…
Flux 2 Klein Model Family is here!
https://huggingface.co/black-forest-labs/FLUX.2-klein-4B https://huggingface.co/black-forest-labs/FLUX.2-klein-9B https://huggingface.co/black-forest-labs/FLUX.2-klein-base-9B https://huggingface.co/black-forest-labs/FLUX.2-klein-base-4B
https://redd.it/1qdmt1r
@rStableDiffusion
LTX-2 Updates
https://reddit.com/link/1qdug07/video/a4qt2wjulkdg1/player
We were overwhelmed by the community response to LTX-2 last week. From the moment we released it, this community jumped in and started creating configuration tweaks, sharing workflows, and posting optimizations here, on Discord, on Civitai, and elsewhere. We've honestly lost track of how many custom LoRAs have been shared. And we're only two weeks in.
We committed to continuously improving the model based on what we learn, and today we pushed an update to GitHub to address some issues that surfaced right after launch.
What's new today:
Latent normalization node for ComfyUI workflows - This will dramatically improve audio/video quality by fixing overbaking and audio clipping issues.
Updated VAE for distilled checkpoints - We accidentally shipped an older VAE with the distilled checkpoints. That's fixed now, and results should look much crisper and more realistic.
Training optimization - We've added a low-VRAM training configuration with memory optimizations across the entire training pipeline that significantly reduce hardware requirements for LoRA training.
This is just the beginning. As our co-founder and CEO mentioned in last week's AMA, LTX-2.5 is already in active development. We're building a new latent space with better properties for preserving spatial and temporal details, plus a lot more we'll share soon. Stay tuned.
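For readers wondering what "latent normalization" refers to, here is a minimal, hypothetical Python sketch of per-channel latent standardization — the general technique for reining in out-of-range ("overbaked") latent values before decoding. This only illustrates the concept; it is not the actual LTX-2 node code, and the function name and data layout are assumptions.

```python
# Hypothetical sketch: standardize each latent channel to zero mean and
# unit variance so extreme values don't "overbake" the decoded output.
# Not the actual LTX-2 ComfyUI node implementation.
def normalize_latents(latents, eps=1e-6):
    """latents: list of channels, each a flat list of floats."""
    normalized = []
    for channel in latents:
        mean = sum(channel) / len(channel)
        var = sum((x - mean) ** 2 for x in channel) / len(channel)
        std = (var + eps) ** 0.5  # eps avoids division by zero
        normalized.append([(x - mean) / std for x in channel])
    return normalized

channels = [[2.0, 4.0, 6.0], [10.0, 10.0, 10.0]]
out = normalize_latents(channels)
```

A constant channel (like the second one above) collapses to zeros rather than blowing up, which is what the `eps` term buys you.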
https://redd.it/1qdug07
@rStableDiffusion
Viking influencer made with LTX-2 image-to-video, with natively generated audio. It can even do accents.
https://redd.it/1qdu01e
@rStableDiffusion
You can just create AI animations that react to your music using this ComfyUI workflow 🔊
https://redd.it/1qed4sy
@rStableDiffusion
Wow, Flux 2 Klein Edit - actually a proper edit model that works correctly.
I'm using the 9B distilled model - this is literally the FIRST open-source model that can place me into an image and keep my likeness 100% intact. It can even swap faces.
Even Qwen Image Edit can't do that; it always "places me" in an image, but the result doesn't look like me. It just can't do it.
From my tests so far, this thing's accuracy is insane. Really good.
You can even easily change the entire scene of a photo and it will keep the characters 100% accurate.
https://redd.it/1qehwx6
@rStableDiffusion
For some things, Z-Image is still king, with Klein often looking overdone
https://redd.it/1qenabq
@rStableDiffusion
I converted some Half-Life 1/2 screenshots into real life with the help of Klein 4B!
https://redd.it/1qjemoj
@rStableDiffusion
ComfyUI Nunchaku Tutorial: Install, Models, and Workflows Explained (Ep02)
https://www.youtube.com/watch?v=LDqD9Fp8J6g
https://redd.it/1qj20b6
@rStableDiffusion
In this episode of the ComfyUI course, you’ll learn how to install the Nunchaku custom node, understand int4 and fp4 Nunchaku models, and use ready-made Nunchaku workflows to significantly reduce VRAM usage and speed up image generation. This tutorial is…
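As background on what int4 models are doing under the hood, here is a tiny, hypothetical Python sketch of symmetric 4-bit weight quantization — the general idea behind low-precision checkpoints like Nunchaku's int4 models. Real int4 inference uses per-group scales and fused GPU kernels; the function names and the single shared scale here are simplifying assumptions for illustration.

```python
# Hypothetical sketch: symmetric int4 quantization stores each weight as a
# 4-bit integer in [-8, 7] plus one float scale, cutting memory roughly
# 4x vs fp16. Not Nunchaku's actual kernel code.
def quantize_int4(weights):
    scale = max(abs(w) for w in weights) / 7.0  # largest weight maps to 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    return [v * scale for v in q]

weights = [0.7, -0.2, 0.1, 0.0]
q, scale = quantize_int4(weights)
approx = dequantize_int4(q, scale)
```

The VRAM saving comes from storing `q` in 4 bits per weight; the cost is the small rounding error visible when comparing `approx` to `weights`.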