ComfyUI Nunchaku Tutorial: Install, Models, and Workflows Explained (Ep02)
https://www.youtube.com/watch?v=LDqD9Fp8J6g
https://redd.it/1qj20b6
@rStableDiffusion
In this episode of the ComfyUI course, you’ll learn how to install the Nunchaku custom node, understand int4 and fp4 Nunchaku models, and use ready-made Nunchaku workflows to significantly reduce VRAM usage and speed up image generation. This tutorial is…
LTX-2 IC-LoRA I2V + FLUX.2 ControlNet & Pass Extractor (ComfyUI)
https://redd.it/1qj1o4z
@rStableDiffusion
No, LTX2, just because I added music doesn't mean you have to turn it into a party 🙈
https://redd.it/1qj1y1v
@rStableDiffusion
Run LTX2 using Wan2GP with 6 GB VRAM and 16 GB RAM
Sample Video
# I was able to run LTX 2 on my RTX 3060 6 GB with 16 GB RAM using this method
P.S. I'm not a tech master or a coder, so if this doesn't work for you guys I may not be of any help :(
I'll keep it as simple as possible.
Add this to your start.js script (you'll find it inside the wan.git folder inside Pinokio, if you downloaded from there):
"python wgp.py --multiple-images --perc-reserved-mem-max 0.1 {{args.compile ? '--compile' : ''}}"
If you don't know where to put this line, paste your entire start.js script into Google AI Mode and ask it to add it for you. If the VRAM issue still persists, try changing 0.1 to 0.05.
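For orientation, here is a minimal sketch of what the relevant part of a Pinokio start.js can look like. The skeleton (module.exports, run, shell.run, the venv and path fields) is an assumption based on the common Pinokio script shape, not a copy of wan.git's actual file; only the command string itself comes from this post.

```js
// Hypothetical Pinokio start.js excerpt. Everything except the "message"
// command line below is an assumed skeleton; your wan.git file will differ.
module.exports = {
  daemon: true,        // keep the launched process running
  run: [
    {
      method: "shell.run",
      params: {
        venv: "env",   // assumed virtual-environment folder name
        path: "app",   // assumed folder containing wgp.py
        message: "python wgp.py --multiple-images --perc-reserved-mem-max 0.1 {{args.compile ? '--compile' : ''}}"
      }
    }
  ]
}
```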
The second error I encountered was FFmpeg crashes: videos were generating, but the audio was crashing. To fix that:
Download the FFmpeg full build from gyan.dev.
Find your FFmpeg files inside the Pinokio folder (just search for ffmpeg). Mine were here: D:\pinokio\bin\miniconda\pkgs\ffmpeg-8.0.1-gpl_h74fd8f1_909\Library\bin
Then press Windows + R, type sysdm.cpl, and press Enter.
Go to the Advanced tab and click Environment Variables…
Under System variables, select Path → Edit, click New, and paste this: Drive:\pinokio\bin\miniconda\pkgs\ffmpeg-8.0.1-gpl_h74fd8f1_909\Library\bin (your drive letter may vary, so keep that in mind). Click OK on all windows.
(I got this step from ChatGPT, so if any error happens just paste your problem there.)
(Example prompt: I'm using Pinokio (with Wan2GP / LTX-2) and my video generates correctly, but I get an FFmpeg error when merging audio. I already have FFmpeg installed via Pinokio/conda. Can you explain how FFmpeg works in this pipeline, where it should be located, how to add it to PATH on Windows, and how to fix common audio codec errors so audio and video merge correctly?)
Restart your PC.
Then, to verify, open cmd and run: ffmpeg -version
If it prints version info, you are good.
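If you'd rather script that check than eyeball the terminal, here is a small Node.js sketch (an illustrative extra, not one of the original steps) that runs ffmpeg -version and reports whether FFmpeg is reachable on PATH:

```js
// check_ffmpeg.js: verify that ffmpeg resolves on PATH after the edit above.
// Run with: node check_ffmpeg.js
const { execSync } = require("child_process");

try {
  // execSync throws if the command is missing or exits non-zero
  const out = execSync("ffmpeg -version", { encoding: "utf8" });
  console.log("OK:", out.split("\n")[0]); // the first line carries the version string
} catch (err) {
  console.error("ffmpeg not found on PATH; recheck the Environment Variables step.");
  process.exit(1);
}
```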
That's all I did.
The sample attached was generated using Wan2GP on an RTX 3060 6 GB; it takes 15 minutes to generate a 720p video. Use the IC-LoRA detailer for quality.
Sometimes you need to restart the environment if making a 10-second video gives an OOM error.
https://redd.it/1qjmbf6
@rStableDiffusion
Qwen3-TTS, a series of powerful speech generation models
https://redd.it/1qjuebr
@rStableDiffusion
PersonaPlex: Voice and role control for full-duplex conversational speech models, by NVIDIA
https://redd.it/1qjtpf1
@rStableDiffusion
LTX2 issues probably won't be fixed by LoRAs/workflows
When Wan2.2 released, the speedup LoRAs were a mess, there was mass confusion about getting enough motion out of characters, and the video length issues resulted in a flood of hacky continuation workflows.
But the core model always worked well: it had excellent prompt adherence, and it understood the movement and structure of humans well.
LTX2 at its peak exceeds Wan, and some of the outputs are brilliant in terms of fluid movement and quality.
But the model is unstable, which results in a high fail rate. It is an absolute shot in the dark as to whether the prompts will land as expected, and the structure of humans is fragile and often nonsensical.
I'll admit LTX2 has made it difficult to go back to Wan, because when it's better, it's much better. But its core base simply needs more work, so I'm mostly holding out for LTX3.
https://redd.it/1qjyoqz
@rStableDiffusion
AI girls flooding social media, including Reddit
Hi everyone,
I guess anyone who has worked with diffusion models for a while can spot that average 1girl AI look from a mile away.
I'm just curious by now: how do you guys deal with it? Do you report it or just ignore it?
Personally, I report it if the subreddit explicitly bans AI. But Instagram is so flooded with bots and accounts fishing for engagement that I feel like it's pointless to try and report every single one.
https://redd.it/1qk0vac
@rStableDiffusion