SVI 2.0 Pro - Tip about seeds
I apologize if this is common knowledge, but I saw a few SVI 2.0 Pro workflows that use a single global random seed, with which this wouldn't work.
If your workflow has a random noise seed node attached to each extension step (instead of 1 global random seed for all), you can work like this:
E.g., if you have generated steps 1, 2, and 3 but don't like how step 3 turned out, you can just change the seed and/or prompt of step 3 and run again.
The workflow will then skip steps 1 and 2 (they are already generated and nothing about them changed), keep their output, and only regenerate step 3.
This way you can extend and adjust as many times as you want, without having to regenerate earlier good extensions or wait for them all over again.
It's awesome, really - I'm a bit mind-blown by how good SVI 2.0 Pro is.
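For anyone wondering why this works: ComfyUI only re-executes nodes whose inputs changed since the last run, so a per-step seed node keeps a change local to that step, while one global seed invalidates every step at once. Below is a rough Python sketch of that caching idea, with made-up names purely for illustration - it is not ComfyUI's actual code:

```python
# Minimal sketch of input-hash caching (illustrative only, NOT ComfyUI's implementation).
# A "node" is only re-run when the hash of its inputs changes, so giving each
# extension step its own seed means changing step 3's seed only invalidates step 3.
import hashlib
import json

_cache = {}  # (node_name, input_hash) -> cached output

def run_node(node_name, inputs, compute):
    """Re-run `compute` only if this node's inputs changed since the last run."""
    digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    key = (node_name, digest)
    if key not in _cache:
        _cache[key] = compute(**inputs)
    return _cache[key]

def generate_extension(prompt, seed, prev_frames):
    # Placeholder for the actual sampling of one extension step.
    print(f"generating {prompt!r} (seed={seed})")
    return f"frames[{prompt}:{seed}]"

# First run: all three steps are generated.
s1 = run_node("step1", {"prompt": "walks in", "seed": 1, "prev_frames": None}, generate_extension)
s2 = run_node("step2", {"prompt": "sits down", "seed": 2, "prev_frames": s1}, generate_extension)
s3 = run_node("step3", {"prompt": "waves", "seed": 3, "prev_frames": s2}, generate_extension)

# Second run: only step 3's seed changed, so steps 1 and 2 come from the cache
# and only step 3 is regenerated.
s1 = run_node("step1", {"prompt": "walks in", "seed": 1, "prev_frames": None}, generate_extension)
s2 = run_node("step2", {"prompt": "sits down", "seed": 2, "prev_frames": s1}, generate_extension)
s3 = run_node("step3", {"prompt": "waves", "seed": 99, "prev_frames": s2}, generate_extension)
```

With a single global seed, changing it would alter the inputs of every step, so everything would be regenerated.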
https://preview.redd.it/r4ymil14ryag1.png?width=2292&format=png&auto=webp&s=d3a17bbb8e70438cf773474449a9f35ea6e23b6c
Edit:
This is the workflow I am using:
https://github.com/user-attachments/files/24359648/wan22_SVI_Pro_native_example_KJ.json
Though I did change the models to the native ones, and I'm experimenting with some other speed LoRAs.
https://redd.it/1q2354i
@rStableDiffusion
Frustrated with current state of video generation
I'm sure this boils down to a skill issue at the moment, but:
I've been trying video for a long time and I just don't think it's useful for much other than short, dumb videos. It's too hard to get actual consistency, and you have so little control over the action that it requires a lot of redos, which takes a lot more time than you would think. Even the closed-source models are really unreliable in their generations.
Whenever you see someone's video that "looks finished", they probably had to generate that thing 20 times to get what they wanted, and that's just one chunk of the video; most have many chunks. If you are paying for an online service, that's a lot of wasted "credits" burning on nothing.
I want to like doing video, and I want to think it's going to allow people to make stories, but it's just not good enough, not easy enough to use, too unpredictable, and too slow right now.
Even the online tools aren't much better in my testing. They still give me too much randomness; for example, even Veo gave me slow-motion problems similar to WAN for some scenes.
What are your thoughts?
https://redd.it/1q27cp7
@rStableDiffusion
I figured out how to completely bypass Nano Banana Pro's invisible watermark and here's how you can try it for free
https://redd.it/1q29ya6
@rStableDiffusion
PSA: to counteract slowness in SVI Pro, use a model that already has a prebuilt LX2V LoRA
I renamed the model and forgot the original name, but I think it’s fp8, which already has a fast LoRA available, either from Civitai or from HF (Kijai).
I’ll upload the differences once I get home.
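In case "prebuilt" is unclear here: a checkpoint with a speed LoRA already baked in has the LoRA's low-rank delta folded into the base weights once, so inference no longer pays the cost of loading and applying a separate LoRA. A rough, generic sketch of that folding step is below - toy shapes and made-up names, not the actual SVI Pro / Kijai merge script:

```python
# Illustrative sketch of merging ("baking in") a LoRA delta into a base weight.
# A LoRA stores a low-rank update B @ A; merging adds it to W once, ahead of time.
import torch

def merge_lora_into_weight(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A), i.e. the LoRA delta folded into the base weight."""
    return W + alpha * (B @ A)

# Toy shapes: a 1024x1024 projection with a rank-8 LoRA.
W = torch.randn(1024, 1024)
A = torch.randn(8, 1024) * 0.01   # LoRA down-projection
B = torch.randn(1024, 8) * 0.01   # LoRA up-projection

W_merged = merge_lora_into_weight(W, A, B, alpha=1.0)
print(W_merged.shape)  # torch.Size([1024, 1024])
```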
https://redd.it/1q2m5nl
@rStableDiffusion
I've created an SVI Pro workflow that can easily be extended to generate longer videos using Subgraphs
https://redd.it/1q2s4bn
@rStableDiffusion
Qwen Image 2512 - 3 Days Later Discussion.
I've been training and testing Qwen Image 2512 since it came out.
Has anyone else noticed:
- The flexibility has gotten worse
- 3 arms, and noticeably more body deformity
- An overly sharpened texture, very noticeable in hair
- Bad at anime/styling
- Using 2 or 3 LoRAs makes the quality quite bad
- Prompt adherence seems to get worse the more you describe
It seems this model was fine-tuned more towards photorealism.
Thoughts?
https://redd.it/1q2qe12
@rStableDiffusion