Why is no one talking about Kandinsky 5.0 Video models?
Hello!
A few months ago, Kandinsky released some promising video models, but there's nothing about them on Civitai: no LoRAs, no workflows, nothing, and barely anything on Hugging Face so far.
So I'm really curious why people aren't using these new video models, especially since I've heard they can even do NSFW out of the box.
Is WAN 2.2 just that much better than Kandinsky, or are there other reasons people aren't using it? From what I've researched so far, it seems like a model with real potential.
https://redd.it/1pzyhm7
@rStableDiffusion
There's a new paper that proposes a way to reduce model size by 50-70% without drastically hurting quality, basically promising something like running a 70B model on a phone. Someone on Twitter tried it and it's looking promising, but I don't know if it'll work for image gen.
https://x.com/i/status/2005995329485959507
https://redd.it/1q0a42k
@rStableDiffusion
Brian Roemmele (@BrianRoemmele) on X
BOOM!
It works on LLMs!
I am using Nash Equilibrium on the attention head of an LLM! I may be the first to do this at this level.
I am achieving a 50-70% effective size reduction on a quantization of 4-bit weights shrinking the model and is enabling on…
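The tweet doesn't share any code, and the Nash Equilibrium part remains unverified, so here's only a minimal sketch of the one piece that's standard: plain symmetric 4-bit weight quantization. It shows why 4-bit storage alone already shrinks a float16 checkpoint's weights by roughly 4x, which is where most of any "50-70%" reduction claim would come from. All names and numbers below are illustrative, not from the paper.

import numpy as np

def quantize_4bit(w: np.ndarray, group_size: int = 64):
    # Symmetric per-group 4-bit quantization: each group of `group_size`
    # weights maps to integers in [-7, 7] plus one fp16 scale per group.
    flat = w.astype(np.float32).reshape(-1, group_size)
    scale = np.abs(flat).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(flat / scale), -7, 7).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize_4bit(q: np.ndarray, scale: np.ndarray, shape):
    # Reverse the mapping: integer codes times per-group scales.
    return (q.astype(np.float32) * scale).reshape(shape)

# One toy 4096x4096 weight matrix standing in for a transformer layer.
w = np.random.randn(4096, 4096).astype(np.float16)
q, s = quantize_4bit(w)

fp16_bytes = w.size * 2                  # 2 bytes per fp16 weight
int4_bytes = q.size // 2 + s.size * 2    # 4 bits per weight (packed) + scales
print(f"fp16: {fp16_bytes / 2**20:.1f} MiB, "
      f"4-bit: {int4_bytes / 2**20:.1f} MiB "
      f"({int4_bytes / fp16_bytes:.0%} of original)")

w_hat = dequantize_4bit(q, s, w.shape)
print("mean abs error:", float(np.abs(w.astype(np.float32) - w_hat).mean()))

Whether the attention-head game-theory trick on top of this holds up, and whether any of it transfers to diffusion/image models, is exactly the open question from the post.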