LTX 2 can generate a 20-second video with audio in one pass. They said they will open-source the model soon
https://redd.it/1ofqikk
@rStableDiffusion
🔥 Perplexity AI PRO - 1-Year Plan - Limited Time SUPER PROMO! 90% OFF!
https://redd.it/1ofrqcd
@rStableDiffusion
Texturing with SDXL-Lightning (4-step LoRA) in real time on an RTX 4080
https://redd.it/1ofrwza
@rStableDiffusion
Not cool, guys! Who leaked my VAE dataset? Come clean, I won't be angry, I promise...
https://redd.it/1ofvh27
@rStableDiffusion
Pony 7 weights released, yet this image says everything about it
https://preview.redd.it/cpi9frjr0bxf1.png?width=1280&format=png&auto=webp&s=2ee6f038b91dd912cba295e024c7e21a65f46943
n3ko, 2girls, (yamato_\\(one piece\\)), (yae_miko), cat ears, pink makeup, tall, mature, seductive, standing, medium_hair, pink green glitter glossy sheer neck striped jumpsuit, lace-up straps, green_eyes, highres, absurdres, (flat colors:1.1), flat background
https://redd.it/1ofzf8n
@rStableDiffusion
FlashPack: High-throughput tensor loading for PyTorch
https://github.com/fal-ai/flashpack
FlashPack — a new, high-throughput file format and loading mechanism for PyTorch that makes model checkpoint I/O blazingly fast, even on systems without access to GPU Direct Storage (GDS).
With FlashPack, loading any model can be 3–6× faster than with the current state-of-the-art methods like accelerate or the standard load_state_dict() and to() flow — all wrapped in a lightweight, pure-Python package that works anywhere.
https://redd.it/1og1toy
@rStableDiffusion
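For context, a minimal sketch of the standard load_state_dict() and to() flow that the post says FlashPack outperforms; the resnet50 model and checkpoint path are placeholder assumptions, and FlashPack's own API is not shown here (see the linked repo for that).

import torch
import torchvision.models as models

# Baseline flow the FlashPack post benchmarks against: deserialize the
# checkpoint on the CPU, copy the tensors into the module, then move the
# whole module to the GPU. The resnet50 model and "checkpoint.pt" path
# are placeholders, not part of the original post.
model = models.resnet50()
state_dict = torch.load("checkpoint.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.to("cuda")  # one host-to-device copy per parameter and buffer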
Automatically texturing a character with SDXL & ControlNet in Blender
https://redd.it/1og3u26
@rStableDiffusion
Transform Your Videos Using Wan 2.1 Ditto (Low VRAM Workflow)
https://youtu.be/iuakm3YQYY8
https://redd.it/1oge209
@rStableDiffusion
ComfyUI Tutorial: Transform Your Videos Using Wan 2.1 Ditto #comfyui #wan2 #comfyuitutorial
In this tutorial I will show you how to edit your videos using the new Ditto model, which lets you change the style of a video while keeping the poses and motions of the original consistent, without using any ControlNet such as depth, canny, or openpose…
What's the big deal about Chroma?
I am trying to understand why people are excited about Chroma. For photorealistic images I get malformed faces, generation takes too long, and the quality is only OK.
I use ComfyUI.
What is the use case of Chroma? Am I using it wrong?
https://redd.it/1ogbkm1
@rStableDiffusion
Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing (a new open dataset by Apple)
https://github.com/apple/pico-banana-400k
https://redd.it/1ogg414
@rStableDiffusion
Genuine question: why is no one using Hunyuan Video?
I'm seeing most people using WAN only. Also, LoRA support for Hunyuan I2V seems not to exist at all.
I would have tested both of them, but I doubt my PC can handle it. So are there specific reasons why WAN is so much more widely used and why there is barely any support for Hunyuan (I2V)?
https://redd.it/1oge14v
@rStableDiffusion