More Nunchaku SVDQuants available - Jib Mix Flux, Fluxmania, CyberRealistic and PixelWave
Hey everyone! Since my last post got great feedback, I've finished my SVDQuant pipeline and cranked out a few more models:
[Jib Mix Flux V12](https://huggingface.co/spooknik/Jib-Mix-Flux-SVDQ)
CyberRealistic Flux V2.5
[Fluxmania Legacy](https://huggingface.co/spooknik/Fluxmania-SVDQ)
Pixelwave schnell 04 (Int4 coming within 24 hours)
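If you want to test these outside ComfyUI, a minimal sketch of loading one through Nunchaku's diffusers integration could look like the following. The class name and the FLUX.1-dev base repo follow Nunchaku's own examples; whether the SVDQ repo loads directly by ID depends on its file layout, so treat the paths and sampler settings as placeholders and check the model card.

```python
# Minimal sketch (not the author's pipeline): load an SVDQuant Flux transformer
# with Nunchaku and drop it into a standard diffusers FluxPipeline.
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel

# Assumption: the repo exposes a Nunchaku-format transformer at its root;
# the actual file layout may differ.
transformer = NunchakuFluxTransformer2dModel.from_pretrained("spooknik/Jib-Mix-Flux-SVDQ")

# Assumption: a FLUX.1-dev base supplies the VAE and text encoders.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,        # swap in the INT4 SVDQuant transformer
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "portrait photo, soft window light, 85mm",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("jib_mix_svdq.png")
```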
Update on Chroma: Unfortunately, it won't work with Deepcompressor/Nunchaku out of the box due to differences in the model architecture. I attempted a Flux/Chroma merge to get around this, but the results weren't promising. I'll wait for official Nunchaku support before tackling it.
Requests welcome! Drop a comment if there's a model you'd like to see as an SVDQuant - I might just make it happen.
*(Ko-Fi in my profile if you'd like to buy me a coffee ☕)*
https://redd.it/1oe6bcz
@rStableDiffusion
"Conflagration" Wan22 FLF ComfyUI
https://youtu.be/gQC-60yFfVU
https://redd.it/1oe2k9h
@rStableDiffusion
LTXV 2.0 is out
https://website.ltx.video/blog/introducing-ltx-2
https://redd.it/1oe3le4
@rStableDiffusion
Brie's Qwen Edit Lazy Relight workflow
https://preview.redd.it/lxabti1muvwf1.png?width=1628&format=png&auto=webp&s=1d4fe7d23f82ec7280e48f14fb8e0761c29de627
Hey everyone~
I've released the first version of my Qwen Edit Lazy Relight. It takes a character and injects it into a scene, adapting it to the scene's lighting and shadows.
You just put in an image of a character, an image of your background, maybe tweak the prompt a bit, and it'll place the character in the scene. You do need to adjust the character's position and scale in the workflow, though, plus a few other params if need be.
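Outside of Comfy, that position-and-scale step is basically just alpha compositing. A rough Pillow sketch of preparing the composite that gets relit (filenames, offset and scale here are placeholders, not values from the workflow):

```python
# Rough stand-in for the workflow's position/scale step: paste a cut-out
# character (RGBA) onto the scene before the relight pass.
from PIL import Image

character = Image.open("character_rgba.png").convert("RGBA")  # placeholder path
scene = Image.open("scene.png").convert("RGBA")               # placeholder path

scale = 0.6          # shrink the character to fit the scene
x, y = 420, 310      # top-left placement, in scene pixels

w, h = character.size
character = character.resize((int(w * scale), int(h * scale)), Image.Resampling.LANCZOS)

composite = scene.copy()
composite.alpha_composite(character, dest=(x, y))  # respects the cut-out's alpha edges
composite.convert("RGB").save("composite_for_relight.png")
```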
It uses Qwen Edit 2509 All-In-One.
The workflow is here:
https://civitai.com/models/2068064?modelVersionId=2340131
The new AIO model is by the venerable Phr00t, found here:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/tree/main/v5
It's kinda made to work in conjunction with my previous character repose workflow:
https://civitai.com/models/1982115?modelVersionId=2325436
Works fine by itself though too.
I made this so I could place characters into a scene after reposing, then I can crop out images for initial / key / end frames for video generation. I'm sure it can be used in other ways too.
Depending on the complexity of the scene, character pose, character style and lighting conditions, it'll require varying degrees of gacha. A good, concise prompt helps too. There are prompt notes in the workflow.
What I've found is that if there's nice clean lighting in the scene and the character is placed clearly on a reasonable surface, the relighting, shadows and reflections come out better. Zero shots do happen, but if you've got a weird scene, or the character is placed in a way that doesn't make sense, Qwen just won't 'get' it and will either light and shadow it wrong, or not at all.
The 2D character is properly lit and casts a decent shadow. The rest of the scene remains the same.
The anime character has a decent reflection on the ground, although there's no change to the tint.
The 3D character is lit from below with a yellow light. This one was more difficult due to the level's complexity.
More images are available on CivitAI if you're interested.
You can check out my Twitter for WIP pics I genned while polishing this workflow here: https://x.com/SlipperyGem
I also post about open-source AI news, Comfy workflows and other shenanigans.
Stay Cheesy Y'all~!
- Brie Wensleydale.
https://redd.it/1oe6x7k
@rStableDiffusion
Video as a prompt: full model released by ByteDance, built on Wan & CogVideoX (lots of high-quality examples on the project page)
https://redd.it/1oee6h8
@rStableDiffusion
HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives
https://redd.it/1oemcri
@rStableDiffusion
Qwen Image Edit 2509 model subject training is next level. These images are 4 base + 4 upscale steps, 2656x2656 pixels. No face inpainting was done - all raw. The training dataset was very weak but the results are amazing. The training dataset is shown at the end - black images were used as control images.
https://redd.it/1oei49m
@rStableDiffusion