Is there a comparison of different quantizations of Qwen? Plus some questions.
I want to know which is best for my setup to get decent speed, I have a 3090.
Are there any finetunes that are considered better than the base Qwen model?
Can I use the Qwen Edit one for making images without any drawbacks?
Can I use a 3B VL as the text encoder instead of the 7B that comes with it?
https://redd.it/1ohxngw
@rStableDiffusion
VACE 2.2 - Restyling a video clip
https://www.youtube.com/watch?v=aElw6LUAs1w
https://redd.it/1oi1c6n
@rStableDiffusion
VACE 2.2 - Part 4 - Re-styling
This uses VACE 2.2 in a WAN 2.2 dual model workflow in Comfyui to restyle a video using a reference image. It also uses a blended controlnet made from the original video clip to maintain the video structure.
This is the last in a 4 part series of videos…
How to make Wan 2.2 animate via official website wan.video?
Can't find where Wan 2.2 Animate is on their official website https://wan.video/
I want the best quality available, which I can do online.
https://redd.it/1oi63yl
@rStableDiffusion
Wan AI: Leading AI Video Generation Model
Wan is an AI creative platform. It aims to lower the barrier to creative work using artificial intelligence, offering features like text-to-image, image-to-image, text-to-video, image-to-video, and image editing.
Delaying a LoRA to prevent unwanted effects
For Forge or other non-Comfyui users (not sure it will work in the spaghetti realm), there is a useful trick, possibly obvious to some, that I just realized recently and wanted to share.
For example, imagine some weird individual wants to apply a <lora:BigAss:1> to a character. Almost inevitably, the resulting image will show the BigAss implemented, but the character will also be turning his/her back to emphasize said BigAss. If that's what the sketchy creator wants, fine. But if he'd like his character to keep facing the viewer and have the BigAss attribute remain as a subtle trace of his taste for the thick, how does he do it?
I found that 90% of the time, using [<lora:BigAss:1>:5] will work. Reminder: the square brackets with a colon don't affect the emphasis (weight); they set the number of steps after which the element is activated. So the image has some time to generate (5 steps here), which is usually enough to set the character pose in place, and then the BigAss attribute comes into play. For me it was a big game changer.
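To make the syntax concrete, here is a minimal sketch of the variants (the LoRA name and step counts are just placeholders; this assumes your UI, like Forge/A1111, honors LoRA tags inside prompt-editing brackets):

```
<lora:BigAss:1>           active from step 0 (the default, pose gets hijacked)
[<lora:BigAss:1>:5]       activated only after step 5, once the pose is set
[<lora:BigAss:1>:0.25]    numbers below 1 are a fraction of total steps (here 25%)
```

The same [x:n] prompt-editing syntax also works on ordinary prompt terms, so you can delay any token the same way.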
https://redd.it/1oi8i4g
@rStableDiffusion
ComfyUI Tutorial Series Ep 68: How to Create Anime Illustrations - NetaYume v3.5
https://www.youtube.com/watch?v=RXuTNuyM6GI
https://redd.it/1oidgei
@rStableDiffusion
Learn how to create anime illustrations in ComfyUI using the NetaYume v3.5 model! In this episode, we’ll walk through a complete setup, from installing the model and building a basic workflow to adding upscalers, testing styles, and using image-to-image generation.…