ComfyUI Tutorial Enhanced Image Editing With Qwen Edit 2511
https://youtu.be/0c76Kp0IPRg
https://redd.it/1pwtaok
@rStableDiffusion
YouTube
ComfyUI Tutorial Enhanced Image Editing With Qwen Edit 2511 #comfyui #comfyuitutorial #qwen2511
In this tutorial, we explore the new Qwen Image Edit 2511 model, highlighting its improved prompt adherence and AI image editing capabilities. We test various prompts with Qwen Image Edit, including the GGUF versions, and showcase examples of its output, such as…
Z-Image: how to train my face for a LoRA?
Hi all,
Any good tutorial on how to train my face for a Z-Image LoRA?
https://redd.it/1pwv3sh
@rStableDiffusion
Is Qwen Image Edit 2511 just better with the 4-step Lightning LoRA?
I have been testing the FP8 version of Qwen Image Edit 2511 with the official ComfyUI workflow, using the er_sde sampler and beta scheduler, and I have mixed feelings compared to 2509 so far. When changing a single element of a base image, I've found the new version more prone to altering the overall scene (background, character's pose or face), which I consider an undesired effect. It also has stronger blurring, which has already been discussed. On the positive side, there are fewer occurrences of ignored prompts.
Someone posted (I can't retrieve it, maybe deleted?) that moving from the 4-step LoRA to the regular workflow does not improve image quality, even when going all the way back to the original recommendation of 40 steps at CFG 4 with the BF16 weights, especially with regard to the blur.
So I added the 4-step LoRA to my workflow, and I've gotten better prompt comprehension and rendering in almost every test I've done. Why is that? I always thought of these Lightning LoRAs as a fine-tune that trades prompt adherence and image detail for faster generation, but I couldn't really see those drawbacks. What am I missing? Are there still use cases for regular Qwen Edit with the standard parameters?
My use of Qwen Image Edit mostly involves short prompts that change one thing in an image at a time. Maybe things are different when writing longer, more detailed prompts? What's your experience so far?
Now, I won't complain, since it means I can get better results in less time. Though it does make me wonder whether an expensive graphics card is worth it. 😁
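For anyone who wants to A/B the two setups programmatically rather than by hand, here is a minimal Python sketch (not from the post, just an illustration) that patches an exported ComfyUI API-format workflow and queues it through ComfyUI's HTTP /prompt endpoint. The file name workflow_api.json, the KSampler node id "3", and the local server address are placeholder assumptions; only the steps/CFG/sampler/scheduler values mirror the settings discussed above, and enabling or bypassing the Lightning LoRA loader node itself still happens in the graph.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address (assumption)

def sampler_settings(use_lightning_lora: bool) -> dict:
    """KSampler inputs for the two setups compared in the post."""
    if use_lightning_lora:
        # 4-step Lightning LoRA: very few steps, CFG around 1
        return {"steps": 4, "cfg": 1.0, "sampler_name": "er_sde", "scheduler": "beta"}
    # Stock recommendation mentioned in the post: 40 steps, CFG 4
    return {"steps": 40, "cfg": 4.0, "sampler_name": "er_sde", "scheduler": "beta"}

def queue_prompt(workflow: dict) -> None:
    """POST an API-format workflow to a running ComfyUI instance."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Patch a workflow exported via "Save (API Format)"; node id "3" is
# whatever id the KSampler happens to have in your particular graph.
with open("workflow_api.json") as f:
    workflow = json.load(f)
workflow["3"]["inputs"].update(sampler_settings(use_lightning_lora=True))
queue_prompt(workflow)
```

Running it once with use_lightning_lora=True and once with False (after toggling the LoRA loader in the graph) gives a like-for-like comparison on the same seed and prompt.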
https://redd.it/1pwtqan
@rStableDiffusion