The out-of-the-box difference between Qwen Image and Qwen Image 2512 is really quite large
https://redd.it/1q1k0ee
@rStableDiffusion
Polyglot R2: Translate and Enhance Prompts for Z-Image Without Extra Workflow Nodes
ComfyUI + Z-Image + Polyglot
You can use Polyglot to translate and improve your prompts for Z-Image, or any other image generation model, without adding any extra nodes to your workflow.
As shown in the video example, I:
• Write the prompt in my native language
• Translate it into English
• Enhance the prompt
All of this happens in just a few seconds, without leaving the interface, without adding complexity to the workflow, and without additional nodes. It works in any workflow or UI you want; in fact, it works system-wide, across your entire operating system.
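If you want to approximate the same translate-then-enhance flow outside Polyglot, one option is to send the prompt to a locally served LLM. Below is a minimal sketch assuming an OpenAI-compatible endpoint (e.g. a llama.cpp server hosting one of the GGUF builds); the URL, port, and system prompt are assumptions for illustration, not part of Polyglot itself:

```python
import json
import urllib.request


def build_messages(user_text: str) -> list[dict]:
    """Build a chat request asking the model to translate the prompt
    to English and then enrich it with visual detail."""
    system = (
        "You translate image-generation prompts to English and then "
        "enhance them with concrete visual detail. Reply with the "
        "enhanced English prompt only."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]


def enhance_prompt(
    user_text: str,
    url: str = "http://localhost:8080/v1/chat/completions",  # assumed local server
) -> str:
    """Send the prompt to a local OpenAI-compatible server and return
    the enhanced English prompt."""
    body = json.dumps(
        {"messages": build_messages(user_text), "temperature": 0.3}
    ).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With a local server running, `enhance_prompt("um gato laranja dormindo ao sol")` would return an enhanced English prompt; Polyglot wraps this kind of loop in a system-wide UI so you never touch code.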
If you are not familiar with Polyglot, I invite you to check it out here:
https://andercoder.com/polyglot/
The project is fully open source (a GitHub star is appreciated):
https://github.com/andersondanieln/polyglot
And now, what I find even cooler:
Polyglot now has its own fine-tuned model.
Polyglot R2 is a model trained on a dataset designed specifically for how the program works, specialized in translation and text transformation. It has only 4B parameters and is based on Qwen3 4B.
You can find the latest version here:
https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2
https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2-Q8_0-GGUF
https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2-Q4_K_M-GGUF
Well, everything is free and open source.
I hope you like it, and happy new year to you all!
😊
https://redd.it/1q1hobk
@rStableDiffusion
Lora Training with different body parts
I am trying to create and train a character LoRA for ZiT. I have a good set of images, but I want to be able to generate uncensored images without using any other LoRAs. Is it possible to take random close-up pictures of intimate body parts (without any face), combine them with my images, and train on that, so that whenever I prompt, it can produce such images without needing external LoRAs?
https://redd.it/1q1r5ru
@rStableDiffusion