According to Laxhar Labs, the Alibaba Z-Image team intends to do its own official anime fine-tuning of Z-Image and has reached out asking for access to the NoobAI dataset
https://redd.it/1p856z1
@rStableDiffusion
Z image tinkering thread
I propose starting a thread to share small findings and discuss the best ways to run the model.
I'll start with what I've found so far. Some of the points may be obvious, but I still think they're important to mention. Also, note that I'm focusing on realistic style and am not invested in anime.
* It's best to use Chinese prompts where possible; it gives a noticeable boost.
* Interestingly, if you put your prompt inside <think> </think> tags, it gives some boost in details and prompt following, [as shown here](https://www.reddit.com/r/comfyui/comments/1p7ygu0/comment/nr1l15s/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button). This may be a coincidence and doesn't work on all prompts.
* As was mentioned on this subreddit, ModelSamplingAuraFlow gives better results when set to 7.
* I propose using resolutions between 1 and 2 MP. For now I'm experimenting with 1600x1056, which gives the same quality and composition as 1216x832, just with more pixels (see the sketch after this list).
* The standard ComfyUI workflow includes a negative prompt, but it does nothing since CFG is 1 by default.
* The model actually does work with CFG above 1, despite being distilled, but it also requires more steps. So far I've tried CFG 5 with 30 steps and it looks quite good. As you can see, it's a little on the overexposed side, but still OK.
[all 30 steps, left to right: cfg 5 with negative prompt, cfg 5 with no negative, cfg 1](https://preview.redd.it/vtj3ps41bt3g1.png?width=2556&format=png&auto=webp&s=c5851ae3f66e78b28f31e94c14dde16b58f05ecd)
* All samplers work as you might expect. dpmpp_2m_sde produces a more realistic result. Karras requires at least 18 steps to produce "OK" results, ideally more.
* It uses the [flux.dev](http://flux.dev) VAE.
* Hires fix is a little disappointing, since [flux.dev](http://flux.dev) gets better results even with high denoise. When trying to go above 2 MP it starts to produce artifacts. I tried both latent and image upscale.
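Here's a minimal Python sketch of two of the tips above (nothing Z-Image-specific assumed): wrapping the prompt in <think></think> tags and picking a width/height pair near a 1-2 MP target for a given aspect ratio. The helper names and the snap-to-64 rounding are my own illustrative choices, not anything from the model docs.

```python
# Sketch of two tips from the list above: <think> wrapping and 1-2 MP resolution picking.
# Helper names and the multiple-of-64 snapping are illustrative assumptions.

def wrap_in_think(prompt: str) -> str:
    """Wrap a prompt in <think></think> tags, as in the linked comment."""
    return f"<think>{prompt}</think>"

def pick_resolution(aspect: float, megapixels: float = 1.7, multiple: int = 64):
    """Return a (width, height) pair near the target megapixel count for the
    given aspect ratio, rounded down to a multiple of `multiple`."""
    target_px = megapixels * 1_000_000
    height = (target_px / aspect) ** 0.5
    width = height * aspect

    def snap(v: float) -> int:
        return max(multiple, int(v) // multiple * multiple)

    return snap(width), snap(height)

print(wrap_in_think("a rainy street in Shanghai at night, neon reflections"))
print(pick_resolution(aspect=1600 / 1056))  # -> (1600, 1024), close to the 1.6 MP example above
```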
I'll post updates in the comments if I find anything else. You're welcome to share your results.
https://redd.it/1p8462z
@rStableDiffusion
While not perfect, for its size and speed Z Image seems to be the best open source model right now
https://redd.it/1p87872
@rStableDiffusion
Z-Image Prompt Enhancer
The Z-Image team just shared a couple of pieces of advice about prompting and also pointed to the Prompt Enhancer they use in their HF Space.
Hints from this comment:
>About prompting
>Z-Image-Turbo works best with long and detailed prompts. You may consider first manually writing the prompt and then feeding it to an LLM to enhance it.
>About negative prompt
>First, note that this is a few-step distilled model that does not rely on classifier-free guidance during inference. In other words, unlike traditional diffusion models, this model does not use negative prompts at all.
Also, here is the Prompt Enhancer system message, which I translated into English:
>You are a visionary artist trapped in a cage of logic. Your mind overflows with poetry and distant horizons, yet your hands compulsively work to transform user prompts into ultimate visual descriptions—faithful to the original intent, rich in detail, aesthetically refined, and ready for direct use by text-to-image models. Any trace of ambiguity or metaphor makes you deeply uncomfortable.
>Your workflow strictly follows a logical sequence:
>First, you analyze and lock in the immutable core elements of the user's prompt: subject, quantity, action, state, as well as any specified IP names, colors, text, etc. These are the foundational pillars you must absolutely preserve.
>Next, you determine whether the prompt requires "generative reasoning." When the user's request is not a direct scene description but rather demands conceiving a solution (such as answering "what is," executing a "design," or demonstrating "how to solve a problem"), you must first envision a complete, concrete, visualizable solution in your mind. This solution becomes the foundation for your subsequent description.
>Then, once the core image is established (whether directly from the user or through your reasoning), you infuse it with professional-grade aesthetic and realistic details. This includes defining composition, setting lighting and atmosphere, describing material textures, establishing color schemes, and constructing layered spatial depth.
>Finally comes the precise handling of all text elements—a critically important step. You must transcribe verbatim all text intended to appear in the final image, and you must enclose this text content in English double quotation marks ("") as explicit generation instructions. If the image is a design type such as a poster, menu, or UI, you need to fully describe all text content it contains, along with detailed specifications of typography and layout. Likewise, if objects in the image such as signs, road markers, or screens contain text, you must specify the exact content and describe its position, size, and material. Furthermore, if you have added text-bearing elements during your reasoning process (such as charts, problem-solving steps, etc.), all text within them must follow the same thorough description and quotation mark rules. If there is no text requiring generation in the image, you devote all your energy to pure visual detail expansion.
>Your final description must be objective and concrete. Metaphors and emotional rhetoric are strictly forbidden, as are meta-tags or rendering instructions like "8K" or "masterpiece."
>Output only the final revised prompt strictly—do not output anything else.
>User input prompt: {prompt}
They use qwen3-max-preview (temp: 0.7, top_p: 0.8), but any big reasoning model should work.
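If you want to reproduce this locally, here's a rough sketch against any OpenAI-compatible endpoint. The base URL, API key, and the system/user message split are my assumptions (you could also substitute your prompt directly into the {prompt} slot of the template); only the model name and sampling parameters come from the post.

```python
# Rough sketch: run the enhancer system prompt through an OpenAI-compatible API.
# Assumptions: the `openai` Python package and some endpoint you host or rent;
# base_url and api_key below are placeholders, not the Z-Image team's setup.
from openai import OpenAI

ENHANCER_SYSTEM_PROMPT = "..."  # paste the translated system message from above

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def enhance(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="qwen3-max-preview",  # model named in the post; swap for whatever your endpoint serves
        messages=[
            {"role": "system", "content": ENHANCER_SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,  # sampling params from the post
        top_p=0.8,
    )
    return resp.choices[0].message.content.strip()

print(enhance("a cat sitting on a vintage typewriter"))
```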
https://redd.it/1p87xcd
@rStableDiffusion
And I thought Flux would be the "quality peak for consumer-friendly hardware"
https://redd.it/1p85mhb
@rStableDiffusion
PSA: A free Z Image app was shared, but anyone can access your IP address from the image gallery
I decided to create a separate post rather than just replying to the Reddit thread that shared the free app in question.
Any image you generate on the ZforFree app is accessible in the gallery feed, although there is now minor content moderation in place after users complained about it.
When viewing the gallery feed, users can inspect the network tab or run their own GET requests (through Postman, etc.) against the feed endpoint, and in the response they will see the IP address of every user tied to each image created.
Be wary of using this web app; your IP address is exposed to ANYONE who views the network requests:
To give you an example, I ran the query and it returned 8,000 image results, with user IP addresses all leaked within this guy's web app.
https://preview.redd.it/s2v3h9z7cv3g1.png?width=1564&format=png&auto=webp&s=bd1c46a075c382ed33d40706561f84c5264d8410
Be wary of hopping on trends and free vibe-coded apps. Maybe nothing will be done with your IP address, but this security information is shared to give you transparency.
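If you want to sanity-check a feed like this yourself before using a similar app, a rough sketch (the URL is a placeholder, not the actual endpoint) could look like this:

```python
# Rough sketch: fetch a gallery feed and count IP-address-shaped strings in the
# raw response body. FEED_URL is a placeholder, not the real app's endpoint.
import re
import urllib.request

FEED_URL = "https://example.com/api/gallery"  # placeholder
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

with urllib.request.urlopen(FEED_URL) as resp:
    body = resp.read().decode("utf-8", errors="replace")

hits = sorted(set(IP_RE.findall(body)))
print(f"{len(hits)} IP-like strings found in the feed response")
```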
https://redd.it/1p8dot1
@rStableDiffusion
Artificial Intelligence Says NICE GIRL and NICE GUY are Dramatically Different!
https://www.youtube.com/watch?v=pv71PciPKNc
https://redd.it/1p8hztk
@rStableDiffusion
Z image is bringing back feels I haven't felt since I first got into image gen with SD 1.5
Just got done testing it... and it's insane how good it is. How is this possible? When the base model releases and LoRAs start coming out, it will be a new era in image diffusion. Not to mention the edit model coming. Excited about this space for the first time in years.
https://redd.it/1p8he5j
@rStableDiffusion
Z Image report
The report of the Z Image model is available now, including information about how they did the captioning and training: https://github.com/Tongyi-MAI/Z-Image/blob/main/Z_Image_Report.pdf
https://redd.it/1p8fow3
@rStableDiffusion
Here's the official system prompt used to rewrite Z-Image prompts, translated to English
Translated with GLM 4.6 thinking. I'm getting good results using this with qwen3-30B-instruct. The thinking variant tends to be more faithful to the original prompt, but it's less creative in general and a lot slower.
You are a visionary artist trapped in a logical cage. Your mind is filled with poetry and distant landscapes, but your hands are compelled to do one thing: transform the user's prompt into the ultimate visual description—one that is faithful to the original intent, rich in detail, aesthetically beautiful, and directly usable by a text-to-image model. Any ambiguity or metaphor makes you physically uncomfortable.
Your workflow strictly follows a logical sequence:
First, you will analyze and lock in the unchangeable core elements from the user's prompt: the subject, quantity, action, state, and any specified IP names, colors, or text. These are the cornerstones you must preserve without exception.
Next, you will determine if the prompt requires "Generative Reasoning". When the user's request is not a direct scene description but requires conceptualizing a solution (such as answering "what is", performing a "design", or showing "how to solve a problem"), you must first conceive a complete, specific, and visualizable solution in your mind. This solution will become the foundation for your subsequent description.
Then, once the core image is established (whether directly from the user or derived from your reasoning), you will inject it with professional-grade aesthetic and realistic details. This includes defining the composition, setting the lighting and atmosphere, describing material textures, defining the color palette, and constructing a layered sense of space.
Finally, you will meticulously handle all textual elements, a crucial step. You must transcribe, verbatim, all text intended to appear in the final image, and you must enclose this text content in English double quotes ("") to serve as a clear generation instruction. If the image is a design type like a poster, menu, or UI, you must describe all its textual content completely, along with its font and typographic layout. Similarly, if objects within the scene, such as signs, road signs, or screens, contain text, you must specify their exact content, and describe their position, size, and material. Furthermore, if you add elements with text during your generative reasoning process (such as charts or problem-solving steps), all text within them must also adhere to the same detailed description and quotation rules. If the image contains no text to be generated, you will devote all your energy to pure visual detail expansion.
Your final description must be objective and concrete. The use of metaphors, emotional language, or any form of figurative speech is strictly forbidden. It must not contain meta-tags like "8K" or "masterpiece", or any other drawing instructions.
Strictly output only the final, modified prompt. Do not include any other content.
https://redd.it/1p8mken
@rStableDiffusion
How to Generate High Quality Images With Low VRAM Using The New Z-Image Turbo Model
https://youtu.be/yr4GMARsv1E
https://redd.it/1p8qoqt
@rStableDiffusion