Exploring how prompt templates improve AI chatbot prompts for Stable Diffusion workflows
I’ve been experimenting with different AI chatbot prompt structures to help generate better Stable Diffusion input text.
Some templates help refine ideas before translating them to text-to-image prompts.
Others guide consistency and style when working with multiple models or versions.
I’m curious how others in this subreddit think about pre-prompt strategy for image generation.
What techniques do you use to make prompt design more reliable and creative?
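To make "template" concrete, here is a minimal, entirely hypothetical sketch of the kind of fixed slot structure I mean; the field names (`subject`, `style`, `quality_tags`) are illustrative, not from any particular tool:

```python
# Hypothetical prompt template: fixed slots keep SD prompts consistent
# across models and versions. Slot names are illustrative only.
TEMPLATE = "{subject}, {style}, {quality_tags}"

def build_prompt(subject,
                 style="digital painting",
                 quality_tags="highly detailed, sharp focus"):
    """Fill the fixed template so every prompt follows the same structure."""
    return TEMPLATE.format(subject=subject,
                           style=style,
                           quality_tags=quality_tags)

print(build_prompt("a lighthouse at dusk"))
# a lighthouse at dusk, digital painting, highly detailed, sharp focus
```

The point is less the code than the discipline: a chatbot can fill slots far more reliably than it can freestyle a whole prompt.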
https://redd.it/1qz6ku4
@rStableDiffusion
Update Comfy for Anima - potential inference speed up
Just updated my Comfy portable, because why not. For some reason, I'm seeing a massive speed-up for Anima (using an FP8 version): on my 2080 it got around 70% faster. No idea what the update was, or whether it only matters on older hardware, but I thought I'd share the happy news. If anyone knows what caused this, I'd be interested to hear what they did!
https://redd.it/1qz6c0q
@rStableDiffusion
Simple, Effective and Fast Z-Image Headswap for characters V1
https://redd.it/1qz9lzb
@rStableDiffusion
I tested 11 AI image detectors on 1000+ images including SD 3.5. Here are the results.
Just finished my largest test yet: **10 AI image detectors** tested on 1000+ images, 10,000 checks in total.
# Key findings for Stable Diffusion users:
**The detectors that catch SD images best:**
|Detector|Overall Accuracy|False Positive Rate|
|:-|:-|:-|
|TruthScan|94.75%|0.80%|
|SightEngine|91.34%|1.20%|
|Was It AI|84.95%|7.97%|
|MyDetector|83.85%|5.50%|
**The detectors that struggle:**
|Detector|Overall Accuracy|Notes|
|:-|:-|:-|
|HF AI-image-detector|16.22%|Misses 75% of AI images|
|HF SDXL-detector|60.53%|Despite being trained for SDXL|
|Decopy|65.42%|Misses over 1/3 of AI content|
# The False Positive Problem
This is where it gets interesting for photographers and mixed-media artists:
* **Winston AI** flags **23.24%** of real photos as AI — nearly 1 in 4
* **AI or Not** flags **21.54%** — over 1 in 5
* **TruthScan** only flags **0.80%** — best in class
If you're using SD for art and worried about detection, know that:
1. The top detectors (TruthScan, SightEngine) will likely catch modern SD outputs
2. Some platforms use less accurate detectors — your mileage may vary
3. HuggingFace open-source detectors perform significantly worse than commercial ones
Test your own images: [https://aidetectarena.com/check](https://aidetectarena.com/check) — runs all available detectors simultaneously
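The "runs all available detectors simultaneously" part can be sketched as a simple thread-pool fan-out; the detector list below comes from the tables above, but the `check()` stub is a hypothetical placeholder, not any service's real API:

```python
# Sketch only: send one image to several detectors in parallel.
# check() is a placeholder; a real version would call each service's API.
from concurrent.futures import ThreadPoolExecutor

DETECTORS = ["TruthScan", "SightEngine", "Was It AI"]

def check(detector, image_path):
    # Placeholder for a real API call to the named detection service.
    return (detector, f"checked {image_path}")

def check_all(image_path):
    """Run every detector concurrently and collect results by name."""
    with ThreadPoolExecutor(max_workers=len(DETECTORS)) as pool:
        futures = [pool.submit(check, d, image_path) for d in DETECTORS]
        return dict(f.result() for f in futures)

results = check_all("photo.png")
```

Fanning out in parallel matters here because each service is network-bound; sequential checks would take the sum of all latencies instead of roughly the slowest one.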
https://redd.it/1qz9qu4
@rStableDiffusion
How to make Anime AI Gifs/Videos using Stable Diffusion/ComfyUI?
Hello, is there anyone here who knows how to make anime AI gifs using either Web Forge UI or ComfyUI with Stable Diffusion, and who would be willing to sit down and go through it step by step with me? Literally every guide I have tried doesn't work and throws a ton of errors. I would really appreciate it; I just don't know what to do anymore, and I know I need help.
https://redd.it/1qzfpz2
@rStableDiffusion
LTX-2 I2V. This one took me a few days to get right: I kept trying T2V, and the model kept adding a phantom third person on the bike, missing limbs, and bodies fused with the bike. It was hilarious; I2V fixed it. Heart Mula was used for the song, klein9b for the image.
https://redd.it/1qzffop
@rStableDiffusion
Got tired of waiting for Qwen 2512 ControlNet support, so I made it myself! Feedback needed.
After waiting forever for native support, I decided to just build it myself.
Good news for Qwen 2512 fans: The Qwen-Image-2512-Fun-Controlnet-Union model now works with the default ControlNet nodes in ComfyUI.
No extra nodes. No custom nodes. Just load it and go.
I've submitted a PR to the main ComfyUI repo: https://github.com/Comfy-Org/ComfyUI/pull/12359
Those who love Qwen 2512 can now have a lot more creative freedom. Enjoy!
https://redd.it/1qzht5h
@rStableDiffusion
GitHub
Add working Qwen 2512 ControlNet (Fun ControlNet) support by krigeta · Pull Request #12359 · Comfy-Org/ComfyUI
Summary
This PR adds full support for Qwen 2.5 Fun ControlNet format, enabling ControlNet functionality for Qwen image generation models.
Related Issues
Closes Please add support Qwen-Image-2512-F...