r/StableDiffusion – Telegram
Using AI relationship prompts to shape Stable Diffusion concept brainstorming

I’ve been trying a method where I use structured prompt brainstorming to clarify ideas before generating images.
Focusing on narrative and emotional cues helps refine visual concepts like mood and character expression.
Breaking prompts into smaller descriptive parts seems to improve composition and detail in outputs.
It’s been interesting to see how organizing ideas textually influences the end result.
Curious how others prepare concepts before feeding them into generation pipelines.

https://redd.it/1rao5jk
@rStableDiffusion
Just returned from mid-2025, what's the recommended image gen local model now?

I stopped doing image gen back in mid-2025 and have now come back to have fun with it again.

Last time I was here, the top recommended models that don't require beefy high-end builds (ahem, Flux) were WAI-Illustrious and NoobAI (the v-pred one?).

I scoured this subreddit a bit and saw some people recommending Chroma and Anima. Are these the new recommended models?

And can they use old LoRAs (the way NoobAI can load Illustrious LoRAs)? I have some LoRAs in Pony, Illustrious, and NoobAI versions. Can they use any of them?

https://redd.it/1rambdn
@rStableDiffusion
lora-gym update: local GPU training for WAN LoRAs

Update on lora-gym (github.com/alvdansen/lora-gym) — added local training support.

Running on my A6000 right now. Same config structure, same hyperparameters, same dual-expert WAN 2.2 handling. No cloud setup required.

Currently validated on 48GB VRAM.

https://redd.it/1ravptl
@rStableDiffusion
I built the first Android app in the world that detects AI content locally and offline, over any app, using a Quick Tile

https://redd.it/1raxdg6
@rStableDiffusion
FLUX2 Klein 9B LoKR Training – My Ostris AI Toolkit Configuration & Observations

I’d like to share my current Ostris AI Toolkit configuration for training FLUX2 Klein 9B LoKR, along with some structured insights that have worked well for me. I’m quite satisfied with the results so far and would appreciate constructive feedback from the community.

Step & Epoch Strategy

Here’s the formula I’ve been following:

• Assume you have N images (example: 32 images).

• Save every (N × 3) steps

→ 32 × 3 = 96 steps per save

• Total training steps = (Save Steps × 6)

→ 96 × 6 = 576 total steps

In short:

• Multiply your dataset size by 3 → that’s your checkpoint save interval.

• Multiply that result by 6 → that’s your total training steps (a short sketch of this arithmetic follows below).
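To make the arithmetic concrete, here is a minimal Python sketch of the rule above. The function name is mine, and the epoch estimate assumes batch size 1 (one step = one image), which is consistent with the example numbers:

```python
def lokr_step_plan(n_images: int, save_mult: int = 3, total_mult: int = 6):
    """Checkpoint interval and total steps per the 'N x 3, then x 6' rule."""
    save_steps = n_images * save_mult       # checkpoint save interval
    total_steps = save_steps * total_mult   # total training steps
    epochs = total_steps / n_images         # implied epochs, assuming batch size 1
    return save_steps, total_steps, epochs

# The example from above: 32 images
print(lokr_step_plan(32))  # (96, 576, 18.0)
```

At batch size 1 this works out to 18 epochs with a checkpoint every 3 epochs, so the epoch 13–16 sweet spot described below lands on the checkpoint saved at epoch 15.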

Training Behavior Observed

• Noticeable improvements typically begin around epoch 12–13

• Best balance achieved between epoch 13–16

• Beyond that, gains appear marginal in my tests

Results & Observations

• Reduced character bleeding

• Strong resemblance to the trained character

• Decent prompt adherence

• LoKR strength works well at power = 1

Overall, this setup has given me consistent and clean outputs with minimal artifacts.

I’m open to suggestions, constructive criticism, and genuine feedback. If you’ve experimented with different step scaling or alternative strategies for Klein 9B, I’d love to hear your thoughts so we can refine this configuration further.

Here is the config: https://pastebin.com/sd3xE2Z3

Note: this configuration was tested on an RTX 5090. Depending on your GPU (especially lower-VRAM cards), you may need to adjust certain parameters such as batch size, resolution, gradient accumulation, or total steps to ensure stability and optimal performance.
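For the lower-VRAM case, one standard adjustment is to trade per-step batch size for gradient accumulation so the effective batch size stays constant. This is a generic sketch of that trade-off, not something taken from the linked config; the numbers are illustrative:

```python
def fit_batch_to_vram(effective_batch: int, max_batch_on_gpu: int):
    """Pick batch_size and grad_accum so batch_size * grad_accum
    still equals the effective batch size the config was tuned for."""
    batch_size = max_batch_on_gpu
    grad_accum = -(-effective_batch // batch_size)  # ceiling division
    return batch_size, grad_accum

# e.g. a run tuned at an effective batch of 4, on a card that only fits batch 1:
print(fit_batch_to_vram(4, 1))  # (1, 4)
```

Resolution works similarly: activation memory scales roughly with pixel count, so stepping down from 1024 to 768 frees a large chunk of VRAM without changing the step plan.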

https://redd.it/1rayrbj
@rStableDiffusion