r/StableDiffusion – Telegram
Flux 2 character consistency is so damn good! My favorite model, hands down
https://redd.it/1p6l7nl
@rStableDiffusion
Just so you guys know, Flux 2 doesn’t allow spicy images or IP-infringing content as per their inference filters

This is from the Hugging Face page https://huggingface.co/black-forest-labs/FLUX.2-dev:

Inference filters. The repository for the FLUX.2 dev model includes filters for NSFW and IP-infringing content at input and output. Filters or manual review must be used with the model under the terms of the FLUX.2 dev Non-Commercial License. We may approach known deployers of the FLUX.2 dev model at random to verify that filters or manual review processes are in place. Additionally, we apply multiple filters to intercept text prompts, uploaded images, and output images on the API for FLUX.2 pro. We utilize both in-house and third-party supplied filters to prevent CSAM and NCII outputs, including filters provided by Hive and Microsoft. We provide filters for other categories of potentially harmful content, including gore, which can be adjusted by developers based on their specific risk profile and legitimate use cases.

So a local model is babying and policing what you can and can’t make 🙄
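
For anyone wondering what that requirement amounts to in practice, the pattern the license describes is: check the prompt on the way in and the image on the way out. Here is a minimal sketch with hypothetical placeholder functions (not BFL's actual filter code); a real deployment would plug the classifiers shipped in the FLUX.2-dev repo, or a service like Hive, into the two checkpoints:

```python
# Minimal sketch of the input/output filtering pattern the license text
# describes. The two is_allowed_* functions are hypothetical placeholders,
# not BFL's actual filter code; a real deployment would plug in the
# classifiers shipped with the FLUX.2-dev repo (or services like Hive)
# at these two checkpoints.

def is_allowed_prompt(prompt: str) -> bool:
    """Input-side check for NSFW / IP-infringing text (placeholder)."""
    raise NotImplementedError("plug in a prompt filter here")

def is_allowed_image(image) -> bool:
    """Output-side check on the generated image (placeholder)."""
    raise NotImplementedError("plug in an image filter here")

def generate_filtered(pipe, prompt: str):
    # 1) Intercept the text prompt before it reaches the model.
    if not is_allowed_prompt(prompt):
        raise ValueError("prompt rejected by input filter")
    # 2) Run generation as usual (diffusers-style pipeline call).
    image = pipe(prompt).images[0]
    # 3) Intercept the output image before returning it.
    if not is_allowed_image(image):
        raise ValueError("image rejected by output filter")
    return image
```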

https://redd.it/1p6o9tl
@rStableDiffusion
Flux 2 feels too big on purpose

Anyone else feel like Flux 2 is a bit too bloated for the quality of the images it generates? Feels like an attempt to get everyone to just use the API inference services instead of self-hosting.


Like, the main Flux 2 model at FP8 is 35 GB, plus 18 GB for the Mistral text encoder at FP8 = 53 GB. Compare that to Qwen Edit at FP8, which is 20.4 GB plus 8 GB for the vision model at FP8 = 28.4 GB total.
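
For reference, the napkin math with those figures (the poster's numbers, not verified against the actual repos):

```python
# Napkin math for the FP8 download sizes quoted above (the poster's
# figures, not verified against the actual repos).
flux2_fp8     = {"diffusion model": 35.0, "Mistral text encoder": 18.0}  # GB
qwen_edit_fp8 = {"diffusion model": 20.4, "vision/text encoder":   8.0}  # GB

print(f"Flux 2 FP8 total:    {sum(flux2_fp8.values()):.1f} GB")      # 53.0 GB
print(f"Qwen Edit FP8 total: {sum(qwen_edit_fp8.values()):.1f} GB")  # 28.4 GB
```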


Feels like I'll just wait for Nunchaku to release its version before switching, or just wait for the next Qwen Edit 2511 version; the current one already seems to perform about the same as Flux 2.

https://redd.it/1p70786
@rStableDiffusion