r/StableDiffusion – Telegram
LTX 2 can generate 20 seconds of video at once, with audio. They said they will open-source the model soon

https://redd.it/1ofqikk
@rStableDiffusion
Pony 7 weights released. Yet this image tells you everything about it

https://preview.redd.it/cpi9frjr0bxf1.png?width=1280&format=png&auto=webp&s=2ee6f038b91dd912cba295e024c7e21a65f46943

n3ko, 2girls, (yamato_\(one piece\)), (yae_miko), cat ears, pink makeup, tall, mature, seductive, standing, medium_hair, pink green glitter glossy sheer neck striped jumpsuit, lace-up straps, green_eyes, highres, absurdres, (flat colors:1.1), flat background

https://redd.it/1ofzf8n
@rStableDiffusion
FlashPack: High-throughput tensor loading for PyTorch

https://github.com/fal-ai/flashpack

FlashPack is a new, high-throughput file format and loading mechanism for PyTorch that makes model checkpoint I/O blazingly fast, even on systems without access to GPU Direct Storage (GDS).

With FlashPack, loading any model can be 3–6× faster than current state-of-the-art methods like accelerate or the standard load_state_dict() and to() flow — all wrapped in a lightweight, pure-Python package that works anywhere.
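For context, the sketch below shows the conventional loading path the 3–6× claim is measured against: read tensors from disk to CPU, copy them into the module, then copy the whole model to the GPU. The model and checkpoint path here are stand-ins for illustration; FlashPack's own API is documented in the linked repo, not shown here.

```python
import time
import torch
import torch.nn as nn

# Stand-in module and checkpoint so the snippet is self-contained;
# a real diffusion checkpoint follows the same path.
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))
torch.save(model.state_dict(), "demo_ckpt.pt")

# Conventional flow (assumes a CUDA device): disk -> CPU tensors ->
# copy into the module -> a second full host-to-device copy via .to().
# This is the multi-hop path FlashPack claims to be 3-6x faster than.
start = time.perf_counter()
state_dict = torch.load("demo_ckpt.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.to("cuda")
torch.cuda.synchronize()
print(f"baseline load: {time.perf_counter() - start:.3f}s")
```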

https://redd.it/1og1toy
@rStableDiffusion
What's the big deal about Chroma?

I am trying to understand why people are excited about Chroma. For photorealistic images I get malformed faces, generation takes too long, and the quality is merely okay.

I use ComfyUI.

What is the use case of Chroma? Am I using it wrong?

https://redd.it/1ogbkm1
@rStableDiffusion
Genuine question: why is no one using Hunyuan Video?

I'm seeing most people using WAN only. Also, LoRA support for Hunyuan I2V seems not to exist at all?
I would have tested both of them myself, but I doubt my PC can handle it. So are there specific reasons why WAN is so much more widely used, and why there is barely any support for Hunyuan (I2V)?

https://redd.it/1oge14v
@rStableDiffusion
Beginner of a few weeks here. I always have trouble loading other users' workflows: there is always something missing, and I often have a hard time tracking down the missing nodes myself (I find some with ComfyUI Manager or a Google search, but sometimes not). Any tips from long-time users?

https://redd.it/1oghw3m
@rStableDiffusion
DGX Spark Benchmarks (Stable Diffusion edition)

tl;dr: The DGX Spark is around 3.1× slower than an RTX 5090 for diffusion tasks.

I happened to procure a DGX Spark (the Asus Ascent GX10 variant). This is a cheaper variant of the DGX Spark, costing ~US$3k; the price reduction was achieved by swapping out the PCIe 5.0 4TB NVMe disk for a PCIe 4.0 1TB one.

Profiling this variant with llama.cpp shows that, despite the cost reduction, GPU and memory-bandwidth performance appears comparable [to the regular DGX Spark baseline](https://github.com/ggml-org/llama.cpp/discussions/16578).

./llama-bench -m ./gpt-oss-20b-mxfp4.gguf -fa 1 -d 0,4096,8192,16384

ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GB10, compute capability 12.1, VMM: yes
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | pp2048 | 3639.61 ± 9.49 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | tg32 | 81.04 ± 0.49 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | pp2048 @ d4096 | 3382.30 ± 6.68 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | tg32 @ d4096 | 74.66 ± 0.94 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | pp2048 @ d8192 | 3140.84 ± 15.23 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | tg32 @ d8192 | 69.63 ± 2.31 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | pp2048 @ d16384 | 2657.65 ± 6.55 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | tg32 @ d16384 | 65.39 ± 0.07 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | pp2048 @ d32768 | 2032.37 ± 9.45 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 2048 | 1 | tg32 @ d32768 | 57.06 ± 0.08 |

Now on to the benchmarks focusing on diffusion models. Because the DGX Spark is more compute-oriented, this is one of the few cases where it can have an advantage over its competitors, such as AMD's Strix Halo and Apple Silicon.

Involved systems:

* DGX Spark, 128GB coherent unified memory, Phison NVMe 1TB, DGX OS (6.11.0-1016-nvidia)
* AMD 5800X3D, 96GB DDR4, RTX5090, Samsung 870 QVO 4TB, Windows 11 24H2

Benchmarks were conducted using ComfyUI against the following models:

* Qwen Image Edit 2509 with 4-step LoRA (fp8_e4m3fn)
* Illustrious model (SDXL)
* SD3.5 Large (fp8_scaled)
* WAN 2.2 T2V with 4-step LoRA (fp8_scaled)

All tests were done using the workflow templates available directly from ComfyUI, except for the Illustrious model, which was a random model I took from civitai for "research" purposes.

**ComfyUI Setup**

* DGX Spark: Using v0.3.66. Flags: --use-flash-attention --highvram
* RTX 5090: Using v0.3.66, Windows build. Default settings.

**Render Duration (First Run)**

During the first execution the model is not yet cached in memory, so it must be loaded from disk. Here the Asus Ascent's significantly slower disk may influence the model load time, so we would expect the actual retail DGX Spark to be faster in this regard.

The following chart shows the time taken, in seconds, to complete a batch of size 1.

[Chart: render duration in seconds (lower is better)]
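As an aside, the first-run vs. cached-run gap is easy to demonstrate in isolation: the first read of a large checkpoint comes off the disk, while a repeat read is served mostly from the OS page cache. A minimal sketch (the file name is a placeholder, and it assumes the file is not already cached):

```python
import time

# Placeholder path; substitute any multi-GB checkpoint file.
PATH = "wan2.2_t2v_fp8_scaled.safetensors"

def timed_read(path: str) -> float:
    """Time a raw sequential read of the whole file in 64 MiB chunks."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(64 << 20):
            pass
    return time.perf_counter() - start

cold = timed_read(PATH)  # first run: data comes off the disk
warm = timed_read(PATH)  # second run: served from the OS page cache
print(f"cold {cold:.2f}s, warm {warm:.2f}s, ratio {cold / warm:.1f}x")
```

On a slow PCIe 4.0 disk like the Ascent's, the cold number dominates first-run model load time, which is why the retail DGX Spark's faster NVMe should narrow this particular gap.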