Just a quick PSA. Delete your ComfyUI prefs after big updates.
I noticed that the new theme looked quite different from the copy I had made (I had set it to show nodes as boxes), and thought to myself that the default settings might be different now too.
So I deleted my prefs and, sure enough, a lot of the strange issues I was having just disappeared. I just wish I had done this before filling out the survey... some of my complaints won't make sense to them 🤦
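If you'd rather not delete things outright, here's a minimal back-up-then-reset sketch. The path is an assumption (recent ComfyUI builds keep frontend settings under user/default/ in the install folder); check where your own install stores them before running anything like this.

```python
# Reset ComfyUI's saved UI settings while keeping a backup, so the next launch
# regenerates the defaults. The path below is an assumption (recent ComfyUI builds
# keep frontend settings in user/default/comfy.settings.json); adjust for your install.
from pathlib import Path

COMFY_DIR = Path.home() / "ComfyUI"                   # assumed install location
PREFS = COMFY_DIR / "user" / "default" / "comfy.settings.json"

if PREFS.exists():
    backup = PREFS.with_name(PREFS.name + ".bak")     # e.g. comfy.settings.json.bak
    PREFS.rename(backup)                              # move aside instead of deleting
    print(f"Moved {PREFS.name} to {backup.name}; restart ComfyUI for fresh defaults")
else:
    print(f"No settings file found at {PREFS}")
```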
https://redd.it/1plp7ye
@rStableDiffusion
Announcing The Release of Qwen 360 Diffusion, The World's Best 360° Text-to-Image Model
https://www.reddit.com/gallery/1pltzay
https://redd.it/1plv6ry
@rStableDiffusion
Announcing The Release of Qwen 360 Diffusion, The World's Best 360° Text-to-Image Model
https://redd.it/1pltzay
@rStableDiffusion
The official training script of Z-image base has been released. The model might be released pretty soon.
https://redd.it/1plv6yj
@rStableDiffusion
Creating data I couldn't find when I was researching: Pro 6000, 5090, 4090, 5060 benchmarks
Both when I was upgrading from my 4090 to my 5090 and from my 5090 to my RTX Pro 6000, I couldn't find solid data on how Stable Diffusion would perform. So I decided to fix that as best I could with some benchmarks. Perhaps it will help you.
I'm also SUPER interested in hearing from anyone with an RTX Pro 6000 Max-Q, so I can compare it and add it to the data. The benchmark workflows are mostly based on the ComfyUI default workflows for ease of reproduction, with a few tiny changes. I'll link them below.
Testing methodology was to run once to pre-cache everything (so I'm testing the cards more directly, not the PCIe lanes or hard drive speed), then run three times and take the average. Total runtime is pulled from the ComfyUI queue (so it includes things like image writing and is a little more true to life for your day-to-day generations), while it/s is pulled from the console output. I also monitored GPU usage and power draw to make sure the cards were not getting bottlenecked.
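For anyone who wants to reproduce the timing part, here's a rough sketch of that warm-up-then-average loop. run_workflow() is a hypothetical stand-in for however you actually trigger a generation (e.g. queueing the workflow through the ComfyUI API), not a real ComfyUI call.

```python
# Rough timing harness: one warm-up pass so models are already cached on the GPU,
# then N timed passes averaged. run_workflow() is a placeholder for whatever
# actually queues the generation and waits for it to finish.
import time

def run_workflow():
    """Hypothetical stand-in: submit the workflow and block until the image is saved."""
    raise NotImplementedError

def benchmark(runs: int = 3) -> float:
    run_workflow()                        # warm-up: loads/caches models, excluded from timing
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        run_workflow()
        times.append(time.perf_counter() - start)
    avg = sum(times) / len(times)
    print(f"runs: {[f'{t:.1f}s' for t in times]}, average: {avg:.1f}s")
    return avg
```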
https://preview.redd.it/p7n8gpz5i17g1.png?width=1341&format=png&auto=webp&s=46c58aac5f862826001d882a6fd7077b8cf47c40
https://preview.redd.it/p2e7otbgl17g1.png?width=949&format=png&auto=webp&s=4ece8d0b9db467b77abc9d68679fb1d521ac3568
Some interesting observations here:
- The Pro 6000 can be significantly (1.5x) faster than a 5090
- Overall a 5090 seems to be around 30% faster than a 4090
- In terms of total power used per generation, the RTX Pro 6000 is by far the most power efficient.
I also wanted to see what power level I should run my cards at. Almost everything I read says "Turn down your power to 90/80/50%! It's almost the same speed and you use half the power!"
https://preview.redd.it/vjdu878aj17g1.png?width=925&format=png&auto=webp&s=cb1069bc86ec7b85abd4bdd7e1e46d17c46fdadc
https://preview.redd.it/u2wdsxebj17g1.png?width=954&format=png&auto=webp&s=54d8cf06ab378f0d940b3d0b60717f8270f2dee1
This appears not to be true. For both the pro and consumer cards, I'm seeing a nearly linear loss in performance as you turn down the power.
Fun fact: At about 300 watts, the Pro 6000 is nearly as fast as the 5090 at 600W.
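To make the "total power used per generation" point concrete: energy per image is just average draw times runtime, so if the power-limited Pro 6000 finishes in roughly the same time as the 5090 while pulling half the watts, it uses roughly half the energy. A tiny sketch with illustrative placeholder numbers (not values read off the charts):

```python
# Energy per generation = average power draw (W) x generation time (s).
# The numbers here are illustrative placeholders, not measurements from the charts.
def energy_wh(power_watts: float, seconds: float) -> float:
    return power_watts * seconds / 3600.0      # joules -> watt-hours

gen_time = 30.0                                # hypothetical: both cards finish in ~30 s
print(f"5090 @ 600 W:     {energy_wh(600, gen_time):.2f} Wh per image")   # 5.00 Wh
print(f"Pro 6000 @ 300 W: {energy_wh(300, gen_time):.2f} Wh per image")   # 2.50 Wh
```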
And finally, I was curious about fp16 vs fp8, especially when I started running into ComfyUI offloading the model on the 5060. This needs to be explored more thoroughly, but here's my data for now:
https://preview.redd.it/0cdgw1i9k17g1.png?width=1074&format=png&auto=webp&s=776679497a671c4de3243150b4d826b6853d85b4
In my very limited experimentation, switching from fp16 to fp8 on a Pro 6000 was only a 4% speed increase. Switching on the 5060 Ti and allowing the model to run on the card only came in at 14% faster, which surprised me a little. I think the new Comfy architecture must be doing a really good job with offload management.
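For context on why fp8 matters on the smaller card at all, the win is mostly memory: each weight drops from two bytes to one, which is what lets the model stay resident instead of being offloaded. A generic PyTorch sketch of just the size arithmetic (this is not ComfyUI's actual load path, the parameter count is a made-up example, and float8_e4m3fn needs a reasonably recent PyTorch):

```python
# fp16 -> fp8 roughly halves weight memory, which is the main reason a mid-range card
# can keep the whole model resident. Generic PyTorch illustration, not ComfyUI's
# actual load path; requires a PyTorch version with float8_e4m3fn support.
import torch

params = 12_000_000_000                        # hypothetical ~12B-parameter diffusion model
w16 = torch.zeros(1024, dtype=torch.float16)
w8 = w16.to(torch.float8_e4m3fn)               # storage cast only; compute support varies by GPU

print(f"fp16: {w16.element_size()} byte/param -> ~{params * w16.element_size() / 1e9:.0f} GB")
print(f"fp8 : {w8.element_size()} byte/param -> ~{params * w8.element_size() / 1e9:.0f} GB")
```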
Benchmark workflows download (mostly the default ComfyUI workflows, with any changes noted on the spreadsheet):
http://dl.dropboxusercontent.com/scl/fi/iw9chh2nsnv9oh5imjm4g/SD_Benchmarks.zip?rlkey=qdzy6hdpfm50d5v6jtspzythl&st=fkzgzmnr&dl=0
https://redd.it/1plwzwg
@rStableDiffusion