Disappointment about Qwen-Image-Layered
This is frustrating:
there is no control over the content of the layers (or I couldn't figure out how to tell the model what I wanted)
the fill quality is unsatisfactory
it requires a lot of resources
generation takes a long time
https://preview.redd.it/iopdkwemhc8g1.png?width=720&format=png&auto=webp&s=668fe36625d35ae3cf0a1f438d461f3323b92a84
https://preview.redd.it/npkw0tythc8g1.png?width=720&format=png&auto=webp&s=a5567878f9cc8df17aa56455b4c29b42be6a2c97
https://preview.redd.it/zfku2522ic8g1.png?width=720&format=png&auto=webp&s=4f3cb91ec1e23584237f5afcef4c88321fa592f1
2 layers (720×1024), 20 steps, time 16:25
https://preview.redd.it/th9bnivuhc8g1.png?width=368&format=png&auto=webp&s=1fb5380f2db0405ea68ecbb16d72f6663a949ffb
https://preview.redd.it/b8l97oavhc8g1.png?width=368&format=png&auto=webp&s=5566a98b32223a77e9e6450ddec7ea9d28ab68a8
https://preview.redd.it/62crq6ovhc8g1.png?width=368&format=png&auto=webp&s=a527cbcffc2c5a619b41f11e349167fb20971b0f
3 layers (368×512), 20 steps, time 07:04
I tested "Qwen_Image_Layered-Q5_K_M.gguf" because I don't have a very powerful computer.
https://redd.it/1prc89p
@rStableDiffusion
Let’s reconstruct and document the history of open generative media before we forget it
If you've been here for a while, you must have noticed how fast things change. Maybe you remember that in just the past 3 years we had AUTOMATIC1111, Invoke, text embeddings, IPAdapters, LyCORIS, Deforum, AnimateDiff, CogVideoX, etc. So many tools, models and techniques seemed to pop out of nowhere on a weekly basis, and many of them are now obsolete or deprecated.
Many people who have contributed to the community with models, LoRAs, and scripts, content creators who make free tutorials for everyone to learn from, and companies like Stability AI that released open source models, are now forgotten.
Personally, I’ve been here since the early days of SD1.5 and I’ve observed the evolution of this community together with the rest of the open source AI ecosystem. I’ve seen the impact that things like ComfyUI, SDXL, Flux, Wan, Qwen, and now Z-Image had on the community, and I’m noticing a shift towards things becoming more centralized, less open, less local. There are several reasons why this is happening: maybe models are becoming increasingly bigger, maybe unsustainable business models are dying off, maybe the people who contribute are burning out or getting busy with other stuff, who knows? ComfyUI is focusing more on developing its business side, Invoke was acquired by Adobe, Alibaba is keeping newer versions of Wan behind APIs, Flux is getting too big for local inference while hardware is getting more expensive…
In any case, I’d like to open this discussion for documentation purposes, so that we can collectively write about our experiences with this emerging technology over the past years. Feel free to write whatever you want about what attracted you to this community, what you enjoy about it, what impact it had on you personally or professionally, projects (even if small and obscure ones) that you engaged with, extensions/custom nodes you used, platforms, content creators you learned from, people like Kijai, Ostris and many others (write their names in your replies) that you might be thankful for, anything really.
I hope many of you can contribute to this discussion with your experiences so we can have a good common source of information, publicly available, about how open generative media evolved, and be in a better position to assess where it’s going.
https://redd.it/1prp3cz
@rStableDiffusion
Final Fantasy Tactics Style LoRA for Z-Image-Turbo - Link in description
https://redd.it/1prt5oj
@rStableDiffusion
I made a custom node that finds and selects images in a more convenient way.
https://redd.it/1pryutu
@rStableDiffusion
Z-Image Turbo with Lenovo UltraReal LoRA, SeedVR2 & Z-Image Prompt Enhancer
https://redd.it/1ps03qc
@rStableDiffusion