Z-Image first generation time
Hi, I'm using ComfyUI/Z-Image with a 3060 (12 GB VRAM) and 16 GB RAM. Any time I change my prompt, the first generation takes 250-350 seconds, but subsequent generations with the same prompt are much faster, around 25-60 seconds.
Is there a way to make the first generation equally short? Since others haven't posted about this, is it something with my machine (not enough RAM, etc.)?
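A plausible explanation, assuming the delay comes from (re)loading the text encoder rather than from sampling: ComfyUI caches node outputs, so rerunning the same prompt reuses the already-computed conditioning, while a changed prompt forces the multi-gigabyte text encoder back through 16 GB of system RAM, often with swapping. The sketch below mimics that caching behavior; `_load_text_encoder` and the toy embedding are illustrative stand-ins, not ComfyUI's real API.

```python
import time

_cond_cache: dict[str, list[float]] = {}

def _load_text_encoder():
    # Stand-in for paging a multi-GB text encoder off disk; on a 16 GB RAM
    # machine this load (plus swapping) is the step that can take minutes.
    time.sleep(0.1)  # placeholder for the slow load
    return lambda prompt: [float(ord(c)) for c in prompt]  # toy "conditioning"

def encode_prompt(prompt: str) -> list[float]:
    """Only a cache miss (i.e. a changed prompt) pays the load cost."""
    if prompt not in _cond_cache:       # new prompt -> slow path
        encoder = _load_text_encoder()
        _cond_cache[prompt] = encoder(prompt)
    return _cond_cache[prompt]          # repeated prompt -> fast path
```

If that is indeed the bottleneck, more system RAM, keeping the model files on a fast SSD, or a smaller quantized text encoder (if one is available for your setup) should shrink the first-run time, while repeat runs stay roughly where they are.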
https://redd.it/1pk13tx
@rStableDiffusion
Old footage upscaling/restoration: how to? SeedVR2 doesn't work for old footage
https://redd.it/1pk4m9m
@rStableDiffusion
Z-Image-Turbo + SeedVR2 = banger (zoom in!)
https://preview.redd.it/kvpr5uei9n6g1.png?width=4000&format=png&auto=webp&s=82d0369801caaafb5e3d21c4d6cf054a3f67163c
https://preview.redd.it/hex51uei9n6g1.png?width=4000&format=png&auto=webp&s=363c60204482429c65b97c77e3a96c4698eed661
https://preview.redd.it/on950uei9n6g1.png?width=4000&format=png&auto=webp&s=45993c5640fe5b4b05a59ee0a8d501c868ea8128
https://preview.redd.it/rpl5ruei9n6g1.png?width=4000&format=png&auto=webp&s=d498cd45dd70119ebf0bed69a7d7a729d4321932
https://preview.redd.it/msp9cuei9n6g1.png?width=4000&format=png&auto=webp&s=514783c9e54df59f68b8a1c4d8adea52efb35940
https://preview.redd.it/lqvlouei9n6g1.png?width=4000&format=png&auto=webp&s=782804d573beb9d374031118b95d4ba83b3611e8
Crazy what you can do these days on limited VRAM.
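For context, the title suggests a two-stage flow: render with the few-step Z-Image-Turbo model, then run the result through SeedVR2 for restoration/upscaling (hence the 4000 px previews). A minimal sketch of that idea, with placeholder functions rather than either model's real API:

```python
from PIL import Image

def z_image_turbo(prompt: str) -> Image.Image:
    # Placeholder: a few-step base render; in practice this would be a
    # ComfyUI workflow or the model's own inference code.
    return Image.new("RGB", (1024, 1024), "gray")

def seedvr2_restore(img: Image.Image) -> Image.Image:
    # Placeholder: a SeedVR2-style restoration/upscale pass adding detail.
    return img.resize((4000, 4000))

final = seedvr2_restore(z_image_turbo("a portrait photo"))
```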
https://redd.it/1pk9u2p
@rStableDiffusion
What are the Z-Image Character Lora dataset guidelines and parameters for training
I am looking to start training character LoRAs for ZIT, but I am not sure how many images to use, how varied the angles should be, what the captions should look like, etc. I would be very thankful if you could point me in the right direction.
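No official Z-Image guidelines seem to exist yet, but the usual community baseline for character LoRAs is roughly 20-40 images with varied angles, lighting, and framing, each paired with a same-named .txt caption file (the kohya/musubi-style layout). A small sanity-check sketch assuming that layout; the folder path is hypothetical:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def check_dataset(root: str) -> None:
    """Verify every image has a matching caption file and report the count."""
    images = [p for p in Path(root).iterdir() if p.suffix.lower() in IMAGE_EXTS]
    missing = [p.name for p in images if not p.with_suffix(".txt").exists()]
    print(f"{len(images)} images (character sets often use ~20-40)")
    if missing:
        print("images missing captions:", *missing, sep="\n  ")

check_dataset("datasets/my_character")  # hypothetical path
```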
https://redd.it/1pjyzs4
@rStableDiffusion
Realtime Lora Trainer now supports Qwen Image / Qwen Edit, as well as Wan 2.2 for Musubi Trainer with advanced offloading options.
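On the offloading side, the usual trick behind options like this is block swapping: park most transformer blocks in CPU RAM and move each one to the GPU only while it runs, trading speed for VRAM. A minimal PyTorch sketch of the idea; Musubi Trainer's actual implementation and option names may differ:

```python
import torch
from torch import nn

class SwappedStack(nn.Module):
    """Run a stack of blocks with only one resident on the GPU at a time."""

    def __init__(self, blocks: nn.ModuleList, device: str = "cuda"):
        super().__init__()
        self.blocks = blocks.cpu()   # parked in system RAM
        self.device = device

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            block.to(self.device)    # page the block in
            x = block(x)
            block.to("cpu")          # page it back out, freeing VRAM
        return x

# Toy usage (requires a CUDA device):
stack = SwappedStack(nn.ModuleList(nn.Linear(64, 64) for _ in range(8)))
y = stack(torch.randn(1, 64, device="cuda"))
```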
https://redd.it/1pkdrzv
@rStableDiffusion