"Outrage" Short AI Animation (Wan22 I2V ComfyUI)
https://youtu.be/-HeVTeniWv8
https://redd.it/1pjzs92
@rStableDiffusion
"Outrage" Short AI Animation
Input images - Flux.1
Video - Wan 2.2 14B I2V + VACE clip joiner + Wan 2.2 creative upscale, via ComfyUI
100% AI Generated with local open source models
____________________________________________
Let me know your feedback…
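For anyone curious how these stages chain together outside a ComfyUI graph, here is a minimal two-stage sketch using the diffusers library. The model ids, prompts, and parameters are assumptions, not the author's workflow; the VACE clip joiner and creative-upscale passes are omitted, and the Wan repo id shown is the 2.1 I2V checkpoint (the post uses Wan 2.2).

```python
# Hypothetical sketch of the image -> video hand-off; the real workflow is a ComfyUI graph.
import torch
from diffusers import FluxPipeline, WanImageToVideoPipeline
from diffusers.utils import export_to_video

# Stage 1: text -> keyframe image with a Flux checkpoint.
flux = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
flux.enable_model_cpu_offload()  # helps on consumer VRAM
frame = flux("an enraged crowd at dusk, cinematic",
             height=480, width=832).images[0]

# Stage 2: keyframe -> short clip with a Wan 14B I2V checkpoint.
wan = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16)
wan.enable_model_cpu_offload()
clip = wan(image=frame, prompt="slow push-in, the crowd surges",
           num_frames=81, guidance_scale=5.0).frames[0]
export_to_video(clip, "clip.mp4", fps=16)
```

Longer pieces like this one are built by generating many such clips and joining them, which is where the VACE clip joiner comes in.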
Z-Image first generation time
Hi, I'm using ComfyUI/Z-Image with a 3060 (12 GB VRAM) and 16 GB RAM. Whenever I change my prompt, the first generation takes 250-350 seconds, but subsequent generations with the same prompt are much faster, around 25-60 seconds.
Is there a way to make that first generation equally fast? Since others haven't reported this, is it something with my machine (not enough RAM, etc.)?
https://redd.it/1pk13tx
@rStableDiffusion
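A likely factor: the first run includes reading the checkpoint from disk into RAM/VRAM (and with only 16 GB of system RAM, possibly OS swapping), while later runs reuse the cached weights. Below is a minimal timing sketch that separates the load cost from the sampling cost, using standard diffusers calls; the model id is a placeholder, not a confirmed Z-Image repo.

```python
# Timing sketch: separate weight loading from sampling.
import time
import torch
from diffusers import DiffusionPipeline

MODEL_ID = "your-org/your-model"  # placeholder; point at the checkpoint you use

def timed(label, fn):
    t0 = time.perf_counter()
    out = fn()
    print(f"{label}: {time.perf_counter() - t0:.1f}s")
    return out

# Cold cost: reading several GB of weights from disk (plus any swapping).
pipe = timed("load checkpoint", lambda: DiffusionPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16))
pipe.enable_model_cpu_offload()  # stages submodules on/off a 12 GB card

# Warm cost: text encoding + sampling only, weights already cached.
timed("generation 1", lambda: pipe("test prompt"))
timed("generation 2", lambda: pipe("test prompt"))
```

If the load step dominates, the fix is faster storage or more RAM rather than any sampler setting.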
Old footage upscale/restoration: how to? SeedVR2 doesn't work for old footage
https://redd.it/1pk4m9m
@rStableDiffusion
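Whichever restoration model ends up working, the scaffolding is usually the same: split the clip into frames, process each frame, re-encode. A sketch of that scaffold, assuming ffmpeg and Pillow are installed; restore_frame() is a stand-in (plain Lanczos upscale) where the actual restoration model would go.

```python
# Generic per-frame restoration scaffold: split -> process -> re-encode.
import subprocess
from pathlib import Path
from PIL import Image

SRC, FPS = "old_footage.mp4", 24  # set FPS to match the source

def restore_frame(src, dst, scale=2):
    # Stand-in: plain Lanczos 2x upscale. Swap in a real restoration
    # model here; this placeholder won't fix noise, scratches, etc.
    img = Image.open(src)
    img.resize((img.width * scale, img.height * scale),
               Image.LANCZOS).save(dst)

# 1. Dump the clip to lossless PNG frames.
Path("frames").mkdir(exist_ok=True)
subprocess.run(["ffmpeg", "-i", SRC, "frames/%06d.png"], check=True)

# 2. Restore every frame.
Path("restored").mkdir(exist_ok=True)
for f in sorted(Path("frames").glob("*.png")):
    restore_frame(f, Path("restored") / f.name)

# 3. Re-encode the restored frames into a video.
subprocess.run(["ffmpeg", "-y", "-framerate", str(FPS),
                "-i", "restored/%06d.png",
                "-c:v", "libx264", "-pix_fmt", "yuv420p",
                "restored.mp4"], check=True)
```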