"Prison City" Short AI Film (Wan22 I2V ComfyUI)
https://youtu.be/DqpOvi1ZOyk
https://redd.it/1p5j7ig
@rStableDiffusion
Inputs - Flux
Video - Wan 2.2 14b I2V (First-to-last frame interpolation) via ComfyUI
100% AI Generated with local open source models
____________________________________________
Let me know your feedback in the comments, and consider giving…
600k 1mp+ dataset
https://huggingface.co/datasets/opendiffusionai/cc12m-1mp_plus-realistic
I previously posted some higher-resolution datasets, but they only got up to around 200k images.
I dug deeper, including 1mp-sized (1024x1024 or greater) images from CC12M, which brings the image count up to 600k.
Disclaimer: The quality is not as good as some of our hand-curated datasets. But... when you need large amounts of data, you have to make sacrifices sometimes. sigh.
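The 1mp+ cutoff described above (1024x1024 or greater) is just a resolution filter. As a minimal sketch of that pass, assuming you have metadata records with image dimensions (the records below are hypothetical; the dataset's actual filtering pipeline isn't shown in the post):

```python
def is_1mp_plus(width: int, height: int, min_side: int = 1024) -> bool:
    """Cutoff as stated in the post: 1024x1024 or greater on both sides."""
    return width >= min_side and height >= min_side

# Hypothetical metadata records: (image_id, width, height)
records = [("a", 1536, 1024), ("b", 800, 1200), ("c", 2048, 1365)]
kept = [r for r in records if is_1mp_plus(r[1], r[2])]
print(len(kept))  # "b" fails the 1024 minimum on width, so 2 survive
```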
https://redd.it/1p5kjuk
@rStableDiffusion
Is 16+8 GB of VRAM and 32 GB of RAM enough for Wan 2.2?
Just bought a 5060 Ti and will be using my 3060 Ti in the secondary slot.
So I have this question.
I don't really want to buy more RAM right now because I'm on AM4; I was thinking of upgrading the whole system at the end of next year.
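For a rough sense of the question, the weight-only footprint of a 14B model at common precisions is simple back-of-envelope arithmetic (a sketch only; real usage also needs room for activations, the text encoder, and the VAE, and ComfyUI can offload blocks to system RAM):

```python
def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight-only footprint in GiB; ignores activations and other models."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# Wan 2.2 14B at a few common precisions
for name, bpp in [("fp16", 2.0), ("fp8", 1.0), ("Q4 GGUF", 0.5)]:
    print(f"Wan 2.2 14B @ {name}: ~{weights_gb(14, bpp):.1f} GiB")
```

By this estimate the fp8 weights (~13 GiB) fit on the 16 GB card with some headroom, while fp16 (~26 GiB) would require offloading.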
https://redd.it/1p5h1zp
@rStableDiffusion
Hunyuan 1.5 step-distilled LoRAs are out.
https://huggingface.co/Comfy-Org/HunyuanVideo_1.5_repackaged/tree/main/split_files/loras
It seems to work with the T2V 720p model as well, though results may differ from the dedicated 720p LoRA when that comes out. Using it with euler/beta, LoRA strength 1, CFG 1, and 4-8 steps works.
I get gen times as low as the following (non-cold start, after the model is loaded and the prompt is processed):
>6/6 [00:28<00:00, 4.81s/it]
>Prompt executed in 47.89 seconds
With a 3080 and the FP16 model, 49 frames at 640×480, no SageAttention or fast accumulation, since the individual iterations are already quite fast and the VAE decoding takes up a decent share of the time.
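The quoted numbers show that sampling is only part of the wall-clock total; the rest is mostly VAE decode plus loading overhead. A quick breakdown using just the figures above:

```python
steps, sec_per_it = 6, 4.81
total = 47.89  # the "Prompt executed in" wall-clock time

sampling = steps * sec_per_it  # ~28.9 s spent in the sampler
other = total - sampling       # ~19.0 s: VAE decode plus misc overhead
print(f"sampling: {sampling:.2f}s, non-sampling: {other:.2f}s ({other / total:.0%} of total)")
```

Roughly 40% of the run is outside the sampling loop, which is why skipping SageAttention/fast accumulation costs relatively little here.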
https://redd.it/1p5ou1s
@rStableDiffusion