Good AI video generators that have a "mid frame"?
So I've been using Pixverse to create videos because it has a start, mid, and end frame option, but I'm kind of struggling to get a certain aspect down.
For simplicity's sake, say I'm trying to make a video of a character punching another character.
Start frame: Both characters in stances against each other
Mid frame: Still of one character's fist colliding with the other character
End frame: Aftermath still of the punch with the character knocked back
From what I can tell, whatever happens before the mid frame and whatever happens after it seem to be generated separately and spliced together without using each other for context; there is no constant momentum carried over the mid frame. As a result, there is a short period where the fist slows down until it is barely moving as it touches the other character, and after the mid frame the fist doesn't move.
Anyone figured out a way to preserve momentum before and after a frame you want to use?
https://redd.it/1ot3da3
@rStableDiffusion
UniLumos: Fast and Unified Image and Video Relighting
https://github.com/alibaba-damo-academy/Lumos-Custom?tab=readme-ov-file
So many new releases set off my 'wtf are you talking about?' klaxon, so I've tried to paraphrase their jargon. Apologies if I've misinterpreted it.
What does it do?
UniLumos is a relighting framework for both images and videos: it takes foreground objects, reinserts them into other backgrounds, and relights them to suit the new background. In effect, it's an intelligent green-screen cutout that also grades the film.
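The repo ships its own inference scripts, so treat the following purely as a mental model of the pipeline described above (cut out the foreground, composite it onto a new background, relight to match). Here `estimate_mask` and `relight_to_background` are placeholder callables for whatever segmentation and relighting models you plug in; they are not UniLumos functions.

from PIL import Image
import numpy as np

def composite_and_relight(fg_path, bg_path, estimate_mask, relight_to_background):
    # Toy pipeline: cut the foreground out, paste it over a new background,
    # then adjust its lighting to match that background. Both images are
    # assumed to share the same resolution.
    fg = np.asarray(Image.open(fg_path).convert("RGB"), dtype=np.float32) / 255
    bg = np.asarray(Image.open(bg_path).convert("RGB"), dtype=np.float32) / 255
    mask = estimate_mask(fg)[..., None]        # (H, W, 1) soft alpha in [0, 1]
    fg_relit = relight_to_background(fg, bg)   # stand-in for the relighting model
    out = mask * fg_relit + (1.0 - mask) * bg  # alpha-composite onto the new background
    return Image.fromarray((out * 255).astype(np.uint8))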
iS iT fOr cOmFy? aNd wHeN?
No, and ask on GitHub, you lazy scamps.
Is it any good?
Like all AI, it's a tool for specific uses: some will work and some won't, and if you try extreme examples, prepare to eat a box of 'Disappointment Donuts'. The examples (on GitHub) are there to show the relighting, not context.
Original
Processed
https://redd.it/1ota9tc
@rStableDiffusion
Is there a way to edit photos inside ComfyUI? Like a Photoshop node or something?
https://redd.it/1otdzku
@rStableDiffusion
Ovi 1.1 is now 10 seconds
https://reddit.com/link/1otllcy/video/gyspbbg91h0g1/player
Ovi 1.1 is now 10 seconds! In addition:
1. We have simplified the audio description tags from
<AUDCAP>Audio description here<ENDAUDCAP>
to
Audio: Audio description here
This makes prompt editing much easier.
2. We will also release a new 5-second base model checkpoint that was retrained using higher quality, 960x960p resolution videos, instead of the original Ovi 1.0 that was trained using 720x720p videos. The new 5-second base model also follows the simplified prompt above.
3. The 10-second video was trained using full bidirectional dense attention instead of a causal or AR approach, to ensure generation quality.
We will release both the 10-second and new 5-second weights very soon on our GitHub repo: https://github.com/character-ai/Ovi
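If you already have a stock of Ovi 1.0 prompts, converting the old tag format to the simplified one is a one-line regex. This helper is not part of Ovi's code, just a quick sketch based on the tag formats described above:

import re

# Rewrite Ovi 1.0 audio tags into the simplified Ovi 1.1 format:
# "<AUDCAP>thunder and heavy rain<ENDAUDCAP>" -> "Audio: thunder and heavy rain"
def convert_audio_tags(prompt: str) -> str:
    return re.sub(r"<AUDCAP>\s*(.*?)\s*<ENDAUDCAP>", r"Audio: \1", prompt, flags=re.DOTALL)

print(convert_audio_tags("A storm rolls in. <AUDCAP>thunder and heavy rain<ENDAUDCAP>"))
# -> A storm rolls in. Audio: thunder and heavy rain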
https://redd.it/1otllcy
@rStableDiffusion
The simplest workflow for Qwen-Image-Edit-2509 that simply works
I tried Qwen-Image-Edit-2509 and got the expected result. My workflow was actually simpler than standard, as I removed all of the image resize nodes. In fact, you shouldn’t use any resize node, since TextEncodeQwenImageEditPlus automatically resizes all connected input images (nodes_qwen.py, lines 89–96):
if vae is not None:
    total = int(1024 * 1024)
    scale_by = math.sqrt(total / (samples.shape[3] * samples.shape[2]))
    width = round(samples.shape[3] * scale_by / 8.0) * 8
    height = round(samples.shape[2] * scale_by / 8.0) * 8
    s = comfy.utils.common_upscale(samples, width, height, "area", "disabled")
    ref_latents.append(vae.encode(s.movedim(1, -1)[:, :, :, :3]))
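If you want to know what resolution the node will actually feed the model for a given input, the same arithmetic can be replayed outside ComfyUI. This is a standalone sketch mirroring the snippet above, not a ComfyUI node:

import math

# Mirror the auto-resize in TextEncodeQwenImageEditPlus: scale the image so its
# area is roughly 1024*1024, then snap each side to a multiple of 8.
def qwen_edit_target_size(width: int, height: int) -> tuple[int, int]:
    scale_by = math.sqrt((1024 * 1024) / (width * height))
    return round(width * scale_by / 8.0) * 8, round(height * scale_by / 8.0) * 8

print(qwen_edit_target_size(1920, 1080))  # (1368, 768)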
This screenshot example shows where I directly connected the input images to the node. It addresses most of the comments, potential misunderstandings, and complications mentioned in the other post.
Image editing (changing clothes) using the Qwen-Image-Edit-2509 model
https://redd.it/1otityx
@rStableDiffusion
[Release] New ComfyUI node – Step Audio EditX TTS
🎙️ ComfyUI-Step_Audio_EditX_TTS: Zero-Shot Voice Cloning + Advanced Audio Editing
**TL;DR:** Clone any voice from 3-30 seconds of audio, then edit emotion, style, speed, and add effects—all while preserving voice identity. State-of-the-art quality, now in ComfyUI.
Currently recommended: 10-18 GB VRAM
[GitHub](https://github.com/Saganaki22/ComfyUI-Step_Audio_EditX_TTS) | [HF Model](https://huggingface.co/stepfun-ai/Step-Audio-EditX) | [Demo](https://stepaudiollm.github.io/step-audio-editx/) | [HF Spaces](https://huggingface.co/spaces/stepfun-ai/Step-Audio-EditX)
---
This one brings Step Audio EditX to ComfyUI – state-of-the-art zero-shot voice cloning and audio editing. Unlike typical TTS nodes, this gives you two specialized nodes for different workflows:
[Clone on the left, Edit on the right](https://preview.redd.it/p33fzzhrzh0g1.png?width=1331&format=png&auto=webp&s=c5db8c5950bacd3b1ae91050bb26de52bb29b30c)
# What it does:
**🎤 Clone Node** – Zero-shot voice cloning from just 3-30 seconds of reference audio
* Feed it any voice sample + text transcript
* Generate unlimited new speech in that exact voice
* Smart longform chunking for texts over 2000 words (auto-splits and stitches seamlessly; see the sketch after this list for the general idea)
* Perfect for character voices, narration, voiceovers
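The node's actual chunking code lives in the repo; as a rough illustration of the general idea (split on sentence boundaries under a word budget, clone each chunk with the same reference audio, then stitch), here is a minimal sketch. The word limit and the `synthesize` callable are placeholders, not the node's API.

import re

# Illustrative longform chunking: split text on sentence boundaries so no chunk
# exceeds a word budget, synthesize each chunk against the same reference audio,
# then return the segments for stitching.
def chunk_text(text, max_words=2000):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        words = len(sentence.split())
        if current and count + words > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sentence)
        count += words
    if current:
        chunks.append(" ".join(current))
    return chunks

def clone_longform(text, reference_audio, synthesize):
    # One pass per chunk; concatenate the returned audio segments with your
    # audio library of choice (same sample rate assumed throughout).
    return [synthesize(chunk, reference_audio) for chunk in chunk_text(text)]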
**🎭 Edit Node** – Advanced audio editing while preserving voice identity
* **Emotions**: happy, sad, angry, excited, calm, fearful, surprised, disgusted
* **Styles**: whisper, gentle, serious, casual, formal, friendly
* **Speed control**: faster/slower with multiple levels
* **Paralinguistic effects**: `[Laughter]`, `[Breathing]`, `[Sigh]`, `[Gasp]`, `[Cough]`
* **Denoising**: clean up background noise or remove silence
* Multi-iteration editing for stronger effects (1=subtle, 5=extreme)
[voice clone + denoise & edit style exaggerated 1 iteration / float32](https://reddit.com/link/1otsbfb/video/m1c8m1nd5i0g1/player)
[voice clone + edit emotion admiration 1 iteration / float32](https://reddit.com/link/1otsbfb/video/dczqvi6vai0g1/player)
# Performance notes:
* Getting solid results on RTX 4090 with bfloat16 (~11-14 GB VRAM for clone, ~14-18 GB for edit)
* Current quantization support (int8/int4) available but with quality trade-offs
* **Note: We're waiting on the Step AI research team to release official optimized quantized models for better lower-VRAM performance – will implement them as soon as they drop!**
* Multiple attention mechanisms (SDPA, Eager, Flash Attention, Sage Attention)
* Optional VRAM management – keeps model loaded for speed or unloads to free memory
# Quick setup:
* Install via ComfyUI Manager (search "Step Audio EditX TTS") or manually clone the repo
* Download **both** Step-Audio-EditX and Step-Audio-Tokenizer from HuggingFace
* Place them in `ComfyUI/models/Step-Audio-EditX/`
* Full folder structure and troubleshooting in the README
# Workflow ideas:
* Clone any voice → edit emotion/style for character variations
* Clean up noisy recordings with denoise mode
* Speed up/slow down existing audio without pitch shift
* Add natural-sounding paralinguistic effects to generated speech
[Advanced workflow with Whisper / transcription, clone + edit](https://preview.redd.it/wkc39r900i0g1.png?width=1379&format=png&auto=webp&s=557b8a0893fcbbb58dd957c299d8a3f8d6bed8e9)
The README has full parameter guides, VRAM recommendations, example settings, and troubleshooting tips. Works with all ComfyUI audio nodes.
If you find it useful, drop a ⭐ on GitHub
https://redd.it/1otsbfb
@rStableDiffusion
Best service to rent a GPU and run ComfyUI and other tools for making LoRAs and image/video generation?
I’m looking for recommendations on the best GPU rental services. Ideally, I need something that charges only for actual compute time, not for every minute the GPU is connected.
Here’s my situation: I work on two PCs, and often I’ll set up a generation task, leave it running for a while, and come back later. So if the generation itself takes 1 hour and then the GPU sits idle for another hour, I don’t want to get billed for 2 hours of usage — just the 1 hour of actual compute time.
Does anyone know of any GPU rental services that work this way? Or at least something close to that model?
https://redd.it/1ou3g8v
@rStableDiffusion
Why are there no 4-step LoRAs for Chroma?
Schnell (which Chroma is based on) is a fast 4-step model, and Flux Dev has multiple 4-8 step LoRAs available. Wan and Qwen also have 4-step LoRAs. The currently available flash LoRAs for Chroma are made by one person and, as far as I know, are just extractions from the Chroma Flash models (although there is barely any info on this), so how come nobody else has made a faster lightning LoRA for Chroma?
Both the Chroma Flash model and the flash LoRAs barely speed up generation, as they need at least 16 steps but work best with 20-24 steps (or sometimes more), which at that point is just regular generation time. However, for some reason they usually make outputs more stable and better (very good for art specifically).
So is there some kind of architectural difficulty with Chroma that makes it impossible to speed it up more? That would be weird since it is basically Flux.
https://redd.it/1ou4ynv
@rStableDiffusion
"Nowhere to go" Short Film (Wan22 I2V ComfyUI)
https://youtu.be/2CACps38HQI
https://redd.it/1oua5v2
@rStableDiffusion
YouTube: 174 | "Nowhere to go" | Short Film (Wan22 I2V ComfyUI) [4K]
Inputs - SDXL
Video - Wan 2.2 14b I2V (first-to-last frame interpolation) via ComfyUI
100% AI generated with local open source models
@ Heavy users, professionals and others w/ a focus on consistent generation: How do you deal with the high frequency of new model releases?
* Do you test every supposedly ‘better’ model to see if it works for your purposes?
* If so, how much time do you invest in testing/evaluating?
* Or do you stick to a model and get the best out of it?
https://redd.it/1ouajdf
@rStableDiffusion
"Nowhere to go" Short Film (Wan22 I2V ComfyUI)
https://youtu.be/2CACps38HQI
https://redd.it/1oua616
@rStableDiffusion