Help: How to do SFT on Wan2.2-I2V-A14B while keeping Lightning’s distillation speedups?
Hi everyone, I’m working with Wan2.2-I2V-A14B for image-to-video generation, and I’m running into issues when trying to combine SFT with the Lightning acceleration.
# Setup / context
Base model: Wan2.2-I2V-A14B.
Acceleration: Lightning LoRA.
Goal: Do SFT on Wan2.2 for my own dataset, without losing the speedup brought by Lightning.
# What I’ve tried
1. Step 1: SFT on vanilla Wan2.2
- I used DiffSynth-Studio to fine-tune Wan2.2 with a LoRA.
- After training, this LoRA alone works reasonably well when applied to Wan2.2 (no Lightning).
2. Step 2: Add Lightning on top of the SFT LoRA
- At inference time, I then stacked the Lightning LoRA on top of the SFT LoRA.
- The result is very bad: quality drops sharply, and there are strange colors in the video.
So simply “SFT first, then slap the Lightning LoRA on top” clearly doesn’t work in my case.
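For anyone wondering what I mean by “stacking”: both LoRAs just add their low-rank updates onto the same base weights, so the SFT update and the Lightning update can interfere. Here is a minimal plain-PyTorch sketch with toy shapes (the ranks, alphas, and scales are made up, not the real Wan2.2 values):

```python
# Minimal sketch of what "stacking" two LoRAs does to a single weight matrix:
# the effective weight is the base weight plus BOTH low-rank updates.
# Toy shapes and made-up ranks/scales, not the real Wan2.2 values.
import torch

def lora_delta(up: torch.Tensor, down: torch.Tensor, alpha: float, scale: float) -> torch.Tensor:
    """Standard LoRA update: scale * (alpha / rank) * (up @ down)."""
    rank = down.shape[0]
    return scale * (alpha / rank) * (up @ down)

W_base = torch.randn(1024, 1024)                                  # one linear layer of the DiT
sft_up, sft_down = torch.randn(1024, 16), torch.randn(16, 1024)   # my SFT LoRA (rank 16, illustrative)
lit_up, lit_down = torch.randn(1024, 64), torch.randn(64, 1024)   # Lightning LoRA (rank 64, assumed)

s_sft, s_lightning = 1.0, 1.0                                     # per-LoRA strengths at inference
W_eff = (W_base
         + lora_delta(sft_up, sft_down, alpha=16, scale=s_sft)
         + lora_delta(lit_up, lit_down, alpha=64, scale=s_lightning))
```

Lowering `s_sft` at inference is the obvious knob to turn, but I’d rather make the SFT LoRA compatible with Lightning in the first place.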
# What I want to do
My intuition is that Lightning should be active during training, so that the model learns under the same accelerated setup it will use at inference. In other words, I want to:
- Start from Wan2.2 + Lightning
- Then run SFT on top of that
But here is the problem: I haven’t found a clean way to do SFT on “Wan2.2 + Lightning” together. DiffSynth-Studio seems to assume you fine-tune a single base model, not a base model plus a pre-existing LoRA. And the scheduler might be a hindrance as well.
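The cleanest workaround I can think of is to bake the Lightning LoRA into the base weights first, then let DiffSynth-Studio fine-tune the merged checkpoint as if it were a plain base model. Below is a rough, framework-agnostic sketch of the merge; the file names and the `lora_up`/`lora_down`/`alpha` key naming are assumptions that would need to be adapted to the actual tensor names in the Lightning file, and since Wan2.2-A14B has separate high-noise and low-noise experts (each with its own Lightning LoRA), the merge would have to be repeated per expert.

```python
# Rough sketch: merge a Lightning-style LoRA into the base weights so the result
# can be fine-tuned like a normal base model. File names and the
# lora_up/lora_down/alpha key naming are ASSUMPTIONS; adapt to the real files.
import torch
from safetensors.torch import load_file, save_file

base = load_file("wan2.2_i2v_a14b_high_noise.safetensors")   # assumed file name
lora = load_file("lightning_high_noise_lora.safetensors")    # assumed file name
scale = 1.0                                                  # strength you would use at inference

for key in list(lora.keys()):
    if not key.endswith(".lora_down.weight"):
        continue
    down = lora[key].float()                                     # (rank, in_features)
    up = lora[key.replace(".lora_down.", ".lora_up.")].float()   # (out_features, rank)
    rank = down.shape[0]
    alpha_key = key.replace(".lora_down.weight", ".alpha")
    alpha = float(lora[alpha_key]) if alpha_key in lora else rank
    # How the LoRA key maps back to the base weight depends on how the LoRA was
    # exported; this simple replace is a guess.
    base_key = key.replace(".lora_down.weight", ".weight")
    delta = scale * (alpha / rank) * (up @ down)                 # W' = W + s * (alpha/rank) * B @ A
    base[base_key] = (base[base_key].float() + delta).to(base[base_key].dtype)

save_file(base, "wan2.2_i2v_a14b_high_noise_lightning_merged.safetensors")
```

That would at least sidestep the “base + pre-existing LoRA” limitation. The remaining open question is the training schedule itself: as far as I understand, Lightning is distilled for very few sampling steps with CFG effectively disabled, and I’m not sure plain SFT with the default noise schedule keeps that property intact.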
# Questions
So I’m looking for advice from anyone who has fine-tuned Wan2.2 with Lightning and kept the speedups after SFT.
https://redd.it/1oz9d1p
@rStableDiffusion
As of today, what is the best cloud AI Image Generator?
I’m looking for a good AI image gen with:
- A wide variety of models (Flux, Kontext, Illustrious, etc.)
- The ability to import my own LoRAs
- A clean and easy-to-use UI/UX for a smooth experience
- Upscaling
- No restrictive safety filters
- Some unexpected or extra useful features
Preferably at a fair and reasonable price!
What would you say is the best option?
https://redd.it/1ozkn46
@rStableDiffusion
A spotlight (quick finding tool) for ComfyUI
quite possibly the most important QOL plugin of the year.
tl;dr - find anything, anywhere, anytime.
https://preview.redd.it/op4op4fsm21g1.png?width=1068&format=png&auto=webp&s=41731f6781e16b9fc93e89454726aec49a0a4d31
The (configurable) hotkeys are Control+Shift+Space, Control+K, or (if you are lazy) just /.
https://github.com/sfinktah/ovum-spotlight or search for `spotlight` in Comfy Manager.
Hold down Shift while scrolling to have the graph scroll with you to the highlighted node; that includes going inside subgraphs!
Want to find where you set the width to 480? Just search for `width:480`
Want to know what 16/9 is? Search for `math 16/9`
Want to find out where "link 182" is? Search for `link 182`
Want to jump to a node inside a subgraph by number? Search for `123:456:111` and you can go straight there.
Want to write your own extensions? It's supported, and there are examples.
https://redd.it/1ozmay9
@rStableDiffusion
ULTIMATE AI VIDEO WORKFLOW — Qwen-Edit 2509 + Wan Animate 2.2 + SeedVR2
https://redd.it/1ozqjtn
@rStableDiffusion
Next level Realism with Qwen Image is now possible after new realism LoRA workflow - Top images are new realism workflow - Bottom ones are older default - Full tutorial published - 4+4 Steps only - Check oldest comment for more info
https://redd.it/1ozuzdx
@rStableDiffusion