Version 1.0: The Easiest Way to Train Wan 2.2 LoRAs (Under $5)
https://github.com/obsxrver/wan22-lora-training
If you’ve been wanting to train your own Wan 2.2 video LoRAs but are intimidated by the hardware requirements, the endless parameter tweaking, or the installation nightmare, I built a solution that handles it all for you.
https://preview.redd.it/8avncmwwbb2g1.png?width=875&format=png&auto=webp&s=71f66d615d269a03af89744285543476c7ab880e
This is currently the easiest, fastest, and cheapest way to get a high-quality training run done.
Why this method?
Zero Setup: No installing Python, CUDA, or hunting for dependencies. You launch a pre-built Vast.AI template, and it's ready in minutes.
Full WebUI: Drag-and-drop your videos/images, edit captions, and click "Start." No terminal commands required.
Extremely Cheap: You can rent a dual RTX 5090 node, train a full LoRA in 2-3 hours, and auto-shutdown. Total cost is usually under $5.
Auto-Save: It automatically uploads your finished LoRA to your Cloud Storage (Google Drive/S3/Dropbox) and kills the instance so you don't pay for a second longer than necessary.
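For the curious, the auto-save step boils down to roughly the following minimal Python sketch. It assumes an rclone remote named gdrive, an authenticated vastai CLI, and an environment variable carrying the instance ID; the template's actual script, paths, and variable names may differ.

```python
# Minimal sketch of the auto-save step (assumptions: an rclone remote called
# "gdrive" is configured, the vastai CLI is logged in, and the template
# exposes the rented instance's ID via an environment variable).
import os
import subprocess

OUTPUT_DIR = "/workspace/output"      # where the trained LoRA files land (assumed path)
REMOTE = "gdrive:wan22-loras"         # rclone remote:folder (assumed name)

# 1. Upload every finished LoRA to cloud storage.
subprocess.run(
    ["rclone", "copy", OUTPUT_DIR, REMOTE, "--include", "*.safetensors"],
    check=True,
)

# 2. Destroy the instance so billing stops the moment the upload finishes.
instance_id = os.environ.get("INSTANCE_ID")   # variable name is an assumption
if instance_id:
    subprocess.run(["vastai", "destroy", "instance", instance_id], check=True)
```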
How it works:
1. Click the Vast.AI template link (in the repo).
2. Open the WebUI in your browser.
3. Upload your dataset and press Train.
4. Come back in an hour to find your LoRA in your Google Drive.
It supports both Text-to-Video and Image-to-Video, and optimizes for dual-GPU setups (training High/Low noise simultaneously) to cut training time in half.
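Conceptually, the dual-GPU split just pins one training run to each GPU, since the high-noise and low-noise experts are separate models. A rough, hypothetical sketch (the real entry point and its arguments live in the repo and will differ):

```python
# Hypothetical illustration of the dual-GPU split: one process per GPU, one
# noise level each. "train_lora.sh" and its flags are placeholders, not the
# template's real entry point.
import os
import subprocess

jobs = []
for gpu, noise in ((0, "high"), (1, "low")):
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)}
    cmd = ["bash", "train_lora.sh", "--noise-level", noise]  # placeholder command
    jobs.append(subprocess.Popen(cmd, env=env))

# Both runs proceed in parallel, so total wall time is roughly that of one run.
for job in jobs:
    job.wait()
```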
Repo + Template Link:
https://github.com/obsxrver/wan22-lora-training
Let me know if you have questions.
https://redd.it/1p1puml
@rStableDiffusion
A LoRA for migrating characters into scenes
https://preview.redd.it/csium62eye2g1.png?width=2217&format=png&auto=webp&s=f768ad1c26423cb63435f42aa904494aa8dcfe53
https://preview.redd.it/hq5g80ifye2g1.png?width=6509&format=png&auto=webp&s=d306d61880fb3ad31ee28656502938097a3dc20d
https://preview.redd.it/8bmhpf5gye2g1.png?width=6134&format=png&auto=webp&s=69629ea3f65beb4d59e4ab1532b9024de1b7213f
https://preview.redd.it/0lixjergye2g1.png?width=5727&format=png&auto=webp&s=b1cd9df101639a61bf93ce0a696fca11c28cd2b0
https://preview.redd.it/f3b8bhrgye2g1.png?width=2450&format=png&auto=webp&s=d84fdb2028527b833834a2d933e221203ae5ac20
https://preview.redd.it/wcwolqfhye2g1.png?width=3848&format=png&auto=webp&s=67704d46a0fc69706298d6a26426cc61f37387c4
I used Qwen Image Edit 2509 + the RoleScene Blend LoRA on an RTX 5090 to migrate the characters below into the scene in about 30 seconds.
You can download the model here: https://civitai.com/models/2142049/rolescene-blend
Use the workflow I built here: https://www.runninghub.ai/post/1991385798813790209
You can register using my invitation link: https://www.runninghub.ai/?inviteCode=t0lfdxyz
Here is my teaching video, currently only in Chinese: https://www.bilibili.com/video/BV1afCfBFEJG/?spm_id_from=333.1387.homepage.video_card.click&vd_source=ae85ec1de21e4084d40c5d4eec667b8f
https://redd.it/1p233zo
@rStableDiffusion
Is InstantID + Canny still the best method in 2025 for generating consistent LoRA reference images?
Hey everyone,
I’m building a LoRA for a custom female character and I need around 10–20 consistent face images (different angles, lighting, expressions, etc.). I’m planning to use the InstantID + Canny ControlNet workflow in ComfyUI.
Before I finalize my setup, I want to ask:
1. Is InstantID + Canny still the most reliable method in 2025 for producing identity-consistent images for LoRA training?
2. Are there any improved workflows (InstantID + Depth, FaceID, or new consistency nodes) that give better results?
3. Does anyone have a ComfyUI graph or recommended settings they can share?
4. Anything I should avoid when generating reference shots (lighting, resolution, negative prompts, etc.)?
I’m aiming for high identity consistency (90%+), so any updated advice from 2025 users would really help.
Thanks!
https://redd.it/1p22zbb
@rStableDiffusion
How do I stop female characters from dancing and bouncing their boobs in WAN 2.2 video?
Every time I include a reference character of a woman, she just starts dancing and her boobs start bouncing for literally no reason. The prompt I used for one of the videos is "the woman pulls out a gun and aims at the man," but while aiming the gun she just started doing TikTok dances and furiously shaking her hips.
I included "dancing, tiktok dances, shaking hips," etc. in the negative prompt, but it doesn't seem to have any effect.
Edit: I'm using the Wan smooth mix checkpoint. Does that affect the motion that much? The characters only bounce and dance when they are 3D models, real women just follow the prompt.
https://redd.it/1p26ebl
@rStableDiffusion
Any suggestions on getting images that look like they came from a Sears Portrait Studio?
https://redd.it/1p261vr
@rStableDiffusion
Meowbstract: Chroma 1 HD + Qwen VL, randomizing 4 abstract artists in each prompt and generating a prompt that blends their styles together.
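The randomization step amounts to something like the sketch below (the artist list, prompt template, and function name are my own illustrative placeholders; in the actual workflow Qwen VL writes the final blended prompt):

```python
# Illustrative sketch only: pick 4 abstract artists at random and build a
# style-blending prompt. The artist list and wording are placeholders, not
# the ones used for the Meowbstract series.
import random

ARTISTS = [
    "Wassily Kandinsky", "Piet Mondrian", "Joan Miró",
    "Mark Rothko", "Jackson Pollock", "Hilma af Klint",
]

def meowbstract_prompt(subject: str = "a cat") -> str:
    picks = random.sample(ARTISTS, 4)
    return (f"abstract painting of {subject}, blending the styles of "
            f"{', '.join(picks[:-1])} and {picks[-1]}")

print(meowbstract_prompt())
```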
https://redd.it/1p2ef2k
@rStableDiffusion