Update to Repo for My AI Toolkit Fork + New YAML Settings for I2V Motion Training
Hi, the PR has already been submitted to Ostris, but my last one still hasn't been looked at, so here is my fork repo:
[https://github.com/relaxis/ai-toolkit](https://github.com/relaxis/ai-toolkit)
Changes:
1. Automagic now trains a separate LR per LoRA (high noise and low noise) when it detects MoE training, and the LR values now print to the log and terminal. You can also give each LoRA its own optimizer parameters:
optimizer_params:
  lr_bump: 0.000005           # old
  min_lr: 0.000008            # old
  max_lr: 0.0003              # old
  beta2: 0.999
  weight_decay: 0.0001
  clip_threshold: 1
  high_noise_lr_bump: 0.00001 # new
  high_noise_min_lr: 0.00001  # new
  high_noise_max_lr: 0.0003   # new
  low_noise_lr_bump: 0.000005 # new
  low_noise_min_lr: 0.00001   # new
  low_noise_max_lr: 0.0003    # new
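To illustrate how the split plays out, here is a minimal Python sketch, assuming an Automagic-style per-parameter adaptive LR where each LoRA parameter group is tagged as high noise or low noise and gets bumped and clamped with its own values. The `is_high_noise` tag, the helper names, and the sign-agreement rule are illustrative assumptions, not the fork's actual code.

```python
# Minimal sketch (assumptions, not the fork's code): per-expert LR settings for
# an Automagic-style adaptive LR. The `is_high_noise` tag and the
# sign-agreement bump rule are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ExpertLRConfig:
    lr_bump: float  # step added/subtracted when adjusting the adaptive LR
    min_lr: float   # floor for the adaptive LR
    max_lr: float   # ceiling for the adaptive LR


# Mirrors the YAML above: the high-noise LoRA gets a larger bump than the low-noise one.
HIGH_NOISE = ExpertLRConfig(lr_bump=0.00001, min_lr=0.00001, max_lr=0.0003)
LOW_NOISE = ExpertLRConfig(lr_bump=0.000005, min_lr=0.00001, max_lr=0.0003)


def lr_config_for(group: dict) -> ExpertLRConfig:
    """Pick the LR settings for a parameter group based on which expert it belongs to."""
    return HIGH_NOISE if group.get("is_high_noise") else LOW_NOISE


def adjust_lr(current_lr: float, update_agrees: bool, cfg: ExpertLRConfig) -> float:
    """Bump the LR up when consecutive updates agree in sign, down when they flip,
    clamped to [min_lr, max_lr]."""
    proposed = current_lr + cfg.lr_bump if update_agrees else current_lr - cfg.lr_bump
    return min(cfg.max_lr, max(cfg.min_lr, proposed))


if __name__ == "__main__":
    group = {"name": "transformer_high_noise", "is_high_noise": True, "lr": 0.0001}
    group["lr"] = adjust_lr(group["lr"], update_agrees=True, cfg=lr_config_for(group))
    print(group["lr"])  # bumped toward max_lr using the high-noise settings
```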
2. Changed the resolution bucket logic: previously it followed SDXL bucketing, but now you can specify a pixel count per frame. Videos and images with larger dimensions can still be trained as long as they fit within that pixel budget, which allows higher-resolution, low-VRAM video training below your cutoff resolution.
resolution:
  - 512
max_pixels_per_frame: 262144
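For reference, 262144 is 512 x 512, so the budget means "any aspect ratio, as long as the frame holds no more pixels than a 512x512 image." Below is a minimal Python sketch of that bucketing idea, assuming aspect-ratio-preserving downscaling and snapping to a 16-pixel step; the helper name and the step size are assumptions, not the fork's actual code.

```python
# Minimal sketch (an illustration, not the fork's code): scale a frame down,
# roughly preserving aspect ratio, until width * height fits the pixel budget,
# then snap both sides to a bucket step (16 px here is an assumption).
import math


def fit_to_pixel_budget(width: int, height: int,
                        max_pixels_per_frame: int = 262144,
                        step: int = 16) -> tuple[int, int]:
    """Return bucketed (width, height) whose pixel count stays within the budget."""
    scale = min(1.0, math.sqrt(max_pixels_per_frame / (width * height)))
    new_w = max(step, int(width * scale) // step * step)
    new_h = max(step, int(height * scale) // step * step)
    return new_w, new_h


if __name__ == "__main__":
    # A 1280x720 source keeps roughly its 16:9 shape but is scaled down so its
    # area stays within the 512x512-equivalent budget.
    print(fit_to_pixel_budget(1280, 720))
```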
https://redd.it/1oiyuzr
@rStableDiffusion
Labubu Generator: Open the Door to Mischief, Monsters, and Your Imagination (Qwen Image LoRA, Civitai Release, Training Details Included)
https://redd.it/1oj3lgt
@rStableDiffusion