r/StableDiffusion – Telegram
What's the fastest and most consistent way to train LoRAs?

How can I train a LoRA quickly, without it taking too long? Is there a way to do it on a card that isn't a 3090 or 4090? I have a 4080 Ti Super and was wondering if that would work. I've never done it before and I want to learn. How can I get started training on my PC?
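For intuition on why LoRA training fits on mid-range cards: instead of updating the full weight matrices, you train two small low-rank factors and add their product as a delta, so the trainable parameter count (and VRAM cost) is tiny. A minimal pure-Python sketch of that math, with hypothetical toy dimensions not tied to any specific trainer:

```python
# LoRA sketch: W stays frozen; only the low-rank factors A and B train.
# All shapes here are illustrative toy values, not real model sizes.

def matmul(M, v):
    # Multiply matrix M (list of rows) by vector v.
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

d, rank, alpha = 4, 1, 2.0            # rank << d is what keeps VRAM low

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base (identity here)
A = [[0.1, 0.2, 0.3, 0.4]]            # trainable down-projection, shape (rank, d)
B = [[0.0] for _ in range(d)]         # trainable up-projection, initialized to zero

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / rank.
    base = matmul(W, x)
    delta = matmul(B, matmul(A, x))   # B @ (A @ x)
    return [b + (alpha / rank) * dl for b, dl in zip(base, delta)]

x = [1.0, 2.0, 3.0, 4.0]
# Because B starts at zero, the adapted layer initially matches the frozen base.
print(lora_forward(x))
```

The zero initialization of `B` is why LoRA training starts from the base model's behavior and only gradually drifts toward the dataset.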

https://redd.it/1pkiekg
@rStableDiffusion
What are the best methods to keep a specific person's face and body consistent when generating new images/videos?

Images + prompt to images/video (using context images and a prompt to change the background, outfit, pose, etc.)

In order to generate a specific person (let's call this person ABC) from different angles, under different lighting, with different backgrounds, outfits, etc., I currently have the following approaches:

(1) Create a dataset containing various images of this person, and append the person's name "ABC" as a hard-coded tag to every image's caption. Use these captions and images to fine-tune a LoRA. (Cons: not generalizable or scalable; needs a separate LoRA for every person.)

(2) Simply use an open-source face-swap model (any recommendations for such models/workflows?). (Cons: may look unnatural; not sure if face-swap models are good enough today.)

(3) Construct a workflow whose input takes several images of this person, then add some custom face/body-consistency nodes (I don't know if these already exist). (So this would also involve a fine-tuned LoRA, but not one specific to a person; rather a LoRA for keeping faces consistent.)

(4) Any other approaches?
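The captioning step in approach (1) is easy to automate. A hypothetical sketch, assuming the common LoRA dataset layout where each image has a sibling `.txt` caption file, and using "ABC" as the trigger token:

```python
from pathlib import Path

# Hypothetical trigger token for the person; any rare string works.
TRIGGER = "ABC"

def tag_caption(text: str, trigger: str = TRIGGER) -> str:
    # Prepend the trigger token, skipping captions already tagged.
    if text.startswith(trigger + ","):
        return text
    return f"{trigger}, {text}"

def tag_dataset(folder: str) -> int:
    # Rewrite every caption file in the dataset folder; return the count.
    tagged = 0
    for path in Path(folder).glob("*.txt"):
        caption = path.read_text(encoding="utf-8").strip()
        path.write_text(tag_caption(caption), encoding="utf-8")
        tagged += 1
    return tagged
```

Prepending (rather than appending) the trigger token is the usual convention, since many trainers weight earlier caption tokens more heavily when shuffling or truncating captions.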

https://redd.it/1pki4e3
@rStableDiffusion
We upgraded Z-Image-Turbo-Fun-Controlnet-Union-2.0! Better quality, and inpainting mode is now supported as well.

Models and demos: https://huggingface.co/alibaba-pai/Z-Image-Turbo-Fun-Controlnet-Union-2.0

Code: https://github.com/aigc-apps/VideoX-Fun (If our model is helpful to you, please star our repo :)

https://redd.it/1pknfku
@rStableDiffusion