Disclaimer: Stable Diffusion is a trademark owned by Stability AI. Original repos: Stable Diffusion 1.5, Stable Diffusion 2.1, Stable Diffusion XL and XL-Turbo.
Stable Diffusion XL LoRA Trainer
Welcome to the official codebase for the Sensorial System's Stable Diffusion projects. For now, it hosts only our Stable Diffusion XL LoRA Trainer, designed to make it easier to automate every step of fine-tuning Stable Diffusion models.
Requirements
- kohya_ss: follow the installation guidelines at https://github.com/bmaltais/kohya_ss.
Setting up
If you don't want to set the KOHYA_SS_PATH environment variable every time you run the trainer, you can use the CLI to set it up once and for all:
```
stable-diffusion train setup
```
If you don't have the CLI installed, install it with:
```
cargo install stable-diffusion-cli
```
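Alternatively, if you prefer to keep the path in code rather than in the environment or the CLI configuration, the builder call from the training example further down can be given the path directly. The snippet below is only a sketch: the path is a placeholder, and the call simply mirrors the Environment::new().with_kohya_ss(...) usage shown later in this README.

```rust
use stable_diffusion_trainer::*;

fn main() {
    // Placeholder path: point this at your local kohya_ss checkout.
    let kohya_ss = "/path/to/kohya_ss".to_string();
    // Same builder call as in the training example below, only with an explicit
    // path instead of reading KOHYA_SS_PATH from the environment.
    let _environment = Environment::new().with_kohya_ss(kohya_ss);
}
```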
Examples
We have a dataset with photos of Bacana, a Coton de Tuléar, conceptualized as bacana white dog so that it doesn't get mixed up with the existing Coton de Tuléar concept in the Stable Diffusion XL model.
Some of the training images in examples/training/lora/bacana/images:
The training code example looks like this:
```rust
use stable_diffusion_trainer::*;

fn main() {
    // Locate the kohya_ss installation set up earlier.
    let kohya_ss = std::env::var("KOHYA_SS_PATH").expect("KOHYA_SS_PATH not set");
    let environment = Environment::new().with_kohya_ss(kohya_ss);

    // "bacana" is the instance token, "white dog" the class prompt.
    let prompt = Prompt::new("bacana", "white dog");
    let image_data_set = ImageDataSet::from_dir("examples/training/lora/bacana/images");
    let data_set = TrainingDataSet::new(image_data_set);

    // The first argument is a format string for the output model name (see below).
    let output = Output::new(
        "{prompt.instance}({prompt.class})d{network.dimension}a{network.alpha}",
        "examples/training/lora/bacana/output",
    );

    let parameters = Parameters::new(prompt, data_set, output);
    Trainer::new()
        .with_environment(environment)
        .start(&parameters);
}
```
Note that the Output::name is a format string that captures the parameter values. This is useful for experimenting with different parameters and keeping track of them in the model file name.
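To make the naming concrete, here is a minimal, self-contained sketch of the substitution the template performs. It is not the crate's implementation; the dimension 8 and alpha 1 values are assumptions read off the d8a1 file names listed below.

```rust
// Illustration only: how the Output::name template maps to a model file name.
fn expand(template: &str, instance: &str, class: &str, dimension: u32, alpha: u32) -> String {
    template
        .replace("{prompt.instance}", instance)
        .replace("{prompt.class}", class)
        .replace("{network.dimension}", &dimension.to_string())
        .replace("{network.alpha}", &alpha.to_string())
}

fn main() {
    let name = expand(
        "{prompt.instance}({prompt.class})d{network.dimension}a{network.alpha}",
        "bacana",
        "white dog",
        8, // assumed network dimension, matching "d8" in the output file names
        1, // assumed network alpha, matching "a1" in the output file names
    );
    assert_eq!(name, "bacana(white dog)d8a1");
}
```

With those values, the template yields the base name bacana(white dog)d8a1, which matches the files described next.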
Train the example with:
```
KOHYA_SS_PATH=<your kohya_ss path here> cargo run --example train-lora
```
The LoRA safetensors files will be generated as:

- examples/training/lora/bacana/output/bacana(white dog)d8a1-000001.safetensors
- examples/training/lora/bacana/output/bacana(white dog)d8a1.safetensors

where, in this case, bacana(white dog)d8a1-000001.safetensors is the checkpoint from the first epoch and bacana(white dog)d8a1.safetensors is the one from the final epoch.
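If you want to check what the trainer produced, one quick way is to list the output directory. The following is a minimal sketch using only the Rust standard library; it is not part of the crate and simply prints every .safetensors file found under the example's output path.

```rust
use std::fs;

// List the .safetensors files in the example output directory, so the
// per-epoch checkpoints appear next to the final model.
fn main() -> std::io::Result<()> {
    let output_dir = "examples/training/lora/bacana/output";
    for entry in fs::read_dir(output_dir)? {
        let path = entry?.path();
        if path.extension().and_then(|ext| ext.to_str()) == Some("safetensors") {
            println!("{}", path.display());
        }
    }
    Ok(())
}
```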
You can then cd examples/training/lora/bacana/generation and run python generate.py to test image generation with the LoRA model. The generated images will be placed in examples/training/lora/bacana/generation.
Some of the generated images: