Onboarding Guide for Newcomers
Welcome to OneTrainer!
OneTrainer (OT) is your all-in-one solution for training diffusion models.
This is a targeted introduction for new users. It is not a walkthrough. You still need to read the tab explainers and label tooltips in the GUI.
In the top left, next to the "OneTrainer" logo, you'll find a dropdown list (blank at first) for 'configs' (presets). A preset acts as a save file for all of your training settings; it is not your model name. As a beginner, select one of the defaults for the model you want to train.

Below that, there's a tab bar with the active tab highlighted in blue. Click on the general tab.

The Workspace Directory is where your backups, intermediate saves, and other working files go; the final output goes into the models folder.
If you have an RTX 4090, consider increasing the dataloader threads to 8 (be cautious, as setting this too high can cause VRAM issues).
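If you are unsure how much headroom your machine has before raising the dataloader threads, a quick look at your CPU core count and GPU memory helps. The snippet below is a minimal standalone sketch using PyTorch (not part of OneTrainer); the thread count itself is still set in the GUI.

```python
# Minimal sketch: report CPU cores and GPU VRAM before raising dataloader threads.
import os
import torch

print(f"CPU cores available: {os.cpu_count()}")

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gib = props.total_memory / 1024**3
    print(f"GPU: {props.name}, VRAM: {vram_gib:.1f} GiB")
    # Heuristic only: more threads keep a fast GPU fed, but each one adds
    # memory pressure, so increase gradually and watch your usage.
else:
    print("No CUDA device detected.")
```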
Navigate to the model tab and have a look at what is there; for now we will leave it at the defaults. If you want to use a custom model, set the Base Model to a Hugging Face link or a local directory path.
Before training you may want to set the Model Output Destination. This will be the filename of your trained output, for example: models/ModelMyTry1.safetensors
Navigate to the data tab, and ensure everything is toggled on (these should be on by default). As a beginner, you want all of these options enabled.
Navigate to the concept tab. This is where you configure your dataset; captions can be stored either as separate text files or in the image file names. While captions are optional, they are recommended. 90% of the work is gathering quality, diverse images and writing high-quality (and varied) captions.
You can also use the Tools tab to open your dataset and generate captions using auto captioners/taggers, but this is beyond the scope of this guide.
Click on add concept, then click on the newly added item. This will open a new modal (window).

In Path, provide the path to your dataset. In the Prompt Source, indicate how you captioned your images. As a beginner you should use img-txt file pairs: set the Prompt Source to "From text file per sample" and create matching file pairs, e.g. 001.jpeg & 001.txt.
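With a large dataset it is easy to miss a caption file. The script below is a small standalone sketch (not part of OneTrainer) that flags images without a matching .txt file; the dataset path and image extensions are assumptions you should adjust.

```python
# Minimal sketch: flag images in a concept folder that lack a matching .txt caption.
from pathlib import Path

DATASET_DIR = Path("path/to/your/dataset")  # replace with your concept's Path
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

missing = [
    image.name
    for image in sorted(DATASET_DIR.iterdir())
    if image.suffix.lower() in IMAGE_EXTS and not image.with_suffix(".txt").exists()
]

if missing:
    print("Images without captions:")
    print("\n".join(f"  {name}" for name in missing))
else:
    print("Every image has a matching .txt caption.")
```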
For more information on concept options, check the dedicated Concept page.
For detailed information on aspect ratios and bucketing, check the AR Buckets page.
The training tab is where you adjust your training settings. Once you have your dataset defined, we recommend sticking with the default values for your initial run. Check this page for more information.
Sampling generates images with the model as it is being trained, allowing you to visually observe its progress. As a beginner, you might not know what to look for yet, but it’s important to use it.
See Sampling for more info.
Moving on to the LoRA tab.
LoRA rank: leave it at the default value of 16 for SD1.5; for SDXL try 8 or 16. Bigger does not equal better: larger ranks overtrain more easily.
Leave the LoRA alpha at the default value of 1.0; it only acts as a multiplier on the weights. Whenever you modify it, you must also adjust the Learning Rate.
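To see why alpha and the learning rate are coupled, it helps to look at the common LoRA formulation (scaling conventions differ between implementations, so treat this as the general idea rather than a statement about OneTrainer's exact internals):

```math
W' = W + \frac{\alpha}{r}\, B A
```

Because the low-rank update is multiplied by alpha, raising alpha makes every optimiser step move the effective weights further; lowering the learning rate roughly in proportion compensates, which is why the two are changed together.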
There is a big Start Training button in the bottom right of the UI.
When you have your concepts defined and are ready to begin training, click it, then monitor progress via the training progress bar in the bottom left of the UI, in the CLI, or by clicking the Tensorboard button.
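If you prefer to monitor from a terminal, for example on a remote machine, TensorBoard can also be started from Python instead of the GUI button. This is a minimal sketch; the log directory is an assumption and should point at the TensorBoard logs written under your Workspace Directory.

```python
# Minimal sketch: launch TensorBoard from Python and keep it running.
from tensorboard import program

LOGDIR = "path/to/your/workspace"  # adjust to your Workspace Directory's TensorBoard logs

tb = program.TensorBoard()
tb.configure(argv=[None, "--logdir", LOGDIR])
url = tb.launch()
print(f"TensorBoard running at {url}")
input("Press Enter to stop.")  # keep the process alive while you browse the dashboards
```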
Lastly, let’s imagine you have a trained LoRA: you will want to test it. Does it perform as you expect? Congratulations! If not, welcome to the world of machine learning; it’s an iterative process. While extensive testing is beyond the scope of this guide, here is a keyword to search for:
XYZ grid extension (generates grids of images for evaluation) in A1111 or SwarmUI.
This concludes the very high-level overview of OneTrainer. You are now expected to read the individual tab wiki pages to learn more.