learning rate tuning for frozen models:
- decision: use the same lr for aggregation and head!
lr tuning for:
frozen pre-trained target-model + frozen BERT (see this PR for the configs: add frozen-pretrained+frozen-bert experiment configs #81)
tuning this:
model.learning_rate
report the mean metric over 5 seeds per learning rate
learning rate candidates (maybe):
- ~5 in total, more if the best result is at one of the ends
- stepping: factor of 3 between neighboring candidates (see the sketch after this list)
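As an illustration, a factor-of-3 grid spanning a typical fine-tuning range could look like the sketch below; the concrete values are an assumption, not prescribed by this issue:

```bash
# Hypothetical candidate grid: five values, each ~3x the previous one.
# If the best score lands on 1e-5 or 1e-3, extend the grid past that end.
LR_CANDIDATES="1e-5,3e-5,1e-4,3e-4,1e-3"
```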
append the following to your base command to execute all runs at once (replace v* and s* with useful values); Hydra's --multirun then launches the full cross product, i.e. 5 learning rates x 5 seeds = 25 runs:
model.learning_rate=v1,v2,v3,v4,v5 seed=s1,s2,s3,s4,s5 --multirun
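A fully expanded invocation might look like the following; the train.py entry point and the experiment config name are assumptions based on the template layout, and the values reuse the illustrative grid from above:

```bash
# Sketch only: entry point, experiment name, and values are assumptions.
python train.py experiment=frozen_pretrained_frozen_bert \
    model.learning_rate=1e-5,3e-5,1e-4,3e-4,1e-3 \
    seed=1,2,3,4,5 \
    --multirun
```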
append this to your srun command to run for up to 3 days, using Slurm's DD-HH:MM:SS format (if it is a partition from your department, you can also increase it to 5 days):
--time=03-00:00:00
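Put together, a Slurm submission could look like the sketch below; the partition name and GPU flags are assumptions that depend on your cluster, and 05-00:00:00 would request the 5-day limit instead:

```bash
# Sketch only: --partition and --gres values are cluster-specific assumptions.
srun --time=03-00:00:00 --partition=gpu --gres=gpu:1 \
    python train.py model.learning_rate=1e-5,3e-5,1e-4,3e-4,1e-3 \
        seed=1,2,3,4,5 --multirun
```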