Adding multiple optimizers to mlx lm (#1315)

* initial commit

* adding more customized YAML configuration

* update YAML example file

* Changed the switch to set opt_class

* removing muon

* using default arguments

* update
Gökdeniz Gülmez
2025-03-05 22:54:54 +01:00
committed by GitHub
parent 56d2db23e1
commit e150621095
2 changed files with 36 additions and 7 deletions


@@ -7,6 +7,15 @@ train: true
# The fine-tuning method: "lora", "dora", or "full".
fine_tune_type: lora
# The optimizer with its possible configuration options
optimizer: adamw
# optimizer_config:
# adamw:
# betas: [0.9, 0.98]
# eps: 1e-6
# weight_decay: 0.05
# bias_correction: true
# Directory with {train, valid, test}.jsonl files
data: "/path/to/training/data"
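For context, here is a minimal sketch of how an "optimizer" / "optimizer_config" pair like the YAML above could be resolved into an mlx.optimizers class. This is not the code added in this PR; the helper name build_optimizer and the name-to-class table are assumptions for illustration only.

# Hypothetical sketch; not the actual mlx-lm implementation.
import mlx.optimizers as optim

# Assumed name-to-class mapping; the PR's actual mapping may differ.
OPTIMIZERS = {
    "sgd": optim.SGD,
    "adam": optim.Adam,
    "adamw": optim.AdamW,
}

def build_optimizer(name, learning_rate, optimizer_config=None):
    """Pick the optimizer class by name and forward its per-optimizer options."""
    opt_class = OPTIMIZERS[name.lower()]
    # Options are nested under the optimizer's own key, mirroring the YAML above.
    extra = (optimizer_config or {}).get(name.lower(), {})
    return opt_class(learning_rate=learning_rate, **extra)

# Usage, mirroring the commented-out adamw block in the config:
opt = build_optimizer(
    "adamw",
    learning_rate=1e-5,
    optimizer_config={"adamw": {"weight_decay": 0.05, "eps": 1e-6}},
)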