update lora.md and lora_config.yaml

Goekdeniz-Guelmez 2025-03-12 14:46:19 +01:00
parent 0e28fdb345
commit a608ae99bc
2 changed files with 7 additions and 0 deletions

lora.md

@@ -387,6 +387,10 @@ tokens-per-second, using the MLX Example
[`wikisql`](https://github.com/ml-explore/mlx-examples/tree/main/lora/data)
data set.
## Logging
You can log training metrics to Weights & Biases by adding the `--report-to-wandb` flag. This requires installing `wandb` separately with `pip install wandb`. When enabled, all training and validation metrics are logged to your wandb account.
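As a minimal sketch, assuming `mlx_lm.lora` is the training entry point and using an illustrative model path and data directory (both are placeholders, not part of this commit), a run with wandb logging enabled might look like:

```shell
# Install the optional wandb dependency first.
pip install wandb

# Fine-tune with LoRA and report metrics to Weights & Biases.
mlx_lm.lora \
    --model mistralai/Mistral-7B-v0.1 \
    --train \
    --data ./data \
    --report-to-wandb
```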
[^lora]: Refer to the [arXiv paper](https://arxiv.org/abs/2106.09685) for more details on LoRA.
[^qlora]: Refer to the paper [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)

lora_config.yaml

@@ -37,6 +37,9 @@ val_batches: 25
# Adam learning rate.
learning_rate: 1e-5
# Report training and validation metrics to WandB.
report_to_wandb: true
# Number of training steps between loss reporting. # Number of training steps between loss reporting.
steps_per_report: 10 steps_per_report: 10
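Assuming the trainer accepts a YAML config file via a `--config` flag (an assumption here, not shown in this diff), the option above would be picked up with an invocation like:

```shell
# Run training with all options, including report_to_wandb, read from the config.
mlx_lm.lora --config lora_config.yaml
```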