mirror of https://github.com/ml-explore/mlx-examples.git
update lora.md and lora_config.yaml
commit a608ae99bc
parent 0e28fdb345
@@ -387,6 +387,10 @@ tokens-per-second, using the MLX Example
 [`wikisql`](https://github.com/ml-explore/mlx-examples/tree/main/lora/data)
 data set.
 
+## Logging
+
+You can log training metrics to Weights & Biases by adding the `--report-to-wandb` flag. This requires installing wandb manually with `pip install wandb`. When enabled, all training and validation metrics will be logged to your wandb account.
+
 [^lora]: Refer to the [arXiv paper](https://arxiv.org/abs/2106.09685) for more details on LoRA.
 
 [^qlora]: Refer to the paper [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
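The paragraph added to lora.md above describes the integration from the CLI side. For context, here is a minimal sketch of the `wandb` calls such logging typically boils down to; the project name and metric keys are illustrative assumptions, not taken from this commit:

```python
import wandb  # pip install wandb

# Start a run; the project name is an illustrative assumption.
wandb.init(project="mlx-lora")

# During training, metrics are logged as plain dictionaries.
wandb.log({"train/loss": 2.31, "iteration": 10})
wandb.log({"val/loss": 2.05, "iteration": 10})

# Mark the run as finished once training completes.
wandb.finish()
```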
@@ -37,6 +37,9 @@ val_batches: 25
 # Adam learning rate.
 learning_rate: 1e-5
 
+# To report the logs to WandB.
+report_to_wandb: true
+
 # Number of training steps between loss reporting.
 steps_per_report: 10
 
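A minimal sketch of how a training script could honor the new config keys; the config-loading code and the reporting loop are assumptions for illustration, and only the `report_to_wandb` and `steps_per_report` keys come from this commit:

```python
import yaml   # PyYAML: pip install pyyaml
import wandb  # pip install wandb

# Load the YAML config; the file name matches this commit's lora_config.yaml.
with open("lora_config.yaml") as f:
    config = yaml.safe_load(f)

# Only initialize wandb when the config asks for it.
if config.get("report_to_wandb", False):
    wandb.init(project="mlx-lora", config=config)  # project name is illustrative

for step in range(1, 101):
    loss = 1.0 / step  # stand-in value; a real trainer computes this
    # Report every steps_per_report training steps.
    if step % config["steps_per_report"] == 0:
        print(f"Step {step}: train loss {loss:.3f}")
        if config.get("report_to_wandb", False):
            wandb.log({"train/loss": loss, "iteration": step})

if config.get("report_to_wandb", False):
    wandb.finish()
```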