diff --git a/llms/mlx_lm/LORA.md b/llms/mlx_lm/LORA.md
index b8530708..d332bfaa 100644
--- a/llms/mlx_lm/LORA.md
+++ b/llms/mlx_lm/LORA.md
@@ -77,9 +77,9 @@ You can resume fine-tuning with an existing adapter with `--resume-adapter-file
 `.
 
 ### Input Masking
 
-There are custom functions for masking the input sequence of tokens during the loss calculation to ensure
-the model is not being penalized for not recreating the prompt. To fine-tune with masked input sequences,
-use the `--mask-inputs` argument.
+There are custom functions for masking the sequence of tokens associated with the `prompt` in a completion dataset
+during the loss calculation to ensure the model is not being penalized for not recreating the prompt. To fine-tune
+with masked input sequences, use the `--mask-inputs` argument.
 
 ### Evaluate
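
For context on the wording this patch revises: below is a minimal sketch of the masking idea, assuming an MLX-style model and illustrative names (`masked_loss`, `prompt_lengths` are assumptions, not the actual mlx_lm functions). It shows how per-token loss on the prompt positions can be zeroed out so only completion tokens contribute to the gradient.

```python
# A minimal sketch of prompt masking in the loss, assuming an MLX-style
# model callable and illustrative names; not the mlx_lm implementation.
import mlx.core as mx
import mlx.nn as nn

def masked_loss(model, inputs, targets, prompt_lengths):
    # Per-token cross-entropy over the full sequence: shape (batch, seq_len).
    logits = model(inputs)
    ce = nn.losses.cross_entropy(logits, targets, reduction="none")

    # Mask out the prompt positions so the model is not penalized for
    # not recreating the prompt; only completion tokens count.
    positions = mx.arange(targets.shape[1])[None, :]        # (1, seq_len)
    completion_mask = positions >= prompt_lengths[:, None]  # (batch, seq_len)

    # Average the loss over completion tokens only.
    return (ce * completion_mask).sum() / completion_mask.sum()
```

In the documented workflow, this behavior is enabled by passing the `--mask-inputs` flag to the fine-tuning command rather than by calling such a function directly.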