From 5ce58e4b6a827e6fe8514e372c7bb59b5abf7a0a Mon Sep 17 00:00:00 2001
From: Chime Ogbuji
Date: Sun, 10 Nov 2024 10:10:04 -0500
Subject: [PATCH] Update documentation

---
 llms/mlx_lm/LORA.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/llms/mlx_lm/LORA.md b/llms/mlx_lm/LORA.md
index b8530708..d332bfaa 100644
--- a/llms/mlx_lm/LORA.md
+++ b/llms/mlx_lm/LORA.md
@@ -77,9 +77,9 @@ You can resume fine-tuning with an existing adapter with `--resume-adapter-file
 `.
 
 ### Input Masking
 
-There are custom functions for masking the input sequence of tokens during the loss calculation to ensure
-the model is not being penalized for not recreating the prompt. To fine-tune with masked input sequences,
-use the `--mask-inputs` argument.
+There are custom functions for masking the sequence of tokens associated with the `prompt` in a completion dataset
+during the loss calculation to ensure the model is not being penalized for not recreating the prompt. To fine-tune
+with masked input sequences, use the `--mask-inputs` argument.
 
 ### Evaluate
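
As background for the documented behavior, a minimal sketch of what masking prompt tokens out of the loss amounts to. This is illustrative only, not the mlx_lm implementation: the function name `masked_loss`, the `prompt_lengths` argument, and the tensor shapes are assumptions made for the example.

```python
# Sketch: exclude prompt tokens from the fine-tuning loss so the model is
# only trained to reproduce the completion, not the prompt.
import mlx.core as mx
import mlx.nn as nn


def masked_loss(logits, targets, prompt_lengths):
    """Cross-entropy averaged over completion tokens only.

    logits:         (batch, seq_len, vocab) model outputs
    targets:        (batch, seq_len) target token ids
    prompt_lengths: (batch,) number of prompt tokens in each example
    """
    # Per-token cross-entropy, unreduced so individual positions can be masked.
    per_token = nn.losses.cross_entropy(logits, targets, reduction="none")

    # Positions at or beyond the prompt length belong to the completion.
    positions = mx.arange(targets.shape[1])[None, :]        # (1, seq_len)
    completion_mask = (positions >= prompt_lengths[:, None]).astype(per_token.dtype)

    # Average only over completion positions.
    return (per_token * completion_mask).sum() / completion_mask.sum()
```

With a mask like this applied, only completion tokens contribute to the gradient; the documentation change above simply tells users to request that behavior via the `--mask-inputs` argument.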