Update documentation

This commit is contained in:
Chime Ogbuji 2024-11-10 10:10:04 -05:00
parent 791727fa1c
commit 4ddbb988ce

@@ -77,9 +77,9 @@ You can resume fine-tuning with an existing adapter with
 `--resume-adapter-file <path_to_adapters.safetensors>`.
 ### Input Masking
-There are custom functions for masking the input sequence of tokens during the loss calculation to ensure
-the model is not being penalized for not recreating the prompt. To fine-tune with masked input sequences,
-use the `--mask-inputs` argument.
+There are custom functions for masking the sequence of tokens associated with the `prompt` in a completion dataset
+during the loss calculation to ensure the model is not being penalized for not recreating the prompt. To fine-tune
+with masked input sequences, use the `--mask-inputs` argument.
 ### Evaluate
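The idea behind input masking can be sketched in a few lines: zero out the per-token losses that fall on the prompt, and average only over the completion tokens. This is a minimal illustration, not the project's actual loss code; the function name `masked_loss` and its arguments are hypothetical.

```python
import numpy as np

def masked_loss(token_losses, prompt_length):
    # Hypothetical helper: mask is 0 over the prompt tokens, 1 over the
    # completion tokens, so the model is not penalized for the prompt.
    mask = np.arange(len(token_losses)) >= prompt_length
    # Average the loss over the completion tokens only.
    return (token_losses * mask).sum() / mask.sum()

# Per-token cross-entropy for a 4-token sequence where the
# first 2 tokens belong to the prompt:
losses = np.array([2.0, 1.5, 1.0, 0.5])
print(masked_loss(losses, prompt_length=2))  # mean over the last 2 tokens
```

Without the mask, the mean over all four tokens would also reward the model for reproducing the prompt, which is what `--mask-inputs` avoids.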