diff --git a/llms/mlx_lm/LORA.md b/llms/mlx_lm/LORA.md
index 32c7c607..43765b89 100644
--- a/llms/mlx_lm/LORA.md
+++ b/llms/mlx_lm/LORA.md
@@ -223,11 +223,16 @@ data formats. Here are examples of these formats:
 ```
-The format for defining the `arguments` field in a function can vary depending on the model being used. Common formats include JSON strings and dictionaries.
-The example provided follows the format used by [OpenAI](https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples), which is also adopted by MistralAI [mistral-finetune](https://github.com/mistralai/mistral-finetune?tab=readme-ov-file#instruct).
-However, in Hugging Face's [chat_template](https://huggingface.co/docs/transformers/main/en/chat_templating#a-complete-tool-use-example), a dictionary format is used.
-
-The choice of format should depend on the base model you are fine-tuning. It is recommended to refer to the documentation of the base model for detailed instructions regarding this aspect.
+The format for the `arguments` field in a function varies for different models.
+Common formats include JSON strings and dictionaries. The example provided
+follows the format used by
+[OpenAI](https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples),
+and [Mistral
+AI](https://github.com/mistralai/mistral-finetune?tab=readme-ov-file#instruct).
+In Hugging Face's [chat
+templates](https://huggingface.co/docs/transformers/main/en/chat_templating#a-complete-tool-use-example),
+a dictionary format is used. Refer to the documentation for the model you are
+fine-tuning for more details.
@@ -248,7 +253,7 @@ each line not expected by the loader will be ignored.
 
 > [!NOTE]
 > Each example in the datasets must be on a single line. Do not put more than
-> one example per line and do not split an example accross multiple lines.
+> one example per line and do not split an example across multiple lines.
 
 ### Hugging Face Datasets