From feb7f1088804e509e0258d19017eaa03c0519a21 Mon Sep 17 00:00:00 2001
From: Awni Hannun
Date: Tue, 3 Dec 2024 17:10:41 -0800
Subject: [PATCH] simplify

---
 llms/README.md        | 4 +++-
 llms/mlx_lm/README.md | 2 --
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/llms/README.md b/llms/README.md
index c93b5627..94493701 100644
--- a/llms/README.md
+++ b/llms/README.md
@@ -77,7 +77,9 @@ to see how to use the API in more detail.
 The `mlx-lm` package also comes with functionality to quantize and optionally
 upload models to the Hugging Face Hub.
 
-You can convert models directly via the Hugging Face Space [here](https://huggingface.co/spaces/mlx-community/mlx-my-repo) or in the Python API with:
+You can convert models in the [Hugging Face
+Space](https://huggingface.co/spaces/mlx-community/mlx-my-repo) or using the
+Python API:
 
 ```python
 from mlx_lm import convert
diff --git a/llms/mlx_lm/README.md b/llms/mlx_lm/README.md
index a36de960..66f2b5e9 100644
--- a/llms/mlx_lm/README.md
+++ b/llms/mlx_lm/README.md
@@ -8,5 +8,3 @@ parent directory.
 
 This package also supports fine tuning with LoRA or QLoRA. For more information
 see the [LoRA documentation](LORA.md).
-
-🆕 You can directly convert models in Q4/ Q8 via the Hugging Face Space [here](https://huggingface.co/spaces/mlx-community/mlx-my-repo).
\ No newline at end of file