From 7c59bfeff2d2e6ccbdba0fb10fdb32a52657b8fd Mon Sep 17 00:00:00 2001
From: Vaibhav Srivastav
Date: Fri, 29 Nov 2024 14:51:23 +0100
Subject: [PATCH] Add mentions of MLX-my-repo.

---
 llms/README.md        | 2 +-
 llms/mlx_lm/README.md | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/llms/README.md b/llms/README.md
index 60f68353..c93b5627 100644
--- a/llms/README.md
+++ b/llms/README.md
@@ -77,7 +77,7 @@ to see how to use the API in more detail.
 The `mlx-lm` package also comes with functionality to quantize and optionally
 upload models to the Hugging Face Hub.
 
-You can convert models in the Python API with:
+You can convert models directly via the Hugging Face Space [here](https://huggingface.co/spaces/mlx-community/mlx-my-repo) or in the Python API with:
 
 ```python
 from mlx_lm import convert

diff --git a/llms/mlx_lm/README.md b/llms/mlx_lm/README.md
index 66f2b5e9..a36de960 100644
--- a/llms/mlx_lm/README.md
+++ b/llms/mlx_lm/README.md
@@ -8,3 +8,5 @@ parent directory.
 
 This package also supports fine tuning with LoRA or QLoRA. For more information
 see the [LoRA documentation](LORA.md).
+
+🆕 You can directly convert models in Q4/ Q8 via the Hugging Face Space [here](https://huggingface.co/spaces/mlx-community/mlx-my-repo).
\ No newline at end of file