From 1727959a27f2fb7b459387084b59f066296757a5 Mon Sep 17 00:00:00 2001
From: vb
Date: Wed, 4 Dec 2024 04:21:39 +0100
Subject: [PATCH] Add mentions of MLX-my-repo. (#1129)

* Add mentions of MLX-my-repo.

* simplify

* move

* move

---------

Co-authored-by: Awni Hannun
---
 llms/README.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/llms/README.md b/llms/README.md
index 60f68353..4fff4207 100644
--- a/llms/README.md
+++ b/llms/README.md
@@ -77,7 +77,7 @@ to see how to use the API in more detail.
 The `mlx-lm` package also comes with functionality to quantize and optionally
 upload models to the Hugging Face Hub.
 
-You can convert models in the Python API with:
+You can convert models using the Python API:
 
 ```python
 from mlx_lm import convert
@@ -163,6 +163,10 @@ mlx_lm.convert \
   --upload-repo mlx-community/my-4bit-mistral
 ```
 
+Models can also be converted and quantized directly in the
+[mlx-my-repo](https://huggingface.co/spaces/mlx-community/mlx-my-repo) Hugging
+Face Space.
+
 ### Long Prompts and Generations
 
 `mlx-lm` has some tools to scale efficiently to long prompts and generations: