Add mentions of MLX-my-repo. (#1129)

* Add mentions of MLX-my-repo.

* simplify

* move

* move

---------

Co-authored-by: Awni Hannun <awni@apple.com>
vb 2024-12-04 04:21:39 +01:00 committed by GitHub
parent 1963df8565
commit 1727959a27

@@ -77,7 +77,7 @@ to see how to use the API in more detail.
The `mlx-lm` package also comes with functionality to quantize and optionally
upload models to the Hugging Face Hub.
-You can convert models in the Python API with:
+You can convert models using the Python API:
```python
from mlx_lm import convert
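# The diff truncates this example here; the lines below are a hedged sketch of
# a complete call, not the verbatim README text. Repo names are illustrative.
repo = "mistralai/Mistral-7B-Instruct-v0.3"
upload_repo = "mlx-community/My-Mistral-7B-v0.3-4bit"

# quantize=True quantizes the weights (4-bit by default); upload_repo pushes
# the converted model to the Hugging Face Hub after conversion.
convert(repo, quantize=True, upload_repo=upload_repo)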
@@ -163,6 +163,10 @@ mlx_lm.convert \
--upload-repo mlx-community/my-4bit-mistral
```
+Models can also be converted and quantized directly in the
+[mlx-my-repo](https://huggingface.co/spaces/mlx-community/mlx-my-repo) Hugging
+Face Space.
### Long Prompts and Generations
`mlx-lm` has some tools to scale efficiently to long prompts and generations:
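
The diff ends mid-section here. As one illustration of the kind of tool this
sentence refers to, the sketch below caps the key-value cache during
generation so memory stays bounded for long outputs. It assumes an `mlx_lm`
version whose `generate` forwards `max_tokens` and `max_kv_size` to the
generation step; the model repo and prompt are placeholders.

```python
from mlx_lm import load, generate

# Illustrative 4-bit model from the mlx-community Hub organization.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Write a detailed, multi-part summary of the history of computing."

# max_kv_size (assumed to be forwarded to the generation step) bounds the
# key-value cache to a fixed number of entries, trading some long-context
# quality for flat memory use during long generations.
text = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=2048,
    max_kv_size=1024,
)
print(text)
```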