Mirror of https://github.com/ml-explore/mlx-examples.git, synced 2025-09-01 12:49:50 +08:00
Create executables for generate, lora, server, merge, convert (#682)
* feat: create executables mlx_lm.<cmd>

* nits in docs

Co-authored-by: Awni Hannun <awni@apple.com>
@@ -66,7 +66,7 @@ To see a description of all the arguments you can do:
 
 You can also use `mlx-lm` from the command line with:
 
 ```
-python -m mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.1 --prompt "hello"
+mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.1 --prompt "hello"
 ```
 
 This will download a Mistral 7B model from the Hugging Face Hub and generate
@@ -75,19 +75,19 @@ text using the given prompt.
 For a full list of options run:
 
 ```
-python -m mlx_lm.generate --help
+mlx_lm.generate --help
 ```
 
 To quantize a model from the command line run:
 
 ```
-python -m mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.1 -q
+mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.1 -q
 ```
 
 For more options run:
 
 ```
-python -m mlx_lm.convert --help
+mlx_lm.convert --help
 ```
 
 You can upload new models to Hugging Face by specifying `--upload-repo` to
@@ -95,7 +95,7 @@ You can upload new models to Hugging Face by specifying `--upload-repo` to
 [MLX Hugging Face community](https://huggingface.co/mlx-community) you can do:
 
 ```
-python -m mlx_lm.convert \
+mlx_lm.convert \
     --hf-path mistralai/Mistral-7B-v0.1 \
     -q \
     --upload-repo mlx-community/my-4bit-mistral
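The `mlx_lm.<cmd>` executables this commit introduces are the kind of thing setuptools creates from `console_scripts` entry points: each entry maps a command name to a Python callable, and `pip install` puts a small launcher on `PATH`. The sketch below is an assumption for illustration, not copied from the repo; the module paths and the `main` function names are hypothetical.

```python
# setup.py -- hypothetical sketch of declaring mlx_lm.<cmd> executables
# via setuptools console_scripts entry points. The "mlx_lm.<module>:main"
# targets below are assumed, not taken from mlx-examples.
from setuptools import setup

setup(
    name="mlx-lm",
    entry_points={
        "console_scripts": [
            # command name = module:callable to invoke
            "mlx_lm.generate = mlx_lm.generate:main",
            "mlx_lm.lora = mlx_lm.lora:main",
            "mlx_lm.server = mlx_lm.server:main",
            "mlx_lm.merge = mlx_lm.merge:main",
            "mlx_lm.convert = mlx_lm.convert:main",
        ]
    },
)
```

With entry points like these installed, `mlx_lm.generate --help` works directly from the shell, which is why the diff above can drop the `python -m` prefix from every example.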