mirror of
https://github.com/ml-explore/mlx-examples.git
synced 2025-09-01 12:49:50 +08:00
Add notes about conversion
parameters[^1].

### Setup
First, install the dependencies:

```
pip install -r requirements.txt
```

Install [`ffmpeg`](https://ffmpeg.org/):

```
brew install ffmpeg
```

Next, download the Whisper PyTorch checkpoint and convert the weights to MLX format:

```
# Take the "tiny" model as an example. Note that you can also convert
# a local PyTorch checkpoint in OpenAI's format.
python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny
```

To generate a 4-bit quantized model, use `-q`. For a full list of options, run:

```
python convert.py --help
```
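To get a feel for what the 4-bit option does, here is a minimal NumPy sketch of group-wise affine quantization. This is only an illustration of the general technique, not what `convert.py` actually runs (it uses MLX's own quantization), and the group size of 64 is an assumption chosen for the example:

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Quantize a flat weight vector in groups, 4 bits (16 levels) per value."""
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0              # map each group onto levels 0..15
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    """Reconstruct approximate float weights from codes + per-group params."""
    return q * scale + w_min

# Round-trip a random weight vector and check the reconstruction error,
# which is bounded by half a quantization step per group.
w = np.random.default_rng(0).normal(size=128).astype(np.float32)
q, scale, w_min = quantize_4bit(w)
w_hat = dequantize(q, scale, w_min).reshape(-1)
max_err = np.abs(w - w_hat).max()
```

Storing 4-bit codes plus a small per-group scale and offset is what makes the quantized checkpoint roughly a quarter the size of the float16 one.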

By default, the conversion script will make the directory `mlx_models/tiny` and save
the converted `weights.npz` and `config.json` there.
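Both output files are plain formats you can inspect directly. The sketch below writes a tiny stand-in checkpoint first so it runs without `convert.py`; the field name `n_mels` and the key `encoder.conv1.weight` are illustrative placeholders, not the actual Whisper parameter names:

```python
import json
import tempfile
from pathlib import Path
import numpy as np

# Stand-in for the directory convert.py would create.
model_dir = Path(tempfile.mkdtemp()) / "mlx_models" / "tiny"
model_dir.mkdir(parents=True)

# Write placeholder versions of the two output files.
(model_dir / "config.json").write_text(json.dumps({"n_mels": 80}))
np.savez(model_dir / "weights.npz",
         **{"encoder.conv1.weight": np.zeros((4, 4), dtype=np.float16)})

# Inspect them the way a loader might: config is JSON, weights is an
# .npz archive mapping parameter names to arrays.
config = json.loads((model_dir / "config.json").read_text())
weights = np.load(model_dir / "weights.npz")
for name in weights.files:
    print(name, weights[name].shape, weights[name].dtype)
```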

> [!TIP]
> Alternatively, you can also download a few converted checkpoints from the
> [MLX Community](https://huggingface.co/mlx-community) organization on Hugging
> Face and skip the conversion step.

### Run

Transcribe audio with: