Mirror of https://github.com/ml-explore/mlx-examples.git
Synced 2025-08-31 11:54:37 +08:00

Add notes about conversion

Commit fcacc57950 (parent 27e9c3de06)
@@ -6,7 +6,7 @@ parameters[^1].

 ### Setup

-First, install the dependencies.
+First, install the dependencies:

 ```
 pip install -r requirements.txt
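The setup hunk above installs the pip requirements and, further down, `ffmpeg`. Before running the conversion step, a quick environment check can save a confusing failure later. A minimal sketch, assuming Python is already available; the default names below are illustrative stand-ins, not taken from `requirements.txt`:

```python
import importlib.util
import shutil

def check_setup(packages=("numpy",), binaries=("ffmpeg",)):
    """Return the names from `packages` and `binaries` that are missing.

    The defaults are assumptions: `numpy` stands in for whatever
    requirements.txt actually lists, and `ffmpeg` comes from the
    brew install step in this diff.
    """
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    missing += [b for b in binaries if shutil.which(b) is None]
    return missing
```

An empty return value means every listed dependency was found; otherwise the list tells you exactly what to install.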
@@ -19,6 +19,27 @@ Install [`ffmpeg`](https://ffmpeg.org/):

 brew install ffmpeg
 ```
+
+Next, download the Whisper PyTorch checkpoint and convert the weights to MLX format:
+
+```
+# Take the "tiny" model as an example. Note that you can also convert a local PyTorch checkpoint in OpenAI's format.
+python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny
+```
+
+To generate a 4-bit quantized model, use `-q`. For a full list of options:
+
+```
+python convert.py --help
+```
+
+By default, the conversion script will make the directory `mlx_models/tiny` and save
+the converted `weights.npz` and `config.json` there.
+
+> [!TIP]
+> Alternatively, you can also download a few converted checkpoints from the
+> [MLX Community](https://huggingface.co/mlx-community) organization on Hugging
+> Face and skip the conversion step.

 ### Run

 Transcribe audio with:
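The added notes say the conversion script writes `weights.npz` and `config.json` into the output directory. A minimal sketch of inspecting such a checkpoint, assuming only that those two files exist (`numpy` is used here to read the `.npz` archive; it is not named by this diff):

```python
import json
import os

import numpy as np

def inspect_checkpoint(mlx_path="mlx_models/tiny"):
    """Load a converted checkpoint's config and weights for a quick look.

    `mlx_path` defaults to the directory from the README's example;
    point it at wherever convert.py wrote its output.
    """
    with open(os.path.join(mlx_path, "config.json")) as f:
        config = json.load(f)          # model hyperparameters as a dict
    weights = np.load(os.path.join(mlx_path, "weights.npz"))
    return config, weights
```

For example, `len(weights.files)` gives the number of stored arrays and `sorted(config)` lists the hyperparameter names, which is a cheap way to confirm a conversion (or a downloaded MLX Community checkpoint) looks sane.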