[Whisper] Load customized MLX model & Quantization (#191)

* Add option to load customized mlx model

* Add quantization

* Apply reviews

* Separate model conversion and loading

* Update test

* Fix benchmark

* Add notes about conversion

* Improve doc
bofeng huang
2023-12-29 19:22:15 +01:00
committed by GitHub
parent 1cdbf9e886
commit 581a5733a1
6 changed files with 421 additions and 211 deletions

@@ -6,7 +6,7 @@ parameters[^1].
### Setup
First, install the dependencies:
```
pip install -r requirements.txt
```
@@ -19,6 +19,28 @@ Install [`ffmpeg`](https://ffmpeg.org/):
```
brew install ffmpeg
```
Next, download the Whisper PyTorch checkpoint and convert the weights to the MLX format. For example, to convert the `tiny` model use:
```
python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny
```
Note that you can also convert a local PyTorch checkpoint saved in the original OpenAI format.
To generate a 4-bit quantized model, use `-q`. For a full list of options:
```
python convert.py --help
```
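For example, a quantized conversion might look like the following. (The output directory name here is an arbitrary choice, not a path the script mandates.)
```shell
# Convert the tiny model and quantize it to 4 bits with -q.
# mlx_models/tiny_q4 is an illustrative output directory.
python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny_q4 -q
```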
By default, the conversion script creates the directory `mlx_models/tiny` and saves
the converted `weights.npz` and `config.json` there.
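As a rough sketch of the layout, `weights.npz` is a standard NumPy archive mapping parameter names to arrays, alongside a JSON config. The parameter name and config key below are made up for illustration, not the real Whisper ones:

```python
import json
import numpy as np

# Write a dummy archive in the same layout the conversion produces:
# a weights.npz of named arrays plus a config.json of model settings.
# "encoder.conv1.weight" and "n_mels" are illustrative names only.
np.savez(
    "weights.npz",
    **{"encoder.conv1.weight": np.zeros((384, 80, 3), dtype=np.float32)},
)
with open("config.json", "w") as f:
    json.dump({"n_mels": 80}, f)

# Inspect the archive the same way you would inspect a converted model.
weights = np.load("weights.npz")
print(list(weights.keys()))
print(weights["encoder.conv1.weight"].shape)
```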
> [!TIP]
> Alternatively, you can download a few converted checkpoints from the
> [MLX Community](https://huggingface.co/mlx-community) organization on Hugging
> Face and skip the conversion step.
### Run
Transcribe audio with: