Whisper
Speech recognition with Whisper in MLX. Whisper is a set of open-source speech recognition models from OpenAI, ranging in size from 39 million to 1.5 billion parameters.¹
Setup
First, install the dependencies:
pip install -r requirements.txt
Then install ffmpeg:
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
Next, download the Whisper PyTorch checkpoint and convert the weights to MLX format:
# Take the "tiny" model as an example. Note that you can also convert a local PyTorch checkpoint in OpenAI's format.
python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny
To generate a 4-bit quantized model, use -q. For a full list of options, run:
python convert.py --help
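To give a sense of what the -q flag does, here is a rough NumPy sketch of group-wise 4-bit quantization. This is an illustration only, not MLX's actual quantizer (the real implementation differs in packing and details), but like MLX it quantizes weights in fixed-size groups with a per-group scale and offset:

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Illustrative group-wise 4-bit quantization (not MLX's real code).

    Each group of `group_size` values is mapped to integers 0..15 with a
    per-group scale and minimum, so dequantization is q * scale + w_min.
    """
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - w_min) / 15.0  # 16 levels
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize_4bit(q, scale, w_min):
    # Reconstruct approximate float weights from the 4-bit codes.
    return q * scale + w_min

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, scale, w_min = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, w_min).reshape(-1)
# Reconstruction error is bounded by half a quantization step per group.
print(np.abs(w - w_hat).max())
```

Quantization trades a small amount of accuracy for a model roughly a quarter the size of the float16 weights, which also speeds up memory-bound inference.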
By default, the conversion script creates the directory mlx_models/tiny and saves the converted weights.npz and config.json there.
Tip
Alternatively, you can skip the conversion step by downloading one of the already converted checkpoints from the MLX Community organization on Hugging Face.
Run
Transcribe audio with:
import whisper
text = whisper.transcribe(speech_file)["text"]
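When given a file path, transcribe decodes the audio with ffmpeg; Whisper models operate on 16 kHz mono float32 samples. As a small sketch of that input format, the following builds a one-second placeholder signal (the 440 Hz tone and the commented transcribe call are illustrative assumptions, not output from the repo):

```python
import numpy as np

SAMPLE_RATE = 16_000  # Whisper expects 16 kHz mono audio

# One second of a 440 Hz tone as float32 samples in [-1, 1] -- the same
# layout that decoded audio takes inside the pipeline.
t = np.arange(SAMPLE_RATE, dtype=np.float32) / SAMPLE_RATE
samples = (0.5 * np.sin(2 * np.pi * 440.0 * t)).astype(np.float32)

# Hypothetical usage (requires mlx and a converted model as above):
# import whisper
# text = whisper.transcribe(samples)["text"]
print(samples.shape, samples.dtype)
```

Arbitrary sample rates and formats in your own files are fine: ffmpeg resamples them to this representation before the model sees them.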
¹ Refer to the arXiv paper, blog post, and code for more details.