
# Whisper

Speech recognition with Whisper in MLX. Whisper is a family of open-source speech recognition models from OpenAI, ranging from 39 million to 1.5 billion parameters.[^1]

## Setup

First, install the dependencies:

```shell
pip install -r requirements.txt
```

Install `ffmpeg`:

```shell
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
```

## Run

Transcribe audio with:

```python
import whisper

# Path to an audio file in any format ffmpeg can decode
speech_file = "path/to/audio.mp3"

text = whisper.transcribe(speech_file)["text"]
```
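Beyond the plain text, `transcribe` returns a dictionary that, following the upstream OpenAI Whisper output format, also carries the detected language and per-segment timestamps; the exact keys below (`text`, `language`, `segments` with `start`/`end`/`text`) are an assumption based on that upstream format, and the sample result is hard-coded for illustration. A minimal sketch of post-processing such a result:

```python
# Sketch: render a transcription result as timestamped lines.
# The dict layout is assumed to match the upstream OpenAI Whisper
# output format; the sample `result` below stands in for what
# whisper.transcribe() would return.

def format_segments(result):
    """Render each segment as '[start -> end] text'."""
    lines = []
    for seg in result["segments"]:
        lines.append(
            f"[{seg['start']:6.2f} -> {seg['end']:6.2f}] {seg['text'].strip()}"
        )
    return "\n".join(lines)

# Example result, shaped like a transcribe() return value:
result = {
    "text": " Hello world. This is Whisper.",
    "language": "en",
    "segments": [
        {"start": 0.0, "end": 1.2, "text": " Hello world."},
        {"start": 1.2, "end": 2.8, "text": " This is Whisper."},
    ],
}

print(format_segments(result))
```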

[^1]: Refer to the arXiv paper, blog post, and code for more details.