
# Whisper

Speech recognition with Whisper in MLX. Whisper is a set of open-source speech recognition models from OpenAI, ranging from 39 million to 1.5 billion parameters.[^1]

## Setup

First, install the dependencies:

```shell
pip install -r requirements.txt
```

Then install `ffmpeg`:

```shell
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
```

## Run

Transcribe audio with:

```python
import whisper

# speech_file is the path to any audio file ffmpeg can decode
text = whisper.transcribe(speech_file)["text"]
```
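In OpenAI's reference implementation, the dict returned by `transcribe` also carries timed `segments` (each with `start`, `end`, and `text`) alongside the full `text`. Assuming this port returns the same shape — an assumption, not confirmed here — a small helper like the following hypothetical sketch can render those segments as SRT subtitles:

```python
# Hypothetical post-processing sketch. Assumes whisper.transcribe() returns a
# dict with a "segments" list, each segment a dict with "start", "end" (seconds)
# and "text", matching OpenAI's reference implementation. Names are illustrative.

def format_timestamp(seconds: float) -> str:
    """Convert seconds to an SRT timestamp, e.g. 62.345 -> '00:01:02,345'."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments: list[dict]) -> str:
    """Render a list of segment dicts as an SRT-formatted string."""
    blocks = []
    for index, seg in enumerate(segments, start=1):
        blocks.append(
            f"{index}\n"
            f"{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```

Usage would look like `srt = segments_to_srt(whisper.transcribe(speech_file)["segments"])`, with the result written to a `.srt` file next to the audio.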

[^1]: Refer to the arXiv paper, blog post, and code for more details.