
whisper

Speech recognition with Whisper in MLX. Whisper is a set of open source speech recognition models from OpenAI, ranging from 39 million to 1.5 billion parameters.¹

Setup

First, install the dependencies.

pip install -r requirements.txt

Install ffmpeg:

# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
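
Whisper relies on ffmpeg to decode audio, and its models expect 16 kHz mono input. Decoding is handled for you when you pass a file to the transcriber, but if you want to pre-convert a file yourself, a plain ffmpeg invocation like the following works (the filenames here are hypothetical placeholders):

```shell
# Convert any audio file ffmpeg understands to 16 kHz mono WAV,
# the sample rate and channel layout Whisper models expect.
# "input.mp3" and "output.wav" are placeholder filenames.
ffmpeg -i input.mp3 -ar 16000 -ac 1 output.wav
```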

Run

Transcribe audio with:

import whisper

text = whisper.transcribe(speech_file)["text"]
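
Beyond the plain "text" field, a sketch of how the rest of the result could be post-processed, assuming the returned dict follows the same shape as OpenAI's Whisper output (a "segments" list whose entries carry "start", "end", and "text"); the sample `result` below is an illustrative stand-in, not real model output:

```python
# Format a Whisper-style transcription result into timestamped lines.
# Assumption: the result dict mirrors OpenAI Whisper's output format,
# with per-segment "start"/"end" times in seconds and a "text" field.

def format_timestamp(seconds: float) -> str:
    """Render a time in seconds as HH:MM:SS.mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def to_lines(result: dict) -> list[str]:
    """One '[start -> end] text' line per segment."""
    return [
        f"[{format_timestamp(seg['start'])} -> {format_timestamp(seg['end'])}]"
        f" {seg['text'].strip()}"
        for seg in result["segments"]
    ]

# Illustrative stand-in for whisper.transcribe(...) output:
result = {
    "text": " Hello world.",
    "segments": [{"start": 0.0, "end": 1.4, "text": " Hello world."}],
}
print("\n".join(to_lines(result)))
# → [00:00:00.000 -> 00:00:01.400] Hello world.
```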

  ¹ Refer to the arXiv paper, blog post, and code for more details.