Whisper
Speech recognition with Whisper in MLX. Whisper is a set of open source speech recognition models from OpenAI, ranging from 39 million to 1.5 billion parameters [1].
Setup
First, install the dependencies:
pip install -r requirements.txt
Install ffmpeg:
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
Run
Transcribe audio with:
import whisper
text = whisper.transcribe(speech_file)["text"]
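A slightly fuller sketch of the same call (the audio filename below is a hypothetical placeholder; use any file ffmpeg can decode):
import whisper

# Path to the audio to transcribe (hypothetical example file).
speech_file = "speech.mp3"

# transcribe() returns a dictionary; the "text" key holds the full transcription.
result = whisper.transcribe(speech_file)
print(result["text"])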
[1] Refer to the arXiv paper, blog post, and code for more details.