update whisper readme and requirements
commit 172a60056f
parent 54952a0d80
whisper/README.md

@@ -1,10 +1,12 @@
 # whisper
 
-Whisper in MLX.
+Speech recognition with Whisper in MLX. Whisper is a set of open source speech
+recognition models from Open AI, ranging from 39 million to 1.5 billion
+parameters[^1].
 
-First install the dependencies:
+### Setup
 
-(TODO, MLX install link / command / add to requirements.txt)
+First, install the dependencies.
 
 ```
 pip install -r requirements.txt
@@ -12,12 +14,14 @@ pip install -r requirements.txt
 
 Install [`ffmpeg`](https://ffmpeg.org/):
 
-```bash
+```
 # on MacOS using Homebrew (https://brew.sh/)
 brew install ffmpeg
 ```
 
-Then transcribe audio with:
+### Run
+
+Transcribe audio with:
 
 ```
 import whisper
@@ -25,3 +29,4 @@ import whisper
 text = whisper.transcribe(speech_file)["text"]
 ```
 
+[^1]: Refer to the [arXiv paper](https://arxiv.org/abs/2212.04356), [blog post](https://openai.com/research/whisper), and [code](https://github.com/openai/whisper) for more details.
whisper/requirements.txt

@@ -1,3 +1,4 @@
+mlx
 numba
 numpy
 torch
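For reference, the transcription snippet from the updated README can be tried end to end roughly as in the sketch below. Only `whisper.transcribe(speech_file)["text"]` comes from the README; the audio path and the printout are placeholders added here for illustration.

```python
# Minimal usage sketch based on the README snippet in this commit.
# Assumes the dependencies above (plus ffmpeg) are installed and that the
# MLX whisper example package is importable as `whisper`.
import whisper

# Hypothetical audio path; replace with a real recording that ffmpeg can decode.
speech_file = "speech.wav"

# Per the README, transcribe() returns a dict whose "text" field holds the
# full transcription.
result = whisper.transcribe(speech_file)
print(result["text"])
```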