From c1342b8e89f0e24a5108d7185b7507300c0ad8ac Mon Sep 17 00:00:00 2001
From: Awni Hannun
Date: Fri, 12 Jan 2024 11:06:33 -0800
Subject: [PATCH] Use pip for mlx data with speech commands (#307)

* update to use pypi mlx data

* nit in readme
---
 speechcommands/README.md        | 4 ----
 speechcommands/requirements.txt | 3 ++-
 whisper/README.md               | 6 +++---
 3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/speechcommands/README.md b/speechcommands/README.md
index 79bbb9b4..07002594 100644
--- a/speechcommands/README.md
+++ b/speechcommands/README.md
@@ -8,10 +8,6 @@ dataset.
 
 ## Pre-requisites
 
-Follow the [installation
-instructions](https://ml-explore.github.io/mlx-data/build/html/install.html)
-for MLX Data.
-
 Install the remaining python requirements:
 
 ```
diff --git a/speechcommands/requirements.txt b/speechcommands/requirements.txt
index 5ca13284..4e6a06dd 100644
--- a/speechcommands/requirements.txt
+++ b/speechcommands/requirements.txt
@@ -1 +1,2 @@
-mlx>=0.0.5
+mlx
+mlx-data
diff --git a/whisper/README.md b/whisper/README.md
index e785d9bb..847d5b8b 100644
--- a/whisper/README.md
+++ b/whisper/README.md
@@ -52,13 +52,13 @@ import whisper
 
 text = whisper.transcribe(speech_file)["text"]
 ```
 
-Choose the model by setting `hf_path_or_repo`. For example:
+Choose the model by setting `path_or_hf_repo`. For example:
 
 ```python
-result = whisper.transcribe(speech_file, hf_path_or_repo="models/large")
+result = whisper.transcribe(speech_file, path_or_hf_repo="models/large")
 ```
 
-This will load the model contained in `models/large`. The `hf_path_or_repo`
+This will load the model contained in `models/large`. The `path_or_hf_repo`
 can also point to an MLX-style Whisper model on the Hugging Face Hub. In this
 case, the model will be automatically downloaded.
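
For reference, a minimal sketch of the usage documented by the patched `whisper/README.md`. It assumes the MLX Whisper example package is importable as `whisper` (for example, when run from the `whisper/` example directory) and uses `audio.wav` as a placeholder input file; `models/large` is the local model directory from the README example.

```python
# Sketch of the transcription API shown in the patched README.
# "audio.wav" is a placeholder path; "models/large" assumes a locally
# converted MLX-format Whisper model. The path_or_hf_repo argument can
# also name an MLX-style Whisper repo on the Hugging Face Hub, in which
# case the model is downloaded automatically.
import whisper

speech_file = "audio.wav"

# Transcribe with the default model.
text = whisper.transcribe(speech_file)["text"]

# Transcribe with an explicit local model directory or Hub repo.
result = whisper.transcribe(speech_file, path_or_hf_repo="models/large")
print(result["text"])
```

On the speech commands side, the same change means `pip install -r requirements.txt` now pulls both `mlx` and `mlx-data` from PyPI, so the separate MLX Data installation step is no longer needed.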