diff --git a/speechcommands/README.md b/speechcommands/README.md
index 79bbb9b4..07002594 100644
--- a/speechcommands/README.md
+++ b/speechcommands/README.md
@@ -8,10 +8,6 @@ dataset.
 
 ## Pre-requisites
 
-Follow the [installation
-instructions](https://ml-explore.github.io/mlx-data/build/html/install.html)
-for MLX Data.
-
 Install the remaining python requirements:
 
 ```
diff --git a/speechcommands/requirements.txt b/speechcommands/requirements.txt
index 5ca13284..4e6a06dd 100644
--- a/speechcommands/requirements.txt
+++ b/speechcommands/requirements.txt
@@ -1 +1,2 @@
-mlx>=0.0.5
+mlx
+mlx-data
diff --git a/whisper/README.md b/whisper/README.md
index e785d9bb..847d5b8b 100644
--- a/whisper/README.md
+++ b/whisper/README.md
@@ -52,13 +52,13 @@ import whisper
 text = whisper.transcribe(speech_file)["text"]
 ```
 
-Choose the model by setting `hf_path_or_repo`. For example:
+Choose the model by setting `path_or_hf_repo`. For example:
 
 ```python
-result = whisper.transcribe(speech_file, hf_path_or_repo="models/large")
+result = whisper.transcribe(speech_file, path_or_hf_repo="models/large")
 ```
 
-This will load the model contained in `models/large`. The `hf_path_or_repo`
+This will load the model contained in `models/large`. The `path_or_hf_repo`
 can also point to an MLX-style Whisper model on the Hugging Face Hub. In this
 case, the model will be automatically downloaded.
 
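
The README text in the last hunk notes that `path_or_hf_repo` can also name an MLX-style Whisper model on the Hugging Face Hub, though the in-diff example only shows a local path. A minimal sketch of the Hub variant, assuming a placeholder repo name (`mlx-community/whisper-tiny` and `speech.wav` are illustrative assumptions, not taken from this diff):

```python
import whisper

# path_or_hf_repo may point at a Hugging Face repo containing an MLX-style
# Whisper model; the weights are downloaded automatically on first use.
# The repo name and audio file below are placeholder assumptions.
result = whisper.transcribe(
    "speech.wav",
    path_or_hf_repo="mlx-community/whisper-tiny",
)
print(result["text"])
```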