diff --git a/whisper/README.md b/whisper/README.md
index 847d5b8b..4d41a69f 100644
--- a/whisper/README.md
+++ b/whisper/README.md
@@ -39,8 +39,19 @@ To generate a 4-bit quantized model, use `-q`. For a full list of options:
 python convert.py --help
 ```
 
-By default, the conversion script will make the directory `mlx_models/tiny`
-and save the converted `weights.npz` and `config.json` there.
+By default, the conversion script will make the directory `mlx_models`
+and save the converted `weights.npz` and `config.json` there.
+
+Each time it is run, `convert.py` will overwrite any model in the provided
+path. To save different models, make sure to set `--mlx-path` to a unique
+directory for each converted model. For example:
+
+```bash
+model="tiny"
+python convert.py --torch-name-or-path ${model} --mlx-path mlx_models/${model}_fp16
+python convert.py --torch-name-or-path ${model} --dtype float32 --mlx-path mlx_models/${model}_fp32
+python convert.py --torch-name-or-path ${model} -q --q_bits 4 --mlx-path mlx_models/${model}_quantized_4bits
+```
 
 ### Run
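
Each converted directory then holds the `weights.npz` and `config.json` pair described in the new README text. A minimal sanity check, assuming the quantized conversion from the example above completed and the paths match:

```bash
# Hypothetical check: confirm the converted model files landed in the
# directory passed via --mlx-path (directory name follows the example above).
ls mlx_models/tiny_quantized_4bits
# expected output: config.json  weights.npz
```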