Update README.md (#530)

* Update README.md

The documented default location where convert.py saves files was wrong. It was also inconsistent with how the later script test.py tries to use them (and the naming convention it assumes).

I don't actually see a quick way to automate this since, as written, the target directory is set directly by an argument. It would probably be best to rewrite it so that the argument acts as an override, with the default behaviour being to construct a file path from the set and unset arguments. This is also complicated because the naming convention assumes certain "defaults" as well.
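A minimal sketch of the proposed behaviour, assuming hypothetical argument names (`--torch-name-or-path`, `--dtype`, `-q`, `--mlx-path`) based on the flags shown in the README rather than the actual convert.py code:

```python
# Hypothetical sketch: use --mlx-path as an override, otherwise construct a
# default output directory from the other conversion options. Argument names
# are assumptions taken from the README examples, not from convert.py itself.
import argparse


def default_mlx_path(args: argparse.Namespace) -> str:
    """Build an output directory name that encodes the conversion options."""
    if args.quantize:
        suffix = "quantized"
    else:
        suffix = {"float16": "fp16", "float32": "fp32"}[args.dtype]
    return f"mlx_models/{args.torch_name_or_path}_{suffix}"


parser = argparse.ArgumentParser()
parser.add_argument("--torch-name-or-path", default="tiny")
parser.add_argument("--dtype", default="float16")
parser.add_argument("-q", "--quantize", action="store_true")
parser.add_argument("--mlx-path", default=None)
args = parser.parse_args([])  # example run with no CLI overrides

# --mlx-path wins when given; otherwise the path is derived from the options.
mlx_path = args.mlx_path or default_mlx_path(args)
print(mlx_path)  # mlx_models/tiny_fp16
```

With this shape, a bare `python convert.py` would still produce a predictable directory name that test.py could reconstruct from the same options.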

* Update README.md

Created an actual script that will run and do this correctly.

* Update README.md

Typo fix: `mlx-models` should have been `mlx_models`. This conforms with the convention used later in the mlx-examples/whisper code.

* Update README.md

Removed the larger script and changed it back to the simpler script as before.

* nits in readme

---------

Co-authored-by: Awni Hannun <awni@apple.com>
Committed by amcox886, 2024-03-07 14:23:43 +00:00, via GitHub
commit ef32379bc6 (parent 8a178f8716)


@@ -39,8 +39,19 @@ To generate a 4-bit quantized model, use `-q`. For a full list of options:
python convert.py --help
```
By default, the conversion script will make the directory `mlx_models`
and save the converted `weights.npz` and `config.json` there.
Each time it is run, `convert.py` will overwrite any model in the provided
path. To save different models, make sure to set `--mlx-path` to a unique
directory for each converted model. For example:
```bash
model="tiny"
python convert.py --torch-name-or-path ${model} --mlx-path mlx_models/${model}_fp16
python convert.py --torch-name-or-path ${model} --dtype float32 --mlx-path mlx_models/${model}_fp32
python convert.py --torch-name-or-path ${model} -q --q_bits 4 --mlx-path mlx_models/${model}_quantized_4bits
```
### Run