Update README for mlx-examples repo

Joe Barrow 2023-12-08 10:20:50 -05:00 committed by GitHub
parent 4e5b8ceafe
commit e05ee57bab


@@ -1,20 +1,10 @@
 # mlxbert
-A BERT implementation in Apple's new MLX framework.
-## Dependency Installation
-```sh
-poetry install --no-root
-```
-If you don't want to do that, simply make sure you have the following dependencies installed:
-- `mlx`
-- `transformers`
-- `numpy`
-## Download and Convert
+An implementation of BERT [(Devlin, et al., 2019)](https://aclanthology.org/N19-1423/) within mlx.
+## Downloading and Converting Weights
+The `convert.py` script relies on `transformers` to download the weights, and exports them as a single `.npz` file.
 ```
 python convert.py \
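The export step the new README describes (download weights via `transformers`, save them all into one `.npz` archive) can be sketched minimally. The `export_npz` helper and its signature are illustrative assumptions, not the repo's actual `convert.py` API:

```python
import numpy as np

def export_npz(state_dict, out_path):
    """Save a name -> array mapping as a single .npz archive, as the
    transformers -> npz export presumably does (names are assumptions)."""
    arrays = {name: np.asarray(tensor) for name, tensor in state_dict.items()}
    np.savez(out_path, **arrays)
    return sorted(arrays)  # parameter names written, for inspection

# In the real script the mapping would come from something like
#   transformers.BertModel.from_pretrained(...).state_dict()
# with each torch tensor converted to numpy before saving.
```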
@@ -24,7 +14,7 @@ python convert.py \
 ## Run the Model
-Right now, this is just a test to show that the outputs from mlx and huggingface don't change all that much.
+In order to run the model and have it forward inference on a batch of examples:
 ```sh
 python model.py \
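Forwarding inference on a batch of examples implies inputs padded into a `[batch, seq]` shape with an attention mask. A minimal, hypothetical sketch of that padding step (the repo's `model.py` presumably handles this itself):

```python
import numpy as np

def batch_encode(token_ids_list, pad_id=0):
    """Pad variable-length token-id sequences into a [batch, seq] id array
    plus a 0/1 attention mask, the usual input shape for a BERT forward pass.
    Illustrative only; pad_id=0 is an assumption."""
    max_len = max(len(ids) for ids in token_ids_list)
    batch = np.full((len(token_ids_list), max_len), pad_id, dtype=np.int64)
    mask = np.zeros_like(batch)
    for i, ids in enumerate(token_ids_list):
        batch[i, : len(ids)] = ids
        mask[i, : len(ids)] = 1
    return batch, mask
```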
@@ -60,9 +50,3 @@ Which will show:
 [ 0.946011   0.13582966 -0.29456618 ...  0.00868565 -0.90271175
  -0.27854213]]]
 ```
-## To do's
-- [x] fix position encodings
-- [x] bert large and cased variants loaded
-- [x] example usage
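The old README's note that the outputs from mlx and huggingface "don't change all that much" suggests a numerical tolerance check between the two implementations' embeddings. A hedged sketch (the tolerance value is an assumption; small floating-point drift between frameworks is expected):

```python
import numpy as np

def outputs_close(mlx_out, hf_out, atol=1e-4):
    """Return True if the two implementations' output arrays agree
    elementwise within tolerance. atol is an illustrative choice."""
    return np.allclose(np.asarray(mlx_out), np.asarray(hf_out), atol=atol)
```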