# Mistral

An example of generating text with Mistral using MLX.

Mistral 7B is one of the top large language models in its size class. It is also fully open source with a permissive license.[^1]

### Setup

Install the dependencies:

```
pip install -r requirements.txt
```

Next, download the model and tokenizer:

```
curl -O https://files.mistral-7b-v0-1.mistral.ai/mistral-7B-v0.1.tar
tar -xf mistral-7B-v0.1.tar
```

Then, convert the weights with:

```
python convert.py
```
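
For a sense of what this step does, here is a minimal sketch of a conversion like `convert.py`'s: it loads the PyTorch checkpoint and re-saves the tensors as a NumPy `.npz` archive, a format MLX can load directly. The file names (`consolidated.00.pth`, `weights.npz`) are assumptions based on the standard Mistral release layout, not necessarily what `convert.py` uses.

```python
# Sketch of a PyTorch -> MLX weight conversion (file names assumed).
import numpy as np
import torch

# Load the released PyTorch checkpoint onto the CPU.
state = torch.load("mistral-7B-v0.1/consolidated.00.pth", map_location="cpu")

# Re-save every tensor as float16 in a single .npz archive keyed by
# parameter name, which MLX can read without PyTorch installed.
np.savez(
    "mistral-7B-v0.1/weights.npz",
    **{name: param.to(torch.float16).numpy() for name, param in state.items()},
)
```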

### Run

Once you've converted the weights to MLX format, you can generate text with the Mistral model:

```
python mistral.py --prompt "It is a truth universally acknowledged,"  --temp 0
```

Run `python mistral.py --help` for more details.
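
For a sense of what `--temp` controls, here is a minimal sketch of the temperature-based sampling step a generation loop like `mistral.py`'s performs. The `sample` function is illustrative, not necessarily the script's exact code:

```python
import mlx.core as mx

def sample(logits: mx.array, temp: float) -> mx.array:
    """Pick the next token id from a vector of logits."""
    if temp == 0:
        # --temp 0 means greedy decoding: always take the most likely token.
        return mx.argmax(logits, axis=-1)
    # Otherwise scale the logits by 1/temp and sample from the resulting
    # distribution; higher temperatures give more varied output.
    return mx.random.categorical(logits * (1 / temp))
```

With `--temp 0`, as in the example above, generation is deterministic.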


[^1]: Refer to the blog post and github repository for more details.