Mixtral 8x7B

Run the Mixtral[^mixtral] 8x7B mixture-of-experts (MoE) model in MLX on Apple silicon.

This example also supports the instruction fine-tuned Mixtral model.[^instruct]

Note: at 16-bit precision this model needs a machine with substantial RAM (~100 GB) to run.

Setup

Install Git Large File Storage. For example, with Homebrew:

brew install git-lfs

Download the models from Hugging Face:

For the base model use:

export MIXTRAL_MODEL=Mixtral-8x7B-v0.1

For the instruction fine-tuned model use:

export MIXTRAL_MODEL=Mixtral-8x7B-Instruct-v0.1

Then run:

GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/mistralai/${MIXTRAL_MODEL}/
cd $MIXTRAL_MODEL/ && \
  git lfs pull --include "consolidated.*.pt" && \
  git lfs pull --include "tokenizer.model"

Now, from mlx-examples/llms/mixtral, convert and save the weights as NumPy arrays so MLX can read them:

python convert.py --model-path $MIXTRAL_MODEL/

The conversion script will save the converted weights in the same location.
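To give a sense of what this step does, here is a minimal, illustrative sketch (an assumption about the approach, not the actual convert.py): each PyTorch checkpoint shard is loaded and its tensors are re-saved as NumPy arrays that MLX can read directly.

# Illustrative sketch only -- the real logic lives in convert.py.
# Assumes the checkpoint shards are named consolidated.*.pt, as pulled above.
import glob
import numpy as np
import torch

model_path = "Mixtral-8x7B-v0.1"  # i.e. $MIXTRAL_MODEL
for i, shard in enumerate(sorted(glob.glob(f"{model_path}/consolidated.*.pt"))):
    state = torch.load(shard, map_location="cpu")
    # Keep 16-bit precision and store each shard as an .npz archive.
    arrays = {k: v.to(torch.float16).numpy() for k, v in state.items()}
    np.savez(f"{model_path}/weights.{i}.npz", **arrays)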

Generate

As easy as:

python mixtral.py --model-path $MIXTRAL_MODEL/

For more options including how to prompt the model, run:

python mixtral.py --help
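Before generating, you can sanity-check that the converted weights are readable by MLX. This is a quick illustrative check, assuming the weights were saved as .npz files as sketched above:

# Illustrative sanity check: confirm MLX can load the converted weights.
import glob
import mlx.core as mx

weights = {}
for shard in glob.glob("Mixtral-8x7B-v0.1/*.npz"):
    weights.update(mx.load(shard))
print(f"Loaded {len(weights)} weight arrays")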

For the Instruction model, make sure to follow the prompt format:

[INST] Instruction prompt [/INST]
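If you build prompts programmatically, a tiny helper like the following keeps the format consistent (illustrative only, not part of mixtral.py):

# Hypothetical helper: wrap a user prompt in the instruction format
# expected by the fine-tuned model.
def format_instruction(prompt: str) -> str:
    return f"[INST] {prompt} [/INST]"

print(format_instruction("Write a function to sort a list in Python."))
# [INST] Write a function to sort a list in Python. [/INST]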

[^mixtral]: Refer to Mistral's blog post and the Hugging Face blog post for more details.