Llama

An example of generating text with Llama (1 or 2) using MLX.

Llama is a set of open-source language models from Meta AI Research [1][2], ranging from 7B to 70B parameters. This example also supports Meta's Llama Chat and Code Llama models, as well as the 1.1B TinyLlama models from SUTD [3].

Setup

Install the dependencies:

pip install -r requirements.txt

Next, download and convert the model. If you do not have access to the model weights you will need to request access from Meta:

[!TIP] Alternatively, you can download a few converted checkpoints from the MLX Community organization on Hugging Face and skip the conversion step.

You can download the TinyLlama models directly from Hugging Face.

Convert the weights with:

python convert.py --torch-path <path_to_torch_model>

To generate a 4-bit quantized model use the -q flag:

python convert.py --torch-path <path_to_torch_model> -q
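The -q flag applies group-wise affine quantization: each small group of consecutive weights shares one scale and one offset, and the weights are stored as 4-bit integers. A minimal NumPy sketch of the idea, assuming a group size of 64 and 4 bits (the function names here are illustrative, not the converter's API):

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Sketch of group-wise affine quantization: each group of `group_size`
    consecutive weights shares one scale and one minimum (offset)."""
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    scale = (groups.max(axis=1, keepdims=True) - w_min) / 15.0  # 16 levels fit in 4 bits
    q = np.round((groups - w_min) / scale).astype(np.uint8)     # integer codes in [0, 15]
    return q, scale, w_min

def dequantize(q, scale, w_min):
    # Reconstruct approximate float weights from codes, scales, and offsets
    return q.astype(np.float32) * scale + w_min

w = np.random.randn(256).astype(np.float32)
q, scale, w_min = quantize_4bit(w)
w_hat = dequantize(q, scale, w_min).reshape(-1)
print(float(np.abs(w - w_hat).max()))  # small reconstruction error
```

The real converter additionally packs the 4-bit codes into larger integers to save memory; the sketch keeps them in uint8 for clarity.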

For TinyLlama, use:

python convert.py --torch-path <path_to_torch_model> --model-name tiny_llama

By default, the conversion script will create the directory mlx_model and save the converted weights.npz, tokenizer.model, and config.json files there.
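The weights.npz file is a standard NumPy archive, so you can inspect a converted checkpoint without loading the model. A short sketch, using an in-memory stand-in archive with made-up key names (the real names come from the Llama checkpoint):

```python
import io
import numpy as np

# Build a tiny stand-in archive, just to show the .npz format; the key names
# and shapes here are hypothetical:
buf = io.BytesIO()
np.savez(buf,
         **{"tok_embeddings.weight": np.zeros((32, 8), dtype=np.float16),
            "layers.0.attention.wq.weight": np.zeros((8, 8), dtype=np.float16)})
buf.seek(0)

# The same loop works on mlx_model/weights.npz:
with np.load(buf) as weights:
    for name in weights.files:
        print(name, weights[name].shape, weights[name].dtype)
```

Listing parameter names, shapes, and dtypes this way is a quick sanity check that a conversion (or quantization) produced what you expected.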

Run

Once you've converted the weights to MLX format, you can interact with the Llama model:

python llama.py --prompt "hello"

Run python llama.py --help for more details.
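Under the hood, generation is a loop that repeatedly feeds the model its own output. A toy sketch of a greedy decoding loop (a simplification: `step_fn`, the token ids, and the tiny fake model below are stand-ins, not the real llama.py API, which also supports temperature sampling):

```python
import numpy as np

def generate(step_fn, prompt_tokens, max_tokens, eos_id):
    """Greedy decoding sketch: extend the context one token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        logits = step_fn(tokens)          # model forward pass over the context
        next_id = int(np.argmax(logits))  # greedy: pick the most likely token
        if next_id == eos_id:             # stop at end-of-sequence
            break
        tokens.append(next_id)
    return tokens

# Tiny fake "model" over a 5-token vocabulary: always predicts (last + 1) mod 5
fake = lambda toks: np.eye(5)[(toks[-1] + 1) % 5]
print(generate(fake, [0], 3, eos_id=99))  # -> [0, 1, 2, 3]
```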


  1. For Llama v1, refer to the arXiv paper and blog post for more details.

  2. For Llama v2, refer to the blog post.

  3. For TinyLlama, refer to the GitHub repository.