Llama

An example of generating text with Llama (1 or 2) using MLX.

Llama is a set of open-source language models from Meta AI Research [1][2] ranging from 7B to 70B parameters. This example also supports Meta's Llama Chat and Code Llama models, as well as the 1.1B TinyLlama models from SUTD [3].

Setup

Install the dependencies:

pip install -r requirements.txt

Next, download and convert the model. If you do not have access to the model weights, you will need to request access from Meta.

[!TIP] Alternatively, you can download a few pre-converted checkpoints from the MLX Community organization on Hugging Face and skip the conversion step.

You can download the TinyLlama models directly from Hugging Face.
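For example, a pre-converted checkpoint can be fetched with the huggingface_hub Python package. A minimal sketch, assuming huggingface_hub is installed; the repo id below is illustrative, so check the MLX Community organization for the models actually available:

from huggingface_hub import snapshot_download

# Download a pre-converted checkpoint (illustrative repo id)
path = snapshot_download(repo_id="mlx-community/Llama-2-7b-mlx")
print("Checkpoint downloaded to", path)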

Convert the weights with:

python convert.py --torch-path <path_to_torch_model>

To generate a 4-bit quantized model, use the -q flag:

python convert.py --torch-path <path_to_torch_model> -q
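Under the hood, 4-bit quantization replaces each group of weights with small integer codes plus a per-group scale and offset. The following is a minimal sketch of the general group-wise affine scheme, for illustration only; it is not convert.py's actual implementation:

import mlx.core as mx

def quantize_group(w, bits=4):
    # Map a group of weights to integer codes in [0, 2**bits - 1]
    lo, hi = mx.min(w), mx.max(w)
    scale = (hi - lo) / (2**bits - 1)
    q = mx.round((w - lo) / scale)
    return q, scale, lo

def dequantize_group(q, scale, lo):
    # Approximately reconstruct the original weights
    return q * scale + lo

w = mx.random.normal((64,))            # one quantization group
q, scale, lo = quantize_group(w)
w_hat = dequantize_group(q, scale, lo)
print(mx.max(mx.abs(w - w_hat)))       # small reconstruction error

Smaller group sizes give a closer fit at the cost of storing more scales and offsets.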

For TinyLlama, use:

python convert.py --torch-path <path_to_torch_model> --model-name tiny_llama

By default, the conversion script will make the directory mlx_model and save the converted weights.npz, tokenizer.model, and config.json there.
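You can sanity-check the converted artifacts from Python. A small sketch, using the default file names described above:

import json
import mlx.core as mx
from sentencepiece import SentencePieceProcessor

weights = mx.load("mlx_model/weights.npz")   # dict of parameter arrays
print(len(weights), "tensors")

with open("mlx_model/config.json") as f:
    print(json.load(f))                      # model hyperparameters

tokenizer = SentencePieceProcessor(model_file="mlx_model/tokenizer.model")
print(tokenizer.encode("hello"))             # token ids for a prompt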

Run

Once you've converted the weights to MLX format, you can interact with the Llama model:

python llama.py --prompt "hello"

Run python llama.py --help for more details.
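Conceptually, generation is an autoregressive loop: the model produces logits over the vocabulary, a next token is selected, and it is appended to the context. A minimal greedy-decoding sketch; model here is a stand-in callable, not the example's actual API:

import mlx.core as mx

def greedy_generate(model, prompt_tokens, max_tokens=32):
    # model is assumed to map a (1, seq_len) token array to
    # (1, seq_len, vocab_size) logits
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        logits = model(mx.array(tokens)[None])
        next_token = int(mx.argmax(logits[0, -1]))
        tokens.append(next_token)
    return tokens

A real implementation typically also samples with a temperature and caches attention state so each step only processes the newest token.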


  1. For Llama v1, refer to the arXiv paper and blog post for more details.

  2. For Llama v2, refer to the blog post.

  3. For TinyLlama, refer to the GitHub repository.