Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
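Fine-tuning is driven from the command line via the `lora.py` script in this directory. The invocation below is a hedged sketch: the model id and data path are placeholder assumptions, and LORA.md is the authoritative reference for the available flags.

```shell
# Sketch of a LoRA fine-tuning run; substitute your own model and data.
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-v0.1-hf-4bit-mlx \
    --train \
    --data ./data
```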