Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models directly from the Hugging Face Hub.
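
As a minimal sketch of programmatic use (assuming the `mlx_lm` package exposes the `load` and `generate` helpers; the model repo ID below is illustrative and is downloaded from the Hugging Face Hub on first use):

```python
# Minimal sketch: load a model from the Hugging Face Hub and generate text.
# The repo ID is illustrative; any compatible model on the Hub should work.
from mlx_lm import load, generate

model, tokenizer = load("mistralai/Mistral-7B-Instruct-v0.2")

response = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    max_tokens=100,   # cap the number of generated tokens
    verbose=True,     # stream tokens and print generation stats
)
print(response)
```

The same can typically be done from the command line with `python -m mlx_lm.generate --model <hf-repo-or-path> --prompt "..."`; see the parent README for the full set of options.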

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation in LORA.md; a brief sketch of a training run follows below.
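
A hedged sketch of launching a LoRA fine-tuning run through the `mlx_lm.lora` entry point (the model repo ID, data path, and iteration count are placeholders; LORA.md is the authoritative reference for the available flags):

```shell
# Sketch of a LoRA fine-tuning run; model ID and data path are placeholders.
python -m mlx_lm.lora \
    --model mistralai/Mistral-7B-Instruct-v0.2 \
    --train \
    --data ./data \
    --iters 600
```

QLoRA typically amounts to the same command pointed at a quantized model.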