mlx-examples/llms/mlx_lm
examples/
models/
tuner/
__init__.py
convert.py
fuse.py
generate.py
LORA.md
lora.py
MERGE.md
merge.py
py.typed
README.md
requirements.txt
SERVER.md
server.py
UPLOAD.md
utils.py
version.py

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
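
As a minimal sketch, text can be generated with the package's Python API; the Hugging Face repo name below is only an illustrative example, and any compatible model should work:

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub
# (or load them from the local cache). The repo name is an
# illustrative example.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Generate a completion for the prompt; verbose=True streams
# the generated text to stdout as it is produced.
response = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    max_tokens=100,
    verbose=True,
)
```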

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
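
As a rough sketch, a LoRA fine-tuning run can be launched through the package's command-line entry point; the model name and data path here are placeholders, and the full set of options is described in the LoRA documentation:

```shell
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    --train \
    --data ./data \
    --iters 600
```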