Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
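
As a minimal sketch, generation from Python uses the package's `load` and `generate` helpers; the model repo below is just an illustrative choice, and any compatible Hub model should work:

```python
from mlx_lm import load, generate

# Download (or load from the local cache) the model weights and tokenizer
# from the Hugging Face Hub. The repo name is a placeholder example.
model, tokenizer = load("mistralai/Mistral-7B-Instruct-v0.1")

# Generate a completion for the prompt; verbose=True streams the output.
response = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    verbose=True,
)
```

The same can be done from the command line with `python -m mlx_lm.generate --model <model> --prompt <prompt>`.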

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation (LORA.md).
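
As a sketch, fine-tuning is driven from the command line; the model name and data path below are placeholders, and the exact flags are documented in LORA.md:

```shell
# Fine-tune with LoRA on a local dataset. The data directory is expected
# to contain train.jsonl and valid.jsonl files (see LORA.md for formats).
python -m mlx_lm.lora \
    --model mistralai/Mistral-7B-v0.1 \
    --train \
    --data ./data \
    --iters 600
```

Pointing `--model` at a quantized model runs QLoRA instead, and the options can also be supplied via a YAML configuration file as described in the LoRA documentation.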