mlx-examples/llms/mlx_lm

Name               Last change                                                                   Date
examples/          Some improvements to LoRA (#528)                                              2024-03-12
models/            Make attention faster for some models (#574)                                  2024-03-14
tuner/             LoRA: report last train info (#595)                                           2024-03-19
__init__.py        Fix import warning (#479)                                                     2024-02-27
convert.py         add dequantize option to mlx_lm/convert.py (#547)                             2024-03-19
fuse.py            feat: add update_config functionality (#531)                                  2024-03-14
generate.py        chore(mlx-lm): add adapter support in generate.py (#494)                      2024-02-28
LORA.md            Support for OpenAI’s fine-tuning dataset format (#548)                        2024-03-19
lora.py            chore(mlx-lm): fix print_trainable_parameters for quant models (#581)         2024-03-20
MERGE.md           Support for slerp merging models (#455)                                       2024-02-19
merge.py           feat: add update_config functionality (#531)                                  2024-03-14
py.typed           Add py.typed to support PEP-561 (type-hinting) (#389)                         2024-01-30
README.md          feat: move lora into mlx-lm (#337)                                            2024-01-23
requirements.txt   Support for OpenAI’s fine-tuning dataset format (#548)                        2024-03-19
SERVER.md          Prevent llms/mlx_lm from serving the local directory as a webserver (#498)    2024-02-27
server.py          Set finish_reason in response (#592)                                          2024-03-19
UPLOAD.md          Mlx llm package (#301)                                                        2024-01-12
utils.py           add dequantize option to mlx_lm/convert.py (#547)                             2024-03-19
version.py         version (#570)                                                                2024-03-13

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
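
As a quick sketch of the package's Python API (a minimal example, assuming the load and generate helpers that mlx_lm exports; the model repo below is only an illustration):

    from mlx_lm import load, generate

    # Pull the model and tokenizer from the Hugging Face Hub
    # (reused from the local cache on subsequent runs).
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

    # Generate a completion; verbose=True also prints the text as it is produced.
    response = generate(model, tokenizer, prompt="Write a haiku about the ocean.", verbose=True)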

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation in LORA.md.
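
As a sketch of what a training run looks like (the flags below follow the lora.py script at this snapshot, and the model name and data directory are placeholders; run python -m mlx_lm.lora --help for the authoritative list):

    # Train LoRA adapters on the train/valid splits found in ./data.
    python -m mlx_lm.lora \
        --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
        --train \
        --data ./data \
        --iters 600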