mlx-examples/llms/mlx_lm
| Name | Last commit | Date |
| --- | --- | --- |
| examples | Add dropout parameter to lora configuration (#599) | 2024-03-20 08:44:40 -07:00 |
| models | Make attention faster for a some models (#574) | 2024-03-14 21:35:54 -07:00 |
| tuner | Add dropout parameter to lora configuration (#599) | 2024-03-20 08:44:40 -07:00 |
| __init__.py | Fix import warning (#479) | 2024-02-27 08:47:56 -08:00 |
| convert.py | add dequantize option to mlx_lm/convert.py (#547) | 2024-03-19 19:50:08 -07:00 |
| fuse.py | feat(mlx-lm): export the GGUF (fp16) format model weights from fuse.py (#555) | 2024-03-21 10:34:11 -07:00 |
| generate.py | chore(mlx-lm): enable to apply default chat template (#577) | 2024-03-20 21:39:39 -07:00 |
| gguf.py | feat(mlx-lm): export the GGUF (fp16) format model weights from fuse.py (#555) | 2024-03-21 10:34:11 -07:00 |
| LORA.md | feat(mlx-lm): export the GGUF (fp16) format model weights from fuse.py (#555) | 2024-03-21 10:34:11 -07:00 |
| lora.py | Add dropout parameter to lora configuration (#599) | 2024-03-20 08:44:40 -07:00 |
| MERGE.md | Support for slerp merging models (#455) | 2024-02-19 20:37:15 -08:00 |
| merge.py | feat: add update_config functionality (#531) | 2024-03-14 06:36:05 -07:00 |
| py.typed | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| README.md | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| requirements.txt | Support for OpenAI’s fine-tuning dataset format (#548) | 2024-03-19 16:45:46 -07:00 |
| SERVER.md | Prevent llms/mlx_lm from serving the local directory as a webserver (#498) | 2024-02-27 19:40:42 -08:00 |
| server.py | Set finish_reason in response (#592) | 2024-03-19 20:21:26 -07:00 |
| UPLOAD.md | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| utils.py | add dequantize option to mlx_lm/convert.py (#547) | 2024-03-19 19:50:08 -07:00 |
| version.py | feat(mlx-lm): export the GGUF (fp16) format model weights from fuse.py (#555) | 2024-03-21 10:34:11 -07:00 |

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
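
As a minimal sketch of what this looks like from Python, assuming the package's `load` and `generate` helpers; the specific model repo id used below is an illustrative assumption, not a recommendation:

```python
# Minimal sketch: pull (or reuse a cached copy of) a model from the
# Hugging Face Hub, then generate text with it.
from mlx_lm import load, generate

# Assumed example repo id; substitute any MLX-compatible model.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

text = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    max_tokens=100,
    verbose=True,  # print the generated text and generation stats as it runs
)
print(text)
```

The same functionality is also exposed as a command-line entry point, e.g. `python -m mlx_lm.generate --model <repo_or_path> --prompt "..."`.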

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation (LORA.md).
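
A rough command-line sketch of that workflow, assuming the `mlx_lm.lora` and `mlx_lm.fuse` entry points described in LORA.md; the model id, data path, and the exact name of the GGUF export flag added in #555 are assumptions, so check LORA.md for the precise options:

```shell
# Fine-tune with LoRA: --train plus a directory containing the dataset files.
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    --train \
    --data ./data \
    --iters 600

# Fuse the trained adapters back into the base model. Per #555, fuse.py can
# also export fp16 GGUF weights (flag name assumed here).
python -m mlx_lm.fuse \
    --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    --export-gguf
```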