mlx-examples/llms/mlx_lm
examples/
models/
tuner/
__init__.py
convert.py
fuse.py
generate.py
LORA.md
lora.py
MERGE.md
merge.py
py.typed
README.md
requirements.txt
SERVER.md
server.py
UPLOAD.md

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
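As a minimal sketch (assuming the package is installed, e.g. with `pip install mlx-lm`; the model repo below is only an example, and any supported model works), generation from Python looks roughly like this:

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub
# (the repo name here is an example, not a requirement).
model, tokenizer = load("mistralai/Mistral-7B-Instruct-v0.2")

# Generate a completion for the given prompt.
text = generate(model, tokenizer, prompt="Write a haiku about the ocean.")
print(text)
```

The same functionality is also exposed as a command-line entry point, e.g. `python -m mlx_lm.generate --model <model> --prompt "<prompt>"`.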

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation (LORA.md).
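As a rough sketch of a training invocation (the model repo, data path, and iteration count below are illustrative; LORA.md is the authoritative reference for the available flags), a LoRA run can be started from the command line:

```shell
# Fine-tune with LoRA on a local dataset directory containing
# train/valid JSONL files; using a quantized model makes this QLoRA.
python -m mlx_lm.lora \
    --model mistralai/Mistral-7B-Instruct-v0.2 \
    --train \
    --data ./data \
    --iters 600
```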