# mlx-examples/llms/mlx_lm

Latest commit: Always resume downloads (#674) by da-z (5a4cad34ef), co-authored by Awni Hannun <awni@apple.com>, 2024-04-11
| Name | Last commit | Date |
| --- | --- | --- |
| examples/ | Configurable LR schedulers (#604) | 2024-03-29 |
| models/ | Stable lm 2 (#666) | 2024-04-08 |
| tuner/ | Save lora config (#636) | 2024-04-02 |
| __init__.py | Fix import warning (#479) | 2024-02-27 |
| convert.py | add dequantize option to mlx_lm/convert.py (#547) | 2024-03-19 |
| fuse.py | Save lora config (#636) | 2024-04-02 |
| generate.py | Save lora config (#636) | 2024-04-02 |
| gguf.py | fix(mlx-lm): type hints in gguf.py (#621) | 2024-03-26 |
| LORA.md | Save lora config (#636) | 2024-04-02 |
| lora.py | Save lora config (#636) | 2024-04-02 |
| MERGE.md | Support for slerp merging models (#455) | 2024-02-19 |
| merge.py | Save lora config (#636) | 2024-04-02 |
| py.typed | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 |
| README.md | feat: move lora into mlx-lm (#337) | 2024-01-23 |
| requirements.txt | Switch to fast RMS/LN Norm (#603) | 2024-03-23 |
| sample_utils.py | fix(mlx-lm): sorted probs in top_p implementation. (#610) | 2024-03-25 |
| SERVER.md | Prevent llms/mlx_lm from serving the local directory as a webserver (#498) | 2024-02-27 |
| server.py | Save lora config (#636) | 2024-04-02 |
| tokenizer_utils.py | Add streaming detokenizers (#651) | 2024-04-08 |
| UPLOAD.md | Mlx llm package (#301) | 2024-01-12 |
| utils.py | Always resume downloads (#674) | 2024-04-11 |
| version.py | Stable lm 2 (#666) | 2024-04-08 |

## Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models directly from the Hugging Face Hub.
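As a minimal sketch of that flow, the package exposes `load` and `generate` helpers; the model repo name below is illustrative, and any compatible Hub repo or local path should work:

```python
# A minimal sketch of Hub-backed generation with mlx_lm; the model repo
# below is an illustrative example, not the only option.
from mlx_lm import load, generate

# load() fetches the model from the Hugging Face Hub on first use and
# caches it locally (interrupted downloads are resumed; see utils.py).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    max_tokens=100,
    verbose=True,  # stream tokens to stdout as they are generated
)
```

The same flow is also available from the command line via `python -m mlx_lm.generate --model <model> --prompt <prompt>`.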

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation in LORA.md, and the sketch below.
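As a rough sketch under stated assumptions (the model name, data path, and iteration count are illustrative, not prescriptive), training can be launched through the `mlx_lm.lora` command-line entry point; QLoRA amounts to running the same command against a quantized base model:

```python
# A rough sketch: launching LoRA fine-tuning through the mlx_lm.lora
# CLI. The model and paths are illustrative placeholders.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mistralai/Mistral-7B-v0.1",  # Hub repo or local path
        "--train",                 # run training (rather than evaluation)
        "--data", "path/to/data",  # directory with train.jsonl / valid.jsonl
        "--iters", "600",          # number of training iterations
    ],
    check=True,  # raise if the training process exits non-zero
)
```

After training, the resulting adapters can be merged back into the base model with the `mlx_lm.fuse` entry point (see fuse.py in the listing above).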