mlx-examples/llms/mlx_lm
Latest commit: 93c5cfd781 by Awni Hannun, "Add a speculative decoding generator (#1155)", 2025-01-10 15:27:08 -08:00
File | Last commit | Date
examples | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00
models | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00
tuner | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00
__init__.py | Fix detokenizer space match for quote (#1072) | 2024-10-27 15:06:07 -07:00
_version.py | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00
cache_prompt.py | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00
chat.py | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00
convert.py | override dtype with quant (#1062) | 2024-10-22 09:56:45 -07:00
evaluate.py | Add support for fewshot and apply chat template lm_eval functionality (#1180) | 2025-01-06 07:58:43 -08:00
fuse.py | Adding full finetuning (#903) | 2024-09-29 17:12:47 -07:00
generate.py | Add a speculative decoding generator (#1155) | 2025-01-10 15:27:08 -08:00
gguf.py | Fix export to gguf (#993) | 2024-09-20 13:33:45 -07:00
LORA.md | LoRA: update tools datasets docs (#1063) | 2024-10-22 12:19:11 -07:00
lora.py | fix(lora): config yaml & arg default merge bug (#1196) | 2025-01-09 11:33:54 -08:00
MANAGE.md | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00
manage.py | Improvements to mlx_lm.manage (#1178) | 2025-01-01 07:25:57 -08:00
MERGE.md | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00
merge.py | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00
py.typed | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00
README.md | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00
requirements.txt | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00
sample_utils.py | Fix no template prompt + top_k sampling (#1166) | 2024-12-18 18:46:50 -08:00
SERVER.md | Fix object property value in mlx_lm.server chat completions response to match OpenAI spec (#1119) | 2024-11-24 16:37:37 -08:00
server.py | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00
tokenizer_utils.py | Change the eos-token argument for mlx_lm.generate (#1176) | 2025-01-05 22:26:05 -08:00
UPLOAD.md | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00
utils.py | Add a speculative decoding generator (#1155) | 2025-01-10 15:27:08 -08:00

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.

For more information on this example, see the README in the parent directory.
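As a quick illustration, here is a minimal sketch of the Python API, assuming the mlx-lm package is installed; the model repository named below is only an example of an MLX-format model hosted on the Hub.

```python
# Minimal sketch: load a model from the Hugging Face Hub and generate text.
# The model repo is an example; any compatible MLX-format model works.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Write a haiku about the ocean."

# If the model ships a chat template, format the prompt with it.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# verbose=True streams the generated text to stdout as it is produced.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```

The same functionality is also available from the command line via the mlx_lm.generate script.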

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.