mlx-examples/llms/mlx_lm

Latest commit: Add plamo-2-1b model (#1283)
Shunta Saito (c37e26a1a3), 2025-02-24 19:24:43 -08:00
* Add pfnet/plamo-2-1b

* Fix cache.py to support non-top-level layers

* Use mlx's BaseModelArgs

* Fix model

* Use sanitize()

* Remove unnecessary changes

* Add plamo2.py

* Apply formatter

* Fix some part

* Allow an externally defined cache object

* Convert channel-first weights to channel-last for correct use of MLX's conv1d

* Remove unused code

* Pass all inputs on the first call of the model

* Fix import

* Include .jsonl files in downloads from the Hugging Face Hub

* Fix reference to layers

* Remove unnecessary code and add a test for plamo2

* Do not pass mask to prepare_inputs_for_generation

* Fix to use repeat instead of tile

* Add state property to PlamoCache

* Add __iter__ and __next__ methods to PlamoCache

* cleanup

* cleanup

* fix

---------

Co-authored-by: Awni Hannun <awni.hannun@gmail.com>

| Name | Last commit | Date |
| --- | --- | --- |
| examples | rm temp argument (#1267) | 2025-02-09 11:39:11 -08:00 |
| models | Add plamo-2-1b model (#1283) | 2025-02-24 19:24:43 -08:00 |
| tuner | Fix num layers in fine tune (#1294) | 2025-02-20 13:32:01 -08:00 |
| __init__.py | Fix detokenizer space match for quote (#1072) | 2024-10-27 15:06:07 -07:00 |
| _version.py | Fix logits processor bugs with spec dec (#1291) | 2025-02-20 15:55:55 -08:00 |
| cache_prompt.py | Fix prompt cache for models without chat template (#1250) | 2025-02-06 11:10:58 -08:00 |
| chat.py | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00 |
| convert.py | override dtype with quant (#1062) | 2024-10-22 09:56:45 -07:00 |
| evaluate.py | fix generation evaluations (#1277) | 2025-02-11 16:10:30 -08:00 |
| fuse.py | Adding full finetuning (#903) | 2024-09-29 17:12:47 -07:00 |
| generate.py | Add IBM granite model (#1265) | 2025-02-08 15:46:15 -08:00 |
| gguf.py | Fix export to gguf (#993) | 2024-09-20 13:33:45 -07:00 |
| LORA.md | Completion only fine-tuning of instruction models with collections of HF datasets (#1103) | 2025-02-09 20:12:34 -08:00 |
| lora.py | Fix num layers in fine tune (#1294) | 2025-02-20 13:32:01 -08:00 |
| MANAGE.md | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| manage.py | Improvements to mlx_lm.manage (#1178) | 2025-01-01 07:25:57 -08:00 |
| MERGE.md | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| merge.py | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| py.typed | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| README.md | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| requirements.txt | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00 |
| sample_utils.py | batched min p and fix spec gen sampling (#1222) | 2025-01-27 15:40:31 -08:00 |
| SERVER.md | Fix object property value in mlx_lm.server chat completions response to match OpenAI spec (#1119) | 2024-11-24 16:37:37 -08:00 |
| server.py | chore(mlx-lm): support text type content in messages (#1225) | 2025-01-27 17:13:50 -08:00 |
| tokenizer_utils.py | Completion only fine-tuning of instruction models with collections of HF datasets (#1103) | 2025-02-09 20:12:34 -08:00 |
| UPLOAD.md | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| utils.py | Add plamo-2-1b model (#1283) | 2025-02-24 19:24:43 -08:00 |

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
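
For instance, with the package installed, a minimal sketch of the Python API looks like the following (the model ID is just an example; any compatible model on the Hub should work):

```python
from mlx_lm import load, generate

# Download (and cache) the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Apply the model's chat template so instruction-tuned models see the
# prompt format they were trained on.
messages = [{"role": "user", "content": "Write a haiku about the ocean."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate up to 256 new tokens, streaming them to stdout as they are produced.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```

The first call downloads the model weights from the Hub and caches them locally; subsequent calls load from the cache.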

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation in LORA.md.
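
As a rough sketch, a LoRA fine-tuning run can be launched with the `mlx_lm.lora` command-line entry point (the model ID and data path below are placeholders; see LORA.md for the full set of options):

```
mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
    --train \
    --data ./data \
    --iters 600
```

Here `--data` points at a directory of `.jsonl` training data, typically containing `train.jsonl` and `valid.jsonl` files.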