
Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models directly from the Hugging Face Hub.
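A minimal command-line sketch of pulling a model from the Hub and generating text, assuming the package requirements are installed; the model name and flag values below are illustrative, not prescribed by this README:

```shell
# Download a model from the Hugging Face Hub (converted on the fly if needed)
# and generate a completion for the given prompt.
python -m mlx_lm.generate \
  --model mistralai/Mistral-7B-v0.1 \
  --prompt "hello" \
  --max-tokens 100
```

The first invocation downloads and caches the model weights, so later runs start much faster.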

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
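A hedged sketch of a LoRA fine-tuning run, assuming a local dataset directory in the expected JSONL format; the model name, data path, and iteration count are example values, and the full set of supported options is described in the LoRA documentation:

```shell
# Fine-tune a base model with LoRA adapters on a local dataset.
# All argument values here are illustrative placeholders.
python -m mlx_lm.lora \
  --model mistralai/Mistral-7B-v0.1 \
  --train \
  --data ./data \
  --iters 600
```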