mlx-examples/llms/mlx_lm/tuner
Latest commit f51e98fcf1 by Anchen: chore(mlx-lm): truncate the input sentence to max seq len in lora iterate_batches (#373)
* chore(mlx-lm): pass max seq len to evaluate in training loop

* chore: make sure the batch seq does not exceed max len

* chore: update comment

* chore: add warning before truncating input
Committed 2024-01-25 12:38:04 -08:00
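The truncation behavior this commit describes can be sketched as follows. This is a minimal illustration, not the actual mlx-lm code: `truncate_batch`, its signature, and the warning text are assumptions for demonstration only.

```python
import warnings

def truncate_batch(batch, max_seq_length):
    """Hypothetical helper sketching the commit's behavior: sequences
    longer than max_seq_length are truncated, with a warning emitted
    first (names here are illustrative, not the mlx-lm API)."""
    truncated = []
    for seq in batch:
        if len(seq) > max_seq_length:
            # Warn before truncating, as the commit bullets describe.
            warnings.warn(
                f"Sequence of length {len(seq)} exceeds max_seq_length "
                f"{max_seq_length} and will be truncated."
            )
            seq = seq[:max_seq_length]
        truncated.append(seq)
    return truncated

# Example: a 10-token sequence is cut to 8; a 3-token one is untouched.
batch = truncate_batch([[1] * 10, [2] * 3], max_seq_length=8)
```

In the real trainer this check lives inside the batch iterator, so every batch fed to the loss (and to evaluation, per the first bullet) respects the configured maximum sequence length.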
__init__.py    feat: move lora into mlx-lm (#337)                                                       2024-01-23 08:44:37 -08:00
lora.py        feat(mlx-lm): add lora hyperparameters in lora layer (#366)                              2024-01-24 08:11:25 -08:00
trainer.py     chore(mlx-lm): truncate the input sentence to max seq len in lora iterate_batches (#373) 2024-01-25 12:38:04 -08:00
utils.py       chore(mlx-lm): add load model with adapter and fix bug in sample (#360)                  2024-01-23 19:47:39 -08:00