mlx-examples/llms/mlx_lm/tuner

Latest commit: Lazy import + refactor Lora layer addition (#426)
Awni Hannun, d4666615bb, 2024-02-12 10:51:02 -08:00

* lazy model import in mlx_lm
* change lora loading
* fix olmo lora
* remove a bunch of unused stuff from plamo
* move phixtral to mlx-lm and out of llms/
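
The "lazy model import" bullet refers to resolving a model's architecture module by name only when that model is actually loaded, instead of importing every supported architecture up front. A minimal sketch of that pattern, with an assumed module path and helper name rather than the exact mlx_lm code:

```python
# Hedged sketch of a lazy-import pattern: the architecture module is
# resolved at load time from the model type string. The module path and
# helper name below are illustrative assumptions, not the mlx_lm API.
import importlib


def get_model_module(model_type: str):
    """Import mlx_lm.models.<model_type> only when it is needed."""
    try:
        return importlib.import_module(f"mlx_lm.models.{model_type}")
    except ImportError:
        raise ValueError(f"Model type {model_type} is not supported.")
```

This keeps the top-level import cheap and avoids pulling in code for model families the user never loads.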
File         Last commit                                             Date
__init__.py  feat: move lora into mlx-lm (#337)                      2024-01-23 08:44:37 -08:00
lora.py      feat(mlx-lm): add de-quant for fuse.py (#365)           2024-01-25 18:59:32 -08:00
trainer.py   Add checkpoints directory for adapter weights (#431)    2024-02-12 10:50:05 -08:00
utils.py     Lazy import + refactor Lora layer addition (#426)       2024-02-12 10:51:02 -08:00
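
For context on the listing above: lora.py holds the LoRA layer that the refactored loading code swaps in for selected linear projections, and trainer.py drives adapter training. A minimal sketch of the usual LoRA wrapper pattern in MLX follows, with illustrative names and defaults rather than the exact tuner API:

```python
# Hedged sketch of a LoRA wrapper for an MLX linear layer. Names,
# defaults, and initialization are illustrative assumptions, not the
# exact contents of tuner/lora.py.
import math

import mlx.core as mx
import mlx.nn as nn


class LoRALinear(nn.Module):
    """Adds a trainable low-rank update to an existing linear layer."""

    def __init__(self, linear: nn.Linear, rank: int = 8, scale: float = 2.0):
        super().__init__()
        self.linear = linear  # base layer; assumed frozen by the trainer
        out_dims, in_dims = linear.weight.shape
        self.scale = scale
        # Low-rank factors: lora_b starts at zero so the wrapped layer
        # initially computes exactly the same output as the base layer.
        bound = 1.0 / math.sqrt(in_dims)
        self.lora_a = mx.random.uniform(low=-bound, high=bound, shape=(in_dims, rank))
        self.lora_b = mx.zeros((rank, out_dims))

    def __call__(self, x):
        y = self.linear(x)
        z = (x @ self.lora_a) @ self.lora_b
        return y + self.scale * z
```

During tuning only lora_a and lora_b receive gradients while the wrapped base weights stay fixed, which is why initializing lora_b to zero lets training start from the unmodified model.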