mlx-examples/llms/mlx_lm/tuner
Chime Ogbuji e56d9015ef
LoRA on all linear transformer block layers (#546)
* Add --lora-all-linear option to apply LoRA to all linear transformer block layers

* Moved to YAML config and added specification of rank & alpha (a generic sketch of the idea follows below)

* nits in config, more tests

* nit

* run tests for PRs

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-03-12 07:37:40 -07:00
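
To make the change concrete, here is a minimal, hypothetical sketch of the idea behind applying LoRA to all linear layers of a transformer block. It is not the code from lora.py or trainer.py in this directory: the LoRALinear wrapper, the lora_all_linear helper, the attn block, and the projection names (q_proj, k_proj, v_proj, o_proj) are illustrative assumptions, while the rank (r) and alpha values stand in for what the YAML config mentioned in the commit would specify.

```python
# Generic illustration (not the mlx_lm implementation): wrap a linear layer
# with low-rank adapters A and B so that y = W x + (alpha / r) * (B A x).
import mlx.core as mx
import mlx.nn as nn


class LoRALinear(nn.Module):
    """Hypothetical LoRA wrapper around an existing nn.Linear."""

    def __init__(self, linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        out_dims, in_dims = linear.weight.shape
        self.linear = linear    # base projection (typically frozen during training)
        self.scale = alpha / r  # scaling derived from the configured rank & alpha
        # Low-rank factors: A starts small and random, B starts at zero so the
        # adapted layer initially computes exactly what the original layer did.
        self.lora_a = mx.random.uniform(
            low=-1.0 / in_dims**0.5, high=1.0 / in_dims**0.5, shape=(in_dims, r)
        )
        self.lora_b = mx.zeros((r, out_dims))

    def __call__(self, x):
        return self.linear(x) + self.scale * ((x @ self.lora_a) @ self.lora_b)


def lora_all_linear(attn, r=8, alpha=16.0):
    # Apply LoRA to *all* linear projections of a (hypothetical) attention block;
    # the attribute names below are placeholders, not mlx_lm's actual names.
    for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
        setattr(attn, name, LoRALinear(getattr(attn, name), r=r, alpha=alpha))
```

In the actual repository, the rank and alpha would come from the YAML config the commit describes; the exact config keys and layer names are not reproduced here.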
__init__.py feat: move lora into mlx-lm (#337) 2024-01-23 08:44:37 -08:00
lora.py LoRA on all linear transformer block layers (#546) 2024-03-12 07:37:40 -07:00
trainer.py LoRA on all linear transformer block layers (#546) 2024-03-12 07:37:40 -07:00
utils.py LoRA on all linear transformer block layers (#546) 2024-03-12 07:37:40 -07:00