mlx-examples/llms/mlx_lm/tuner
Latest commit 39084e81c2 by Awni Hannun: Some improvements to LoRA (#528)
* set cache_limit

* remove set cache_limit

* cleanup

* add gradient checkpointing

* fix sort

* monkey patch call for checkpoint

* fix example config
2024-03-12 20:02:03 -07:00
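The "monkey patch call for checkpoint" item refers to swapping a layer class's `__call__` at runtime so every forward pass goes through a gradient-checkpointing wrapper. A minimal, framework-free sketch of that patching pattern is below; the `Layer` class and the `fake_checkpoint` wrapper are hypothetical stand-ins (in MLX the wrapper would be built around `mx.checkpoint`, which recomputes activations during the backward pass instead of storing them):

```python
class Layer:
    """Hypothetical transformer-block stand-in."""
    def __call__(self, x):
        return x * 2


def fake_checkpoint(orig_call, log):
    """Stand-in for a gradient-checkpointing transform.

    Wraps the original __call__; a real version (e.g. mx.checkpoint)
    would drop intermediate activations and recompute them on backward.
    """
    def wrapper(self, *args, **kwargs):
        log.append("checkpointed forward")  # record that the wrapper ran
        return orig_call(self, *args, **kwargs)
    return wrapper


log = []
# Monkey patch: replace the class-level __call__ so ALL instances
# of Layer are checkpointed, without touching the class definition.
Layer.__call__ = fake_checkpoint(Layer.__call__, log)

layer = Layer()
result = layer(3)  # goes through the wrapper, then the original __call__
```

Patching at the class level (rather than on one instance) is what lets the trainer enable checkpointing for every transformer block with a single assignment.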
__init__.py feat: move lora into mlx-lm (#337) 2024-01-23 08:44:37 -08:00
lora.py LoRA on all linear transformer block layers (#546) 2024-03-12 07:37:40 -07:00
trainer.py Some improvements to LoRA (#528) 2024-03-12 20:02:03 -07:00
utils.py Some improvements to LoRA (#528) 2024-03-12 20:02:03 -07:00