mlx-examples/llms/mlx_lm/tuner
Madroid Ma 8eee4399f4
LoRA: Add printing and callbacks for learning rate during training (#457)
* LoRA: Refactor TrainingCallback to enhance flexibility and extensibility

This commit refactors the TrainingCallback class so that both on_train_loss_report and on_val_loss_report take a single dict parameter instead of several separate ones. Bundling the metrics into one dict makes the class easier to extend: new training or validation metrics can be logged later without changing the method signatures, which keeps the design scalable and maintainable. A sketch of the resulting interface appears below the commit details.

* LoRA: Add printing and callbacks for learning rate during training
2024-02-20 13:07:21 -08:00
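
A minimal sketch of the dict-based callback interface the commit describes: the class and method names follow the commit message, but the dictionary keys (iteration, train_loss, learning_rate) and the LearningRatePrinter subclass are illustrative assumptions, not taken from the source.

    # Sketch of the refactored TrainingCallback described in the commit
    # message. Each report method receives a single dict, so new metrics
    # can be added later without changing the method signatures.
    class TrainingCallback:
        def on_train_loss_report(self, train_info: dict):
            """Called at each training-loss reporting interval."""
            pass

        def on_val_loss_report(self, val_info: dict):
            """Called at each validation-loss reporting interval."""
            pass

    # Hypothetical subclass that prints the learning rate, mirroring the
    # "printing and callbacks for learning rate" half of the commit.
    # The dict keys used here are assumptions.
    class LearningRatePrinter(TrainingCallback):
        def on_train_loss_report(self, train_info: dict):
            it = train_info.get("iteration")
            loss = train_info.get("train_loss")
            lr = train_info.get("learning_rate")
            print(f"Iter {it}: train loss {loss}, learning rate {lr}")

Because the trainer passes one dict per report, adding a metric such as tokens-per-second later only requires a new key, and existing callbacks keep working unchanged.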
__init__.py  feat: move lora into mlx-lm (#337)                                         2024-01-23 08:44:37 -08:00
lora.py      feat(mlx-lm): add de-quant for fuse.py (#365)                              2024-01-25 18:59:32 -08:00
trainer.py   LoRA: Add printing and callbacks for learning rate during training (#457)  2024-02-20 13:07:21 -08:00
utils.py     fix: check LoRA layers number error (#446)                                 2024-02-16 06:03:33 -08:00