mlx-examples/llms/mlx_lm/tuner
Latest commit: Add Phi-3.5-MoE (#946)
Prince Canuma, b5e18ef1e3, 2024-08-24 06:52:33 -07:00

* add phimoe
* add phimoe to tuner
* add switch_mlp
* fix SuScaled args
* nits

Co-authored-by: Awni Hannun <awni@apple.com>
__init__.py LoRA: Extract small function (#614) 2024-06-02 06:38:42 -07:00
datasets.py Configuration-based use of HF hub-hosted datasets for training (#701) 2024-06-26 10:20:50 -07:00
dora.py Allow the entire model to be targeted for LoRA and DoRA fine-tuning: LoRA and DoRA embeddings with small DoRALinear bug fix (#914) 2024-08-16 07:38:36 -07:00
lora.py Allow the entire model to be targeted for LoRA and DoRA fine-tuning: LoRA and DoRA embeddings with small DoRALinear bug fix (#914) 2024-08-16 07:38:36 -07:00
trainer.py Add eos token to lora fine-tunes (#818) 2024-06-12 07:44:21 -07:00
utils.py Add Phi-3.5-MoE (#946) 2024-08-24 06:52:33 -07:00
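For orientation, utils.py is where adapters get wired into a model before training. The sketch below is a rough illustration of how a LoRA fine-tune might use this package; the linear_to_lora_layers signature, the config keys (rank, dropout, scale), and the model id are assumptions, not confirmed by this listing, so verify them against utils.py before relying on them.

    # Minimal sketch, assuming utils.py exposes
    # linear_to_lora_layers(model, num_layers, config); check the source
    # for the exact signature and supported config keys.
    from mlx_lm import load
    from mlx_lm.tuner.utils import linear_to_lora_layers

    model, tokenizer = load("microsoft/Phi-3.5-MoE-instruct")  # example model id

    model.freeze()  # freeze base weights so only adapter weights train
    lora_config = {"rank": 8, "dropout": 0.0, "scale": 20.0}  # assumed keys
    linear_to_lora_layers(model, 8, lora_config)  # adapt the last 8 layers

    # The training loop in trainer.py would then update only the unfrozen
    # adapter parameters added above.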