mlx-examples/llms/mlx_lm/tuner
Prince Canuma b044ce2acf
Add support for ibm granite (#758)
* add support for granite 3-8B config

* add gpt_bigcode

* add positional embedding condition.

* remove unused function

* rebase fix

* move position embedding to mask creation

* add to tuner and format

* refactor mask

* remove dropout layers
2024-05-21 20:16:31 -07:00
__init__.py feat: move lora into mlx-lm (#337) 2024-01-23 08:44:37 -08:00
datasets.py Support for OpenAI’s fine-tuning dataset format (#548) 2024-03-19 16:45:46 -07:00
dora.py support dora finetune in mlx-examples/llms/mlx_lm (#779) 2024-05-16 08:21:26 -07:00
lora.py Block sparse MM MoEs (#782) 2024-05-21 15:58:08 -07:00
trainer.py fix lora for openelm (#773) 2024-05-10 09:51:41 -07:00
utils.py Add support for ibm granite (#758) 2024-05-21 20:16:31 -07:00
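This tuner package is what the `mlx_lm.lora` command drives under the hood: `utils.py` swaps selected linear layers for LoRA/DoRA adapters, `trainer.py` runs the fine-tuning loop, and `datasets.py` builds the training data. A minimal sketch of the adapter-injection step follows; the helper name `linear_to_lora_layers`, its config keys, and the model repo id are assumptions for illustration and may differ between mlx_lm revisions.

```python
# Hedged sketch: freeze a base model and attach LoRA adapters via the tuner
# utilities, instead of going through the `mlx_lm.lora` CLI.
# The helper name, config keys, and model id below are assumptions.
from mlx_lm import load
from mlx_lm.tuner.utils import linear_to_lora_layers

# Placeholder repo id; any mlx_lm-compatible checkpoint should work here.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

model.freeze()  # only adapter weights should receive gradients
linear_to_lora_layers(
    model,
    num_lora_layers=8,  # number of trailing transformer blocks to adapt
    config={"rank": 8, "alpha": 16, "dropout": 0.0, "scale": 10.0},
    use_dora=False,  # True would select DoRA adapters (added in #779)
)

# trainer.py then runs the fine-tuning loop over datasets built by
# datasets.py; the `mlx_lm.lora` entry point wires all of this together.
```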