Anchen
0a49ba0697
fix(mlx-lm): apply lora layer doesn't update the lora weights (#396)
2024-01-31 11:51:26 -08:00
Anchen
614de6652f
chore(mlx-lm): add reset lora layers helper (#377)
* chore(mlx-lm): add reset lora layers helper
* chore: rename the func
* chore: update docstring
* Update llms/mlx_lm/tuner/utils.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2024-01-29 20:54:49 -08:00
Anchen
854ad8747a
feat(mlx-lm): add de-quant for fuse.py (#365)
* feat(mlx-lm): add de-quant for fuse
* chore: disable quant in to linear when de-quant enabled
* chore: add better error handling for adapter file not found
2024-01-25 18:59:32 -08:00
Anchen
ab91ac1075
chore(mlx-lm): add load model with adapter and fix bug in sample (#360)
* chore: add load model with adapter support and fix bug in sample
* chore: ignore temp when calculating prob in sample
2024-01-23 19:47:39 -08:00
Anchen
362e88a744
feat: move lora into mlx-lm (#337)
* feat: Add lora and qlora training to mlx-lm
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-23 08:44:37 -08:00