Awni Hannun
ecbc6ff1e3
one more quant fix (#708)
2024-04-22 18:12:52 -07:00
Awni Hannun
b8a348c1b8
Switch to fast RMS/LN Norm (#603)
* use nn.RMSNorm, use sdpa, cleanup
* bump mlx versions
* minor update
* use fast layer norm
* version bump
* update requirement for whisper
* update requirement for gguf
2024-03-23 07:13:51 -07:00
Anchen
1415595409
chore(lora): support mixtral in lora example (#343)
2024-01-20 06:07:45 -08:00
Yousif
7575125d5d
Added lora support for Phi-2 (#302)
* Added lora support for Phi-2
* Added Phi-2 support in fuse and convert
* format + readme
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-12 13:45:30 -08:00
Awni Hannun
7b258f33ac
Move lora example to use the same model format / conversion as hf_llm (#252)
* huffing face the lora example to allow more models
* fixes
* comments
* more readme nits
* fusion + works better for qlora
* nits
* comments
2024-01-09 11:14:52 -08:00
Awni Hannun
485fb9ac0f
quantize linear (#250)
2024-01-07 18:48:59 -08:00
mc0ps
25ebd36112
Fix typo in lora convert.py (#245)
2024-01-07 03:30:30 -08:00
Awni Hannun
37b41cec60
Qlora (#219)
qlora
2024-01-04 21:05:59 -08:00
Awni Hannun
27c0a8c002
Add llms subdir + update README (#145)
* add llms subdir + update README
* nits
* use same pre-commit as mlx
* update readmes a bit
* format
2023-12-20 10:22:25 -08:00
Awni Hannun
1e7f4a5921
fix use for llama 2 from meta (#144)
2023-12-18 19:33:17 -08:00
Awni Hannun
d108c558fc
more nits
2023-12-15 10:06:14 -08:00
Awni Hannun
a4d932bf26
fix conversion
2023-12-10 16:56:41 -08:00
Awni Hannun
98f4346c81
black format
2023-12-09 14:15:25 -08:00
Awni Hannun
b8332a1e66
generalize lora finetuning for llama and mistral
2023-12-09 14:13:55 -08:00
张嘉豪
4018aed335
fix: Unsupported BFloat16 Data Type Issue with MPS Backend
2023-12-08 16:19:35 +08:00
Awni Hannun
31bc57c4ff
add copyright in source
2023-11-30 11:08:53 -08:00
Awni Hannun
5d6353aab7
lora
2023-11-29 14:14:11 -08:00