Commit Graph

9 Commits

Awni Hannun · b8a348c1b8 · 2024-03-23 07:13:51 -07:00
Switch to fast RMS/LN Norm (#603)
* use nn.RMSNorm, use sdpa, cleanup
* bump mlx versions
* minor update
* use fast layer norm
* version bump
* update requirement for whisper
* update requirement for gguf
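
The commit above swaps hand-written normalization and attention code for MLX's fused kernels (hence the mlx version bumps in the same PR). Below is a minimal sketch of the primitives involved, assuming an MLX release that ships the mx.fast namespace; it is not the example's actual model code.

```python
import mlx.core as mx
import mlx.nn as nn

x = mx.random.normal((2, 8, 512))              # (batch, seq, dims)
weight = mx.ones((512,))

# Module form; in recent MLX releases nn.RMSNorm is backed by the fast kernel.
y1 = nn.RMSNorm(dims=512, eps=1e-5)(x)

# Functional form, handy where no module is needed.
y2 = mx.fast.rms_norm(x, weight, eps=1e-5)
print(mx.allclose(y1, y2))                     # both normalize by rms(x)

# Fused scaled-dot-product attention ("use sdpa" in the commit message).
q = k = v = mx.random.normal((1, 4, 8, 64))    # (batch, heads, seq, head_dim)
out = mx.fast.scaled_dot_product_attention(q, k, v, scale=64 ** -0.5)

# Fast layer norm ("use fast layer norm").
ln = mx.fast.layer_norm(x, mx.ones((512,)), mx.zeros((512,)), eps=1e-5)
```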

Anchen · 1415595409 · 2024-01-20 06:07:45 -08:00
chore(lora): support mixtral in lora example (#343)

Anchen · 7cfda327fd · 2024-01-09 19:46:38 -08:00
fix(lora): tokenizer returns incompatible mx array (#271)
* fix(lora): tokenizer returns incompatible encoding mx array
* add readme nit
Co-authored-by: Awni Hannun <awni@apple.com>
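
The fix above concerns how tokenizer output is turned into mx arrays. As a hedged illustration of the general pitfall (not the actual patch), the sketch below keeps encodings as plain Python lists until the batch is padded and only then builds a single rectangular mx.array; the Hugging Face tokenizer and model name are assumptions for the example.

```python
import mlx.core as mx
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
texts = ["hello world", "a longer example sentence"]

encoded = [tokenizer.encode(t) for t in texts]   # ragged lists of token ids
max_len = max(len(e) for e in encoded)
padded = [e + [tokenizer.eos_token_id] * (max_len - len(e)) for e in encoded]

batch = mx.array(padded)                         # one rectangular int array
print(batch.shape)                               # (2, max_len)
```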

Awni Hannun · 7b258f33ac · 2024-01-09 11:14:52 -08:00
Move lora example to use the same model format / conversion as hf_llm (#252)
* huffing face the lora example to allow more models
* fixes
* comments
* more readme nits
* fusion + works better for qlora
* nits
* comments
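
One bullet above mentions "fusion": folding a trained low-rank update back into the base weight so inference needs no extra adapter layers. A small numerical sketch of that identity, with illustrative shapes and scale (not the repository's fuse script):

```python
import mlx.core as mx

out_dims, in_dims, rank = 4096, 4096, 8
W = mx.random.normal((out_dims, in_dims)) * 0.02   # frozen base weight
A = mx.random.normal((rank, in_dims)) * 0.01       # trained LoRA A
B = mx.random.normal((out_dims, rank)) * 0.01      # trained LoRA B
scale = 2.0

W_fused = W + scale * (B @ A)                      # same shape as W

x = mx.random.normal((1, in_dims))
y_adapter = x @ W.T + scale * ((x @ A.T) @ B.T)    # adapter path
y_fused = x @ W_fused.T                            # fused path
print(mx.allclose(y_adapter, y_fused, atol=1e-4))  # numerically equivalent
```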

Awni Hannun · 37b41cec60 · 2024-01-04 21:05:59 -08:00
Qlora (#219)
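
A rough sketch of what the QLoRA recipe above amounts to: store the frozen base weights 4-bit quantized and train only the small low-rank matrices in higher precision. Array-level quantization is used here purely for illustration; the shapes, group size, and scale are assumptions, not the example's code.

```python
import mlx.core as mx

W = mx.random.normal((4096, 4096))       # frozen base weight

# 4-bit grouped quantization of the frozen weight (stored, never trained).
W_q, scales, biases = mx.quantize(W, group_size=64, bits=4)

# Trainable LoRA parameters stay in full precision.
rank = 8
A = mx.random.normal((rank, 4096)) * 0.01
B = mx.zeros((4096, rank))
lora_scale = 2.0

def qlora_linear(x):
    # Dequantize the frozen path (a quantized matmul would avoid this), then
    # add the low-rank update; only A and B receive gradients when training.
    W_deq = mx.dequantize(W_q, scales, biases, group_size=64, bits=4)
    return x @ W_deq.T + lora_scale * ((x @ A.T) @ B.T)

y = qlora_linear(mx.random.normal((1, 4096)))
print(y.shape)  # (1, 4096)
```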

Awni Hannun · 27c0a8c002 · 2023-12-20 10:22:25 -08:00
Add llms subdir + update README (#145)
* add llms subdir + update README
* nits
* use same pre-commit as mlx
* update readmes a bit
* format

Awni Hannun · 8c8f9d6440 · 2023-12-15 10:42:18 -08:00
keep base weights in fp16

Awni Hannun · 84f02ef58b · 2023-12-15 10:29:42 -08:00
use lower precision base weights
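
The two commits above move the frozen base weights to half precision to cut memory. A minimal sketch of that idea, using an nn.Linear as a stand-in for the full base model:

```python
import mlx.core as mx
import mlx.nn as nn
from mlx.utils import tree_map

model = nn.Linear(4096, 4096)   # placeholder for the frozen base model

# Cast every parameter to float16, roughly halving memory; the (small) LoRA
# parameters added on top can stay in float32.
model.update(tree_map(lambda p: p.astype(mx.float16), model.parameters()))

print(model.weight.dtype)       # mlx.core.float16
```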

Awni Hannun · b8332a1e66 · 2023-12-09 14:13:55 -08:00
generalize lora finetuning for llama and mistral
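
"Generalize lora finetuning" boils down to wrapping an existing linear layer with a low-rank adapter so the same code applies to Llama, Mistral, and similar models. The class below is an illustrative sketch in that spirit, not the repository's LoRALinear:

```python
import math
import mlx.core as mx
import mlx.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, linear: nn.Linear, rank: int = 8, scale: float = 2.0):
        super().__init__()
        out_dims, in_dims = linear.weight.shape
        self.linear = linear    # frozen base projection
        self.scale = scale
        # Standard LoRA init: small random A, zero B, so training starts
        # from the base model's behavior.
        bound = 1 / math.sqrt(in_dims)
        self.lora_a = mx.random.uniform(low=-bound, high=bound, shape=(in_dims, rank))
        self.lora_b = mx.zeros((rank, out_dims))

    def __call__(self, x):
        # Base projection plus the scaled low-rank update.
        return self.linear(x) + self.scale * ((x @ self.lora_a) @ self.lora_b)

# Usage (attribute names are model-specific and illustrative):
# layer.self_attn.q_proj = LoRALinear(layer.self_attn.q_proj)
```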