wyanzhao | 22620de3ee | 2023-12-21 06:29:31 -08:00
    Add user warning for sequences over 2048 tokens in iterate_batches. (#166)

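The check behind this warning is simple length validation during batching. Below is a minimal sketch of what such a guard could look like; the iterate_batches signature, the warning wording, and everything except the 2048-token cutoff are assumptions drawn from the commit title, not the actual diff.

```python
# Hypothetical sketch of the over-length warning from 22620de3ee.
# Only the 2048-token cutoff comes from the commit title; the rest
# is illustrative.
import warnings

MAX_SEQ_LEN = 2048  # assumed context-window cutoff

def iterate_batches(dataset, batch_size):
    """Yield batches, warning on sequences longer than MAX_SEQ_LEN."""
    for i in range(0, len(dataset), batch_size):
        batch = dataset[i : i + batch_size]
        for tokens in batch:
            if len(tokens) > MAX_SEQ_LEN:
                warnings.warn(
                    f"Sequence of length {len(tokens)} exceeds "
                    f"{MAX_SEQ_LEN} tokens and may be truncated."
                )
        yield batch
```
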
Awni Hannun | 27c0a8c002 | 2023-12-20 10:22:25 -08:00
    Add llms subdir + update README (#145)
    * add llms subdir + update README
    * nits
    * use same pre-commit as mlx
    * update readmes a bit
    * format

Awni Hannun | 1e7f4a5921 | 2023-12-18 19:33:17 -08:00
    fix usage for Llama 2 from Meta (#144)

Awni Hannun | ff0f172363 | 2023-12-15 12:20:15 -08:00
    32 GB example

Awni Hannun | ee2ee0f8e5 | 2023-12-15 12:18:29 -08:00
    32 GB example

Awni Hannun | 8c8f9d6440 | 2023-12-15 10:42:18 -08:00
    keep base weights in fp16

Awni Hannun | 84f02ef58b | 2023-12-15 10:29:42 -08:00
    use lower precision base weights

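These two commits move the frozen base model to half precision so LoRA fine-tuning fits on smaller machines. A minimal sketch of the idea, assuming weights arrive as a plain dict of MLX arrays; the helper name is illustrative, not the repo's API.

```python
# Illustrative only: cast frozen base weights to fp16 to roughly halve
# memory use; trainable LoRA parameters can stay in fp32.
import mlx.core as mx

def cast_base_weights_fp16(weights):
    # weights: dict mapping parameter names to mx.array values
    return {name: w.astype(mx.float16) for name, w in weights.items()}
```
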
Awni Hannun | d108c558fc | 2023-12-15 10:06:14 -08:00
    more nits

Awni Hannun | fa51553f09 | 2023-12-15 09:59:07 -08:00
    fix readme

Awni Hannun | 985f413f99 | 2023-12-15 09:56:10 -08:00
    custom data with lora

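A hedged sketch of preparing a custom dataset for the lora example, assuming the JSON-lines layout it reads (one {"text": ...} object per line in train.jsonl / valid.jsonl); check the example's README for the exact contract.

```python
# Assumed data format for LoRA fine-tuning on custom data: JSON lines
# with a "text" field. The file name and key are assumptions, not a
# documented contract.
import json

def write_jsonl(path, texts):
    with open(path, "w") as f:
        for text in texts:
            f.write(json.dumps({"text": text}) + "\n")

write_jsonl("train.jsonl", ["Q: What is MLX? A: An array framework for Apple silicon."])
```
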
Daniel Strobusch | 5515c2a75b | 2023-12-13 10:19:29 +01:00
    fix "request access" form url for Llama models

Awni Hannun | a4d932bf26 | 2023-12-10 16:56:41 -08:00
    fix conversion

Awni Hannun | 036090f508 | 2023-12-09 14:20:19 -08:00
    few more nits

Awni Hannun | 98f4346c81 | 2023-12-09 14:15:25 -08:00
    black format

Awni Hannun | b8332a1e66 | 2023-12-09 14:13:55 -08:00
    generalize lora finetuning for llama and mistral

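Generalizing across Llama and Mistral is possible because both expose linear projections that a low-rank adapter can wrap. Here is a minimal MLX sketch of the standard LoRA formulation (y = Wx + xAB); the class name, rank, and initialization are illustrative, not necessarily what b8332a1e66 implements.

```python
# Illustrative LoRA adapter: a frozen linear layer plus a trainable
# low-rank update. Defaults and names are assumptions.
import math
import mlx.core as mx
import mlx.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dims, out_dims, rank=8):
        super().__init__()
        self.linear = nn.Linear(in_dims, out_dims, bias=False)  # frozen base weight
        scale = 1.0 / math.sqrt(in_dims)
        self.lora_a = mx.random.uniform(low=-scale, high=scale, shape=(in_dims, rank))
        self.lora_b = mx.zeros((rank, out_dims))  # zero init: adapter starts as a no-op

    def __call__(self, x):
        return self.linear(x) + x @ self.lora_a @ self.lora_b
```
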
张嘉豪 | 4018aed335 | 2023-12-08 16:19:35 +08:00
    fix: Unsupported BFloat16 Data Type Issue with MPS Backend

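PyTorch's MPS backend did not support bfloat16 at the time, so bf16 checkpoints had to be cast before use on Apple GPUs. A hedged sketch of that workaround; the helper is illustrative, and the actual fix in 4018aed335 may differ.

```python
# Assumed workaround for the MPS bfloat16 limitation: cast bf16 tensors
# to float16 before moving them to the "mps" device. Illustrative only.
import torch

def mps_safe(t):
    if t.dtype == torch.bfloat16:
        return t.to(torch.float16)
    return t
```
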
waterstone | ec97c7531b | 2023-12-07 16:44:29 +08:00
    Update README.md

Awni Hannun | 31bc57c4ff | 2023-11-30 11:08:53 -08:00
    add copyright in source

Awni Hannun | 5d6353aab7 | 2023-11-29 14:14:11 -08:00
    lora