Anchen | 7cfda327fd | fix(lora): tokenizer returns incompatible mx array (#271) | 2024-01-09 19:46:38 -08:00
  * fix(lora): tokenizer returns incompatible encoding mx array
  * add readme nit
  Co-authored-by: Awni Hannun <awni@apple.com>
Awni Hannun | 7b258f33ac | Move lora example to use the same model format / conversion as hf_llm (#252) | 2024-01-09 11:14:52 -08:00
  * huffing face the lora example to allow more models
  * fixes
  * comments
  * more readme nits
  * fusion + works better for qlora
  * nits
  * comments
Awni Hannun | 37b41cec60 | Qlora (#219) | 2024-01-04 21:05:59 -08:00
Daniel Strobusch | 188a91074b | fix typo (#169) | 2023-12-21 14:17:11 -08:00
Awni Hannun | ff0f172363 | 32 GB example | 2023-12-15 12:20:15 -08:00
Awni Hannun | ee2ee0f8e5 | 32 GB example | 2023-12-15 12:18:29 -08:00
Awni Hannun | d108c558fc | more nits | 2023-12-15 10:06:14 -08:00
Awni Hannun | fa51553f09 | fix readme | 2023-12-15 09:59:07 -08:00
Awni Hannun | 985f413f99 | custom data with lora | 2023-12-15 09:56:10 -08:00
Daniel Strobusch | 5515c2a75b | fix "request access" form url for Llama models | 2023-12-13 10:19:29 +01:00
Awni Hannun | 036090f508 | few more nits | 2023-12-09 14:20:19 -08:00
Awni Hannun | b8332a1e66 | generalize lora finetuning for llama and mistral | 2023-12-09 14:13:55 -08:00
waterstone | ec97c7531b | Update README.md | 2023-12-07 16:44:29 +08:00
Awni Hannun | 5d6353aab7 | lora | 2023-11-29 14:14:11 -08:00