Commit Graph

18 Commits

Author SHA1 Message Date
Benjamin Anderson
09566c7257
add speculative decoding example for llama (#149)
* speculative decoding

* add sample 0

* spec decode gives same results as regular decode

* rebase

* use accept reject criteria

* switch to t5

* update readme

* readme nit

* nits

* nits

* nits

---------

Co-authored-by: Benjamin Anderson <benjamin@Benjamins-MBP.lan>
Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-28 15:20:43 -08:00
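The "accept reject criteria" bullet above is the standard speculative-sampling rule: the draft model proposes a token, the target model accepts it with probability min(1, p_target/p_draft), and on rejection a replacement is drawn from the residual distribution. A minimal sketch of that rule (not the example's actual code; the function name and toy numbers are illustrative):

```python
import numpy as np

def accept_or_resample(p_target, p_draft, token, rng):
    """One step of the standard speculative-sampling accept/reject rule.

    p_target and p_draft are the target and draft models' probability
    vectors over the vocabulary at this position; token is the draft's
    sample. Accept with probability min(1, p_target[t] / p_draft[t]);
    on rejection, resample from the residual max(0, p_target - p_draft).
    """
    if rng.random() < min(1.0, p_target[token] / p_draft[token]):
        return token, True
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p_target), p=residual)), False

rng = np.random.default_rng(0)
p_t = np.array([0.7, 0.2, 0.1])
p_d = np.array([0.3, 0.5, 0.2])
print(accept_or_resample(p_t, p_d, token=1, rng=rng))
```

Because accepted and resampled tokens together are distributed exactly as if sampled from the target model, the speedup changes nothing about the output distribution, which is consistent with the "spec decode gives same results as regular decode" bullet.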
Sunbir Gill
78d207fe27
Fix generate example in README (#197)
2023-12-27 13:11:10 -08:00
Sushant
a516f4635d
Fixed the return type for the __call__ method in Attention (#190)
2023-12-26 09:32:43 -08:00
Daniel Strobusch
2bd20ef0e0
shard llama model after conversion and unshard on loading (#174)
2023-12-25 11:19:43 -08:00
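A sketch of the shard/unshard idea this commit describes, using numpy .npz files. The helper names are hypothetical, and the real script may split by byte size rather than by item count:

```python
import numpy as np

def save_shards(weights, prefix, per_shard=100):
    """Write a {name: array} dict as several .npz files after conversion."""
    names = sorted(weights)
    for i in range(0, len(names), per_shard):
        shard = {k: weights[k] for k in names[i : i + per_shard]}
        np.savez(f"{prefix}.{i // per_shard:02d}.npz", **shard)

def load_shards(paths):
    """Reassemble the full weight dict from the shard files at load time."""
    weights = {}
    for p in paths:
        weights.update(np.load(p))
    return weights
```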
Yifan
738448c2d4
QWEN: Fix unsupported ScalarType BFloat16 (#187)
Fix unsupported ScalarType BFloat16.
2023-12-25 06:10:01 -08:00
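For context on the fix above: numpy has no bfloat16 dtype, so calling .numpy() on a bfloat16 torch tensor during conversion fails with exactly the "unsupported ScalarType BFloat16" error in the title. The usual remedy is to upcast before leaving torch:

```python
import torch

w = torch.zeros(2, 2, dtype=torch.bfloat16)
# w.numpy() -> TypeError: Got unsupported ScalarType BFloat16
w_np = w.to(torch.float32).numpy()  # upcast first, then hand off to numpy
print(w_np.dtype)  # float32
```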
devonthomas35
939086e6a3
Mixtral: Stop at EOS token (#183)
* Stop at EOS token

* Precommit format files

* Fix precommit hooks

* Fix precommit hooks
2023-12-23 21:25:42 -08:00
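A minimal illustration of the stop-at-EOS change: break out of the decode loop as soon as the tokenizer's EOS id is produced, instead of always generating max_tokens. The toy model below is a stand-in, not the Mixtral example's API:

```python
import numpy as np

def generate(next_logits, prompt, eos_id, max_tokens=256):
    """Greedy decoding that stops early at EOS instead of running to max_tokens."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = int(np.argmax(next_logits(tokens)))
        if tok == eos_id:
            break  # the fix: stop as soon as EOS appears
        tokens.append(tok)
    return tokens

# toy stand-in model that always predicts token 2, which we treat as EOS
print(generate(lambda ts: np.array([0.1, 0.2, 0.7]), [0, 1], eos_id=2))
```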
Daniel Strobusch
848f118ac5
use non-zero exit code on error (#177)
2023-12-23 07:10:13 -08:00
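The fix above is the standard CLI pattern: return a non-zero exit status when the script fails so shells and CI can detect it. A minimal sketch, with a stand-in convert() that just simulates a failure:

```python
import sys

def convert():
    raise RuntimeError("weights file not found")  # stand-in failure

def main() -> int:
    try:
        convert()
    except Exception as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1  # non-zero so `$?` and CI pipelines see the failure
    return 0

if __name__ == "__main__":
    sys.exit(main())
```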
Daniel Strobusch
092e87211e
fix bad convert parameter (#178)
2023-12-23 07:09:49 -08:00
Alvaro Bartolome
f4709cb807
Align CLI args and some smaller fixes (#167)
* Add `.DS_Store` files to `.gitignore`

* Fix variable naming of `config` in `mixtral/convert.py`

* Align CLI args and minor fixes

* standardize

* one more

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-22 14:34:32 -08:00
Vaibhav Srivastav
0eaa323c10
Fix conversion + inference errors. - Mistral (#176)
* Fix conversion + inference errors.

* wire rope_theta through to nn.RoPE

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-22 14:10:25 -08:00
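The second bullet threads the checkpoint's rope_theta value into the RoPE layer rather than leaving the default base. A sketch assuming mlx.nn.RoPE's base keyword (check the signature in your MLX version; the dims and traditional values below are illustrative):

```python
import mlx.nn as nn

rope_theta = 10000.0  # in practice, read from the checkpoint's params file
rope = nn.RoPE(dims=128, traditional=True, base=rope_theta)
```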
Todsaporn Banjerdkit
7ae445f6c7
feat: add mistral tps (#173)
* feat: add mistral tps

* eval params before timing + format

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-22 07:55:57 -08:00
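The second bullet matters because MLX evaluates lazily: if the parameters are first materialized inside the timed region, their load cost inflates the measured tokens-per-second. A sketch of the idea (mx.eval is real MLX; the token stream and the model/generate names in the usage comment are assumptions):

```python
import time
import mlx.core as mx

def tokens_per_second(params, token_stream, max_tokens=100):
    """Throughput measurement that evaluates lazy arrays before timing."""
    mx.eval(params)            # materialize parameters outside the timed region
    tic = time.perf_counter()
    n = 0
    for tok in token_stream:
        mx.eval(tok)           # force each token's computation to finish
        n += 1
        if n == max_tokens:
            break
    return n / (time.perf_counter() - tic)

# usage (hypothetical): tokens_per_second(model.parameters(), generate(model, prompt))
```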
Awni Hannun
3cf436b529
Quantize example (#162)
* testing quantization

* conversion + quantization working

* one config processor

* quantization in mistral / nits in llama

* args for quantization

* llama / mistral conversion in good shape

* phi2 quantized

* mixtral

* qwen conversion
2023-12-21 12:59:37 -08:00
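Underlying all of these per-model changes is group-wise affine quantization: each group of weights gets its own scale and bias, and the values are stored as small integer codes. A numpy sketch of that arithmetic (illustrative group size and bit width; not the repo's MLX implementation):

```python
import numpy as np

def quantize(w, bits=4, group_size=64):
    """Group-wise affine quantization: per-group scale/bias, bits-bit codes."""
    levels = 2**bits - 1
    g = w.reshape(-1, group_size)
    lo = g.min(axis=1, keepdims=True)
    scale = np.maximum((g.max(axis=1, keepdims=True) - lo) / levels, 1e-12)
    codes = np.round((g - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo, shape):
    return (codes * scale + lo).reshape(shape)

w = np.random.randn(8, 64).astype(np.float32)
codes, scale, lo = quantize(w)
print(np.abs(w - dequantize(codes, scale, lo, w.shape)).max())  # small error
```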
Deven Mistry
6c574dbecf
update path to load weights (#164)
2023-12-21 06:31:17 -08:00
Daniel Strobusch
43b6522af2
rename --model_path to --model-path (#151)
Use the same argument convention for mistral/mixtral as for llama convert.
2023-12-21 06:28:57 -08:00
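The rename is safe on the Python side because argparse maps dashes in option names to underscores on the parsed namespace:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model-path", default="weights/")
args = parser.parse_args(["--model-path", "mistral-7B/"])
print(args.model_path)  # the dashed flag surfaces as an underscore attribute
```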
Deven Mistry
3efb1cc2cc
fix typo in readme (#163)
2023-12-20 19:47:41 -08:00
Pedro Cuenca
ce30cc3d8f
Use config.json in llama (#159)
* Use config.json in llama

* Fix pop

* Fix convert

* Typo
2023-12-20 10:34:44 -08:00
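This change reads llama's hyperparameters from the checkpoint's config.json instead of hardcoding them; the "Fix pop" bullet suggests keys the model constructor doesn't accept are popped before the dict is handed over. A generic sketch (the popped key is a placeholder, not the one from this commit):

```python
import json
from pathlib import Path

def load_config(model_dir):
    with open(Path(model_dir) / "config.json") as f:
        config = json.load(f)
    # drop keys the model class doesn't accept before **config-style construction
    config.pop("_name_or_path", None)  # placeholder key; the real commit differs
    return config
```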
Awni Hannun
27c0a8c002
Add llms subdir + update README (#145)
* add llms subdir + update README

* nits

* use same pre-commit as mlx

* update readmes a bit

* format
2023-12-20 10:22:25 -08:00
Junyi Mei
62b455f801
Add Qwen example (#134)
* Add qwen model draft

* Add readme and requirements for qwen example

* Add model and tokenizer options

* Fix convert and tokenizer

* some updates / style consistency

* move to llm subdir

* readme nit

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-19 13:06:19 -08:00