Goekdeniz-Guelmez
d044db959d
update
2024-12-27 15:17:45 +01:00
Goekdeniz-Guelmez
0ae536c423
update: using einsum on some lines, making it faster, but still generates gibberish on Codestral
2024-12-18 19:32:22 +01:00
Gökdeniz Gülmez
68533e2a8f
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-12-17 11:14:40 +01:00
Prince Canuma
dfa4dd6c93
Add support for cohere2 (#1157)
* add support for cohere2
* revert act_fn to silu
* fix tests and sliding window attention
* add tests
* add to tuner
* fix sliding window
* add coauthor :)
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
* Add rotating kvcache to save space
* some nits
* style
* nits
---------
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
Co-authored-by: N8 <n8@n8programs.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-16 08:01:03 -08:00
Goekdeniz-Guelmez
dff4e52910
adding the model names to the LORA.md file and removing unused functions from mamba2.py
2024-12-12 22:52:00 +01:00
Goekdeniz-Guelmez
a883e39f41
optimizing the code for faster inference, but it still generates gibberish
2024-12-12 21:08:33 +01:00
Goekdeniz-Guelmez
184d3d3267
clean up
2024-12-10 18:20:13 +01:00
Goekdeniz-Guelmez
80e88b4f4d
nits
2024-12-10 18:18:59 +01:00
Goekdeniz-Guelmez
b10afe3662
nits
2024-12-10 18:15:12 +01:00
Goekdeniz-Guelmez
9f8a6a3509
inference on Codestral works but the output is gibberish
2024-12-10 17:34:44 +01:00
Gökdeniz Gülmez
ddad2105ef
Merge branch 'main' into adding-support-for-mamba2
2024-12-10 14:32:44 +01:00
n8programs
5687d5b99b
Adds EXAONE architecture. (#1145)
* Adds EXAONE architecture.
* nits + format
* format
* clean up and fix rope
* clean up and fix rope
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-09 07:58:25 -08:00
Awni Hannun
8801beb66f
Add olmo2 (#1128)
* add olmo2
* add olmo2
2024-12-02 11:42:58 -08:00
Goekdeniz-Guelmez
38e5801edb
loading Codestral works, but no inference yet
2024-11-24 16:26:45 +01:00
Awni Hannun
004eb4cc9d
Tencent HunYuan MOE model (#1100)
* hunyuan
* fix
* format str
* default trust remote code for tokenizer, allow system prompt to be configurable
2024-11-23 11:06:26 -08:00
Goekdeniz-Guelmez
a6ddc27a4e
removing last checkpoint file
2024-11-21 22:33:56 +01:00
Goekdeniz-Guelmez
57b1717cf5
inference fixed
2024-11-21 22:25:58 +01:00
Goekdeniz-Guelmez
117ffd3909
removing some files
2024-11-21 22:05:42 +01:00
Goekdeniz-Guelmez
e22b2dbf27
Fixed streaming generation and got rid of gibberish output, but it is still a little slow: 0.222 tokens-per-sec
2024-11-21 22:01:28 +01:00
Goekdeniz-Guelmez
1d851069ea
nits
2024-11-10 17:21:18 +01:00
Goekdeniz-Guelmez
1a6688384d
implemented multi-token inputs, but still generating gibberish
2024-11-10 17:19:00 +01:00
Goekdeniz-Guelmez
2f95b361a8
removed the custom Mamba2Cache and updated the existing MambaCache, but it still handles only one input token and outputs gibberish
2024-11-10 16:57:03 +01:00
Gökdeniz Gülmez
49d3f188f8
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-11-10 16:36:02 +01:00
Goekdeniz-Guelmez
3a499f9735
fixed inference slowness, but it can't handle multiple token inputs and is generating gibberish
2024-11-10 16:35:07 +01:00
Goekdeniz-Guelmez
800b60239c
save checkpoint
2024-11-10 14:36:26 +01:00
Goekdeniz-Guelmez
906f972d36
save push
2024-11-06 16:35:46 +01:00
Angelos Katharopoulos
ed9e81dd58
Fix rotating kv cache size (#1093)
2024-11-05 10:24:24 -08:00
ilyasch2
3b526f0aa1
Add support for falcon-mamba (#1074)
* Add support for falcon-mamba
* nits
* nit
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-11-04 12:23:30 -08:00
Alex Barron
85ffd2c96a
Quantized KV Cache (#1075)
* add QuantizedKVCache
* simplify
* add tests
* single sdpa function
* fix sed
* in place
* fix tests
* support different k and v head dims
2024-10-31 16:59:52 -07:00
Goekdeniz-Guelmez
58b448dc0b
updates
2024-10-30 21:23:13 +01:00
Gökdeniz Gülmez
3b70708201
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-25 08:57:37 +02:00
Goekdeniz-Guelmez
7c8849e795
update
2024-10-24 16:16:42 +02:00
Awni Hannun
9000e280ae
fix mamba models conversion (#1065)
2024-10-22 15:44:08 -07:00
Goekdeniz-Guelmez
a677638c4b
inference works but is extremely slow
2024-10-22 23:06:06 +02:00
Goekdeniz-Guelmez
9ab581d678
notes
2024-10-22 22:10:53 +02:00
Goekdeniz-Guelmez
e43a2ab229
not working; probably incorrect cache handling
2024-10-22 22:04:25 +02:00
Goekdeniz-Guelmez
55485b98e8
update
2024-10-22 21:23:47 +02:00
Goekdeniz-Guelmez
758597eaa8
adding multi token input and correct cache handling in ssm step
2024-10-22 20:44:23 +02:00
Awni Hannun
66e7bcb886
override dtype with quant (#1062)
2024-10-22 09:56:45 -07:00
Goekdeniz-Guelmez
b9c57cd429
generation works! trying training now
2024-10-22 18:25:59 +02:00
Goekdeniz-Guelmez
c1634ce81b
still generating gibberish
2024-10-20 18:41:28 +02:00
Goekdeniz-Guelmez
ab4cf1d1cf
generation works but outputs gibberish
2024-10-20 18:04:34 +02:00
Goekdeniz-Guelmez
4ab5139c05
quick save
2024-10-20 16:11:39 +02:00
Goekdeniz-Guelmez
cd036ccfb5
fix: generation works too (almost)
2024-10-16 21:13:36 +02:00
Goekdeniz-Guelmez
181d6abedc
Merge branch 'adding-support-for-mamba2' of https://github.com/Goekdeniz-Guelmez/mlx-examples into adding-support-for-mamba2
2024-10-16 21:09:42 +02:00
Goekdeniz-Guelmez
8073cb486c
adding debug statements (somehow generation only goes through the first MambaMixer block pass)
2024-10-16 21:09:30 +02:00
Gökdeniz Gülmez
855fcc4327
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-16 18:57:55 +02:00
Awni Hannun
8dca1a2f60
Tokenizer updates + tests (#1024)
* tokenizer updates + tests
* nit
* add can_trim_prompt_cache
* nits
2024-10-14 10:48:46 -07:00
Gökdeniz Gülmez
3f1c1dde6a
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-14 16:32:00 +02:00
Shunta Saito
7612c646f3
Fix PLaMo model to support Grouped Query Attention (#1037)
2024-10-12 15:26:50 -07:00