Commit Graph

37 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Goekdeniz-Guelmez | 2ed51946ab | still gibberish | 2024-12-27 15:36:37 +01:00 |
| Goekdeniz-Guelmez | f4cbe27b0f | new set but still gibberish | 2024-12-27 15:27:09 +01:00 |
| Goekdeniz-Guelmez | d044db959d | update | 2024-12-27 15:17:45 +01:00 |
| Goekdeniz-Guelmez | 0ae536c423 | update: using einsum on some lines, making it faster, but still generates gibberish on Codestral | 2024-12-18 19:32:22 +01:00 |
| Goekdeniz-Guelmez | dff4e52910 | adding the model names to the LORA.md file and removing unused functions from mamba2.py | 2024-12-12 22:52:00 +01:00 |
| Goekdeniz-Guelmez | a883e39f41 | optimizing the code for faster inference, but still generates gibberish | 2024-12-12 21:08:33 +01:00 |
| Goekdeniz-Guelmez | 184d3d3267 | clean up | 2024-12-10 18:20:13 +01:00 |
| Goekdeniz-Guelmez | 80e88b4f4d | nits | 2024-12-10 18:18:59 +01:00 |
| Goekdeniz-Guelmez | b10afe3662 | nits | 2024-12-10 18:15:12 +01:00 |
| Goekdeniz-Guelmez | 9f8a6a3509 | inference on Codestral works but is gibberish | 2024-12-10 17:34:44 +01:00 |
| Goekdeniz-Guelmez | 38e5801edb | loading Codestral works, but not inference | 2024-11-24 16:26:45 +01:00 |
| Goekdeniz-Guelmez | 57b1717cf5 | inference fixed | 2024-11-21 22:25:58 +01:00 |
| Goekdeniz-Guelmez | e22b2dbf27 | fixed streaming generation and got rid of generating gibberish, but it is still a little slow: 0.222 tokens-per-sec | 2024-11-21 22:01:28 +01:00 |
| Goekdeniz-Guelmez | 1d851069ea | nits | 2024-11-10 17:21:18 +01:00 |
| Goekdeniz-Guelmez | 1a6688384d | implemented multi-token inputs, but still generating gibberish | 2024-11-10 17:19:00 +01:00 |
| Goekdeniz-Guelmez | 2f95b361a8 | removed the custom Mamba2Cache and updated the existing MambaCache, but still only one input token, and outputs gibberish | 2024-11-10 16:57:03 +01:00 |
| Goekdeniz-Guelmez | 3a499f9735 | fixed inference slowness, but it can't handle multi-token inputs and is generating gibberish | 2024-11-10 16:35:07 +01:00 |
| Goekdeniz-Guelmez | 800b60239c | save checkpoint | 2024-11-10 14:36:26 +01:00 |
| Goekdeniz-Guelmez | 906f972d36 | save push | 2024-11-06 16:35:46 +01:00 |
| Goekdeniz-Guelmez | 58b448dc0b | updates | 2024-10-30 21:23:13 +01:00 |
| Goekdeniz-Guelmez | 7c8849e795 | update | 2024-10-24 16:16:42 +02:00 |
| Goekdeniz-Guelmez | a677638c4b | inference works but is very slow | 2024-10-22 23:06:06 +02:00 |
| Goekdeniz-Guelmez | 9ab581d678 | notes | 2024-10-22 22:10:53 +02:00 |
| Goekdeniz-Guelmez | e43a2ab229 | not working, probably incorrect handling of the cache | 2024-10-22 22:04:25 +02:00 |
| Goekdeniz-Guelmez | 55485b98e8 | update | 2024-10-22 21:23:47 +02:00 |
| Goekdeniz-Guelmez | 758597eaa8 | adding multi-token input and correct cache handling in the ssm step | 2024-10-22 20:44:23 +02:00 |
| Goekdeniz-Guelmez | b9c57cd429 | generation works! trying training now | 2024-10-22 18:25:59 +02:00 |
| Goekdeniz-Guelmez | c1634ce81b | still generating gibberish | 2024-10-20 18:41:28 +02:00 |
| Goekdeniz-Guelmez | ab4cf1d1cf | generation works but outputs gibberish | 2024-10-20 18:04:34 +02:00 |
| Goekdeniz-Guelmez | 4ab5139c05 | quick save | 2024-10-20 16:11:39 +02:00 |
| Goekdeniz-Guelmez | cd036ccfb5 | fix: generation works too (almost) | 2024-10-16 21:13:36 +02:00 |
| Goekdeniz-Guelmez | 8073cb486c | adding debug statements (somehow generation only goes through the first MambaMixer block pass) | 2024-10-16 21:09:30 +02:00 |
| Goekdeniz-Guelmez | 00ba27fe6c | adding debug statements | 2024-10-11 21:36:41 +02:00 |
| Goekdeniz-Guelmez | 6f88dd59d7 | quick clean up and fix | 2024-10-11 21:08:13 +02:00 |
| Goekdeniz-Guelmez | 4e1236cbf6 | fixing loading the model | 2024-10-11 20:53:29 +02:00 |
| Goekdeniz-Guelmez | 264ba43707 | update trainer/lora.py and add DepthWiseConv1d because mlx 0.18.0 doesn't accept the groups parameter | 2024-10-02 19:19:32 +02:00 |
| Gökdeniz Gülmez | 49b9fc1a4c | Create mamba2.py | 2024-10-02 12:48:15 +02:00 |