Goekdeniz-Guelmez | 12e9f34524 | removing unnecessary lines and cleaning up | 2025-01-21 23:06:40 +01:00
Goekdeniz-Guelmez | c13de475f6 | removing custom RMSNorm class | 2025-01-21 22:52:45 +01:00
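
The custom class was presumably replaced by mlx's built-in RMSNorm module; a minimal sketch of the swap (dims and eps are placeholder values, not the model's actual configuration):

import mlx.nn as nn

# Built-in replacement for a hand-rolled RMSNorm; dims/eps are placeholders.
norm = nn.RMSNorm(dims=4096, eps=1e-5)
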
Goekdeniz-Guelmez | a6a92cb91f | codestral inference actually works now | 2025-01-21 21:01:39 +01:00
Goekdeniz-Guelmez | 5a6ada2df0 | 2025-01-21 20:44:51 +01:00
getting really close:
python -m mlx_lm.generate --model /Users/gokdenizgulmez/Desktop/Mamba-Codestral-7B-v0.1-4bit --prompt "# A function that computes fibonacci
def fibonacci(" -m 64
==========
n):
print(f"{os.path.abspath(".")/data/data/data/com.android.launcher.png)
## 🙌🏼 🙌🙌🙌🙌🙌🙌
class _State(Enum):
def __init__ (self
==========
Prompt: 16 tokens, 84.547 tokens-per-sec
Generation: 64 tokens, 13.774 tokens-per-sec
Peak memory: 4.139 GB
Goekdeniz-Guelmez | eb432f4b7d | 2025-01-21 19:38:07 +01:00
inference with the original mamba2 model works but still not with codestral. working:
rokyang/mamba2-130m-hf
rokyang/mamba2-370m-hf
rokyang/mamba2-780m-hf
rokyang/mamba2-1.3b-hf
rokyang/mamba2-2.7b-hf
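
Any of the listed checkpoints can be exercised with the same CLI shown in the commit above, for example (prompt and token count are arbitrary):

python -m mlx_lm.generate --model rokyang/mamba2-130m-hf --prompt "hello" -m 64
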
Goekdeniz-Guelmez | db514f24c8 | update | 2025-01-20 19:44:05 +01:00
Goekdeniz-Guelmez | 531ac96481 | fixing cache | 2025-01-20 18:26:21 +01:00
Goekdeniz-Guelmez | dd4957f3da | adding correct initialisation of dt, A and D | 2025-01-13 21:28:43 +01:00
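
A minimal sketch of the reference Mamba2 initialisation this commit refers to, written in mlx; num_heads and the dt range are placeholder values:

import math
import mlx.core as mx

num_heads = 64                                # placeholder
dt_min, dt_max, dt_floor = 1e-3, 1e-1, 1e-4   # reference defaults

# dt: sample log-uniformly in [dt_min, dt_max], floor it, then store the
# inverse softplus so that softplus(dt_bias) recovers dt at runtime.
dt = mx.exp(
    mx.random.uniform(shape=(num_heads,)) * (math.log(dt_max) - math.log(dt_min))
    + math.log(dt_min)
)
dt = mx.maximum(dt, dt_floor)
dt_bias = dt + mx.log(-mx.expm1(-dt))

# A: one negative real decay per head, stored as log A with A = 1..num_heads.
A_log = mx.log(mx.arange(1, num_heads + 1, dtype=mx.float32))

# D: the skip connection starts at 1.
D = mx.ones((num_heads,))
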
Goekdeniz-Guelmez | 8deada9d11 | optimizations | 2024-12-27 17:52:14 +01:00
Goekdeniz-Guelmez | 4e94e87f57 | nits | 2024-12-27 15:41:54 +01:00
Goekdeniz-Guelmez | 3384d38a83 | nits | 2024-12-27 15:37:41 +01:00
Goekdeniz-Guelmez | 2ed51946ab | still gibberish | 2024-12-27 15:36:37 +01:00
Goekdeniz-Guelmez | f4cbe27b0f | new set but still gibberish | 2024-12-27 15:27:09 +01:00
Goekdeniz-Guelmez | d044db959d | update | 2024-12-27 15:17:45 +01:00
Goekdeniz-Guelmez | 0ae536c423 | update: using einsum on some lines, making it faster, but still generates gibberish on Codestral | 2024-12-18 19:32:22 +01:00
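
A sketch of the kind of rewrite this refers to: replacing a broadcast-multiply/reshape chain with a single einsum for the state update (shapes are illustrative, not the actual model code):

import mlx.core as mx

B, H, D, N = 1, 8, 64, 128        # batch, heads, head dim, state dim
x = mx.random.normal((B, H, D))   # per-head input
Bt = mx.random.normal((B, N))     # input projection at this step
Ct = mx.random.normal((B, N))     # output projection at this step
state = mx.zeros((B, H, D, N))

# Outer product accumulated into the state, then contraction with C,
# each as one einsum instead of explicit broadcasting and reshapes.
state = state + mx.einsum("bhd,bn->bhdn", x, Bt)
y = mx.einsum("bhdn,bn->bhd", state, Ct)
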
Goekdeniz-Guelmez | dff4e52910 | adding the model names to the LORA.md file and removing unused functions from mamba2.py | 2024-12-12 22:52:00 +01:00
Goekdeniz-Guelmez | a883e39f41 | optimizing the code for faster inference, but still generates gibberish | 2024-12-12 21:08:33 +01:00
Goekdeniz-Guelmez | 184d3d3267 | clean up | 2024-12-10 18:20:13 +01:00
Goekdeniz-Guelmez | 80e88b4f4d | nits | 2024-12-10 18:18:59 +01:00
Goekdeniz-Guelmez | b10afe3662 | nits | 2024-12-10 18:15:12 +01:00
Goekdeniz-Guelmez | 9f8a6a3509 | inference on codestral works but outputs gibberish | 2024-12-10 17:34:44 +01:00
Goekdeniz-Guelmez | 38e5801edb | loading codestral works, but inference does not | 2024-11-24 16:26:45 +01:00
Goekdeniz-Guelmez | 57b1717cf5 | inference fixed | 2024-11-21 22:25:58 +01:00
Goekdeniz-Guelmez | e22b2dbf27 | Fixed streaming generation and got rid of the gibberish, but it is still a little slow: 0.222 tokens-per-sec | 2024-11-21 22:01:28 +01:00
Goekdeniz-Guelmez | 1d851069ea | nits | 2024-11-10 17:21:18 +01:00
Goekdeniz-Guelmez | 1a6688384d | implemented multi-token inputs, but still generating gibberish | 2024-11-10 17:19:00 +01:00
Goekdeniz-Guelmez | 2f95b361a8 | removed the custom Mamba2Cache and updated the existing MambaCache, but still only one input token and outputs gibberish | 2024-11-10 16:57:03 +01:00
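
The cache in question only has to carry two arrays per layer between decode steps; a minimal sketch of that shape of object (illustrative, not the exact mlx-lm class):

class MambaCache:
    # Two slots per layer: the rolling conv1d window and the SSM hidden state.
    def __init__(self):
        self.cache = [None, None]  # [conv_state, ssm_state]

    def __getitem__(self, idx):
        return self.cache[idx]

    def __setitem__(self, idx, value):
        self.cache[idx] = value
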
Goekdeniz-Guelmez | 3a499f9735 | fixed inference slowness, but it can't handle multi-token inputs and is generating gibberish | 2024-11-10 16:35:07 +01:00
Goekdeniz-Guelmez | 800b60239c | save checkpoint | 2024-11-10 14:36:26 +01:00
Goekdeniz-Guelmez | 906f972d36 | save push | 2024-11-06 16:35:46 +01:00
Goekdeniz-Guelmez | 58b448dc0b | updates | 2024-10-30 21:23:13 +01:00
Goekdeniz-Guelmez | 7c8849e795 | update | 2024-10-24 16:16:42 +02:00
Goekdeniz-Guelmez | a677638c4b | inference works but is hella slow | 2024-10-22 23:06:06 +02:00
Goekdeniz-Guelmez | 9ab581d678 | notes | 2024-10-22 22:10:53 +02:00
Goekdeniz-Guelmez | e43a2ab229 | not working; probably incorrect cache handling | 2024-10-22 22:04:25 +02:00
Goekdeniz-Guelmez | 55485b98e8 | update | 2024-10-22 21:23:47 +02:00
Goekdeniz-Guelmez | 758597eaa8 | adding multi-token input and correct cache handling in the ssm step | 2024-10-22 20:44:23 +02:00
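
A minimal sketch of what 'multi-token input with correct cache handling' means for the ssm step: scan over the sequence, thread the state through each token, and hand the final state back to the cache (shapes and names are illustrative):

import mlx.core as mx
import mlx.nn as nn

def ssm_step(x, dt, A, B, C, D, state):
    # x: (batch, seq, dim), A: (dim, N), B/C: (batch, seq, N), dt: (batch, seq, dim)
    dt = nn.softplus(dt)
    ys = []
    for t in range(x.shape[1]):
        dA = mx.exp(dt[:, t, :, None] * A)          # discretize per step
        dB = dt[:, t, :, None] * B[:, t, None, :]
        state = state * dA + x[:, t, :, None] * dB  # carry state across tokens
        ys.append((state * C[:, t, None, :]).sum(axis=-1) + D * x[:, t])
    return mx.stack(ys, axis=1), state              # final state goes back in the cache
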
Goekdeniz-Guelmez | b9c57cd429 | generation works! trying training now | 2024-10-22 18:25:59 +02:00
Goekdeniz-Guelmez | c1634ce81b | still generating gibberish | 2024-10-20 18:41:28 +02:00
Goekdeniz-Guelmez | ab4cf1d1cf | generation works but outputs gibberish | 2024-10-20 18:04:34 +02:00
Goekdeniz-Guelmez | 4ab5139c05 | quick save | 2024-10-20 16:11:39 +02:00
Goekdeniz-Guelmez | cd036ccfb5 | fix: generation works too (almost) | 2024-10-16 21:13:36 +02:00
Goekdeniz-Guelmez | 8073cb486c | adding debug statements (somehow generation only goes through the first MambaMixer block pass) | 2024-10-16 21:09:30 +02:00
Goekdeniz-Guelmez | 00ba27fe6c | adding debug statements | 2024-10-11 21:36:41 +02:00
Goekdeniz-Guelmez | 6f88dd59d7 | quick clean up and fix | 2024-10-11 21:08:13 +02:00
Goekdeniz-Guelmez | 4e1236cbf6 | fixing model loading | 2024-10-11 20:53:29 +02:00
Goekdeniz-Guelmez | 264ba43707 | update trainer/lora.py and add DepthWiseConv1d because mlx 0.18.0 doesn't accept the groups parameter | 2024-10-02 19:19:32 +02:00
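
A sketch of a depthwise Conv1d written without the groups argument, which is the workaround this commit describes (layout assumed NLC; initialization values are placeholders):

import mlx.core as mx
import mlx.nn as nn

class DepthWiseConv1d(nn.Module):
    def __init__(self, channels: int, kernel_size: int):
        super().__init__()
        self.kernel_size = kernel_size
        self.weight = mx.random.normal((channels, kernel_size)) * 0.02
        self.bias = mx.zeros((channels,))

    def __call__(self, x):
        # x: (batch, length, channels); left-pad for causality, then apply
        # one filter tap at a time so no grouped convolution is needed.
        L = x.shape[1]
        x = mx.pad(x, [(0, 0), (self.kernel_size - 1, 0), (0, 0)])
        y = sum(x[:, i : i + L, :] * self.weight[:, i] for i in range(self.kernel_size))
        return y + self.bias
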
Gökdeniz Gülmez | 49b9fc1a4c | Create mamba2.py | 2024-10-02 12:48:15 +02:00