Gökdeniz Gülmez
dd29e74b89
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-22 14:19:06 +01:00
Awni Hannun
9a3ddc3e65
some fixes for pipeline parallel deep seek r1 ( #1216 )
2025-01-21 19:40:29 -08:00
Goekdeniz-Guelmez
a4b716e65d
small optimization
2025-01-22 00:15:02 +01:00
Goekdeniz-Guelmez
12e9f34524
removing unnecessary lines and cleaning up
2025-01-21 23:06:40 +01:00
Goekdeniz-Guelmez
c13de475f6
removing custom RMSNorm class
2025-01-21 22:52:45 +01:00
Goekdeniz-Guelmez
a6a92cb91f
codestral inference actually works now
2025-01-21 21:01:39 +01:00
Goekdeniz-Guelmez
5a6ada2df0
getting really close:
...
python -m mlx_lm.generate --model /Users/gokdenizgulmez/Desktop/Mamba-Codestral-7B-v0.1-4bit --prompt "# A function that computes fibonacci
def fibonacci(" -m 64
==========
n):
print(f"{os.path.abspath(".")/data/data/data/com.android.launcher.png)
## 🙌🏼 🙌 🙌 🙌 🙌 🙌 🙌
class _State(Enum):
def __init__ (self
==========
Prompt: 16 tokens, 84.547 tokens-per-sec
Generation: 64 tokens, 13.774 tokens-per-sec
Peak memory: 4.139 GB
2025-01-21 20:44:51 +01:00
Goekdeniz-Guelmez
eb432f4b7d
inference with the original mamba2 model works but still not with codestral. working:
...
rokyang/mamba2-130m-hf
rokyang/mamba2-370m-hf
rokyang/mamba2-780m-hf
rokyang/mamba2-1.3b-hf
rokyang/mamba2-2.7b-hf
2025-01-21 19:38:07 +01:00
Gökdeniz Gülmez
be4bc7a090
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-21 10:57:21 +01:00
Goekdeniz-Guelmez
e96c17d061
inference works
2025-01-20 19:50:08 +01:00
Goekdeniz-Guelmez
db514f24c8
update
2025-01-20 19:44:05 +01:00
Goekdeniz-Guelmez
531ac96481
fixing cache
2025-01-20 18:26:21 +01:00
Awni Hannun
50f0a7f6d9
add internlm3 ( #1206 )
2025-01-15 14:55:41 -08:00
Goekdeniz-Guelmez
dd4957f3da
adding correct initialisation of dt, A and D
2025-01-13 21:28:43 +01:00
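The commit above refers to the parameter initialisation scheme used by Mamba2-style SSMs. As a rough, hypothetical sketch (in numpy for illustration; the branch itself uses mlx, and the exact ranges and names here are assumptions, not the commit's diff): dt is sampled log-uniformly and stored through the inverse softplus, A is kept as a log and used as a negative scalar, and D starts at one.

```python
import numpy as np

def init_mamba2_params(num_heads, dt_min=0.001, dt_max=0.1, seed=0):
    """Hypothetical sketch of Mamba2-style init for dt, A and D."""
    rng = np.random.default_rng(seed)
    # Step size dt: log-uniform sample in [dt_min, dt_max].
    dt = np.exp(rng.uniform(np.log(dt_min), np.log(dt_max), size=num_heads))
    # Store the inverse softplus, so that softplus(dt_bias) recovers dt.
    dt_bias = dt + np.log(-np.expm1(-dt))
    # A is stored as log(A) and applied as -exp(A_log), keeping it negative.
    A_log = np.log(rng.uniform(1.0, 16.0, size=num_heads))
    # D is the skip connection scale, initialised to one.
    D = np.ones(num_heads)
    return dt_bias, A_log, D
```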
Gökdeniz Gülmez
5509ef8e52
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-13 20:16:04 +01:00
Awni Hannun
c117af83b8
fix gpt bigcode ( #1204 )
2025-01-13 10:22:32 -08:00
Prince Canuma
bf2da36fc6
Fix Cohere2: mask shape error (long context) ( #1202 )
...
* fix mask shape error (long context)
* Update llms/mlx_lm/models/cohere2.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* revert layer_idx
* black formatting
* Update cohere2.py
* format
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-12 12:58:08 -08:00
Awni Hannun
5cae0a60e6
deepseek v3 model with pipeline parallelism ( #1191 )
...
* deepseekv3
* use upload_large_file instead of deprecated multi commit
* add pipeline generation and example
* comment
* get fp16 working
* use mlx==0.22
2025-01-09 15:55:53 -08:00
Goekdeniz-Guelmez
8deada9d11
optimizations
2024-12-27 17:52:14 +01:00
Goekdeniz-Guelmez
4e94e87f57
nits
2024-12-27 15:41:54 +01:00
Goekdeniz-Guelmez
3384d38a83
nits
2024-12-27 15:37:41 +01:00
Goekdeniz-Guelmez
2ed51946ab
still gibberish
2024-12-27 15:36:37 +01:00
Goekdeniz-Guelmez
f4cbe27b0f
new set but still gibberish
2024-12-27 15:27:09 +01:00
Goekdeniz-Guelmez
d044db959d
update
2024-12-27 15:17:45 +01:00
Alex Barron
d4ef909d4a
Length masking for batch inputs ( #1173 )
...
* length masking
* add mask to mlx_lm model interface
* remove lengths
* fix test:
* comment + fix
2024-12-18 19:43:52 -08:00
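Length masking as in the commit above lets a batch of padded sequences share one forward pass. A minimal sketch of such a mask (in numpy; the PR itself works against the mlx_lm model interface, so this is illustrative only):

```python
import numpy as np

def length_mask(lengths, max_len):
    """Boolean mask for a padded batch: True where a position holds a
    real token, False on padding. Result shape: (batch, max_len)."""
    positions = np.arange(max_len)                       # (max_len,)
    return positions[None, :] < np.asarray(lengths)[:, None]
```

The mask can then be broadcast against attention scores or loss terms so padded positions contribute nothing.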
Goekdeniz-Guelmez
0ae536c423
update: using einsum on some lines, making it faster, but still generates gibberish on Codestral
2024-12-18 19:32:22 +01:00
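The einsum rewrite mentioned above is the usual pattern of collapsing explicit per-batch, per-head loops into one fused contraction. A sketch of the idea with hypothetical shapes (numpy here; the branch uses mlx, whose `einsum` follows the same subscript notation):

```python
import numpy as np

def state_update_loop(B, x):
    # B: (batch, heads, n), x: (batch, heads, d) -> (batch, heads, n, d)
    b, h, n = B.shape
    d = x.shape[-1]
    out = np.zeros((b, h, n, d))
    for i in range(b):            # slow: Python-level loops per batch/head
        for j in range(h):
            out[i, j] = np.outer(B[i, j], x[i, j])
    return out

def state_update_einsum(B, x):
    # Same outer product over all batch/head pairs in one fused call.
    return np.einsum('bhn,bhd->bhnd', B, x)
```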
Gökdeniz Gülmez
68533e2a8f
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-12-17 11:14:40 +01:00
Prince Canuma
dfa4dd6c93
Add support for cohere2 ( #1157 )
...
* add support for cohere2
* revert to act_fn to silu
* fix tests and sliding window attention
* add tests
* add to tuner
* fix sliding window
* add coauthor :)
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
* Add rotating kvcache to save space
* some nits
* style
* nits
---------
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
Co-authored-by: N8 <n8@n8programs.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-16 08:01:03 -08:00
Goekdeniz-Guelmez
dff4e52910
adding the model names to the LORA.md file and removing unused functions from mamba2.py
2024-12-12 22:52:00 +01:00
Goekdeniz-Guelmez
a883e39f41
optimizing the code for faster inference, but it still generates gibberish
2024-12-12 21:08:33 +01:00
Goekdeniz-Guelmez
184d3d3267
clean up
2024-12-10 18:20:13 +01:00
Goekdeniz-Guelmez
80e88b4f4d
nits
2024-12-10 18:18:59 +01:00
Goekdeniz-Guelmez
b10afe3662
nits
2024-12-10 18:15:12 +01:00
Goekdeniz-Guelmez
9f8a6a3509
inference on codestral works but is gibberish
2024-12-10 17:34:44 +01:00
Gökdeniz Gülmez
ddad2105ef
Merge branch 'main' into adding-support-for-mamba2
2024-12-10 14:32:44 +01:00
n8programs
5687d5b99b
Adds EXAONE architecture. ( #1145 )
...
* Adds EXAONE architecture.
* nits + format
* format
* clean up and fix rope
* clean up and fix rope
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-09 07:58:25 -08:00
Awni Hannun
8801beb66f
Add olmo2 ( #1128 )
...
* add olmo2
* add olmo2
2024-12-02 11:42:58 -08:00
Goekdeniz-Guelmez
38e5801edb
loading codestral works but no inference
2024-11-24 16:26:45 +01:00
Awni Hannun
004eb4cc9d
Tencent HunYuan MOE model ( #1100 )
...
* hunyuan
* fix
* format str
* default trust remote code for tokenizer, allow system prompt to be configurable
2024-11-23 11:06:26 -08:00
Goekdeniz-Guelmez
a6ddc27a4e
removing last checkpoint file
2024-11-21 22:33:56 +01:00
Goekdeniz-Guelmez
57b1717cf5
inference fixed
2024-11-21 22:25:58 +01:00
Goekdeniz-Guelmez
117ffd3909
removing some files
2024-11-21 22:05:42 +01:00
Goekdeniz-Guelmez
e22b2dbf27
Fixed streaming generation and got rid of generating gibberish, but it is still a little slow: 0.222 tokens-per-sec
2024-11-21 22:01:28 +01:00
Goekdeniz-Guelmez
1d851069ea
nits
2024-11-10 17:21:18 +01:00
Goekdeniz-Guelmez
1a6688384d
implemented multi-token inputs, but still generating gibberish
2024-11-10 17:19:00 +01:00
Goekdeniz-Guelmez
2f95b361a8
removed the custom Mamba2Cache and updated the existing MambaCache, but still only one input token and outputs gibberish
2024-11-10 16:57:03 +01:00
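For context on the cache work above: an SSM layer cache has to carry two pieces of per-layer state between decoding steps, a rolling conv1d window and the recurrent SSM state. A hypothetical sketch (names and shapes are illustrative assumptions, not the MambaCache API):

```python
import numpy as np

class SSMCacheSketch:
    """Illustrative sketch of the state an SSM layer keeps between steps."""
    def __init__(self, conv_width, channels, heads, state_size, head_dim):
        # Rolling window of the last (conv_width - 1) token activations.
        self.conv_state = np.zeros((conv_width - 1, channels))
        # Recurrent SSM state, updated once per decoded token.
        self.ssm_state = np.zeros((heads, state_size, head_dim))

    def push_conv(self, x):
        # Slide the window: drop the oldest row, append the new token.
        self.conv_state = np.concatenate([self.conv_state[1:], x[None, :]])
        return self.conv_state
```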
Gökdeniz Gülmez
49d3f188f8
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-11-10 16:36:02 +01:00
Goekdeniz-Guelmez
3a499f9735
fixed inference slowness, but it can't handle multi-token inputs and is generating gibberish
2024-11-10 16:35:07 +01:00
Goekdeniz-Guelmez
800b60239c
save checkpoint
2024-11-10 14:36:26 +01:00
Goekdeniz-Guelmez
906f972d36
save push
2024-11-06 16:35:46 +01:00