Goekdeniz-Guelmez
313d4a2ac9
summarize segsum
2025-02-28 15:04:03 +01:00
Goekdeniz-Guelmez
932b196b48
updates
2025-02-26 16:51:18 +01:00
Goekdeniz-Guelmez
61fad00892
updates
2025-02-26 15:16:45 +01:00
Goekdeniz-Guelmez
a683344450
correct segsum function
2025-02-26 14:46:46 +01:00
Goekdeniz-Guelmez
b7c0bdfd49
adding pytorch implementation
2025-02-25 16:31:19 +01:00
Gökdeniz Gülmez
42c3cd2084
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-02-25 13:27:45 +01:00
Shunta Saito
c37e26a1a3
Add plamo-2-1b model ( #1283 )
...
* Add pfnet/plamo-2-1b
* Fix cache.py to support non-top level layers
* Use mlx's BaseModelArgs
* Fix model
* Use sanitize()
* Remove unnecessary changes
* Add plamo2.py
* Apply formatter
* Fix some part
* Allow a cache obj defined externally
* Fix channel-first weights to channel-last for correct use of MLX's conv1d
* Remove unused code part
* Give all inputs on the first call of the model
* Fix import
* Include .jsonl files to download from Huggingface hub
* Fix reference to layers
* Remove unnecessary code and add a test for plamo2
* Do not pass mask to prepare_inputs_for_generation
* Fix to use repeat instead of tile
* Add state property to PlamoCache
* Add __iter__ and __next__ methods to PlamoCache
* cleanup
* cleanup
* fix
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2025-02-24 19:24:43 -08:00
Gökdeniz Gülmez
c26e188417
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-02-12 11:09:20 +01:00
Awni Hannun
f8cbf159e0
fix sharding for more even number of layers ( #1276 )
2025-02-11 16:26:59 -08:00
Awni Hannun
1503bd4f55
support hunyuan 7b ( #1263 )
2025-02-08 15:46:47 -08:00
Awni Hannun
31611b62d7
Add IBM granite model ( #1265 )
...
* add granite
* add thinking option
2025-02-08 15:46:15 -08:00
Awni Hannun
6120a5f376
Faster DSv2/3 expert score computation ( #1257 )
...
* fix deepseek sharding (#1242 )
* compile and use put along axis in deep seek routing function
2025-02-07 10:24:57 -08:00
Awni Hannun
52c41b5b5a
Fix prompt cache for models without chat template ( #1250 )
...
* fix deepseek sharding (#1242 )
* fix prompt cache with no chat template
2025-02-06 11:10:58 -08:00
Awni Hannun
21d0ab6e8a
fix deepseek sharding ( #1242 )
2025-02-03 16:59:50 -08:00
Gökdeniz Gülmez
0989c073b0
Optimizations for mamba1 ( #1213 )
...
* added mx.einsum() operations: before: 41.293 tokens-per-sec, after: 57.822 tokens-per-sec
* Fused operations in the delta, B, C = ... step. Before: 57.822 tokens-per-sec, after: 83.890 tokens-per-sec
* Pre-computing A_log. Before: 85.848 tokens-per-sec, after: 83.890 tokens-per-sec
* Updated MambaBlock: batched input processing, improved cache handling, pre-computed constants, cleaner state management, explicit return values. Before: 82.442 tokens-per-sec, after: 129.130 tokens-per-sec.
* cleaning up and adding Apple copyright to the helium model file
* update Copyright to this year
* nits + even faster
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2025-02-03 13:36:08 -08:00
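For readers following the optimization described in the entry above, here is a minimal, hypothetical sketch of the fused delta, B, C projection idea using mx.einsum. The function name, shapes, and the dt_rank/state_size parameters are illustrative assumptions, not the repository's actual mamba.py code.

import mlx.core as mx

# Hypothetical sketch: one einsum projection plus a split replaces separate
# matmuls for delta, B and C (names and shapes are assumed, not from the repo).
def fused_delta_B_C(x, x_proj_weight, dt_rank, state_size):
    # x: (batch, seq_len, d_inner); x_proj_weight: (dt_rank + 2 * state_size, d_inner)
    projected = mx.einsum("bld,nd->bln", x, x_proj_weight)
    delta = projected[..., :dt_rank]
    B = projected[..., dt_rank : dt_rank + state_size]
    C = projected[..., dt_rank + state_size :]
    return delta, B, C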
Awni Hannun
9c2ef38d4d
only download local shard ( #1240 )
2025-02-02 13:58:44 -08:00
Gökdeniz Gülmez
57e10446b0
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-29 15:07:11 +01:00
Awni Hannun
e8afb59de4
better overflow correction ( #1229 )
2025-01-28 14:37:30 -08:00
Gökdeniz Gülmez
de856c7223
Merge branch 'main' into adding-support-for-mamba2
2025-01-26 16:58:06 +01:00
Gökdeniz Gülmez
77faa14ba4
adding support for kyutai's helium ( #1208 )
...
* initial commit
* adding helium into training
* Update ACKNOWLEDGMENTS.md
* nits
* nits
* fixes / nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-26 07:19:07 -08:00
Goekdeniz-Guelmez
2462a34194
removing sanitize
2025-01-22 22:30:15 +01:00
Gökdeniz Gülmez
dd29e74b89
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-22 14:19:06 +01:00
Awni Hannun
9a3ddc3e65
some fixes for pipeline parallel deep seek r1 ( #1216 )
2025-01-21 19:40:29 -08:00
Goekdeniz-Guelmez
a4b716e65d
small optimization
2025-01-22 00:15:02 +01:00
Goekdeniz-Guelmez
12e9f34524
removing unnecessary lines and cleaning up
2025-01-21 23:06:40 +01:00
Goekdeniz-Guelmez
c13de475f6
removing custom RMSNorm class
2025-01-21 22:52:45 +01:00
Goekdeniz-Guelmez
a6a92cb91f
codestral inference actually works now
2025-01-21 21:01:39 +01:00
Goekdeniz-Guelmez
5a6ada2df0
getting really close:
...
python -m mlx_lm.generate --model /Users/gokdenizgulmez/Desktop/Mamba-Codestral-7B-v0.1-4bit --prompt "# A function that computes fibonacci
def fibonacci(" -m 64
==========
n):
print(f"{os.path.abspath(".")/data/data/data/com.android.launcher.png)
## 🙌🏼 🙌 🙌 🙌 🙌 🙌 🙌
class _State(Enum):
def __init__ (self
==========
Prompt: 16 tokens, 84.547 tokens-per-sec
Generation: 64 tokens, 13.774 tokens-per-sec
Peak memory: 4.139 GB
2025-01-21 20:44:51 +01:00
Goekdeniz-Guelmez
eb432f4b7d
inference with the original mamba2 model works but still not with codestral. working:
...
rokyang/mamba2-130m-hf
rokyang/mamba2-370m-hf
rokyang/mamba2-780m-hf
rokyang/mamba2-1.3b-hf
rokyang/mamba2-2.7b-hf
2025-01-21 19:38:07 +01:00
Gökdeniz Gülmez
be4bc7a090
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-21 10:57:21 +01:00
Goekdeniz-Guelmez
e96c17d061
inference works
2025-01-20 19:50:08 +01:00
Goekdeniz-Guelmez
db514f24c8
update
2025-01-20 19:44:05 +01:00
Goekdeniz-Guelmez
531ac96481
fixing cache
2025-01-20 18:26:21 +01:00
Awni Hannun
50f0a7f6d9
add internlm3 ( #1206 )
2025-01-15 14:55:41 -08:00
Goekdeniz-Guelmez
dd4957f3da
adding correct initialisation of dt, A and D
2025-01-13 21:28:43 +01:00
Gökdeniz Gülmez
5509ef8e52
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2025-01-13 20:16:04 +01:00
Awni Hannun
c117af83b8
fix gpt bigcode ( #1204 )
2025-01-13 10:22:32 -08:00
Prince Canuma
bf2da36fc6
Fix Cohere2: mask shape error (long context) ( #1202 )
...
* fix mask shape error (long context)
* Update llms/mlx_lm/models/cohere2.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* revert layer_idx
* black formatting
* Update cohere2.py
* format
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-12 12:58:08 -08:00
Awni Hannun
5cae0a60e6
deepseek v3 model with pipeline parallelism ( #1191 )
...
* deepseekv3
* use upload_large_file instead of deprecated multi commit
* add pipeline generation and example
* comment
* get fp16 working
* use mlx==0.22
2025-01-09 15:55:53 -08:00
Goekdeniz-Guelmez
8deada9d11
optimizations
2024-12-27 17:52:14 +01:00
Goekdeniz-Guelmez
4e94e87f57
nits
2024-12-27 15:41:54 +01:00
Goekdeniz-Guelmez
3384d38a83
nits
2024-12-27 15:37:41 +01:00
Goekdeniz-Guelmez
2ed51946ab
still gibberish
2024-12-27 15:36:37 +01:00
Goekdeniz-Guelmez
f4cbe27b0f
new set but still gibberish
2024-12-27 15:27:09 +01:00
Goekdeniz-Guelmez
d044db959d
update
2024-12-27 15:17:45 +01:00
Alex Barron
d4ef909d4a
Length masking for batch inputs ( #1173 )
...
* length masking
* add mask to mlx_lm model interface
* remove lengths
* fix test:
* comment + fix
2024-12-18 19:43:52 -08:00
Goekdeniz-Guelmez
0ae536c423
update: using einsum on some lines, making it faster, but still generates gibberish on Codestral
2024-12-18 19:32:22 +01:00
Gökdeniz Gülmez
68533e2a8f
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-12-17 11:14:40 +01:00
Prince Canuma
dfa4dd6c93
Add support for cohere2 ( #1157 )
...
* add support for cohere2
* revert act_fn to silu
* fix tests and sliding window attention
* add tests
* add to tuner
* fix sliding window
* add coauthor :)
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
* Add rotating kvcache to save space
* some nits
* style
* nits
---------
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
Co-authored-by: N8 <n8@n8programs.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-16 08:01:03 -08:00
Goekdeniz-Guelmez
dff4e52910
adding the model names to the LORA.md file and removing unused functions from mamba2.py
2024-12-12 22:52:00 +01:00