Commit Graph

709 Commits

Author SHA1 Message Date
Gökdeniz Gülmez
c26e188417
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2025-02-12 11:09:20 +01:00
Awni Hannun
ec30dc3538
hunyuan finetune (#1270) 2025-02-11 16:49:35 -08:00
Awni Hannun
42413c5d85
fix lora timings after validation (#1278) 2025-02-11 16:48:55 -08:00
Awni Hannun
f8cbf159e0
fix sharding for more even number of layers (#1276) 2025-02-11 16:26:59 -08:00
Awni Hannun
e879ea70e1
fix generation evaluations (#1277) 2025-02-11 16:10:30 -08:00
Matt Clayton
3d677f0870
Add "from_draft" to GenerationResponse (#1272)
* Add from_draft field in GenerationResponse

* Cleanup

* Re-work for minimal changes, add test

* Fix comment
2025-02-11 15:41:02 -08:00
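The `from_draft` field added above marks whether a token was proposed by the draft model during speculative decoding. As a minimal sketch (the dataclass below is a simplified stand-in, not the actual mlx-lm `GenerationResponse`), one could compute a draft acceptance rate from a stream of responses:

```python
from dataclasses import dataclass

# Simplified stand-in for mlx-lm's GenerationResponse; the real class
# carries more fields (token, prompt_tps, generation_tps, etc.).
@dataclass
class GenerationResponse:
    text: str
    from_draft: bool  # True if this token was proposed by the draft model

def draft_acceptance_rate(responses):
    """Fraction of generated tokens that came from the draft model."""
    if not responses:
        return 0.0
    accepted = sum(1 for r in responses if r.from_draft)
    return accepted / len(responses)

responses = [
    GenerationResponse("Hello", True),
    GenerationResponse(",", True),
    GenerationResponse(" world", False),
    GenerationResponse("!", True),
]
print(draft_acceptance_rate(responses))  # 0.75
```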
Awni Hannun
bded1a8fcd
fix looping in whisper (#1273) 2025-02-10 13:04:35 -08:00
Chime Ogbuji
5865899c81
Completion only fine-tuning of instruction models with collections of HF datasets (#1103)
- Optional completion-only fine-tuning with `--mask-prompt`

- Collections of Hugging Face datasets

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2025-02-09 20:12:34 -08:00
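Completion-only fine-tuning works by zeroing the loss on prompt tokens so the model is trained only to predict the completion. A minimal sketch of the masking idea (names are illustrative, not the actual mlx-lm implementation):

```python
def build_loss_mask(prompt_len, total_len, mask_prompt=True):
    """Return a 0/1 mask over token positions: 0 for prompt tokens
    (excluded from the loss), 1 for completion tokens."""
    if not mask_prompt:
        return [1] * total_len
    return [0] * prompt_len + [1] * (total_len - prompt_len)

# 4 prompt tokens + 3 completion tokens: only the completion
# contributes to the loss when --mask-prompt is set.
mask = build_loss_mask(prompt_len=4, total_len=7)
print(mask)  # [0, 0, 0, 0, 1, 1, 1]
```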
Sri Harsha Pamu
1ced1b00ca
rm temp argument (#1267) 2025-02-09 11:39:11 -08:00
Awni Hannun
f58c7de901
Some improvements to speed up alignment computation in MLX Whisper (#1259)
* some improvements to speed up alignment computation in MLX Whisper

* fix alignment
2025-02-08 15:47:00 -08:00
Awni Hannun
1503bd4f55
support hunyuan 7b (#1263) 2025-02-08 15:46:47 -08:00
Awni Hannun
31611b62d7
Add IBM granite model (#1265)
* add granite

* add thinking option
2025-02-08 15:46:15 -08:00
Awni Hannun
6120a5f376
Faster DSv2/3 expert score computation (#1257)
* fix deepseek sharding (#1242)

* compile and use put along axis in deep seek routing function
2025-02-07 10:24:57 -08:00
Awni Hannun
52c41b5b5a
Fix prompt cache for models without chat template (#1250)
* fix deepseek sharding (#1242)

* fix prompt cache with no chat template
2025-02-06 11:10:58 -08:00
Nripesh Niketan
747c08e202
Chore: pre-commit bump (#1253) 2025-02-06 09:06:31 -08:00
Pedro Cuenca
e2e5478da5
READMEs: fix typo in link, minor update. (#1246) 2025-02-04 11:52:32 -08:00
Awni Hannun
21d0ab6e8a
fix deepseek sharding (#1242) 2025-02-03 16:59:50 -08:00
Gökdeniz Gülmez
0989c073b0
Optimizations for mamba1 (#1213)
* added mx.einsum() operations: before: 41.293 tokens-per-sec, after: 57.822 tokens-per-sec

* Fused operations in `delta, B, C = ...`: before: 57.822 tokens-per-sec, after: 83.890 tokens-per-sec

* Pre-computing A_log: before: 83.890 tokens-per-sec, after: 85.848 tokens-per-sec

* Update MambaBlock: batched input processing, improved cache handling, pre-computed constants, cleaner state management, explicit return values. Before: 82.442 tokens-per-sec, after: 129.130 tokens-per-sec.

* cleaning up and adding apple copyright to helium modelfile

* update Copyright to this year

* nits + even faster

---------

Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2025-02-03 13:36:08 -08:00
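One of the wins above comes from hoisting work out of the per-token forward pass: `A = -exp(A_log)` depends only on a learned parameter, so it can be computed once at load time instead of on every call. A pure-Python sketch of the idea (the real code operates on MLX arrays, not scalars; class and method names are illustrative):

```python
import math

class SSMBlockBefore:
    """Recomputes A from A_log on every forward call."""
    def __init__(self, a_log):
        self.a_log = a_log

    def forward(self, x):
        a = -math.exp(self.a_log)  # recomputed for every token
        return a * x

class SSMBlockAfter:
    """Precomputes A once, as in the mamba1 optimization above."""
    def __init__(self, a_log):
        self.a = -math.exp(a_log)  # hoisted out of the hot loop

    def forward(self, x):
        return self.a * x

# Same result, strictly less work per generated token.
assert SSMBlockBefore(0.5).forward(2.0) == SSMBlockAfter(0.5).forward(2.0)
```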
Awni Hannun
d9924d08d1
Fix no validation in lora (#1241) 2025-02-03 09:55:24 -08:00
Awni Hannun
9c2ef38d4d
only download local shard (#1240) 2025-02-02 13:58:44 -08:00
Gökdeniz Gülmez
57e10446b0
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2025-01-29 15:07:11 +01:00
Awni Hannun
e8afb59de4
better overflow correction (#1229) 2025-01-28 14:37:30 -08:00
Anchen
7a83077cd7
chore(mlx-lm): support text type content in messages (#1225)
* chore(mlx-lm): support text type content

* chore: optimize the message content processing

* nits + format

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-27 17:13:50 -08:00
Awni Hannun
f44a52e2dc
batched min p and fix spec gen sampling (#1222) 2025-01-27 15:40:31 -08:00
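Min-p sampling keeps only tokens whose probability is at least `min_p` times the probability of the most likely token, then renormalizes. A pure-Python sketch of the filtering rule (mlx-lm's batched version operates on logit arrays, so this is only the per-row idea):

```python
def min_p_filter(probs, min_p=0.1):
    """Zero out tokens below min_p * max(probs), then renormalize."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.5, 0.3, 0.15, 0.05]
# threshold = 0.2 * 0.5 = 0.1, so only the 0.05 token is dropped
filtered = min_p_filter(probs, min_p=0.2)
print(filtered)
```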
Goekdeniz-Guelmez
3642a9df9b update ACKNOWLEDGMENTS 2025-01-26 17:00:32 +01:00
Gökdeniz Gülmez
de856c7223
Merge branch 'main' into adding-support-for-mamba2 2025-01-26 16:58:06 +01:00
Gökdeniz Gülmez
77faa14ba4
adding support for kyutai's helium (#1208)
* initial commit

* adding helium into training

* Update ACKNOWLEDGMENTS.md

* nits

* nits

* fixes / nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-26 07:19:07 -08:00
Goekdeniz-Guelmez
2462a34194 removing sanitize 2025-01-22 22:30:15 +01:00
Gökdeniz Gülmez
dd29e74b89
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2025-01-22 14:19:06 +01:00
Awni Hannun
9a3ddc3e65
some fixes for pipeline parallel deep seek r1 (#1216) 2025-01-21 19:40:29 -08:00
Goekdeniz-Guelmez
a4b716e65d small optimization 2025-01-22 00:15:02 +01:00
Victor Nogueira
df1406735b
Fix dataset variable name, in datasets.py (#1212) 2025-01-21 14:12:43 -08:00
Goekdeniz-Guelmez
12e9f34524 removing unnecessary lines and cleaning up 2025-01-21 23:06:40 +01:00
Goekdeniz-Guelmez
c13de475f6 removing custom RMSNorm class 2025-01-21 22:52:45 +01:00
Goekdeniz-Guelmez
a6a92cb91f codestral inference actually works now 2025-01-21 21:01:39 +01:00
Goekdeniz-Guelmez
5a6ada2df0 getting really close:
python -m mlx_lm.generate --model /Users/gokdenizgulmez/Desktop/Mamba-Codestral-7B-v0.1-4bit --prompt "# A function that computes fibonacci
def fibonacci(" -m 64
==========
n):
    print(f"{os.path.abspath(".")/data/data/data/com.android.launcher.png)

## 🙌🏼 🙌🙌🙌🙌🙌🙌

class _State(Enum):
    def __init__ (self
==========
Prompt: 16 tokens, 84.547 tokens-per-sec
Generation: 64 tokens, 13.774 tokens-per-sec
Peak memory: 4.139 GB
2025-01-21 20:44:51 +01:00
Goekdeniz-Guelmez
eb432f4b7d inference with the original mamba2 model works but still not with codestral. working:
rokyang/mamba2-130m-hf
rokyang/mamba2-370m-hf
rokyang/mamba2-780m-hf
rokyang/mamba2-1.3b-hf
rokyang/mamba2-2.7b-hf
2025-01-21 19:38:07 +01:00
Gökdeniz Gülmez
be4bc7a090
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2025-01-21 10:57:21 +01:00
Goekdeniz-Guelmez
e96c17d061 inference works 2025-01-20 19:50:08 +01:00
Goekdeniz-Guelmez
db514f24c8 update 2025-01-20 19:44:05 +01:00
Goekdeniz-Guelmez
531ac96481 fixing cache 2025-01-20 18:26:21 +01:00
Jarrett
07f88f8057
fix(lora): add back store_true default args (#1205) 2025-01-16 11:15:42 -08:00
Awni Hannun
50f0a7f6d9
add internlm3 (#1206) 2025-01-15 14:55:41 -08:00
Ivan Fioravanti
6ae6c72c2e
reduction moved to CPU in case of distributed training (#1200) 2025-01-14 17:20:42 -08:00
Goekdeniz-Guelmez
dd4957f3da adding correct initialisation of dt, A and D 2025-01-13 21:28:43 +01:00
Gökdeniz Gülmez
5509ef8e52
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2025-01-13 20:16:04 +01:00
Awni Hannun
c117af83b8
fix gpt bigcode (#1204) 2025-01-13 10:22:32 -08:00
Chime Ogbuji
0228c46434
Custom local dataset features (#1085)
* Generalize prompt_feature and completion_feature for use in local datasets to facilitate compatibility with many other training dataset formats.

* Persist configured prompt/completion key

* rebase + nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-13 10:01:18 -08:00
Prince Canuma
bf2da36fc6
Fix Cohere2: mask shape error (long context) (#1202)
* fix mask shape error (long context)

* Update llms/mlx_lm/models/cohere2.py

Co-authored-by: Awni Hannun <awni.hannun@gmail.com>

* revert layer_idx

* black formatting

* Update cohere2.py

* format

---------

Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-12 12:58:08 -08:00
Xingjun.Wang
514502da22
Support snapshot_download for ModelScope (#1194)
* add MLX_USE_MODELSCOPE env

* update

* update snapshot_download

* update

* remove modelscope dependency and add import check

* update

* nits

* fix

---------

Co-authored-by: wangxingjun778 <jason@U-C7X6TX5G-2239.local>
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-10 15:29:34 -08:00
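The ModelScope PR above gates the alternate download path behind the `MLX_USE_MODELSCOPE` environment variable and an import check, so `modelscope` is only required when the user opts in. A sketch of that dispatch pattern (the function name and truthy-value handling here are illustrative, not the exact mlx-lm code):

```python
import os

def pick_download_backend(env=os.environ):
    """Choose the snapshot_download backend: ModelScope when
    MLX_USE_MODELSCOPE is set to a truthy value, Hugging Face Hub otherwise."""
    if env.get("MLX_USE_MODELSCOPE", "").lower() in ("1", "true", "yes"):
        try:
            import modelscope  # noqa: F401  # only needed when opted in
        except ImportError:
            raise ImportError(
                "MLX_USE_MODELSCOPE is set but modelscope is not installed; "
                "run `pip install modelscope`"
            )
        return "modelscope"
    return "huggingface_hub"

print(pick_download_backend({"MLX_USE_MODELSCOPE": "false"}))  # huggingface_hub
```

Keeping the import inside the opt-in branch means the default Hugging Face path never pays for, or fails on, a missing optional dependency.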