Goekdeniz-Guelmez
e96afe9e9f
updates
2025-02-11 09:09:28 +01:00
Goekdeniz-Guelmez
88ca747e9e
nits
2025-02-10 19:46:19 +01:00
Goekdeniz-Guelmez
b7bc811507
nits
2025-02-10 19:45:19 +01:00
Goekdeniz-Guelmez
e5aa2c3b5d
nits
2025-02-10 17:51:14 +01:00
Goekdeniz-Guelmez
f88e897019
removing helper functions
2025-02-10 16:07:28 +01:00
Goekdeniz-Guelmez
d9da35f458
nits
2025-02-10 10:52:32 +01:00
Gökdeniz Gülmez
0dac286539
Merge branch 'main' into adding-GRPO-training
2025-02-10 10:43:22 +01:00
Chime Ogbuji
5865899c81
Completion only fine-tuning of instruction models with collections of HF datasets ( #1103 )
- Optional completion only fine-tuning with `--mask-prompt`
- Collections of Hugging Face datasets
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2025-02-09 20:12:34 -08:00
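The `--mask-prompt` option above excludes prompt tokens from the training loss so that only completion tokens contribute gradients. A minimal sketch of that idea (illustrative names and shapes, not the exact mlx-lm implementation):

```python
import mlx.core as mx
import mlx.nn as nn

def completion_only_loss(model, inputs, targets, prompt_lengths):
    """Cross-entropy over completion tokens only; prompt positions are masked out."""
    logits = model(inputs)
    ce = nn.losses.cross_entropy(logits, targets, reduction="none")
    positions = mx.arange(targets.shape[1])[None, :]
    mask = positions >= prompt_lengths[:, None]  # True only for completion tokens
    ntoks = mask.sum()
    return (ce * mask).sum() / ntoks, ntoks
```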
Sri Harsha Pamu
1ced1b00ca
rm temp argument ( #1267 )
2025-02-09 11:39:11 -08:00
Goekdeniz-Guelmez
00712522ba
rebase loss calculation
2025-02-09 17:13:05 +01:00
Goekdeniz-Guelmez
a527cdb39b
fix: prevent gradients from flowing through the reference model's logits
2025-02-09 17:02:58 +01:00
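This commit, together with the "freeze ref model" commit below, keeps the reference policy fixed during GRPO training: its weights receive no updates and its logits are detached from the autograd graph. A hedged sketch of the pattern (stand-in models, not the repo's actual classes):

```python
import mlx.core as mx
import mlx.nn as nn

policy = nn.Linear(8, 8)      # stand-in for the trainable policy model
ref_model = nn.Linear(8, 8)   # stand-in for the reference model
ref_model.freeze()            # reference weights are never updated

def policy_and_ref_logits(x):
    cur = policy(x)                       # gradients flow through the policy
    ref = mx.stop_gradient(ref_model(x))  # no gradients flow into the reference model
    return cur, ref
```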
Goekdeniz-Guelmez
54179901b5
fix
2025-02-09 15:41:47 +01:00
Goekdeniz-Guelmez
39e9469059
freeze ref model
2025-02-09 15:30:51 +01:00
Goekdeniz-Guelmez
9ba6146a76
fix
2025-02-09 14:32:50 +01:00
Awni Hannun
1503bd4f55
support hunyuan 7b ( #1263 )
2025-02-08 15:46:47 -08:00
Awni Hannun
31611b62d7
Add IBM granite model ( #1265 )
* add granite
* add thinking option
2025-02-08 15:46:15 -08:00
Awni Hannun
6120a5f376
Faster DSv2/3 expert score computation ( #1257 )
* fix deepseek sharding (#1242 )
* compile and use put along axis in deep seek routing function
2025-02-07 10:24:57 -08:00
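The change above moves the DeepSeek routing path to a scatter with `mx.put_along_axis` inside a compiled function. A rough sketch of that pattern (shapes and names assumed, not the actual routing code):

```python
import mlx.core as mx

K = 2  # experts selected per token (illustrative)

@mx.compile
def expert_scores(gates):
    """gates: (tokens, num_experts) router probabilities -> dense top-K score matrix."""
    inds = mx.argpartition(-gates, kth=K - 1, axis=-1)[..., :K]
    vals = mx.take_along_axis(gates, inds, axis=-1)
    return mx.put_along_axis(mx.zeros_like(gates), inds, vals, axis=-1)
```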
Awni Hannun
52c41b5b5a
Fix prompt cache for models without chat template ( #1250 )
* fix deepseek sharding (#1242 )
* fix prompt cache with no chat template
2025-02-06 11:10:58 -08:00
Gökdeniz Gülmez
94dcd0f63e
Merge branch 'ml-explore:main' into adding-GRPO-training
2025-02-06 08:15:58 +01:00
Goekdeniz-Guelmez
bcfa55d882
updates
2025-02-05 15:02:12 +01:00
Goekdeniz-Guelmez
0a19522ec4
updates
2025-02-05 14:38:09 +01:00
Goekdeniz-Guelmez
35a2d99cf9
small fix
2025-02-05 11:30:21 +01:00
Goekdeniz-Guelmez
a33cad84b4
updates
2025-02-05 09:48:00 +01:00
Goekdeniz-Guelmez
d84ad0cf86
fix testing
2025-02-05 08:53:30 +01:00
Goekdeniz-Guelmez
2a8e6f6e44
update
2025-02-05 08:47:03 +01:00
Goekdeniz-Guelmez
0a09a93454
fix cache handling
2025-02-05 08:44:06 +01:00
Pedro Cuenca
e2e5478da5
READMEs: fix typo in link, minor update. ( #1246 )
2025-02-04 11:52:32 -08:00
Goekdeniz-Guelmez
7b0141455e
better create_dataset
2025-02-04 10:43:00 +01:00
Goekdeniz-Guelmez
bd1a42ec2f
adding args into dataset handling
2025-02-04 10:22:34 +01:00
Goekdeniz-Guelmez
7173840283
first successful training run
2025-02-04 09:18:45 +01:00
Awni Hannun
21d0ab6e8a
fix deepseek sharding ( #1242 )
2025-02-03 16:59:50 -08:00
Gökdeniz Gülmez
0989c073b0
Optimizations for mamba1 ( #1213 )
* added mx.einsum() operations: before: 41.293 tokens-per-sec, after: 57.822 tokens-per-sec
* Fused Operations in delta, B, C = ...: before: 57.822 tokens-per-sec, after: 83.890 tokens-per-sec
* Pre-computing A_log. After: 83.890 tokens-per-sec, before: 85.848 tokens-per-sec
* Update MambaBlock, Batched Input Processing, Improved Cache Handling, Pre-computed Constants, Cleaner State Management, Explicit Return Values: before: 82.442 tokens-per-sec, after: 129.130 tokens-per-sec.
* cleaning up and adding apple copyright to helium modelfile
* update Copyright to this year
* nits + even faster
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2025-02-03 13:36:08 -08:00
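The speedups listed above come largely from expressing the per-step selective-scan products as single `mx.einsum` contractions instead of looped elementwise ops. An illustrative (not verbatim) example of the kind of fusion involved, with made-up shapes:

```python
import mlx.core as mx

B_, L, D, N = 2, 16, 64, 16  # batch, sequence length, channels, state size (arbitrary)
delta = mx.random.uniform(shape=(B_, L, D))
B = mx.random.uniform(shape=(B_, L, N))
x = mx.random.uniform(shape=(B_, L, D))

# One fused contraction: out[b, l, d, n] = delta[b, l, d] * B[b, l, n] * x[b, l, d]
deltaB_x = mx.einsum("bld,bln,bld->bldn", delta, B, x)
```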
Goekdeniz-Guelmez
ca32424043
updates
2025-02-03 21:57:26 +01:00
Goekdeniz-Guelmez
54e295ea80
fix name funcs
2025-02-03 19:56:11 +01:00
Goekdeniz-Guelmez
06f9c29c94
print func name
2025-02-03 19:47:40 +01:00
Goekdeniz-Guelmez
40bca770ae
fixes
2025-02-03 19:43:49 +01:00
Goekdeniz-Guelmez
05d921b788
optims
2025-02-03 19:37:05 +01:00
Awni Hannun
d9924d08d1
Fix no validation in lora ( #1241 )
2025-02-03 09:55:24 -08:00
Goekdeniz-Guelmez
1d9e4802f0
first working prototype, will try training out at home
2025-02-03 12:05:29 +01:00
Goekdeniz-Guelmez
23d75cd7ad
starting first training test run
2025-02-03 10:08:28 +01:00
Goekdeniz-Guelmez
41ff5364d7
Merge branch 'adding-GRPO-training' of https://github.com/Goekdeniz-Guelmez/mlx-examples into adding-GRPO-training
2025-02-03 09:19:00 +01:00
Goekdeniz-Guelmez
a3ed632422
dataset wrapper done
2025-02-03 09:13:17 +01:00
Gökdeniz Gülmez
734d6f4a69
Merge branch 'ml-explore:main' into adding-GRPO-training
2025-02-03 09:07:20 +01:00
Goekdeniz-Guelmez
d034ca369e
adding function for R1
2025-02-03 08:26:42 +01:00
Awni Hannun
9c2ef38d4d
only download local shard ( #1240 )
2025-02-02 13:58:44 -08:00
Goekdeniz-Guelmez
243c9621d9
update lora.py
2025-01-31 21:10:44 +01:00
Goekdeniz-Guelmez
a57d553fc1
update
2025-01-31 16:57:43 +01:00
Goekdeniz-Guelmez
80bcf68956
grpo_trainer should be done
2025-01-31 16:54:18 +01:00
Goekdeniz-Guelmez
6c58aa995c
updates
2025-01-31 16:27:31 +01:00
Goekdeniz-Guelmez
93370ff1c3
updates and fixes to the KL div lines
2025-01-30 23:55:40 +01:00
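For reference, the per-token KL term typically used in GRPO-style trainers is the low-variance "k3" estimator rather than the plain log-ratio. A sketch of what such a line computes (illustrative, not necessarily the exact code this commit touched):

```python
import mlx.core as mx

def kl_per_token(policy_logps, ref_logps):
    """k3 estimator: exp(ref - cur) - (ref - cur) - 1, non-negative per token."""
    diff = ref_logps - policy_logps
    return mx.exp(diff) - diff - 1.0
```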