Gökdeniz Gülmez
80c64da960
Merge branch 'ml-explore:main' into adding-orpo-training
2025-02-12 11:09:43 +01:00
Awni Hannun
ec30dc3538
hunyuan finetune (#1270)
2025-02-11 16:49:35 -08:00
Awni Hannun
42413c5d85
fix lora timings after validation (#1278)
2025-02-11 16:48:55 -08:00
Awni Hannun
f8cbf159e0
fix sharding for more even number of layers (#1276)
2025-02-11 16:26:59 -08:00
Awni Hannun
e879ea70e1
fix generation evaluations (#1277)
2025-02-11 16:10:30 -08:00
Matt Clayton
3d677f0870
Add "from_draft" to GenerationResponse ( #1272 )
* Add from_draft field in GenerationResponse
* Cleanup
* Re-work for minimal changes, add test
* Fix comment
2025-02-11 15:41:02 -08:00
Gökdeniz Gülmez
575ece6ef0
Merge branch 'main' into adding-orpo-training
2025-02-10 10:51:01 +01:00
Chime Ogbuji
5865899c81
Completion-only fine-tuning of instruction models with collections of HF datasets (#1103)
- Optional completion-only fine-tuning with `--mask-prompt`
- Collections of Hugging Face datasets
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2025-02-09 20:12:34 -08:00
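(For context: a minimal, hypothetical sketch in plain Python, not the mlx-lm implementation, of what completion-only training does: prompt positions are excluded from the loss targets so only the completion tokens are scored.)

```python
# Hypothetical sketch of completion-only masking (not the mlx-lm code).
# Prompt tokens are excluded from the loss targets; only completion tokens
# contribute to the cross-entropy during fine-tuning.
def mask_prompt_labels(prompt_tokens, completion_tokens, ignore_index=-100):
    input_ids = list(prompt_tokens) + list(completion_tokens)
    # Positions covering the prompt are set to ignore_index so the loss
    # function skips them; completion positions keep their token ids.
    labels = [ignore_index] * len(prompt_tokens) + list(completion_tokens)
    return input_ids, labels

# Example: only the last three label positions would be scored.
ids, labels = mask_prompt_labels([1, 2, 3, 4], [5, 6, 7])
```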
Sri Harsha Pamu
1ced1b00ca
rm temp argument (#1267)
2025-02-09 11:39:11 -08:00
Awni Hannun
1503bd4f55
support hunyuan 7b (#1263)
2025-02-08 15:46:47 -08:00
Awni Hannun
31611b62d7
Add IBM granite model (#1265)
* add granite
* add thinking option
2025-02-08 15:46:15 -08:00
Awni Hannun
6120a5f376
Faster DSv2/3 expert score computation (#1257)
* fix deepseek sharding (#1242)
* compile and use put along axis in deep seek routing function
2025-02-07 10:24:57 -08:00
Awni Hannun
52c41b5b5a
Fix prompt cache for models without chat template (#1250)
* fix deepseek sharding (#1242)
* fix prompt cache with no chat template
2025-02-06 11:10:58 -08:00
Goekdeniz-Guelmez
56712664f6
nice metric printing in testing
2025-02-04 11:21:52 +01:00
Goekdeniz-Guelmez
43940ec673
fix Test
2025-02-04 11:13:07 +01:00
Goekdeniz-Guelmez
1beefd58a0
add create_dataset
2025-02-04 11:06:57 +01:00
Gökdeniz Gülmez
c33c245c11
Merge branch 'ml-explore:main' into adding-orpo-training
2025-02-04 11:04:40 +01:00
Awni Hannun
21d0ab6e8a
fix deepseek sharding (#1242)
2025-02-03 16:59:50 -08:00
Gökdeniz Gülmez
0989c073b0
Optimizations for mamba1 (#1213)
* added mx.einsum() operations: before: 41.293 tokens-per-sec, after: 57.822 tokens-per-sec
* Fused operations in `delta, B, C = ...`. Before: 57.822 tokens-per-sec, after: 83.890 tokens-per-sec
* Pre-computing A_log. After: 83.890 tokens-per-sec, before: 85.848 tokens-per-sec
* Update MambaBlock: batched input processing, improved cache handling, pre-computed constants, cleaner state management, explicit return values. Before: 82.442 tokens-per-sec, after: 129.130 tokens-per-sec.
* cleaning up and adding apple copyright to helium modelfile
* update Copyright to this year
* nits + even faster
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2025-02-03 13:36:08 -08:00
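(For context: an illustrative plain-numpy sketch, not the actual mlx-lm Mamba code, of the kind of fusion described above, where delta, B and C come from one projection and a single split instead of separate matmuls; names and shapes are assumptions.)

```python
import numpy as np

# Illustrative sketch of a fused `delta, B, C = ...` computation: one matmul
# plus a single split, rather than separate projections. Shapes and names are
# assumptions, not the mlx-lm implementation.
def fused_delta_B_C(x, W_dbc, dt_rank, d_state):
    dbc = x @ W_dbc  # (seq_len, dt_rank + 2 * d_state), one fused projection
    delta, B, C = np.split(dbc, [dt_rank, dt_rank + d_state], axis=-1)
    return delta, B, C

x = np.random.randn(16, 64)            # (seq_len, d_inner)
W = np.random.randn(64, 4 + 2 * 8)     # dt_rank=4, d_state=8
delta, B, C = fused_delta_B_C(x, W, dt_rank=4, d_state=8)
```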
Awni Hannun
d9924d08d1
Fix no validation in lora (#1241)
2025-02-03 09:55:24 -08:00
Gökdeniz Gülmez
2c96da5155
Merge branch 'ml-explore:main' into adding-orpo-training
2025-02-03 09:07:52 +01:00
Awni Hannun
9c2ef38d4d
only download local shard (#1240)
2025-02-02 13:58:44 -08:00
Goekdeniz-Guelmez
541677aa7f
cleaning up
2025-01-31 21:36:24 +01:00
Gökdeniz Gülmez
ceccb4c9e9
Merge branch 'ml-explore:main' into adding-orpo-training
2025-01-29 15:07:22 +01:00
Awni Hannun
e8afb59de4
better overflow correction (#1229)
2025-01-28 14:37:30 -08:00
Anchen
7a83077cd7
chore(mlx-lm): support text type content in messages (#1225)
* chore(mlx-lm): support text type content
* chore: optimize the message content processing
* nits + format
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-27 17:13:50 -08:00
Awni Hannun
f44a52e2dc
batched min p and fix spec gen sampling (#1222)
2025-01-27 15:40:31 -08:00
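(For context: min-p sampling, referenced above, keeps only tokens whose probability is at least min_p times the top token's probability before renormalizing and sampling; the sketch below is an unbatched plain-numpy illustration, not the mlx-lm sampler.)

```python
import numpy as np

# Unbatched min-p sampling sketch (plain numpy, not the mlx-lm sampler).
def min_p_sample(logits, min_p=0.1, rng=None):
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Drop tokens whose probability is below min_p * max probability.
    probs = np.where(probs >= min_p * probs.max(), probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

token = min_p_sample(np.array([2.0, 1.0, -1.0, -3.0]), min_p=0.2)
```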
Gökdeniz Gülmez
294d189eed
Merge branch 'main' into adding-orpo-training
2025-01-26 16:59:37 +01:00
Gökdeniz Gülmez
77faa14ba4
adding support for kyutai's helium (#1208)
* initial commit
* adding helium into training
* Update ACKNOWLEDGMENTS.md
* nits
* nits
* fixes / nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2025-01-26 07:19:07 -08:00
Goekdeniz-Guelmez
2f2ddd4811
clean up
2025-01-26 15:17:06 +01:00
Goekdeniz-Guelmez
d8e7834345
Removed rejected_rewards handling, updated batch unpacking to match the iterator, added preference score scaling, simplified reward calculation, removed redundant rejected_rewards
2025-01-25 21:35:37 +01:00
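(For context: the reward handling above relates to ORPO's odds-ratio term; the following is a minimal sketch of that loss using average per-token log-likelihoods, following the published ORPO formulation rather than this branch's code.)

```python
import numpy as np

# Minimal ORPO loss sketch (not this branch's implementation). Inputs are
# average per-token log-likelihoods of the chosen and rejected responses.
def orpo_loss(logp_chosen, logp_rejected, nll_chosen, lam=0.1):
    # log odds(y|x) = log p - log(1 - p), computed stably from log p.
    log_odds_c = logp_chosen - np.log1p(-np.exp(logp_chosen))
    log_odds_r = logp_rejected - np.log1p(-np.exp(logp_rejected))
    ratio = log_odds_c - log_odds_r
    # -log sigmoid(ratio), written stably as log(1 + exp(-ratio)).
    l_or = np.logaddexp(0.0, -ratio)
    # Total loss: supervised NLL on the chosen response plus the OR penalty.
    return nll_chosen + lam * l_or

loss = orpo_loss(logp_chosen=-0.4, logp_rejected=-1.2, nll_chosen=0.4)
```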
Goekdeniz-Guelmez
09ed837896
updates
2025-01-24 16:57:18 +01:00
Goekdeniz-Guelmez
e3688293ed
removing DPO and fixing some things for ORPO
2025-01-24 16:09:22 +01:00
Goekdeniz-Guelmez
0bb001121e
nits
2025-01-22 21:39:29 +01:00
Gökdeniz Gülmez
4098c3bd2f
Merge branch 'ml-explore:main' into adding-orpo-training
2025-01-22 14:18:38 +01:00
Awni Hannun
9a3ddc3e65
some fixes for pipeline parallel deepseek r1 (#1216)
2025-01-21 19:40:29 -08:00
Victor Nogueira
df1406735b
Fix dataset variable name in datasets.py (#1212)
2025-01-21 14:12:43 -08:00
Goekdeniz-Guelmez
363bde634e
fixes
2025-01-19 13:45:33 +01:00
Goekdeniz-Guelmez
ea0d11cd2f
update
2025-01-19 02:05:43 +01:00
Goekdeniz-Guelmez
424cb854e9
nits
2025-01-19 02:03:50 +01:00
Goekdeniz-Guelmez
9ede9db19b
nits
2025-01-19 02:03:31 +01:00
Goekdeniz-Guelmez
fa80d081f2
finish
2025-01-19 01:58:29 +01:00
Goekdeniz-Guelmez
7d279b51ef
remerge with dpo
2025-01-19 01:14:08 +01:00
Goekdeniz-Guelmez
a9b7609118
initial commit
2025-01-19 01:09:43 +01:00
Goekdeniz-Guelmez
06a9f5d106
update lora_config.yaml
2025-01-19 00:53:41 +01:00
Goekdeniz-Guelmez
1b4e19675d
update LORA.md
2025-01-19 00:48:45 +01:00
Goekdeniz-Guelmez
582f979dfd
fixing reference model loading and freezing
2025-01-19 00:41:27 +01:00
Goekdeniz-Guelmez
1ff788821c
initial commit
2025-01-19 00:19:36 +01:00
Jarrett
07f88f8057
fix(lora): add back store_true default args (#1205)
2025-01-16 11:15:42 -08:00
Awni Hannun
50f0a7f6d9
add internlm3 (#1206)
2025-01-15 14:55:41 -08:00