Anchen
82e3338987
chore(mlx-lm): add max token arg for mlx_lm.chat (#1089)
* chore(mlx-lm): add max token arg for mlx_lm.chat
* chore: update the default max token value
2024-11-04 06:06:34 -08:00
Angelos Katharopoulos
331148d8ec
Enable distributed LoRA training (#821)
2024-11-02 18:02:31 -07:00
Awni Hannun
0f799947d0
fix (#1079)
2024-11-01 16:30:32 -07:00
Awni Hannun
e510987870
Clear cache every now and then (#1081)
* clear cache every now and then
* don't need user arg anymore
2024-11-01 14:15:32 -07:00
Alex Barron
85ffd2c96a
Quantized KV Cache (#1075)
* add QuantizedKVCache
* simplify
* add tests
* single sdpa function
* fix sed
* in place
* fix tests
* support different k and v head dims
2024-10-31 16:59:52 -07:00
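The Quantized KV Cache entry above stores cached keys and values in low-bit form to cut memory during generation. A minimal plain-Python sketch of the underlying idea, per-group affine quantization, is shown below; the real QuantizedKVCache uses MLX's fused quantization kernels, and all names here are illustrative:

```python
# Illustrative per-group affine quantization, as used conceptually by a
# quantized KV cache: each group of cached key/value floats is stored as
# low-bit integers plus a (scale, bias) pair. Not mlx-lm's actual code.

def quantize_group(values, bits=8):
    """Quantize a group of floats to unsigned ints plus (scale, bias)."""
    lo, hi = min(values), max(values)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize_group(q, scale, bias):
    """Recover approximate floats from the stored integers."""
    return [x * scale + bias for x in q]

keys = [0.12, -0.5, 0.33, 0.9]          # one group of cached key values
q, scale, bias = quantize_group(keys)
approx = dequantize_group(q, scale, bias)
# reconstruction error is bounded by half a quantization step
assert max(abs(a - b) for a, b in zip(keys, approx)) <= scale / 2 + 1e-9
```

At 8 bits this stores each cached value in one byte instead of two or four, which is where the memory savings come from; the "support different k and v head dims" bullet corresponds to grouping along each head dimension separately.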
Awni Hannun
9f34fdbda4
Wire models in MLX LM (#1069)
* wired in MLX LM
* fix synch
* comment + nit
* version
* mlx lm version
* bump to 0.19.2
2024-10-31 08:17:14 -07:00
Goekdeniz-Guelmez
58b448dc0b
updates
2024-10-30 21:23:13 +01:00
Gökdeniz Gülmez
ffc7ab06a0
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-30 17:04:38 +01:00
Awni Hannun
8fe9539af7
Fix detokenizer space match for quote (#1072)
* fix + test
* remove transformer flax/torch warning
* format
2024-10-27 15:06:07 -07:00
hschaeufler
ab4bf05c6e
Update lora_config.yaml with new param: num_layers (#1068)
2024-10-26 09:34:46 -07:00
Gökdeniz Gülmez
3b70708201
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-25 08:57:37 +02:00
Goekdeniz-Guelmez
7c8849e795
update
2024-10-24 16:16:42 +02:00
Awni Hannun
9000e280ae
fix mamba models conversion (#1065)
2024-10-22 15:44:08 -07:00
Goekdeniz-Guelmez
a677638c4b
inference works but is hella slow
2024-10-22 23:06:06 +02:00
Goekdeniz-Guelmez
9ab581d678
notes
2024-10-22 22:10:53 +02:00
Goekdeniz-Guelmez
e43a2ab229
not working, incorrect handling with cache probably
2024-10-22 22:04:25 +02:00
Goekdeniz-Guelmez
55485b98e8
update
2024-10-22 21:23:47 +02:00
madroid
d1d480867b
LoRA: update tools datasets docs (#1063)
* LoRA: update tools datasets docs
* nits
* nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-10-22 12:19:11 -07:00
Goekdeniz-Guelmez
758597eaa8
adding multi token input and correct cache handling in ssm step
2024-10-22 20:44:23 +02:00
Awni Hannun
66e7bcb886
override dtype with quant (#1062)
2024-10-22 09:56:45 -07:00
Goekdeniz-Guelmez
5326d9373a
Merge branch 'adding-support-for-mamba2' of https://github.com/Goekdeniz-Guelmez/mlx-examples into adding-support-for-mamba2
2024-10-22 18:26:05 +02:00
Goekdeniz-Guelmez
b9c57cd429
generation works! trying training now
2024-10-22 18:25:59 +02:00
Gökdeniz Gülmez
0ef73f3a2d
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-21 15:14:19 +02:00
aronson
743763bc2e
Handle empty string case in maybe_trim_space (#1055)
* Handle empty string case in maybe_trim_space
* nit
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-10-20 20:46:43 -07:00
Goekdeniz-Guelmez
c1634ce81b
still generating gibberish
2024-10-20 18:41:28 +02:00
Goekdeniz-Guelmez
ab4cf1d1cf
generation works but outputs gibberish
2024-10-20 18:04:34 +02:00
Goekdeniz-Guelmez
4ab5139c05
quick save
2024-10-20 16:11:39 +02:00
Goekdeniz-Guelmez
cd036ccfb5
fix generation works too (almost)
2024-10-16 21:13:36 +02:00
Goekdeniz-Guelmez
181d6abedc
Merge branch 'adding-support-for-mamba2' of https://github.com/Goekdeniz-Guelmez/mlx-examples into adding-support-for-mamba2
2024-10-16 21:09:42 +02:00
Goekdeniz-Guelmez
8073cb486c
adding debug statements (somehow generation only goes through the first MambaMixer block pass)
2024-10-16 21:09:30 +02:00
Gökdeniz Gülmez
855fcc4327
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-16 18:57:55 +02:00
Awni Hannun
605c4854f1
Prompt caching in mlx_lm.server (#1026)
* caching in server
* nits
* fix tests
* don't throw if no metal
* comments
2024-10-14 10:57:22 -07:00
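The prompt-caching change above lets the server reuse computed KV state when a new request shares a prefix with the previous one (the common case in multi-turn chat). A hedged plain-Python sketch of the core idea, matching the longest common token prefix and recomputing only the suffix, follows; function names are illustrative, not mlx_lm.server's actual API:

```python
# Conceptual sketch of server-side prompt caching: reuse the KV cache
# for the longest shared token prefix between the cached prompt and a
# new request, so only the new suffix needs a forward pass.

def common_prefix_len(a, b):
    """Length of the longest shared prefix of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def tokens_to_process(cached_tokens, new_tokens):
    """Split new_tokens into (reused_prefix, suffix_to_compute)."""
    n = common_prefix_len(cached_tokens, new_tokens)
    return new_tokens[:n], new_tokens[n:]

reused, suffix = tokens_to_process([1, 2, 3, 4], [1, 2, 3, 9, 10])
assert reused == [1, 2, 3] and suffix == [9, 10]
```

In a chat, each turn appends to the prior prompt, so the shared prefix is nearly the whole conversation and per-turn prompt processing cost stays roughly constant.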
Awni Hannun
8dca1a2f60
Tokenizer updates + tests (#1024)
* tokenizer updates + tests
* nit
* add can_trim_prompt_cache
* nits
2024-10-14 10:48:46 -07:00
Awni Hannun
c799133998
Make llm async eval less brittle (#1040)
* Make llm async eval less brittle
* nit
2024-10-14 10:25:24 -07:00
Gökdeniz Gülmez
3f1c1dde6a
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-14 16:32:00 +02:00
Shunta Saito
7612c646f3
Fix PLaMo model to support Grouped Query Attention (#1037)
2024-10-12 15:26:50 -07:00
Goekdeniz-Guelmez
00ba27fe6c
adding debug statements
2024-10-11 21:36:41 +02:00
Goekdeniz-Guelmez
6f88dd59d7
quick clean up and fix
2024-10-11 21:08:13 +02:00
Goekdeniz-Guelmez
9c075a71f8
Merge branch 'adding-support-for-mamba2' of https://github.com/Goekdeniz-Guelmez/mlx-examples into adding-support-for-mamba2
2024-10-11 20:54:35 +02:00
Goekdeniz-Guelmez
4e1236cbf6
fixing loading the model
2024-10-11 20:53:29 +02:00
Awni Hannun
4360e7ccec
clear cache during prompt processing (#1027)
2024-10-09 16:48:32 -07:00
Awni Hannun
b7373cb44f
fix long prompt generations (#1023)
2024-10-09 11:09:36 -07:00
Awni Hannun
fca087be49
More cache improvements (#1015)
* fix rotating kv cache for chat use case
* reorg + fixes to caching, unify prompt caching across types and use cases for e.g. caching during a chat
* nit in chat
* fix tests
* fix tests
* fix tests
* docs
* chat command
* comments + docs
* Define meta_state on all Cache implementations
* fixes + trim_prompt_cache api
* fix default model
---------
Co-authored-by: Angelos Katharopoulos <a_katharopoulos@apple.com>
2024-10-07 20:45:51 -07:00
Gökdeniz Gülmez
52d6ca0ad0
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-10-04 22:25:31 +02:00
madroid
36c1d8e8dc
Server: support function calling (#1003)
2024-10-02 12:36:07 -07:00
Goekdeniz-Guelmez
264ba43707
update trainer/lora.py and add DepthWiseConv1d because mlx 0.18.0 doesn't accept the groups parameter
2024-10-02 19:19:32 +02:00
Gökdeniz Gülmez
49b9fc1a4c
Create mamba2.py
2024-10-02 12:48:15 +02:00
nathan
0866e23a67
repetition_penalty and logits_bias just using logits_processors (#1004)
* refactor of repetition_penalty and logits_bias to use logits_processor
* nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-09-30 08:49:03 -07:00
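The refactor above folds repetition penalty and logit bias into a single generic mechanism: a logits processor, i.e. a function mapping the generated token history and the current logits to adjusted logits. A plain-Python stand-in illustrating the pattern follows; the signatures are illustrative, not mlx-lm's actual ones:

```python
# Sketch of the logits-processor pattern: each sampler tweak is just a
# function (generated_token_ids, logits) -> logits, so repetition
# penalty and logit bias compose in one list. Illustrative names only.

def repetition_penalty_processor(penalty):
    """Down-weight tokens that already appear in the history."""
    def process(token_ids, logits):
        out = list(logits)
        for t in set(token_ids):
            out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
        return out
    return process

def logit_bias_processor(bias):
    """Add a fixed bias to selected token ids (bias: {token_id: float})."""
    def process(token_ids, logits):
        out = list(logits)
        for t, b in bias.items():
            out[t] += b
        return out
    return process

processors = [repetition_penalty_processor(2.0), logit_bias_processor({0: 5.0})]
logits = [1.0, 4.0, -2.0]
history = [1]                     # token 1 was already generated
for p in processors:
    logits = p(history, logits)
assert logits == [6.0, 2.0, -2.0]
```

Composing processors this way means new sampling constraints can be added without touching the core generation loop.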
Zai Thottakath
418d9a5511
Feature: QDoRA (#891)
* feat: QDoRA with tests and a small bug fix for recalculation of self.m
* some simplifications and fixes
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-09-30 08:01:11 -07:00
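The QDoRA entry above combines a quantized frozen base weight with a DoRA-style adapter, which splits the adapted weight into a learned per-column magnitude `m` and a normalized direction; the commit's "recalculation of self.m" bullet concerns keeping that magnitude consistent. A minimal plain-Python sketch of the DoRA reparameterization (the quantization of the base weight is omitted, and all names are illustrative):

```python
# DoRA-style weight reparameterization, conceptually:
#   W' = m * (W0 + delta) / ||W0 + delta||   (norm taken per column),
# where W0 is the frozen base weight, delta the low-rank update, and
# m a learned per-column magnitude. Quantization of W0 is left out.

def column_norms(W):
    """Euclidean norm of each column of a row-major matrix."""
    cols = len(W[0])
    return [sum(W[r][c] ** 2 for r in range(len(W))) ** 0.5
            for c in range(cols)]

def dora_weight(W0, delta, m):
    """Combine base + update, normalize columns, rescale by m."""
    rows, cols = len(W0), len(W0[0])
    combined = [[W0[r][c] + delta[r][c] for c in range(cols)]
                for r in range(rows)]
    norms = column_norms(combined)
    return [[m[c] * combined[r][c] / norms[c] for c in range(cols)]
            for r in range(rows)]

W0 = [[3.0, 0.0], [4.0, 1.0]]
delta = [[0.0, 0.0], [0.0, 0.0]]     # no low-rank update yet
m = column_norms(W0)                 # m initialized to the base norms
assert dora_weight(W0, delta, m) == W0   # identity at initialization
```

Initializing `m` to the base column norms makes the reparameterization an identity before training, so only the update and `m` need gradients while `W0` stays frozen (and, in QDoRA, quantized).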
madroid
aa1c8abdc6
LoRA: Support HuggingFace dataset via data parameter (#996)
* LoRA: support huggingface dataset via `data` argument
* LoRA: Extract the load_custom_hf_dataset function
* LoRA: split small functions
* fix spelling errors
* handle load hf dataset error
* fix pre-commit lint
* update data argument help
* nits and doc
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-09-30 07:36:21 -07:00