Goekdeniz-Guelmez
d044db959d
update
2024-12-27 15:17:45 +01:00
Goekdeniz-Guelmez
0ae536c423
update: using einsum on some lines making it faster, but still generates gibberish on Codestral
2024-12-18 19:32:22 +01:00
Gökdeniz Gülmez
7996a6f4fd
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-12-18 18:35:43 +01:00
Billel Mokeddem
845efddc8c
Fix decoding manually added tokens ( #1164 )
* Fix decoding manually added tokens
* fix + test
* nit
* nit
* no lag bpe
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-17 09:54:29 -08:00
Gökdeniz Gülmez
68533e2a8f
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-12-17 11:14:40 +01:00
Prince Canuma
dfa4dd6c93
Add support for cohere2 ( #1157 )
* add support for cohere2
* revert to act_fn to silu
* fix tests and sliding window attention
* add tests
* add to tuner
* fix sliding window
* add coauthor :)
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
* Add rotating kvcache to save space
* some nits
* style
* nits
---------
Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
Co-authored-by: N8 <n8@n8programs.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-16 08:01:03 -08:00
Ikko Eltociear Ashimine
fc0674d2d8
chore: update evaluate.py ( #1159 )
occurence -> occurrence
2024-12-15 06:06:29 -08:00
Goekdeniz-Guelmez
dff4e52910
adding the model names in the LORA.md file and removing unused functions from mamba2.py
2024-12-12 22:52:00 +01:00
Awni Hannun
9f2ea5892e
Bpe stream without space ( #1154 )
* bpe streaming detokenization without space
* version bump
2024-12-12 13:13:50 -08:00
Goekdeniz-Guelmez
a883e39f41
optimizing the code for faster inference but still generates gibberish
2024-12-12 21:08:33 +01:00
Awni Hannun
2ba0e36683
[mlx-lm] Use top p in server ( #1144 )
* use top p in server
* couple other fixes
2024-12-12 11:12:21 -08:00
Angelos Katharopoulos
19abf3dcaa
Replace unicode errors instead of raising exception ( #1146 )
2024-12-12 11:10:41 -08:00
madroid
06af3c9b0e
Add finish_reason in GenerationResponse ( #1153 )
2024-12-12 10:37:40 -08:00
Awni Hannun
77b42b7c8b
fix llava ( #1149 )
2024-12-12 10:37:26 -08:00
Gökdeniz Gülmez
c1d9ec329c
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-12-10 20:43:11 +01:00
Alex Barron
135c5818c1
Fix max_tokens ( #1148 )
2024-12-10 11:26:04 -08:00
Goekdeniz-Guelmez
184d3d3267
clean up
2024-12-10 18:20:13 +01:00
Goekdeniz-Guelmez
80e88b4f4d
nits
2024-12-10 18:18:59 +01:00
Goekdeniz-Guelmez
b10afe3662
nits
2024-12-10 18:15:12 +01:00
Goekdeniz-Guelmez
9f8a6a3509
inference on codestral works but output is gibberish
2024-12-10 17:34:44 +01:00
Gökdeniz Gülmez
ddad2105ef
Merge branch 'main' into adding-support-for-mamba2
2024-12-10 14:32:44 +01:00
madroid
12083c4b7e
Support for multiple EOS tokens ( #1141 )
* Support for multiple EOS tokens
* Change _eos_token_ids type from list to set
* Remove model_config & add eos_token_id
* nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-09 08:53:58 -08:00
n8programs
5687d5b99b
Adds EXAONE architecture. ( #1145 )
* Adds EXAONE architecture.
* nits + format
* format
* clean up and fix rope
* clean up and fix rope
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-09 07:58:25 -08:00
Alex Barron
2211b27388
Mixed Quantizations ( #1132 )
* saving/loading mixed quantizations
* comment
* add bits per weight
* more concise bpw
* count bias too
2024-12-08 14:21:50 -08:00
Alex Barron
cd8cf28c39
mlx_lm.evaluate ( #1140 )
* Add evaluation script
* only write top level results
* add lm eval version
* typo
* create output dir
* relative import
* comment
---------
Co-authored-by: David Grangier <dgrangier@users.noreply.github.com>
2024-12-08 12:20:10 -08:00
vb
1727959a27
Add mentions of MLX-my-repo. ( #1129 )
* Add mentions of MLX-my-repo.
* simplify
* move
* move
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-12-03 19:21:39 -08:00
Awni Hannun
1963df8565
Allow prompt callback to generate_step ( #1133 )
* allow prompt callback and use in cache_prompt
* nit
* comments
* bump version
2024-12-03 16:17:14 -08:00
Awni Hannun
8801beb66f
Add olmo2 ( #1128 )
* add olmo2
* add olmo2
2024-12-02 11:42:58 -08:00
Neil Mehta
cefe793ae0
Accept mx.array type for prompt argument for stream_generate ( #1125 )
* Accept mx.array type for prompt argument for stream_generate
* Fix formatting
2024-11-26 16:51:55 -08:00
Awni Hannun
cfc29c29f4
Put prompt processing in same stream ( #1122 )
* put prompt processing in same stream
* patch
2024-11-25 09:47:00 -08:00
madroid
a5e173802e
docs: update stream_generate return type annotation ( #1121 )
Improve documentation clarity by:
1. Fix return type annotation to correctly reflect GenerationResponse
2. Simplify docstring by referencing GenerationResponse class
3. Remove redundant field descriptions
2024-11-25 08:10:14 -08:00
Kevin Conner
0ffdb6dd20
Fix object property value in mlx_lm.server chat completions response to match OpenAI spec ( #1119 )
These were "chat.completions" and "chat.completions.chunk"
but should be "chat.completion" and "chat.completion.chunk"
for compatibility with clients expecting an OpenAI API.
In particular, this solves a problem in which aider 0.64.1 reports
hitting a token limit on any completion request, no matter how small,
despite apparently correct counts in the usage property.
Refer to:
https://platform.openai.com/docs/api-reference/chat/object
> object string
> The object type, which is always chat.completion.
https://platform.openai.com/docs/api-reference/chat/streaming
> object string
> The object type, which is always chat.completion.chunk.
2024-11-24 16:37:37 -08:00
Goekdeniz-Guelmez
38e5801edb
loading codestral works but no inference
2024-11-24 16:26:45 +01:00
Awni Hannun
0f135396ae
Generation refactor: part 2 ( #1099 )
* unify with stream_generate
* fixes
* nit
* some cleanup, warnings, tests
* fix test + faster min p + test
* version
2024-11-23 11:47:06 -08:00
Awni Hannun
004eb4cc9d
Tencent HunYuan MOE model ( #1100 )
* hunyuan
* fix
* format str
* default trust remote code for tokenizer, allow system prompt to be configurable
2024-11-23 11:06:26 -08:00
Goekdeniz-Guelmez
a6ddc27a4e
removing last checkpoint file
2024-11-21 22:33:56 +01:00
Goekdeniz-Guelmez
57b1717cf5
inference fixed
2024-11-21 22:25:58 +01:00
Goekdeniz-Guelmez
117ffd3909
removing some files
2024-11-21 22:05:42 +01:00
Goekdeniz-Guelmez
e22b2dbf27
Fixed streaming generation and got rid of generating gibberish, but is still a little slow: 0.222 tokens-per-sec
2024-11-21 22:01:28 +01:00
Gökdeniz Gülmez
e4eae973e8
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-11-21 21:06:45 +01:00
Alban Lecocq
bd6d910ca3
[MLX LM] Fix f-string formatting in memory warning message ( #1105 )
* Fix missing f-prefix for string interpolation in model size warning
* Ensures proper display of memory values in MB for model and max size
2024-11-13 06:14:03 -08:00
Goekdeniz-Guelmez
1d851069ea
nits
2024-11-10 17:21:18 +01:00
Goekdeniz-Guelmez
1a6688384d
implemented multi-token inputs, but still generating gibberish
2024-11-10 17:19:00 +01:00
Goekdeniz-Guelmez
2f95b361a8
removed the custom Mamba2Cache and updated the existing MambaCache, but still only one input token and outputs gibberish
2024-11-10 16:57:03 +01:00
Gökdeniz Gülmez
49d3f188f8
Merge branch 'ml-explore:main' into adding-support-for-mamba2
2024-11-10 16:36:02 +01:00
Goekdeniz-Guelmez
3a499f9735
fixed inference slowness but it can't handle multi-token inputs and is generating gibberish
2024-11-10 16:35:07 +01:00
Goekdeniz-Guelmez
800b60239c
save checkpoint
2024-11-10 14:36:26 +01:00
Awni Hannun
657b4cc0aa
[MLX LM] Sampler refactor + a few improvements ( #1094 )
* starting
* refactor sampler/processor and a few improvements
* fix stream
* fix stream generate
* fix eos handling in stream generate
2024-11-07 16:15:24 -08:00
Goekdeniz-Guelmez
906f972d36
save push
2024-11-06 16:35:46 +01:00
Angelos Katharopoulos
ed9e81dd58
Fix rotating kv cache size ( #1093 )
2024-11-05 10:24:24 -08:00