cavit99
877d2a345b
Change DEFAULT_SEED to None for stochastic generation by default (#1323)
* Change DEFAULT_SEED to None for stochastic generation by default
* Update llms/mlx_lm/chat.py
* Update llms/mlx_lm/generate.py
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2025-03-06 06:49:35 -08:00
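A minimal sketch of the pattern this commit describes, assuming the usual seeding convention: a `None` default means "do not seed the RNG", so generation is stochastic unless the caller passes an integer. Plain `random` stands in for MLX's sampler here; the function and variable names are illustrative, not the exact code merged.

```python
import random

DEFAULT_SEED = None  # previously a fixed int, so every run was reproducible

def generate(prompt, seed=DEFAULT_SEED):
    # Only seed when the caller explicitly asks for determinism.
    if seed is not None:
        random.seed(seed)
    # Stand-in for token sampling.
    return [random.random() for _ in range(3)]

# Same explicit seed -> identical outputs; seed=None -> independent draws.
assert generate("hi", seed=0) == generate("hi", seed=0)
```

The design choice is that reproducibility becomes opt-in: tests and benchmarks pass a seed, while interactive chat gets varied outputs by default.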
Awni Hannun
845cd8c01e
support kimi + more options in chat mode (#1312)
2025-02-28 11:33:18 -08:00
Awni Hannun
c4833a2f55
fix encoding with special tokens + chat template (#1189)
2025-01-03 10:50:59 -08:00
Alex Barron
135c5818c1
Fix max_tokens (#1148)
2024-12-10 11:26:04 -08:00
Awni Hannun
0f135396ae
Generation refactor: part 2 (#1099)
* unify with stream_generate
* fixes
* nit
* some cleanup, warnings, tests
* fix test + faster min p + test
* version
2024-11-23 11:47:06 -08:00
Awni Hannun
657b4cc0aa
[MLX LM] Sampler refactor + a few improvements (#1094)
* starting
* refactor sampler/processor and a few improvements
* fix stream
* fix stream generate
* fix eos handling in stream generate
2024-11-07 16:15:24 -08:00
Anchen
82e3338987
chore(mlx-lm): add max token arg for mlx_lm.chat (#1089)
* chore(mlx-lm): add max token arg for mlx_lm.chat
* chore: update the default max token value
2024-11-04 06:06:34 -08:00
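The change above can be sketched as a standard `argparse` flag on the chat CLI. The flag name and default below are assumptions for illustration; the commit notes the default value was revised during review.

```python
import argparse

# Hypothetical sketch of wiring a max-tokens option into a chat CLI.
parser = argparse.ArgumentParser(description="chat")
parser.add_argument(
    "--max-tokens",
    type=int,
    default=256,  # illustrative default; the PR updated the real one
    help="Maximum number of tokens to generate per response",
)

# Users override the cap per invocation, e.g. `mlx_lm.chat --max-tokens 512`.
args = parser.parse_args(["--max-tokens", "512"])
print(args.max_tokens)
```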
Awni Hannun
9f34fdbda4
Wire models in MLX LM (#1069)
* wired in MLX LM
* fix synch
* comment + nit
* version
* mlx lm version
* bump to 0.19.2
2024-10-31 08:17:14 -07:00
Awni Hannun
fca087be49
More cache improvements (#1015)
* fix rotating kv cache for chat use case
* reorg + fixes to caching; unify prompt caching across cache types and use cases (e.g. caching during a chat)
* nit in chat
* fix tests
* fix tests
* fix tests
* docs
* chat command
* comments + docs
* Define meta_state on all Cache implementations
* fixes + trim_prompt_cache api
* fix default model
---------
Co-authored-by: Angelos Katharopoulos <a_katharopoulos@apple.com>
2024-10-07 20:45:51 -07:00
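The `trim_prompt_cache` API mentioned above can be sketched with a toy cache: a chat session keeps previously processed tokens cached, and trimming drops the most recent entries (for example, to roll back a turn). This is a hypothetical minimal version, not mlx_lm's implementation; the class and helper names are illustrative.

```python
class PromptCache:
    """Toy stand-in for a KV prompt cache; stores processed token ids."""

    def __init__(self):
        self.tokens = []

    def update(self, new_tokens):
        self.tokens.extend(new_tokens)

    @property
    def offset(self):
        # Number of tokens currently cached.
        return len(self.tokens)

def trim_prompt_cache(cache, num_tokens):
    """Drop up to num_tokens entries from the end; return how many were dropped."""
    n = min(num_tokens, cache.offset)
    if n:
        cache.tokens = cache.tokens[:-n]
    return n

cache = PromptCache()
cache.update([1, 2, 3, 4, 5])
trim_prompt_cache(cache, 2)  # cache now holds [1, 2, 3]
```

Returning the number of tokens actually trimmed lets callers handle the case where they ask to drop more than the cache holds.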