845efddc8c  Fix decoding manually added tokens (#1164)
Author: Billel Mokeddem
Date:   2024-12-17 09:54:29 -08:00

    * Fix decoding manually added tokens
    * fix + test
    * nit
    * nit
    * no lag bpe
    Co-authored-by: Awni Hannun <awni@apple.com>

dfa4dd6c93  Add support for cohere2 (#1157)
Author: Prince Canuma
Date:   2024-12-16 08:01:03 -08:00

    * add support for cohere2
    * revert to act_fn to silu
    * fix tests and sliding window attention
    * add tests
    * add to tuner
    * fix sliding window
    * add coauthor :)
    Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
    * Add rotating kvcache to save space
    * some nits
    * style
    * nits
    Co-authored-by: n8programs <43304488+N8python@users.noreply.github.com>
    Co-authored-by: N8 <n8@n8programs.com>
    Co-authored-by: Awni Hannun <awni@apple.com>

fc0674d2d8  chore: update evaluate.py (#1159)
Author: Ikko Eltociear Ashimine
Date:   2024-12-15 06:06:29 -08:00

    occurence -> occurrence

9f2ea5892e  Bpe stream without space (#1154)
Author: Awni Hannun
Date:   2024-12-12 13:13:50 -08:00

    * bpe streaming detokenization without space
    * version bump

2ba0e36683  [mlx-lm] Use top p in server (#1144)
Author: Awni Hannun
Date:   2024-12-12 11:12:21 -08:00

    * use top p in server
    * couple other fixes

19abf3dcaa  Replace unicode errors instead of raising exception (#1146)
Author: Angelos Katharopoulos
Date:   2024-12-12 11:10:41 -08:00

06af3c9b0e  Add finish_reason in GenerationResponse (#1153)
Author: madroid
Date:   2024-12-12 10:37:40 -08:00

77b42b7c8b  fix llava (#1149)
Author: Awni Hannun
Date:   2024-12-12 10:37:26 -08:00

135c5818c1  Fix max_tokens (#1148)
Author: Alex Barron
Date:   2024-12-10 11:26:04 -08:00

12083c4b7e  Support for multiple EOS tokens (#1141)
Author: madroid
Date:   2024-12-09 08:53:58 -08:00

    * Support for multiple EOS tokens
    * Change _eos_token_ids type from list to set
    * Remove model_config & add eos_token_id
    * nits
    Co-authored-by: Awni Hannun <awni@apple.com>

5687d5b99b  Adds EXAONE architecture. (#1145)
Author: n8programs
Date:   2024-12-09 07:58:25 -08:00

    * Adds EXAONE architecture.
    * nits + format
    * format
    * clean up and fix rope
    * clean up and fix rope
    Co-authored-by: Awni Hannun <awni@apple.com>

893b3f085e  Change Flux default max_shift to 1.15 to match the official one (#1137)
Author: hehua2008
Date:   2024-12-08 23:29:48 -08:00

ed91bbc4dc  Fix final message at end of flux training (#1143)
Author: Peter Sibley
Date:   2024-12-08 23:01:53 -08:00

1fd6aae871  Fix flux training with batch size (#1135)
Author: hehua2008
Date:   2024-12-08 22:09:04 -08:00

    Co-authored-by: Angelos Katharopoulos <a_katharopoulos@apple.com>

2211b27388  Mixed Quantizations (#1132)
Author: Alex Barron
Date:   2024-12-08 14:21:50 -08:00

    * saving/loading mixed quantizations
    * comment
    * add bits per weight
    * more concise bpw
    * count bias too

cd8cf28c39  mlx_lm.evaluate (#1140)
Author: Alex Barron
Date:   2024-12-08 12:20:10 -08:00

    * Add evaluation script
    * only write top level results
    * add lm eval version
    * typo
    * create output dir
    * relative import
    * comment
    Co-authored-by: David Grangier <dgrangier@users.noreply.github.com>

1727959a27  Add mentions of MLX-my-repo. (#1129)
Author: vb
Date:   2024-12-03 19:21:39 -08:00

    * Add mentions of MLX-my-repo.
    * simplify
    * move
    * move
    Co-authored-by: Awni Hannun <awni@apple.com>

1963df8565  Allow prompt callback to generate_step (#1133)
Author: Awni Hannun
Date:   2024-12-03 16:17:14 -08:00

    * allow prompt callback and use in cache_prompt
    * nit
    * comments
    * bump version

0ca162cfb2  Fix data_iter in prepare_dataset from speechcommands example (#1113)
Author: sakares saengkaew
Date:   2024-12-02 23:56:07 -08:00

eb9277f574  Allow loading from diffusers ckpt (#1117)
Author: Angelos Katharopoulos
Date:   2024-12-02 13:15:50 -08:00

2a9294a5f0  Fix bug in FluxSampler.timesteps method (#1131)
Author: hehua2008
Date:   2024-12-02 13:15:19 -08:00

8801beb66f  Add olmo2 (#1128)
Author: Awni Hannun
Date:   2024-12-02 11:42:58 -08:00

    * add olmo2
    * add olmo2

cefe793ae0  Accept mx.array type for prompt argument for stream_generate (#1125)
Author: Neil Mehta
Date:   2024-11-26 16:51:55 -08:00

    * Accept mx.array type for prompt argument for stream_generate
    * Fix formatting
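
    For illustration only (not part of the commit): a minimal sketch of what this change enables, assuming the current mlx_lm API in which stream_generate also accepts a pre-tokenized prompt as an mx.array; the model path and prompt text below are placeholders.

        # Sketch: pass token ids (mx.array) rather than a string to stream_generate.
        import mlx.core as mx
        from mlx_lm import load, stream_generate

        model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")  # placeholder model path
        prompt = mx.array(tokenizer.encode("Write a haiku about autumn."))      # pre-tokenized prompt

        for response in stream_generate(model, tokenizer, prompt, max_tokens=64):
            print(response.text, end="", flush=True)
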
cfc29c29f4  Put prompt processing in same stream (#1122)
Author: Awni Hannun
Date:   2024-11-25 09:47:00 -08:00

    * put prompt processing in same stream
    * patch

a5e173802e  docs: update stream_generate return type annotation (#1121)
Author: madroid
Date:   2024-11-25 08:10:14 -08:00

    Improve documentation clarity by:
    1. Fix return type annotation to correctly reflect GenerationResponse
    2. Simplify docstring by referencing GenerationResponse class
    3. Remove redundant field descriptions

adaab81029  Allow converting models from local directories (#1118)
Author: Remixer Dec
Date:   2024-11-24 16:41:06 -08:00

0ffdb6dd20  Fix object property value in mlx_lm.server chat completions response to match OpenAI spec (#1119)
Author: Kevin Conner
Date:   2024-11-24 16:37:37 -08:00

    These were "chat.completions" and "chat.completions.chunk"
    but should be "chat.completion" and "chat.completion.chunk"
    for compatibility with clients expecting an OpenAI API.
    In particular, this solves a problem in which aider 0.64.1 reports
    hitting a token limit on any completion request, no matter how small,
    despite apparently correct counts in the usage property.
    Refer to:
    https://platform.openai.com/docs/api-reference/chat/object
    > object string
    > The object type, which is always chat.completion.
    https://platform.openai.com/docs/api-reference/chat/streaming
    > object string
    > The object type, which is always chat.completion.chunk.
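
    For illustration only (not part of the commit): a small sketch of the corrected field values, following the OpenAI reference quoted above; the other response fields are abbreviated placeholders.

        # Non-streaming response: "object" is "chat.completion" (was "chat.completions").
        response = {
            "id": "chatcmpl-123",              # placeholder id
            "object": "chat.completion",
            "choices": [...],                  # abbreviated
            "usage": {...},                    # abbreviated
        }

        # Streaming chunks: "object" is "chat.completion.chunk" (was "chat.completions.chunk").
        chunk = {
            "id": "chatcmpl-123",
            "object": "chat.completion.chunk",
            "choices": [...],                  # abbreviated
        }
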
0f135396ae  Generation refactor: part 2 (#1099)
Author: Awni Hannun
Date:   2024-11-23 11:47:06 -08:00

    * unify with stream_generate
    * fixes
    * nit
    * some cleanup, warnings, tests
    * fix test + faster min p + test
    * version

004eb4cc9d  Tencent HunYuan MOE model (#1100)
Author: Awni Hannun
Date:   2024-11-23 11:06:26 -08:00

    * hunyuan
    * fix
    * format str
    * default trust remote code for tokenizer, allow system prompt to be configurable

042280ce50  Fix format (#1115)
Author: Angelos Katharopoulos
Date:   2024-11-20 16:15:53 -08:00

60c7b80350  Pass seed to sd img2img (#1114)
Author: Valentin Roussellet
Date:   2024-11-20 15:21:52 -08:00

bd6d910ca3  [MLX LM] Fix f-string formatting in memory warning message (#1105)
Author: Alban Lecocq
Date:   2024-11-13 06:14:03 -08:00

    * Fix missing f-prefix for string interpolation in model size warning
    * Ensures proper display of memory values in MB for model and max size

1e07660184  FLUX: save train config (#1049)
Author: madroid
Date:   2024-11-08 17:15:19 -08:00

657b4cc0aa  [MLX LM] Sampler refactor + a few improvements (#1094)
Author: Awni Hannun
Date:   2024-11-07 16:15:24 -08:00

    * starting
    * refactor sampler/processor and a few improvements
    * fix stream
    * fix stream generate
    * fix eos handling in stream generate

ed9e81dd58  Fix rotating kv cache size (#1093)
Author: Angelos Katharopoulos
Date:   2024-11-05 10:24:24 -08:00

6fd1f70f73  fix spm decoder multi-byte (#1092)
Author: Awni Hannun
Date:   2024-11-05 06:06:26 -08:00

4394633ce0  mlx_whisper: add support for audio input from stdin (#1012)
Author: Anthony Wu
Date:   2024-11-04 14:02:13 -08:00

    * add support for audio and input name from stdin
    * refactored to stdin - arg, and output-name template
    * fix bugs, add test coverage
    * fix doc to match arg rename
    * some nits
    Co-authored-by: Awni Hannun <awni@apple.com>

3b526f0aa1  Add support for falcon-mamba (#1074)
Author: ilyasch2
Date:   2024-11-04 12:23:30 -08:00

    * Add support for falcon-mamba
    * nits
    * nit
    Co-authored-by: Awni Hannun <awni@apple.com>

82e3338987  chore(mlx-lm): add max token arg for mlx_lm.chat (#1089)
Author: Anchen
Date:   2024-11-04 06:06:34 -08:00

    * chore(mlx-lm): add max token arg for mlx_lm.chat
    * chore: update the default max token value

331148d8ec  Enable distributed LoRA training (#821)
Author: Angelos Katharopoulos
Date:   2024-11-02 18:02:31 -07:00

29c954f4cb  fix (#1082)
Author: Awni Hannun
Date:   2024-11-02 13:51:38 -07:00

0f799947d0  fix (#1079)
Author: Awni Hannun
Date:   2024-11-01 16:30:32 -07:00

e510987870  Clear cache every now and then (#1081)
Author: Awni Hannun
Date:   2024-11-01 14:15:32 -07:00

    * clear cache every now and then
    * don't need user arg anymore

8160e0c4e5  Whisper improvements (#1080)
Author: Awni Hannun
Date:   2024-11-01 10:52:28 -07:00

    * use safetensors in whisper
    * speed up decoder
    * version

85ffd2c96a  Quantized KV Cache (#1075)
Author: Alex Barron
Date:   2024-10-31 16:59:52 -07:00

    * add QuantizedKVCache
    * simplify
    * add tests
    * single sdpa function
    * fix sed
    * in place
    * fix tests
    * support different k and v head dims

9f34fdbda4  Wire models in MLX LM (#1069)
Author: Awni Hannun
Date:   2024-10-31 08:17:14 -07:00

    * wired in MLX LM
    * fix synch
    * comment + nit
    * version
    * mlx lm version
    * bump to 0.19.2

8fe9539af7  Fix detokenizer space match for quote (#1072)
Author: Awni Hannun
Date:   2024-10-27 15:06:07 -07:00

    * fix + test
    * remove transformer flax/torch warning
    * format

ab4bf05c6e  Update lora_config.yaml with new param: num_layers (#1068)
Author: hschaeufler
Date:   2024-10-26 09:34:46 -07:00

4971462bf0  feat(clip): add linear probe evaluation script (#960)
Author: Saurav Maheshkar
Date:   2024-10-24 21:56:17 -07:00

9000e280ae  fix mamba models conversion (#1065)
Author: Awni Hannun
Date:   2024-10-22 15:44:08 -07:00