commit 2146bcd7ee
Author: Awni Hannun
Date:   2024-04-18 18:16:10 -07:00

    Quantize embedding / Update quantize API (#680)

    * more async eval
    * quantize embedding / update quantize api
    * more updates for quantize
    * update for quantize embeddings
    * update sd quant API
    * update sdxl quants
    * error for datasets < batch_size
    * async
    * fix config loading
    * fix quant
    * fix tests
    * fix req
    * remove lm head if tie weights is true
    * fix test

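One bullet above, removing the lm head when weights are tied, refers to weight tying: the output projection reuses the embedding matrix, so quantizing the embedding also covers the head. A minimal NumPy sketch of the idea (names here are illustrative, not the PR's actual code):

```python
import numpy as np

vocab, dim = 10, 4
embedding = np.random.randn(vocab, dim)   # the single shared matrix

def embed(token_ids):
    # Input lookup: (n,) token ids -> (n, dim) vectors.
    return embedding[token_ids]

def lm_head(hidden):
    # With tied weights there is no separate head module: project back
    # onto the vocabulary with the transposed embedding matrix.
    return hidden @ embedding.T           # (n, dim) -> (n, vocab)

h = embed(np.array([3]))
logits = lm_head(h)
assert logits.shape == (1, vocab)
```

Dropping the separate head halves the parameter count of the largest layer, which is why a tied model should not carry (or quantize) a redundant `lm_head`.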
commit b8a348c1b8
Author: Awni Hannun
Date:   2024-03-23 07:13:51 -07:00

    Switch to fast RMS/LN Norm (#603)

    * use nn.RMSNorm, use sdpa, cleanup
    * bump mlx versions
    * minor update
    * use fast layer norm
    * version bump
    * update requirement for whisper
    * update requirement for gguf

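The fused norms this commit switches to compute the same function as the hand-rolled versions they replace. A reference NumPy sketch of the RMSNorm formula, for orientation only (the point of the commit is that the fused kernel does this in one pass):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-5):
    """RMSNorm: scale x by the inverse root-mean-square of its last
    axis, then apply a learned per-feature gain. Unlike LayerNorm,
    there is no mean-centering and no additive bias."""
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight
```
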
commit f71e965d57
Author: Angelos Katharopoulos
Date:   2024-02-14 17:40:11 -08:00

    Change gqa to use repeat instead of concatenate (#443)

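In grouped-query attention the few key/value heads are shared across many query heads, so K and V must be expanded to the query head count before the attention matmul. A NumPy sketch of the two equivalent expansions the commit title contrasts (shapes and names are illustrative):

```python
import numpy as np

# Toy shapes: (batch, n_kv_heads, seq_len, head_dim)
B, n_kv_heads, L, D = 1, 2, 4, 8
n_heads = 8
repeats = n_heads // n_kv_heads   # query heads per kv head = 4

k = np.random.randn(B, n_kv_heads, L, D)

# Expansion via concatenation: stack `repeats` copies along a new
# axis, then fold that axis into the head axis.
k_cat = np.concatenate([np.expand_dims(k, 2)] * repeats, axis=2)
k_cat = k_cat.reshape(B, n_heads, L, D)

# Expansion via repeat: a single call producing the same layout.
k_rep = np.repeat(k, repeats, axis=1)

assert np.array_equal(k_cat, k_rep)
```

Both produce each kv head duplicated `repeats` times in consecutive head slots; the repeat form is simply fewer ops and fewer temporaries.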
commit 8071aacd98
Author: Long Sha
Date:   2024-02-06 19:56:56 -08:00

    fix-mistral-download-link (#418)

commit 85258b2be7
Author: Daniel Strobusch
Date:   2024-01-02 08:18:12 -08:00

    make parameter naming consistent with other examples. (#214)

commit 31ddbd7806
Author: Anchen
Date:   2023-12-28 21:42:22 -08:00

    add deepseek coder example (#172)

    * feat: add example for deepseek coder
    * chore: remove hardcoded rope_scaling_factor
    * feat: add quantization support
    * chore: update readme
    * chore: clean up the rope scaling factor param in create cos sin theta
    * feat: add repetition_penalty
    * style / consistency changes to ease future integration
    * nits in README
    * one more typo

    Co-authored-by: Awni Hannun <awni@apple.com>

commit 0eaa323c10
Author: Vaibhav Srivastav
Date:   2023-12-22 14:10:25 -08:00

    Fix conversion + inference errors - Mistral (#176)

    * Fix conversion + inference errors.
    * wire rope_theta through to nn.RoPE

    Co-authored-by: Awni Hannun <awni@apple.com>

commit 7ae445f6c7
Author: Todsaporn Banjerdkit
Date:   2023-12-22 07:55:57 -08:00

    feat: add mistral tps (#173)

    * feat: add mistral tps
    * eval params before timing + format

    Co-authored-by: Awni Hannun <awni@apple.com>

commit 3cf436b529
Author: Awni Hannun
Date:   2023-12-21 12:59:37 -08:00

    Quantize example (#162)

    * testing quantization
    * conversion + quantization working
    * one config processor
    * quantization in mistral / nits in llama
    * args for quantization
    * llama / mistral conversion in good shape
    * phi2 quantized
    * mixtral
    * qwen conversion

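The quantization added across these models is, at its core, group-wise affine quantization: each contiguous group of weights gets its own scale and bias so that w ≈ q * scale + bias, with q stored in a few bits. A minimal NumPy sketch of that scheme, assuming 4-bit values and a group size of 4 for readability (the library's actual packing and defaults differ):

```python
import numpy as np

def quantize_groups(w, bits=4, group_size=4):
    """Per group, map the value range onto [0, 2**bits - 1] integers
    with a scale and a bias (the group minimum as zero point)."""
    groups = w.reshape(-1, group_size)
    lo = groups.min(axis=1, keepdims=True)
    hi = groups.max(axis=1, keepdims=True)
    scale = (hi - lo) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((groups - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_groups(q, scale, bias):
    return (q * scale + bias).reshape(-1)

w = np.linspace(-1.0, 1.0, 16)
q, scale, bias = quantize_groups(w)
w_hat = dequantize_groups(q, scale, bias)
```

The reconstruction error is bounded by half a quantization step per weight, which is why smaller groups (more scales) trade memory for accuracy.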
commit 43b6522af2
Author: Daniel Strobusch
Date:   2023-12-21 06:28:57 -08:00

    rename --model_path to --model-path (#151)

    Use the same argument convention for mistral/mixtral as for llama convert.

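The rename is cosmetic at the CLI surface only: argparse derives the attribute name from a dashed long option by replacing dashes with underscores, so the Python side still reads `args.model_path`. A quick stdlib demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
# Dashed long option; argparse sets dest to "model_path" automatically.
parser.add_argument("--model-path", type=str, default="mistral")

args = parser.parse_args(["--model-path", "weights/"])
assert args.model_path == "weights/"   # attribute uses the underscore form
```
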
commit 27c0a8c002
Author: Awni Hannun
Date:   2023-12-20 10:22:25 -08:00

    Add llms subdir + update README (#145)

    * add llms subdir + update README
    * nits
    * use same pre-commit as mlx
    * update readmes a bit
    * format