Awni Hannun
2146bcd7ee
Quantize embedding / Update quantize API ( #680 )
* more async eval
* quantize embedding / update quantize api
* more updates for quantize
* update for quantize embeddings
* update sd quant API
* update sdxl quants
* error for datasets < batch_size
* async
* fix config loading
* fix quant
* fix tests
* fix req
* remove lm head if tie weights is true
* fix test
2024-04-18 18:16:10 -07:00
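The quantization commits above revolve around shrinking weights to a few bits per value. As background, a minimal affine-quantization sketch in plain Python; this is an illustration of the general technique, not the MLX implementation (which quantizes row-wise in groups), and the function names here are hypothetical:

```python
# Minimal affine (asymmetric) quantization sketch.
# Floats are mapped to integers in [0, 2**bits -  1] via a scale and zero point,
# then mapped back approximately on dequantization.

def quantize(values, bits=4):
    """Quantize a list of floats to integers, returning (ints, scale, zero_point)."""
    qmax = (1 << bits) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, zero):
    """Recover approximate floats from quantized integers."""
    return [zero + scale * x for x in q]

weights = [-0.9, -0.1, 0.0, 0.4, 1.2]
q, scale, zero = quantize(weights, bits=4)
recovered = dequantize(q, scale, zero)
```

Rounding bounds the reconstruction error of each value by half a quantization step (`scale / 2`), which is why smaller group sizes and more bits trade memory for fidelity.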
Anchen
31ddbd7806
add deepseek coder example ( #172 )
* feat: add example for deepseek coder
* chore: remove hardcoded rope_scaling_factor
* feat: add quantization support
* chore: update readme
* chore: clean up the rope scaling factor param in create cos sin theta
* feat: add repetition_penalty
* style / consistency changes to ease future integration
* nits in README
* one more typo
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-28 21:42:22 -08:00
Awni Hannun
3cf436b529
Quantize example ( #162 )
* testing quantization
* conversion + quantization working
* one config processor
* quantization in mistral / nits in llama
* args for quantization
* llama / mistral conversion in good shape
* phi2 quantized
* mixtral
* qwen conversion
2023-12-21 12:59:37 -08:00
Daniel Strobusch
43b6522af2
rename --model_path to --model-path ( #151 )
Use the same argument convention for mistral/mixtral as for llama convert.
2023-12-21 06:28:57 -08:00
Awni Hannun
27c0a8c002
Add llms subdir + update README ( #145 )
* add llms subdir + update README
* nits
* use same pre-commit as mlx
* update readmes a bit
* format
2023-12-20 10:22:25 -08:00