mlx-examples/llms/gguf_llm/models.py
Jaward Sesay 7c0962f4e2 Add Supported Quantized Phi-3-mini-4k-instruct gguf Weight (#717)
* Support for Phi-3 4-bit quantized GGUF weights

* Added link to the 4-bit quantized model

* Removed some prints

* Added correct comment

* Added correct comment

* Removed print, since the last condition already prints a warning when quantization is None
2024-04-29 20:11:32 -07:00

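For context, the last bullet refers to how the example decides whether a GGUF file carries quantized tensors and reports when it does not. Below is a minimal sketch of that kind of check, not the actual gguf_llm/models.py code: the metadata key, function name, and warning text are assumptions; only mx.load's ability to read GGUF files with return_metadata=True is taken as given.

    # Illustrative sketch only -- not the code in gguf_llm/models.py.
    import mlx.core as mx

    def load_gguf(path: str):
        # mx.load can read .gguf files and return their metadata alongside the weights.
        weights, metadata = mx.load(path, return_metadata=True)

        # The exact metadata key consulted by models.py is an assumption here.
        quantization = metadata.get("general.file_type")

        if quantization is None:
            # Warn once in this branch; earlier prints were redundant because
            # this condition already reports the unquantized case.
            print("[WARNING] No quantization metadata found; loading weights as-is.")

        return weights, metadata

Called with a local path such as load_gguf("phi-3-mini-4k-instruct-q4.gguf") (a hypothetical file name), it returns the raw weight dict and metadata, warning only when no quantization information is found.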