mlx-examples (mirror of https://github.com/ml-explore/mlx-examples.git)
llms/gguf_llm/requirements.txt
mlx>=0.8
numpy
protobuf==3.20.2
sentencepiece
huggingface_hub
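
These dependencies support the gguf_llm example, which reads models directly from GGUF files. As a minimal sketch of how huggingface_hub typically fits in, the snippet below fetches a GGUF checkpoint from the Hugging Face Hub; the repo_id and filename are hypothetical placeholders rather than values taken from this example.

    # Minimal sketch: download a GGUF file from the Hugging Face Hub.
    # The repo_id and filename below are hypothetical placeholders;
    # substitute a real GGUF repository and file before running.
    from huggingface_hub import hf_hub_download

    gguf_path = hf_hub_download(
        repo_id="some-org/some-model-GGUF",  # hypothetical repository
        filename="model.Q4_0.gguf",          # hypothetical GGUF file
    )
    print(gguf_path)  # local path to the downloaded GGUF file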