Mirror of https://github.com/ml-explore/mlx-examples.git (synced 2025-07-17 07:51:12 +08:00)
* Added mx.einsum() operations: before: 41.293 tokens-per-sec, after: 57.822 tokens-per-sec
* Fused operations in `delta, B, C = ...`: before: 57.822 tokens-per-sec, after: 83.890 tokens-per-sec
* Pre-computed A_log: after: 83.890 tokens-per-sec, before: 85.848 tokens-per-sec
* Updated MambaBlock with batched input processing, improved cache handling, pre-computed constants, cleaner state management, and explicit return values: before: 82.442 tokens-per-sec, after: 129.130 tokens-per-sec
* Cleaned up and added Apple copyright to the helium model file
* Updated copyright to this year
* Nits + even faster

Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
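As a rough illustration of the einsum fusion described above, the sketch below computes delta, B, and C from a single fused projection instead of three separate matmuls. This is a minimal sketch, not the repository's actual MambaBlock; the names and shapes (`W_proj`, `dt_rank`, `d_state`) are assumptions based on common Mamba implementations.

```python
# Minimal sketch of the "fused delta, B, C" idea, assuming a standard
# Mamba layout; W_proj, dt_rank, and d_state are hypothetical names.
import mlx.core as mx

def delta_B_C(x, W_proj, dt_rank, d_state):
    # x: (batch, d_inner); W_proj: (d_inner, dt_rank + 2 * d_state).
    # One einsum produces all three quantities in a single matmul...
    proj = mx.einsum("bd,dk->bk", x, W_proj)
    # ...which is then split into delta, B, and C along the last axis.
    delta, B, C = mx.split(proj, [dt_rank, dt_rank + d_state], axis=-1)
    return delta, B, C
```

Fusing the three projections into one matmul avoids launching separate kernels for each quantity, which is consistent with the throughput gains reported above.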
Directory contents:

examples/
models/
tuner/
__init__.py
_version.py
cache_prompt.py
chat.py
convert.py
evaluate.py
fuse.py
generate.py
gguf.py
LORA.md
lora.py
MANAGE.md
manage.py
MERGE.md
merge.py
py.typed
README.md
requirements.txt
sample_utils.py
SERVER.md
server.py
tokenizer_utils.py
UPLOAD.md
utils.py
Generate Text with MLX and 🤗 Hugging Face
This is an example of large language model text generation that can pull models from the Hugging Face Hub.
For more information on this example, see the README in the parent directory.
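For instance, a model can be loaded from the Hub and sampled with the package's Python API. A minimal usage sketch; the Hub repo name below is one example of an MLX-converted model from the mlx-community organization, not the only option:

```python
# Load a quantized model from the Hugging Face Hub and generate text.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    verbose=True,  # stream tokens and print generation stats
)
```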
This package also supports fine-tuning with LoRA or QLoRA, as sketched below. For more information, see the LoRA documentation (LORA.md).
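A hedged sketch of a typical fine-tuning invocation via the lora.py entry point; the data path and iteration count here are placeholders, and LORA.md documents the full set of flags and the expected data format:

```shell
# Example LoRA fine-tuning run; ./my_data is a hypothetical path to
# train/valid files in the format described in LORA.md.
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
    --train \
    --data ./my_data \
    --iters 600
```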