# MLX Examples
This repo contains a variety of standalone examples using the MLX framework.
The MNIST example is a good starting point to learn how to use MLX; a short training sketch in that spirit follows the list below.
Some more useful examples include:
- Transformer language model training.
- Large-scale text generation with LLaMA, Mistral, or Phi.
- Mixture-of-experts (MoE) language model with Mixtral 8x7B.
- Parameter-efficient fine-tuning with LoRA.
- Generating images with Stable Diffusion.
- Speech recognition with OpenAI's Whisper.
- Bidirectional language understanding with BERT.
- Semi-supervised learning on graph-structured data with GCN.
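To give a flavor of what the examples look like, here is a minimal training sketch in the spirit of the MNIST example. It is not the repository's code: the `MLP` class, `loss_fn`, and the random stand-in batch are illustrative, and `mlx.nn` / `mlx.optimizers` details may vary slightly across MLX versions.

```python
# A minimal sketch, assuming a recent MLX release; names and hyperparameters
# are illustrative, not the repository's mnist example itself.
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim


class MLP(nn.Module):
    """A two-layer perceptron for 28x28 images and 10 classes."""

    def __init__(self, in_dims=784, hidden=128, out_dims=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dims, hidden)
        self.fc2 = nn.Linear(hidden, out_dims)

    def __call__(self, x):
        return self.fc2(nn.relu(self.fc1(x)))


def loss_fn(model, X, y):
    return nn.losses.cross_entropy(model(X), y, reduction="mean")


model = MLP()
optimizer = optim.SGD(learning_rate=0.1)
loss_and_grad_fn = nn.value_and_grad(model, loss_fn)

# Random batch as a stand-in for real MNIST data.
X = mx.random.normal((32, 784))
y = mx.random.randint(0, 10, (32,))

for step in range(10):
    loss, grads = loss_and_grad_fn(model, X, y)
    optimizer.update(model, grads)
    # MLX is lazy; evaluate to materialize the updated parameters.
    mx.eval(model.parameters(), optimizer.state)
```

Each example directory contains its own README with the exact setup and run instructions.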
## Contributing
We are grateful for all of our contributors. If you contribute to MLX Examples and wish to be acknowledged, please add your name to the list in your pull request.
## Citing MLX Examples
The MLX software suite was initially developed with equal contribution by Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find MLX Examples useful in your research and wish to cite it, please use the following BibTeX entry:
```bibtex
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```