# MLX Examples

This repo contains a variety of standalone examples using the [MLX
framework](https://github.com/ml-explore/mlx).

The [MNIST](mnist) example is a good starting point to learn how to use MLX.
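
For a quick feel for what these examples build on, here is a minimal sketch of
MLX's core array and autograd API (an illustration only, not code from this
repo; the array values are arbitrary):

```python
import mlx.core as mx

# Arrays are lazy: work is recorded and only computed when values are needed.
x = mx.array([1.0, 2.0, 3.0])
w = mx.array([0.5, -1.0, 2.0])

# A simple scalar-valued function of the parameters w.
def loss(w):
    return mx.mean((w * x - 1.0) ** 2)

# mx.grad transforms a function into one that returns its gradient.
dloss_dw = mx.grad(loss)

print(loss(w))      # scalar loss
print(dloss_dw(w))  # gradient with the same shape as w
```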

Some more useful examples are listed below.

### Text Models

- [Transformer language model](transformer_lm) training.
- Large-scale text generation with [LLaMA](llms/llama),
  [Mistral](llms/mistral), [Phi-2](llms/phi2), and more in the [LLMs](llms)
  directory.
- A mixture-of-experts (MoE) language model with [Mixtral 8x7B](llms/mixtral).
- Parameter-efficient fine-tuning with [LoRA](lora).
- Text-to-text multi-task Transformers with [T5](t5).
- Bidirectional language understanding with [BERT](bert).

### Image Models

- Generating images with [Stable Diffusion](stable_diffusion).

### Audio Models

- Speech recognition with [OpenAI's Whisper](whisper).

### Other Models

- Semi-supervised learning on graph-structured data with [GCN](gcn).

### Hugging Face

Note: You can now directly download a few converted checkpoints from the [MLX
Community](https://huggingface.co/mlx-community) organization on Hugging Face.
We encourage you to join the community and [contribute new
models](https://github.com/ml-explore/mlx-examples/issues/155).
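
As a rough sketch (the repository name below is a placeholder; pick an actual
model from the organization page), a converted checkpoint can be fetched with
the standard `huggingface_hub` client and then used with the matching example:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id: browse https://huggingface.co/mlx-community for real ones.
path = snapshot_download(repo_id="mlx-community/<model-name>")
print(path)  # local directory containing the converted weights
```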

## Contributing

We are grateful for all of [our
contributors](ACKNOWLEDGMENTS.md#Individual-Contributors). If you contribute
to MLX Examples and wish to be acknowledged, please add your name to the list
in your pull request.

## Citing MLX Examples

The MLX software suite was initially developed with equal contribution by Awni
Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find
MLX Examples useful in your research and wish to cite it, please use the
following BibTeX entry:

```
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```