# MLX Examples

This repo contains a variety of standalone examples using the [MLX
framework](https://github.com/ml-explore/mlx).

The [MNIST](mnist) example is a good starting point to learn how to use MLX.
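
If you want a quick feel for the core API before diving into the examples, the following minimal sketch (illustrative only, not taken from any example in this repo) fits a line with plain gradient descent using `mlx.core`:

```python
# Illustrative MLX sketch: fit y = 2x + 1 with plain gradient descent.
# The data, learning rate, and number of steps are arbitrary.
import mlx.core as mx

x = mx.random.uniform(shape=(100, 1))
y = 2 * x + 1

def loss_fn(params):
    w, b = params
    pred = x * w + b
    return mx.mean((pred - y) ** 2)

# Returns both the loss and its gradients with respect to the parameters.
grad_fn = mx.value_and_grad(loss_fn)

params = [mx.zeros((1,)), mx.zeros((1,))]
for _ in range(500):
    loss, grads = grad_fn(params)
    params = [p - 0.1 * g for p, g in zip(params, grads)]
    mx.eval(params)  # MLX evaluates lazily; eval materializes the update

w, b = params
print(loss.item(), w.item(), b.item())
```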
Some more useful examples are listed below.

### Text Models

- [Transformer language model](transformer_lm) training.
- Large-scale text generation with [LLaMA](llms/llama),
  [Mistral](llms/mistral), [Phi-2](llms/phi2), and more in the [LLMs](llms)
  directory.
- A mixture-of-experts (MoE) language model with [Mixtral 8x7B](llms/mixtral).
- Parameter-efficient fine-tuning with [LoRA or QLoRA](lora); a conceptual
  sketch of the low-rank update follows this list.
- Text-to-text multi-task Transformers with [T5](t5).
- Bidirectional language understanding with [BERT](bert).
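
To make the LoRA idea concrete, here is a small conceptual sketch (not the implementation in [lora](lora); the class name, rank, and init scale are illustrative) of a linear layer augmented with a trainable low-rank update:

```python
# Conceptual LoRA sketch: y = W x + (B A) x, with W frozen and A, B trainable.
import mlx.core as mx
import mlx.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dims: int, out_dims: int, rank: int = 8):
        super().__init__()
        # Base projection (kept frozen during LoRA fine-tuning; not enforced here).
        self.linear = nn.Linear(in_dims, out_dims)
        self.lora_a = 0.01 * mx.random.normal((in_dims, rank))
        self.lora_b = mx.zeros((rank, out_dims))  # zero init makes the update a no-op at start

    def __call__(self, x):
        # Base output plus the low-rank correction.
        return self.linear(x) + (x @ self.lora_a) @ self.lora_b

layer = LoRALinear(512, 512)
out = layer(mx.random.normal((1, 512)))
print(out.shape)
```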
### Image Models

- Image classification using [ResNets on CIFAR-10](cifar).
- Generating images with [Stable Diffusion](stable_diffusion).
- Convolutional variational autoencoder [(CVAE) on MNIST](cvae).

### Audio Models

- Speech recognition with [OpenAI's Whisper](whisper).

### Multimodal Models

- Joint text and image embeddings with [CLIP](clip).

### Other Models

- Semi-supervised learning on graph-structured data with [GCN](gcn).
- Real NVP [normalizing flow](normalizing_flow) for density estimation and
  sampling.

### Hugging Face

Note: You can now directly download a few converted checkpoints from the [MLX
Community](https://huggingface.co/mlx-community) organization on Hugging Face.
We encourage you to join the community and [contribute new
models](https://github.com/ml-explore/mlx-examples/issues/155).
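
As an illustration, a converted checkpoint can be pulled with the `huggingface_hub` package. This is a hypothetical sketch; the repository name below is a placeholder, and the individual example READMEs describe the models they actually expect:

```python
# Hypothetical sketch: download a converted checkpoint from the MLX Community
# organization. The repo_id is a placeholder; pick a real one from
# https://huggingface.co/mlx-community.
from huggingface_hub import snapshot_download

path = snapshot_download(repo_id="mlx-community/<model-name>")
print(f"Checkpoint downloaded to: {path}")
```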
## Contributing

We are grateful for all of [our
contributors](ACKNOWLEDGMENTS.md#Individual-Contributors). If you contribute
to MLX Examples and wish to be acknowledged, please add your name to the
list in your pull request.
## Citing MLX Examples

The MLX software suite was initially developed with equal contribution by Awni
Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find
MLX Examples useful in your research and wish to cite it, please use the
following BibTeX entry:

```
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```