# MLX

[**Quickstart**](#quickstart) | [**Installation**](#installation) |
[**Documentation**](https://ml-explore.github.io/mlx/build/html/index.html) |
[**Examples**](#examples)

[](https://circleci.com/gh/ml-explore/mlx)

MLX is an array framework for machine learning on Apple silicon,
brought to you by Apple machine learning research.

Some key features of MLX include:

- **Familiar APIs**: MLX has a Python API that closely follows NumPy. MLX
  also has fully featured C++, [C](https://github.com/ml-explore/mlx-c), and
  [Swift](https://github.com/ml-explore/mlx-swift/) APIs, which closely mirror
  the Python API. MLX has higher-level packages like `mlx.nn` and
  `mlx.optimizers` with APIs that closely follow PyTorch to simplify building
  more complex models.

- **Composable function transformations**: MLX supports composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization.

- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.

- **Dynamic graph construction**: Computation graphs in MLX are constructed
  dynamically. Changing the shapes of function arguments does not trigger
  slow compilations, and debugging is simple and intuitive.

- **Multi-device**: Operations can run on any of the supported devices
  (currently the CPU and the GPU).

- **Unified memory**: A notable difference between MLX and other frameworks
  is the *unified memory model*. Arrays in MLX live in shared memory.
  Operations on MLX arrays can be performed on any of the supported
  device types without transferring data.

MLX is designed by machine learning researchers for machine learning
researchers. The framework is intended to be user-friendly, but still
efficient for training and deploying models. The design of the framework
itself is also conceptually simple. We intend to make it easy for researchers
to extend and improve MLX with the goal of quickly exploring new ideas.

The design of MLX is inspired by frameworks like
[NumPy](https://numpy.org/doc/stable/index.html),
[PyTorch](https://pytorch.org/), [JAX](https://github.com/google/jax), and
[ArrayFire](https://arrayfire.org/).

## Examples

The [MLX examples repo](https://github.com/ml-explore/mlx-examples) has a
variety of examples, including:

- [Transformer language model](https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm) training.
- Large-scale text generation with
  [LLaMA](https://github.com/ml-explore/mlx-examples/tree/main/llms/llama) and
  finetuning with [LoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora).
- Generating images with [Stable Diffusion](https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion).
- Speech recognition with [OpenAI's Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper).

## Quickstart

See the [quick start
guide](https://ml-explore.github.io/mlx/build/html/usage/quick_start.html)
in the documentation.

## Installation

MLX is available on [PyPI](https://pypi.org/project/mlx/). To install the
Python API, run:

**With `pip`**:

```
pip install mlx
```

**With `conda`**:

```
conda install -c conda-forge mlx
```

Check out the
[documentation](https://ml-explore.github.io/mlx/build/html/install.html#)
for more information on building the C++ and Python APIs from source.

## Contributing

Check out the [contribution guidelines](https://github.com/ml-explore/mlx/tree/main/CONTRIBUTING.md) for more information
on contributing to MLX. See the
[docs](https://ml-explore.github.io/mlx/build/html/install.html) for
instructions on building from source and running tests.

We are grateful for all of [our
contributors](https://github.com/ml-explore/mlx/tree/main/ACKNOWLEDGMENTS.md#Individual-Contributors). If you contribute
to MLX and wish to be acknowledged, please add your name to the list in your
pull request.

## Citing MLX

The MLX software suite was initially developed with equal contribution by Awni
Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find
MLX useful in your research and wish to cite it, please use the following
BibTeX entry:

```
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```