
MLX

Quickstart | Installation | Documentation | Examples

MLX is an array framework for machine learning on Apple silicon.

Some key features of MLX include:

  • Familiar APIs: MLX has a Python API that closely follows NumPy, along with a fully featured C++ API that closely mirrors the Python API. Higher-level packages such as mlx.nn and mlx.optimizers follow PyTorch's APIs to simplify building more complex models.

  • Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.

  • Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.

  • Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.

  • Multi-device: Operations can run on any of the supported devices (currently the CPU and GPU).

  • Unified memory: A notable difference between MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory, so operations on MLX arrays can be performed on any of the supported device types without moving data (see the short example after this list).
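
To make these points concrete, here is a minimal sketch using the Python API: arrays are created and combined with NumPy-style operations, mx.grad transforms a function into one that computes gradients, results are only materialized when mx.eval is called, and the stream argument lets an operation run on the CPU or the GPU without copying data. This is an illustration only; see the documentation for the exact API.

import mlx.core as mx

# NumPy-style array creation and arithmetic
a = mx.array([1.0, 2.0, 3.0])
b = mx.array([4.0, 5.0, 6.0])

def fn(x):
    return mx.sum(x * x + b)

# Composable transformation: grad returns a new function
grad_fn = mx.grad(fn)

# Lazy computation: c and g are graph nodes until evaluated
c = a + b
g = grad_fn(a)
mx.eval(c, g)  # materialize the results

# Unified memory: the same arrays can be used on either device
d_cpu = mx.add(a, b, stream=mx.cpu)
d_gpu = mx.add(a, b, stream=mx.gpu)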

MLX is designed by machine learning researchers for machine learning researchers. The framework is intended to be user-friendly while remaining efficient for training and deploying models. The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX, with the goal of quickly exploring new ideas.

The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire.

Examples

The MLX examples repo has a variety of examples, including:

  • Transformer language model training.

  • Large-scale text generation with LLaMA and finetuning with LoRA.

  • Generating images with Stable Diffusion.

  • Speech recognition with OpenAI's Whisper.

Quickstart

See the quick start guide in the documentation.
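
As a taste of what the quick start covers, the following is a sketch of a training step using the mlx.nn and mlx.optimizers packages mentioned above. The model, data, and hyperparameters are placeholders; see the quick start guide and the package documentation for complete, tested examples.

import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

class MLP(nn.Module):
    """A small two-layer network (sizes are placeholders)."""
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(4, 32)
        self.l2 = nn.Linear(32, 1)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

def loss_fn(model, x, y):
    return mx.mean((model(x) - y) ** 2)

model = MLP()
optimizer = optim.SGD(learning_rate=1e-2)

# value_and_grad gives the loss and the gradients w.r.t. the model parameters
loss_and_grad = nn.value_and_grad(model, loss_fn)

x = mx.random.normal((16, 4))  # placeholder inputs
y = mx.random.normal((16, 1))  # placeholder targets

loss, grads = loss_and_grad(model, x, y)
optimizer.update(model, grads)
mx.eval(model.parameters(), optimizer.state)  # force the lazy updates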

Installation

MLX is available on PyPI. To install the Python API, run:

pip install mlx

Check out the documentation for more information on building the C++ and Python APIs from source.

Contributing

Check out the contribution guidelines for more information on contributing to MLX.