mirror of
https://github.com/ml-explore/mlx.git
Doc theme (#5)
* change docs theme + links + logo
* move mlx intro to landing page
@@ -1,6 +1,30 @@
MLX
===

MLX is a NumPy-like array framework designed for efficient and flexible
machine learning on Apple silicon.

The Python API closely follows NumPy with a few exceptions. MLX also has a
fully featured C++ API which closely follows the Python API.
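For example, everyday array operations in the Python API read much like their
NumPy counterparts. The snippet below is a minimal sketch using the
``mlx.core`` module:

.. code-block:: python

   import mlx.core as mx

   # Array creation and arithmetic mirror NumPy's ndarray API.
   a = mx.array([1.0, 2.0, 3.0])
   b = mx.ones((3,))
   c = (a + b).sum()  # element-wise add followed by a reduction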
The main differences between MLX and NumPy are:

- **Composable function transformations**: MLX has composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization (see the sketch after this list).
- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.
- **Multi-device**: Operations can run on any of the supported devices (CPU,
  GPU, ...).
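The first two differences can be seen together in a short example. This is a
minimal sketch assuming the ``mx.grad`` and ``mx.eval`` functions from
``mlx.core``; see the API reference for the exact signatures:

.. code-block:: python

   import mlx.core as mx

   def loss(w, x):
       return mx.sum(mx.square(w * x))

   w = mx.array([0.5, -0.5])
   x = mx.array([1.0, 2.0])

   # `mx.grad` is a composable transformation: it returns a new function
   # computing the gradient of `loss` with respect to its first argument.
   dloss_dw = mx.grad(loss)

   # Nothing is computed yet; `g` is a lazy array holding a graph.
   g = dloss_dw(w, x)

   # The computation is materialized only when the result is needed,
   # for example by an explicit `mx.eval` or by printing the array.
   mx.eval(g)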
The design of MLX is strongly inspired by frameworks like `PyTorch
<https://pytorch.org/>`_, `Jax <https://github.com/google/jax>`_, and
`ArrayFire <https://arrayfire.org/>`_. A notable difference between these
frameworks and MLX is the *unified memory model*. Arrays in MLX live in shared
memory. Operations on MLX arrays can be performed on any of the supported
device types without performing data copies. Currently supported device types
are the CPU and GPU.
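Because of the unified memory model, the same arrays can be handed to either
device without a transfer. A minimal sketch, assuming operations accept a
``stream`` argument and the ``mx.cpu`` / ``mx.gpu`` devices:

.. code-block:: python

   import mlx.core as mx

   a = mx.random.normal((4096, 4096))
   b = mx.random.normal((4096, 4096))

   # Run the same operation on the GPU and on the CPU; both work on the
   # same underlying buffers, so no data copies are made.
   c_gpu = mx.matmul(a, b, stream=mx.gpu)
   c_cpu = mx.matmul(a, b, stream=mx.cpu)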
.. toctree::
   :caption: Install
   :maxdepth: 1