MLX
===

MLX is a NumPy-like array framework designed for efficient and flexible machine
learning on Apple silicon, brought to you by Apple machine learning research.

The Python API closely follows NumPy with a few exceptions. MLX also has a
fully featured C++ API which closely follows the Python API.
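
For instance, many NumPy-style operations carry over directly. A minimal
sketch, assuming the conventional ``mlx.core`` import (the variable names are
illustrative):

.. code-block:: python

   import mlx.core as mx

   # NumPy-like construction, reshaping, and broadcasting.
   x = mx.arange(6).reshape(2, 3)
   y = x + mx.ones((2, 3))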

The main differences between MLX and NumPy are:

- **Composable function transformations**: MLX has composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization.
- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.
- **Multi-device**: Operations can run on any of the supported devices
  (CPU, GPU, ...).
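
A minimal sketch of the first two points, assuming MLX is installed (the
``loss`` function and variable names are illustrative only):

.. code-block:: python

   import mlx.core as mx

   def loss(w):
       # An illustrative scalar-valued function of an array.
       return mx.sum(w * w)

   w = mx.ones((3,))

   # Composable transformation: mx.grad(loss) is a new function that
   # computes the gradient of loss with respect to its argument.
   dloss_dw = mx.grad(loss)(w)

   # Lazy computation: the gradient is only materialized when needed,
   # e.g. by an explicit mx.eval or by printing the array.
   mx.eval(dloss_dw)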

The design of MLX is inspired by frameworks like `PyTorch
<https://pytorch.org/>`_, `Jax <https://github.com/google/jax>`_, and
`ArrayFire <https://arrayfire.org/>`_. A notable difference between MLX and
these frameworks is the *unified memory model*. Arrays in MLX live in shared
memory, and operations on MLX arrays can be performed on any of the supported
device types without data copies. Currently supported device types are the CPU
and GPU.
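
As a short sketch of this (the array shapes and names are illustrative),
operations accept a ``stream`` argument that can name a device, and the same
arrays can be passed to operations on either device without an explicit
transfer:

.. code-block:: python

   import mlx.core as mx

   a = mx.random.uniform(shape=(4, 4))
   b = mx.random.uniform(shape=(4, 4))

   # Both operations read the same a and b from unified memory;
   # no host/device copies are required.
   c = mx.add(a, b, stream=mx.cpu)  # run on the CPU
   d = mx.add(a, b, stream=mx.gpu)  # run on the GPU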

.. toctree::
   :caption: Install
   :maxdepth: 1

   install

.. toctree::
   :caption: Usage
   :maxdepth: 1

   quick_start
   using_streams

.. toctree::
   :caption: Examples
   :maxdepth: 1

   examples/linear_regression
   examples/mlp
   examples/llama-inference

.. toctree::
   :caption: Python API Reference
   :maxdepth: 1

   python/array
   python/devices_and_streams
   python/ops
   python/random
   python/transforms
   python/fft
   python/nn
   python/optimizers
   python/tree_utils

.. toctree::
   :caption: C++ API Reference
   :maxdepth: 1

   cpp/ops

.. toctree::
   :caption: Further Reading
   :maxdepth: 1

   dev/extensions