MLX
===

MLX is a NumPy-like array framework designed for efficient and flexible machine
learning on Apple silicon, brought to you by Apple machine learning research.

The Python API closely follows NumPy with a few exceptions. MLX also has a
fully featured C++ API which closely follows the Python API.
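
As a minimal sketch of the NumPy-like Python API (using the standard
``mlx.core`` namespace; see the quick start for details):

.. code-block:: python

   import mlx.core as mx

   # Arrays are created and combined much like NumPy arrays.
   a = mx.array([1.0, 2.0, 3.0])
   b = mx.ones(3)
   c = a + b      # builds the computation; nothing runs yet
   mx.eval(c)     # materializes the result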
The main differences between MLX and NumPy are:

- **Composable function transformations**: MLX has composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization.
- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.
- **Multi-device**: Operations can run on any of the supported devices (CPU,
  GPU, ...)
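
The first two points can be illustrated with a small sketch using ``mx.grad``
and ``mx.eval`` (shown here only as an example, not a full tutorial):

.. code-block:: python

   import mlx.core as mx

   def loss(w, x):
       # A toy scalar-valued function of the parameter w.
       return mx.sum((w * x - 1.0) ** 2)

   grad_fn = mx.grad(loss)   # a composable transformation: returns a new function
   w = mx.array([0.5, 0.5])
   x = mx.array([1.0, 2.0])
   g = grad_fn(w, x)         # still lazy: no computation has happened yet
   mx.eval(g)                # the gradient is materialized here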
The design of MLX is inspired by frameworks like `PyTorch
<https://pytorch.org/>`_, `Jax <https://github.com/google/jax>`_, and
`ArrayFire <https://arrayfire.org/>`_. A notable difference between MLX and
these frameworks is the *unified memory model*. Arrays in MLX live in shared
memory. Operations on MLX arrays can be performed on any of the supported
device types without performing data copies. Currently supported device types
are the CPU and GPU.
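
As a brief sketch, operations accept a ``stream``/device argument such as
``mx.cpu`` or ``mx.gpu``, so the same arrays can be used on either device
without explicit transfers:

.. code-block:: python

   import mlx.core as mx

   a = mx.random.uniform(shape=(64, 64))
   b = mx.random.uniform(shape=(64, 64))

   # The same arrays are visible to both devices; no copies are made.
   c_cpu = mx.add(a, b, stream=mx.cpu)
   c_gpu = mx.matmul(a, b, stream=mx.gpu)
   mx.eval(c_cpu, c_gpu)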
.. toctree::
   :caption: Install
   :maxdepth: 1

   install
.. toctree::
   :caption: Usage
   :maxdepth: 1

   quick_start
   unified_memory
   using_streams
.. toctree::
   :caption: Examples
   :maxdepth: 1

   examples/linear_regression
   examples/mlp
   examples/llama-inference
.. toctree::
   :caption: Python API Reference
   :maxdepth: 1

   python/array
   python/devices_and_streams
   python/ops
   python/random
   python/transforms
   python/fft
   python/linalg
   python/nn
   python/optimizers
   python/tree_utils
.. toctree::
   :caption: C++ API Reference
   :maxdepth: 1

   cpp/ops
.. toctree::
   :caption: Further Reading
   :maxdepth: 1

   dev/extensions