CircleCI Docs
2025-02-14 21:44:39 +00:00
parent cc43b2d401
commit 81f84f87d1
748 changed files with 24254 additions and 13906 deletions

View File

@@ -0,0 +1,6 @@
mlx.core.bitwise\_invert
========================
.. currentmodule:: mlx.core
.. autofunction:: bitwise_invert

View File

@@ -0,0 +1,6 @@
mlx.core.linalg.lu
==================
.. currentmodule:: mlx.core.linalg
.. autofunction:: lu

View File

@@ -0,0 +1,6 @@
mlx.core.linalg.lu\_factor
==========================
.. currentmodule:: mlx.core.linalg
.. autofunction:: lu_factor

View File

@@ -0,0 +1,6 @@
mlx.core.linalg.solve
=====================
.. currentmodule:: mlx.core.linalg
.. autofunction:: solve

View File

@@ -0,0 +1,6 @@
mlx.core.linalg.solve\_triangular
=================================
.. currentmodule:: mlx.core.linalg
.. autofunction:: solve_triangular

View File

@@ -51,11 +51,20 @@ The default floating point type is ``float32`` and the default integer type is
* - ``float32``
- 4
- 32-bit float
* - ``float64``
- 8
- 64-bit double
* - ``complex64``
- 8
- 64-bit complex float
.. note::
Arrays with type ``float64`` only work with CPU operations. Using
``float64`` arrays on the GPU will result in an exception.
Data types are arranged in a hierarchy. See the :obj:`DtypeCategory` object
documentation for more information. Use :func:`issubdtype` to determine if one
``dtype`` (or category) is a subtype of another category.
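For example, a quick sketch of such checks (this assumes the ``floating`` and ``inexact`` categories exported by ``mlx.core``):

.. code-block:: python

    import mlx.core as mx

    print(mx.issubdtype(mx.float32, mx.floating))   # True: float32 is a floating dtype
    print(mx.issubdtype(mx.int32, mx.floating))     # False: int32 is an integer dtype
    print(mx.issubdtype(mx.floating, mx.inexact))   # True: categories nest within categories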

View File

@@ -5,8 +5,8 @@ Linear Algebra
.. currentmodule:: mlx.core.linalg
.. autosummary::
:toctree: _autosummary
inv
tri_inv
@@ -18,3 +18,7 @@ Linear Algebra
svd
eigvalsh
eigh
lu
lu_factor
solve
solve_triangular
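As a brief sketch of the new solver entry points (illustrative only; it assumes, as with the other linalg routines, that the op runs on the CPU stream):

.. code-block:: python

    import mlx.core as mx

    # Solve the 2x2 system A @ x = b
    A = mx.array([[3.0, 1.0], [1.0, 2.0]])
    b = mx.array([9.0, 8.0])
    x = mx.linalg.solve(A, b, stream=mx.cpu)  # stream=mx.cpu assumed for linalg ops
    print(x)  # approximately [2.0, 3.0]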

View File

@@ -32,6 +32,7 @@ Operations
atleast_2d
atleast_3d
bitwise_and
bitwise_invert
bitwise_or
bitwise_xor
block_masked_mm
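A quick sketch of the newly listed ``bitwise_invert`` (the output assumes two's-complement semantics for signed integers):

.. code-block:: python

    import mlx.core as mx

    a = mx.array([0, 1, 2], dtype=mx.int32)
    print(mx.bitwise_invert(a))  # ~x == -x - 1, so: [-1, -2, -3]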

View File

@@ -21,11 +21,13 @@ Let's convert an array to NumPy and back.
.. note::
Since NumPy does not support ``bfloat16`` arrays, you will need to convert
to ``float16`` or ``float32`` first: ``np.array(a.astype(mx.float32))``.
Otherwise, you will receive an error like: ``Item size 2 for PEP 3118
buffer format string does not match the dtype V item size 0.``
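For example, a minimal sketch of the upcast:

.. code-block:: python

    import mlx.core as mx
    import numpy as np

    a = mx.array([1.0, 2.0], dtype=mx.bfloat16)
    b = np.array(a.astype(mx.float32))  # upcast first; np.array(a) would raise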
By default, NumPy copies data to a new array. This can be prevented by creating
an array view:
.. code-block:: python
@@ -35,10 +37,16 @@ By default, NumPy copies data to a new array. This can be prevented by creating
a_view[0] = 1
print(a[0].item()) # 1
.. note::
NumPy arrays with type ``float64`` will be converted by default to MLX arrays
with type ``float32``.
A NumPy array view is a normal NumPy array, except that it does not own its
memory. This means writing to the view is reflected in the original array.
While this is quite powerful to prevent copying arrays, it should be noted that
external changes to the memory of arrays cannot be reflected in gradients.
Let's demonstrate this in an example:
@@ -56,11 +64,12 @@ Let's demonstrate this in an example:
The function ``f`` indirectly modifies the array ``x`` through a memory view.
However, this modification is not reflected in the gradient, as seen in the
last line outputting ``1.0``, representing the gradient of the sum operation
alone. The squaring of ``x`` occurs externally to MLX, meaning that no
gradient is incorporated. It's important to note that a similar issue arises
during array conversion and copying. For instance, a function defined as
``mx.array(np.array(x)**2).sum()`` would also result in an incorrect gradient,
even though no in-place operations on MLX memory are executed.
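A minimal sketch of such a function (illustrative; the names here are not from the elided example above):

.. code-block:: python

    import mlx.core as mx
    import numpy as np

    def f(x):
        x_view = np.array(x, copy=False)  # a view into the MLX array's memory
        x_view **= 2                      # squared outside of MLX -- not traced
        return x.sum()

    x = mx.array([3.0])
    y, df = mx.value_and_grad(f)(x)
    print(df)  # array([1], dtype=float32): the gradient of the sum alone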
PyTorch
@@ -71,7 +80,8 @@ PyTorch
PyTorch support for :obj:`memoryview` is experimental and can break for
multi-dimensional arrays. Casting to NumPy first is advised for now.
PyTorch supports the buffer protocol, but it requires an explicit
:obj:`memoryview`.
.. code-block:: python
@@ -82,7 +92,8 @@ PyTorch supports the buffer protocol, but it requires an explicit :obj:`memoryvi
b = torch.tensor(memoryview(a))
c = mx.array(b.numpy())
Conversion from PyTorch tensors back to arrays must be done via intermediate
NumPy arrays with ``numpy()``.
JAX
---
@@ -100,7 +111,8 @@ JAX fully supports the buffer protocol.
TensorFlow
----------
TensorFlow supports the buffer protocol, but it requires an explicit
:obj:`memoryview`.
.. code-block:: python
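    # A sketch of the round trip, mirroring the PyTorch pattern above
    # (``tf.constant`` and eager-mode ``.numpy()`` are assumptions here):
    import mlx.core as mx
    import tensorflow as tf

    a = mx.arange(3)
    b = tf.constant(memoryview(a))  # explicit memoryview for the buffer protocol
    c = mx.array(b.numpy())         # back to MLX via an intermediate NumPy array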