update docs

Awni Hannun
2024-02-01 13:08:29 -08:00
committed by CircleCI Docs
parent c465c51cbb
commit f12615680d
378 changed files with 32586 additions and 1950 deletions

@@ -929,7 +929,7 @@ We see some modest improvements right away!
This operation can now be used to build other operations,
in :class:`mlx.nn.Module` calls, and also as a part of graph
-transformations such as :meth:`grad` and :meth:`simplify`!
+transformations like :meth:`grad`!
Scripts
-------

@@ -25,6 +25,8 @@
~array.cummin
~array.cumprod
~array.cumsum
+~array.diag
+~array.diagonal
~array.exp
~array.flatten
~array.item

@@ -0,0 +1,6 @@
mlx.core.diag
=============

.. currentmodule:: mlx.core

.. autofunction:: diag

@@ -0,0 +1,6 @@
mlx.core.diagonal
=================

.. currentmodule:: mlx.core

.. autofunction:: diagonal

@@ -0,0 +1,6 @@
mlx.core.linalg.qr
==================

.. currentmodule:: mlx.core.linalg

.. autofunction:: qr

@@ -0,0 +1,18 @@
mlx.optimizers.Adafactor
========================

.. currentmodule:: mlx.optimizers

.. autoclass:: Adafactor

   .. rubric:: Methods

   .. autosummary::

      ~Adafactor.__init__
      ~Adafactor.apply_single

@@ -9,3 +9,4 @@ Linear Algebra
:toctree: _autosummary
norm
+qr

@@ -180,3 +180,4 @@ In detail:
nn/layers
nn/functions
nn/losses
+nn/init

@@ -0,0 +1,8 @@
mlx.nn.Softshrink
=================

.. currentmodule:: mlx.nn

.. autoclass:: Softshrink

@@ -0,0 +1,6 @@
mlx.nn.init.constant
====================

.. currentmodule:: mlx.nn.init

.. autofunction:: constant

@@ -0,0 +1,6 @@
mlx.nn.init.glorot\_normal
==========================

.. currentmodule:: mlx.nn.init

.. autofunction:: glorot_normal

@@ -0,0 +1,6 @@
mlx.nn.init.glorot\_uniform
===========================

.. currentmodule:: mlx.nn.init

.. autofunction:: glorot_uniform

@@ -0,0 +1,6 @@
mlx.nn.init.he\_normal
======================

.. currentmodule:: mlx.nn.init

.. autofunction:: he_normal

@@ -0,0 +1,6 @@
mlx.nn.init.he\_uniform
=======================

.. currentmodule:: mlx.nn.init

.. autofunction:: he_uniform

@@ -0,0 +1,6 @@
mlx.nn.init.identity
====================

.. currentmodule:: mlx.nn.init

.. autofunction:: identity

@@ -0,0 +1,6 @@
mlx.nn.init.normal
==================

.. currentmodule:: mlx.nn.init

.. autofunction:: normal

@@ -0,0 +1,6 @@
mlx.nn.init.uniform
===================

.. currentmodule:: mlx.nn.init

.. autofunction:: uniform

@@ -0,0 +1,6 @@
mlx.nn.initializers.constant
============================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: constant

@@ -0,0 +1,6 @@
mlx.nn.initializers.glorot\_normal
==================================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: glorot_normal

@@ -0,0 +1,6 @@
mlx.nn.initializers.glorot\_uniform
===================================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: glorot_uniform

@@ -0,0 +1,6 @@
mlx.nn.initializers.he\_normal
==============================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: he_normal

@@ -0,0 +1,6 @@
mlx.nn.initializers.he\_uniform
===============================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: he_uniform

@@ -0,0 +1,6 @@
mlx.nn.initializers.identity
============================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: identity

@@ -0,0 +1,6 @@
mlx.nn.initializers.normal
==========================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: normal

@@ -0,0 +1,6 @@
mlx.nn.initializers.uniform
===========================

.. currentmodule:: mlx.nn.initializers

.. autofunction:: uniform

@@ -0,0 +1,8 @@
mlx.nn.losses.gaussian\_nll\_loss
=================================

.. currentmodule:: mlx.nn.losses

.. autoclass:: gaussian_nll_loss

@@ -0,0 +1,8 @@
mlx.nn.softshrink
=================

.. currentmodule:: mlx.nn

.. autoclass:: softshrink

@@ -19,5 +19,6 @@ simple functions.
prelu
relu
selu
+softshrink
silu
step
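The ``softshrink`` activation added above follows the conventional definition: values are shifted toward zero by a threshold ``lambd``, and anything inside ``[-lambd, lambd]`` becomes zero. A scalar pure-Python sketch of that formula (illustrative only; the MLX function operates element-wise on arrays):

```python
def softshrink(x: float, lambd: float = 0.5) -> float:
    # Conventional softshrink: shrink toward zero by lambd,
    # zeroing anything inside [-lambd, lambd].
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0
```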

@@ -0,0 +1,45 @@
.. _init:

.. currentmodule:: mlx.nn.init

Initializers
------------

The ``mlx.nn.init`` package contains commonly used initializers for neural
network parameters. Initializers return a function which can be applied to any
input :obj:`mlx.core.array` to produce an initialized output.

For example:

.. code:: python

   import mlx.core as mx
   import mlx.nn as nn

   init_fn = nn.init.uniform()

   # Produces a [2, 2] uniform matrix
   param = init_fn(mx.zeros((2, 2)))

To re-initialize all the parameters in an :obj:`mlx.nn.Module` from, say, a
uniform distribution, you can do:

.. code:: python

   import mlx.nn as nn

   model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 5))
   init_fn = nn.init.uniform(low=-0.1, high=0.1)
   model.apply(init_fn)

.. autosummary::
   :toctree: _autosummary

   constant
   normal
   uniform
   identity
   glorot_normal
   glorot_uniform
   he_normal
   he_uniform
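As a rough illustration of the factory pattern these initializers follow (a call returns an init function, which maps an input array to a same-shaped initialized output), here is a pure-Python sketch of a Glorot-style uniform initializer using nested lists in place of :obj:`mlx.core.array`; the limit formula ``gain * sqrt(6 / (fan_in + fan_out))`` is the standard Glorot/Xavier convention, not a quote of MLX's implementation:

```python
import math
import random

def glorot_uniform_sketch(gain: float = 1.0):
    # Returns an init function, mirroring the mlx.nn.init factory style.
    # Glorot/Xavier uniform samples from U(-limit, limit) with
    # limit = gain * sqrt(6 / (fan_in + fan_out)).
    def init_fn(a):
        fan_out, fan_in = len(a), len(a[0])
        limit = gain * math.sqrt(6.0 / (fan_in + fan_out))
        return [[random.uniform(-limit, limit) for _ in row] for row in a]
    return init_fn

# Same shape in, same shape out -- here a 2 x 3 "array" of zeros.
param = glorot_uniform_sketch()([[0.0] * 3 for _ in range(2)])
```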

@@ -0,0 +1,18 @@
.. _initializers:

.. currentmodule:: mlx.nn.initializers

Initializers
------------

.. autosummary::
   :toctree: _autosummary_functions

   constant
   normal
   uniform
   identity
   glorot_normal
   glorot_uniform
   he_normal
   he_uniform

@@ -33,5 +33,6 @@ Layers
Sequential
SiLU
SinusoidalPositionalEncoding
+Softshrink
Step
Transformer

@@ -12,6 +12,7 @@ Loss Functions
binary_cross_entropy
cosine_similarity_loss
cross_entropy
+gaussian_nll_loss
hinge_loss
huber_loss
kl_div_loss
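The ``gaussian_nll_loss`` added above is the standard Gaussian negative log likelihood of a target under a predicted mean and variance. A scalar sketch of the usual form, with the variance clamped for numerical stability and the additive constant omitted (conventions vary; check the MLX docstring for its exact definition):

```python
import math

def gaussian_nll(mean: float, var: float, target: float, eps: float = 1e-6) -> float:
    # 0.5 * (log(var) + (target - mean)^2 / var), with var clamped to eps.
    # The constant 0.5 * log(2 * pi) is dropped here, as many frameworks do.
    var = max(var, eps)
    return 0.5 * (math.log(var) + (target - mean) ** 2 / var)
```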

@@ -35,6 +35,8 @@ Operations
cos
cosh
dequantize
+diag
+diagonal
divide
divmod
equal

@@ -40,6 +40,7 @@ model's parameters and the **optimizer state**.
SGD
RMSprop
Adagrad
+Adafactor
AdaDelta
Adam
AdamW

@@ -14,4 +14,3 @@ Transforms
jvp
vjp
vmap
-simplify

@@ -20,7 +20,7 @@ Transforming Compute Graphs
Lazy evaluation lets us record a compute graph without actually doing any
computations. This is useful for function transformations like :func:`grad` and
-:func:`vmap` and graph optimizations like :func:`simplify`.
+:func:`vmap` and graph optimizations.
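A toy sketch of the idea (purely illustrative, not MLX internals): arithmetic on lazy nodes only records the graph, and nothing is computed until an explicit evaluation walks it.

```python
class Lazy:
    """Minimal lazily evaluated scalar node: records an op and its inputs."""

    def __init__(self, op, *inputs, value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        return Lazy("add", self, other)   # record, don't compute

    def __mul__(self, other):
        return Lazy("mul", self, other)   # record, don't compute

def scalar(v):
    return Lazy("const", value=v)

def eval_node(node):
    # Work happens only here, when evaluation is requested.
    if node.op == "const":
        return node.value
    args = [eval_node(i) for i in node.inputs]
    return args[0] + args[1] if node.op == "add" else args[0] * args[1]

c = scalar(2.0) * scalar(3.0) + scalar(1.0)  # builds a graph, computes nothing
result = eval_node(c)                        # evaluates to 7.0
```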
Currently, MLX does not compile and rerun compute graphs. They are all
generated dynamically. However, lazy evaluation makes it much easier to