.. _optimizers:

Optimizers
==========

The optimizers in MLX can be used both with :mod:`mlx.nn` and with pure
:mod:`mlx.core` functions. A typical example involves calling
:meth:`Optimizer.update` to update a model's parameters based on the loss
gradients and subsequently calling :func:`mlx.core.eval` to evaluate both the
model's parameters and the **optimizer state**.

.. code-block:: python

   # Create a model
   model = MLP(num_layers, train_images.shape[-1], hidden_dim, num_classes)
   mx.eval(model.parameters())

   # Create the gradient function and the optimizer
   loss_and_grad_fn = nn.value_and_grad(model, loss_fn)
   optimizer = optim.SGD(learning_rate=learning_rate)

   for e in range(num_epochs):
       for X, y in batch_iterate(batch_size, train_images, train_labels):
           loss, grads = loss_and_grad_fn(model, X, y)

           # Update the model with the gradients. So far no computation has happened.
           optimizer.update(model, grads)

           # Compute the new parameters but also the optimizer state.
           mx.eval(model.parameters(), optimizer.state)
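
The example above drives an :mod:`mlx.nn` model, but the same optimizers also
operate on plain parameter trees. The following is a minimal sketch, not part
of the example above: it assumes :meth:`Optimizer.apply_gradients`, which maps
a tree of gradients onto a matching tree of parameters and returns the updated
tree, and it uses a made-up linear model and random data purely for
illustration.

.. code-block:: python

   import mlx.core as mx
   import mlx.optimizers as optim

   # A plain dictionary of parameters -- no mlx.nn.Module involved.
   params = {"w": mx.zeros((4,)), "b": mx.zeros((1,))}

   def loss_fn(params, X, y):
       pred = X @ params["w"] + params["b"]
       return mx.mean((pred - y) ** 2)

   optimizer = optim.SGD(learning_rate=0.1)
   X = mx.random.normal((16, 4))
   y = mx.random.normal((16,))

   for _ in range(10):
       # Differentiate with respect to the parameter tree (the first argument).
       grads = mx.grad(loss_fn)(params, X, y)

       # apply_gradients returns a new parameter tree; as before, nothing is
       # computed until the parameters and the optimizer state are evaluated.
       params = optimizer.apply_gradients(grads, params)
       mx.eval(params, optimizer.state)

In this reading, :meth:`Optimizer.update` is a convenience that applies the
same update and writes the result back into the :mod:`mlx.nn` model.
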

.. currentmodule:: mlx.optimizers

.. autosummary::
   :toctree: _autosummary
   :template: optimizers-template.rst

   OptimizerState
   Optimizer
   SGD
   RMSprop
   Adagrad
   Adafactor
   AdaDelta
   Adam
   AdamW
   Adamax
   Lion
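
All of the classes above implement the :class:`Optimizer` interface, so
switching optimizers only changes the constructor call in the example at the
top of this page; :meth:`Optimizer.update` and the optimizer state are used
identically. A hypothetical sketch (the learning rate value is illustrative,
not a documented default):

.. code-block:: python

   import mlx.optimizers as optim

   # Any optimizer from the list is a drop-in replacement for SGD above;
   # only the constructor and its hyperparameters change.
   optimizer = optim.Adam(learning_rate=1e-3)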