.. _layers:

.. currentmodule:: mlx.nn

Layers
------

.. autosummary::
   :toctree: _autosummary
   :template: nn-module-template.rst

   ALiBi
   AvgPool1d
   AvgPool2d
   AvgPool3d
   BatchNorm
   CELU
   Conv1d
   Conv2d
   Conv3d
   ConvTranspose1d
   ConvTranspose2d
   ConvTranspose3d
   Dropout
   Dropout2d
   Dropout3d
   Embedding
   ELU
   GELU
   GLU
   GroupNorm
   GRU
   HardShrink
   HardTanh
   Hardswish
   InstanceNorm
   LayerNorm
   LeakyReLU
   Linear
   LogSigmoid
   LogSoftmax
   LSTM
   MaxPool1d
   MaxPool2d
   MaxPool3d
   Mish
   MultiHeadAttention
   PReLU
   QuantizedEmbedding
   QuantizedLinear
   RMSNorm
   ReLU
   ReLU2
   ReLU6
   RNN
   RoPE
   SELU
   Sequential
   Sigmoid
   SiLU
   SinusoidalPositionalEncoding
   Softmin
   Softshrink
   Softsign
   Softmax
   Softplus
   Step
   Tanh
   Transformer
   Upsample
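
A minimal usage sketch (assuming a working ``mlx`` installation) composing a few
of the layers listed above, ``Linear``, ``ReLU``, and ``Sequential``, into a
small feed-forward model:

.. code-block:: python

   import mlx.core as mx
   import mlx.nn as nn

   # A small multi-layer perceptron built from layers in this listing.
   model = nn.Sequential(
       nn.Linear(32, 64),
       nn.ReLU(),
       nn.Linear(64, 10),
   )

   x = mx.random.normal((4, 32))  # batch of 4 inputs, 32 features each
   y = model(x)                   # output shape: (4, 10)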