mlx/python/mlx/nn/layers
File                      | Latest commit                                                                 | Date
__init__.py               | Distributed layers (#1270)                                                    | 2025-03-21 13:52:17 -07:00
activations.py            | feat: Added "tanh" option to GELU approximation (#1268)                       | 2024-07-28 09:07:56 +02:00
base.py                   | Add type hint for _extra_repr (#1948)                                         | 2025-03-10 06:05:36 -07:00
containers.py             | copyright + ack                                                               | 2023-11-30 11:12:53 -08:00
convolution_transpose.py  | add dilation for conv 3d layers + test for 3d conv w/ dilation (#1802)        | 2025-01-28 06:17:07 -08:00
convolution.py            | Ensure Conv2D and Conv3D's kernel sizes aren't trimmed (#1852)                | 2025-02-10 06:27:01 -08:00
distributed.py            | Distributed layers (#1270)                                                    | 2025-03-21 13:52:17 -07:00
dropout.py                | Typing the dropout. (#1479)                                                   | 2024-10-15 06:45:46 -07:00
embedding.py              | Block sparse qmm (#1124)                                                      | 2024-05-16 15:24:14 -07:00
linear.py                 | Block sparse qmm (#1124)                                                      | 2024-05-16 15:24:14 -07:00
normalization.py          | faster group norm (#1304)                                                     | 2024-08-01 12:49:23 -07:00
pooling.py                | Doc fix (#1615)                                                               | 2024-11-22 11:12:25 -08:00
positional_encoding.py    | Doc error for default for scale in SinusoidalPositionalEncoding (#1174)       | 2024-06-02 13:42:45 -07:00
quantized.py              | No reshapes in quantized embedding (#1682)                                    | 2024-12-09 18:57:38 -08:00
recurrent.py              | Faster RNN layers (#1419)                                                     | 2024-09-17 06:04:19 -07:00
transformer.py            | use sdpa and exportable functions in transformer multi head attention (#1760) | 2025-01-09 13:11:55 -08:00
upsample.py               | add cubic to type hinting for upsample (#1709)                                | 2024-12-17 07:30:23 -08:00
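
The files above implement the layer classes exported from mlx.nn. As a quick orientation, here is a minimal sketch (assuming the standard mlx.nn API; the layer sizes and choice of layers are purely illustrative) composing a few of them: Linear from linear.py, GELU from activations.py, LayerNorm from normalization.py, and Sequential from containers.py.

    import mlx.core as mx
    import mlx.nn as nn

    # Illustrative stack: two linear layers with a GELU activation
    # and a layer norm in between (sizes chosen arbitrarily).
    model = nn.Sequential(
        nn.Linear(16, 64),
        nn.GELU(),
        nn.LayerNorm(64),
        nn.Linear(64, 4),
    )

    x = mx.random.normal((8, 16))  # batch of 8 inputs with 16 features
    y = model(x)
    print(y.shape)  # (8, 4)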