Nripesh Niketan | e09bf35b28 | 2023-12-31 14:01:21 -08:00
feat: Add Dropout3d layer to nn.layers (#313)
* feat: Add Dropout3d layer to nn.layers
* acknowledgement
* Add dropout tests to test_nn.py
* run pre-commit
* Add activation functions and dropout3d ops
* Add dropout tests for bfloat16 and float16

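The Dropout3d commit above adds channel-wise dropout for volumetric inputs. The layer itself is not reproduced in this log; as a minimal pure-Python sketch of the inverted-dropout technique such layers are built on (the `dropout` helper below is illustrative, not the MLX implementation):

```python
import random

def dropout(x, p, training=True, rng=random):
    # Inverted dropout: zero each element with probability p and scale
    # the survivors by 1/(1-p) so the expected activation is unchanged.
    # At inference time (training=False) the input passes through as-is.
    if not training or p == 0.0:
        return list(x)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in x]

random.seed(0)
out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
```

Dropout2d/Dropout3d apply the same rule per channel rather than per element, so whole feature maps are zeroed together.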
YUN, Junwoo | 4417e37ede | 2023-12-27 08:48:36 -08:00
Transformer fix (#167)
* add transformer with dropout, fix transformer ffn, layernorm order
* precommit changes
* precommit changes
* add docstring, activation, norm_first
* run precommit
* run precommit
* add docstring
* precommit
* style nits in docs
---------
Co-authored-by: junwoo-yun <junwoo.yun@bagelcode.com>
Co-authored-by: Awni Hannun <awni@apple.com>

__mo_san__ | a123c3c7d2 | 2023-12-25 07:32:53 -08:00
implement-batch-norm-layer (#217)
- Add batch normalization layer
---------
Co-authored-by: Robert McCraith <mccraithrobert@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>

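The batch-norm commit above adds a batch normalization layer. As a minimal pure-Python sketch of the technique for a batch of scalars (an illustration, not the MLX code; a real layer also tracks running statistics for inference):

```python
def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize the batch to zero mean and unit variance, then apply
    # the learnable affine transform gamma * x_hat + beta.
    n = len(batch)
    mean = sum(batch) / n
    var = sum((v - mean) ** 2 for v in batch) / n
    return [gamma * (v - mean) / (var + eps) ** 0.5 + beta for v in batch]

normed = batch_norm([1.0, 2.0, 3.0, 4.0])
```

For multi-dimensional inputs the same statistics are computed per feature channel across the batch.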
Justin Deschenaux | e8deca84e0 | 2023-12-22 08:02:29 -08:00
Add dropout2d (#250)

Angelos Katharopoulos | 57fe918cf8 | 2023-12-20 14:17:38 -08:00
Adds C++ and nn quantization utilities (#230)
* Add C++ de-/quantize ops
* Add quantize functions to the docs and tests
* Add a QuantizedLinear module

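The quantization commit above adds de-/quantize ops and a QuantizedLinear module. As a hedged pure-Python sketch of the general affine-quantization idea behind such utilities (MLX's actual scheme uses per-group scales and biases; the `quantize`/`dequantize` helpers below are illustrative only):

```python
def quantize(w, bits=4):
    # Affine quantization: map floats in [min(w), max(w)] onto the
    # integer levels 0 .. 2**bits - 1 using a scale and a zero point.
    levels = (1 << bits) - 1
    lo, hi = min(w), max(w)
    scale = (hi - lo) / levels or 1.0  # guard against a constant input
    q = [round((v - lo) / scale) for v in w]
    return q, scale, lo

def dequantize(q, scale, zero):
    # Reconstruct approximate floats from the integer codes.
    return [v * scale + zero for v in q]

q, scale, zero = quantize([-1.0, -0.5, 0.0, 0.5, 1.0], bits=4)
recon = dequantize(q, scale, zero)
```

The reconstruction error per element is bounded by half the scale, which is why smaller groups (tighter min/max ranges) give better accuracy.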
Awni Hannun | ee0c2835c5 | 2023-12-17 13:20:55 -08:00
Docs updates (#198)
Reorganize NN docs + a few other tidbits.