* feat: add logicalAnd and logicalOr
* run pre-commit
* Refactor logical_and and logical_or functions
* Add acknowledgement
* Add logical AND and logical OR operators
* Refactor logical_and and logical_or functions
* Add support for logical operators on bool arrays
* Update mlx/ops.cpp
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* Update mlx/ops.cpp
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* Add logical AND and OR operators for arrays and scalars
* Refactor vjp and jvp methods in primitives.cpp
* Add overloaded operators for logical AND and OR
* format
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
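A quick sketch of the new ops from the change above (a minimal example; the exact printed form of the results may differ):
```
import mlx.core as mx

a = mx.array([True, True, False])
b = mx.array([True, False, False])

# Element-wise logical operations on bool arrays
mx.logical_and(a, b)  # [True, False, False]
mx.logical_or(a, b)   # [True, True, False]
```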
* Added GLU activation function and gated activation function
* Ran pre-commit
* Ran pre-commit
* Removed old sigmoid implementation to match with main
* Removed gated activation from __init__.py
* Removed unused test cases
* Removed unused imports
* format / docstring
---------
Co-authored-by: Awni Hannun <awni@apple.com>
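For reference, a minimal sketch of the gated computation the GLU change describes, written against mlx.core (the shipped `mlx.nn` activation is the authoritative version):
```
import mlx.core as mx

def glu(x, axis=-1):
    # Split the input in half along `axis` and gate the first half
    # with the sigmoid of the second half.
    a, b = mx.split(x, 2, axis=axis)
    return a * mx.sigmoid(b)
```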
* Add note to install instructions for building from source: ensure a native arm64 environment and tools.
* Add troubleshooting info.
* remove cmake bits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
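An illustrative way to verify the native-arm64 point from Python (not part of the change itself):
```
import platform

# Under Rosetta this reports "x86_64"; building MLX from source
# needs a native arm64 interpreter and toolchain.
assert platform.machine() == "arm64", "not a native arm64 environment"
```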
* inner / outer impl
* python tests
* ops list and ack
* updated descriptions
* use test helper
* removed dtype check and flatten outer to 1-D
* updated docs
* just use the reshape to flatten
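A small usage sketch of the two new ops (the commented values assume NumPy's `inner`/`outer` semantics, which the change follows):
```
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
b = mx.array([4.0, 5.0, 6.0])

mx.inner(a, b)  # 1*4 + 2*5 + 3*6 = 32.0
mx.outer(a, b)  # 3x3 matrix of pairwise products; inputs are flattened to 1-D
```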
All functions that take an optional dtype should
* have a default dtype visible in the generated docs (accomplished via `"dtype"_a = std::optional{float32}`)
* behave identically when `dtype=None` or no dtype is passed
This is important when passing keyword args down from a NumPy-style function like:
```
def f(x, dtype=None):
    mx.random.uniform(dtype=dtype)
    # ...
```
NumPy functions behave like this.
It also fixes a minor bug in `tri` (#378). Closes #378.
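A minimal illustration of the required behavior, using `mx.random.uniform` as the example op:
```
import mlx.core as mx

# With this change the two calls behave identically:
x = mx.random.uniform(shape=(2, 2))              # default dtype
y = mx.random.uniform(shape=(2, 2), dtype=None)  # same as above
```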
* support Python mlx.array creation from a list of mlx.array objects
* include bfloat16 in UT
* refactor so that a sub-array made entirely of Python primitive types gets initialized by fill_vector
* address PR comment: arr.shape().size() -> arr.ndim()
* address PR comment: get back Dtype constness and let stack handle type promotions automatically
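A sketch of the newly supported construction (the promoted dtype shown is an assumption based on standard promotion rules):
```
import mlx.core as mx

rows = [mx.array([1, 2]), mx.array([3.0, 4.0])]
m = mx.array(rows)  # (2, 2) array; int and float rows promote to float32
```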
* move all ObjC (via metal-cpp) interaction until post static initializers
- metal-cpp relies on static initializers to cache class and selector pointers
- code in mlx was using metal-cpp to set up NSAutoreleasePools during its own static init time
- but this code was silently failing as the class and selector pointers from metal-cpp were still nil
- defer the creation of NSAutoreleasePools until after static init time
- ensure that we have coverage where autorelease pools are needed
* Update device.cpp
remove commented code
* Update device.cpp
remove commented out code
* Update scheduler.h
update comment
* per discussion, use the pool inside the task() -- this is Metal only, not needed for CPU
* Update allocator.cpp
move pool to release/alloc area
* implemented instancenorm
* implemented vector_norm in cpp
added linalg to mlx
* implemented vector_norm python binding
* renamed vector_norm to norm, implemented norm when no ord is provided
* completed the implementation of the norm
* added tests
* removed unused import in linalg.cpp
* updated python bindings
* added some tests for python bindings
* handle inf and -inf as NumPy does, with more extensive tests of compatibility with NumPy
* added better docs and examples
* refactored mlx.linalg.norm bindings
* reused existing util for implementation of linalg.norm
* more tests
* fixed a bug when neither ord nor axis is provided
* removed unused imports
* some style and API consistency updates to linalg norm
* remove unused includes
* fix python tests
* fixed a bug with frobenius norm of a complex-valued matrix
* complex for vector too
* addressed PR review comments
* fixed import order in __init__
* expected values in instancenorm tests are simple lists
* minor return expression style change
* added InstanceNorm to docs
* doc string nits
* added myself to individual contributors
---------
Co-authored-by: Awni Hannun <awni@apple.com>
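A short usage sketch of the two additions above, assuming the current `mlx.linalg` / `mlx.nn` APIs:
```
import mlx.core as mx
import mlx.nn as nn

# linalg.norm follows NumPy semantics
v = mx.array([3.0, -4.0])
mx.linalg.norm(v)         # vector 2-norm -> 5.0
mx.linalg.norm(v, ord=1)  # vector 1-norm -> 7.0

m = mx.array([[1.0, 2.0], [3.0, 4.0]])
mx.linalg.norm(m)         # Frobenius norm for a matrix

# InstanceNorm from the same change set; 16 is the feature dimension
x = mx.random.uniform(shape=(4, 8, 16))  # (batch, length, features)
y = nn.InstanceNorm(16)(x)
```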
* update module to check weights on load, also fix docs and reorganize tests
* nits + rebase
* a few more docs updates for Module
* use manual module file
* comment
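A hedged sketch of what the load-time check buys at the call site (`weights.npz` is a hypothetical path):
```
import mlx.nn as nn

model = nn.Linear(4, 4)

# With this change, mismatched or missing weights are reported when
# loading rather than surfacing later at use time.
model.load_weights("weights.npz")  # hypothetical file
```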
* try alternative gc
* try no cache
* add forced swap
* remove cache for now
* add cache back
* change fit criteria
* remove unused function
* nit in comment
* tune / fix allocation
* increase block limit to original
* Added Identity and Bilinear layers
Added a reset_parameters option
Added normal init for bias
* pre-commit run
* add type hints for parameters and the return type
change Bilinear math to x_1 and x_2
change __call__ arguments to x and y instead of input and output
add explanation to the Initialization
* Remove unnecessary reshape
* Added 'i' to bilinear formula
* Changed bilinear computation to two matrix multiplications
* avoid saving intermediate results; kept y in Bilinear for better clarity (can be replaced with x1)
* Changed math formula in Linear
Added more explanation to math formulas
Changed x1, x2 reshape to support all input sizes
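A usage sketch of the new layer, assuming the `mlx.nn.Bilinear` constructor takes the two input sizes followed by the output size:
```
import mlx.core as mx
import mlx.nn as nn

# Bilinear: one x1 @ W_k @ x2 + b_k per output feature k
layer = nn.Bilinear(3, 4, 2)  # (input1_dims, input2_dims, output_dims)
x1 = mx.random.uniform(shape=(5, 3))
x2 = mx.random.uniform(shape=(5, 4))
y = layer(x1, x2)  # shape (5, 2)
```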