Manuel Villanueva 89a3df9014 Fixed several type annotations in the MLX stubs which degraded to Unknown/Any (#2560)
* Added scalar to stubs to fix Unknown Type Hint

### Proposed changes

Issue #2478 reports that several type annotations in the MLX stubs degrade to Unknown/Any in editors like VS Code with Pylance, due to missing imports (Union, Optional, Tuple) and an undefined scalar type alias.

This PR updates the stub generation patterns to:
- Add the missing typing imports in `mlx.core.__prefix__` so that `Union`, `Optional`, `Tuple`, etc. are always available.
- Define and export `scalar: TypeAlias = Union[int, float, bool]` in `mlx.core.__suffix__` so that functions typed with `Union[scalar, array]` resolve correctly instead of falling back to `Any` (a rough sketch of the resulting stub text follows this list).
- Update the submodule stub prefixes (distributed, fast, linalg, metal, random) to import `scalar` alongside `array`, `Device`, and `Stream`, ensuring type checkers resolve the union consistently across modules.
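
For illustration, the stub text produced by these patterns might look roughly like the sketch below; the `__prefix__`/`__suffix__` file names come from this PR, but the exact layout of the emitted stubs is an assumption:

```python
# Rough sketch only; the real generated stubs may be laid out differently.

# mlx.core __prefix__: make the typing names available in every generated stub.
from typing import Optional, Tuple, Union
from typing import TypeAlias  # or typing_extensions on older Pythons

# mlx.core __suffix__: define and export the scalar alias used in signatures.
scalar: TypeAlias = Union[int, float, bool]

# Submodule prefixes (distributed, fast, linalg, metal, random): import scalar
# alongside array, Device, and Stream so the union resolves consistently.
from mlx.core import Device, Stream, array, scalar
```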

With these changes, functions like mlx.add now display rich type signatures such as:

```
def add(
    a: scalar | array,
    b: scalar | array,
    stream: Stream | Device | None = None
) -> array
```

instead of degrading to Any.
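
As an illustrative usage check (not part of this PR), a call mixing a Python scalar with an array should now type-check against that signature instead of reporting Unknown:

```python
import mlx.core as mx

# Both operands are typed scalar | array, so a plain Python int is accepted
# by the checker and the result is inferred as array rather than Any.
out = mx.add(2, mx.array([1.0, 2.0]))
```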

### Checklist

- I have read the CONTRIBUTING document
- I have run `pre-commit run --all-files` to format my code / installed pre-commit prior to committing changes
- I have added tests that prove my fix is effective or that my feature works (n/a: stub generation only)
- I have updated the necessary documentation (if needed)

* add bool to patterns

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2025-09-03 12:52:08 -07:00

MLX

Quickstart | Installation | Documentation | Examples

MLX is an array framework for machine learning on Apple silicon, brought to you by Apple machine learning research.

Some key features of MLX include:

  • Familiar APIs: MLX has a Python API that closely follows NumPy. MLX also has fully featured C++, C, and Swift APIs, which closely mirror the Python API. MLX has higher-level packages like mlx.nn and mlx.optimizers with APIs that closely follow PyTorch to simplify building more complex models.

  • Composable function transformations: MLX supports composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.

  • Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.

  • Dynamic graph construction: Computation graphs in MLX are constructed dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.

  • Multi-device: Operations can run on any of the supported devices (currently the CPU and the GPU).

  • Unified memory: A notable difference between MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without transferring data (a short example follows this list).
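
A minimal sketch of how these points look in the Python API (illustrative code, not taken from the docs; it assumes the standard `mlx.core` names `mx.grad`, `mx.eval`, `mx.cpu`, and `mx.gpu`):

```python
import mlx.core as mx

def loss(w):
    # Ordinary NumPy-style operations; nothing is computed yet (lazy).
    return mx.sum((w * 2.0 - 1.0) ** 2)

w = mx.array([0.5, 1.0, 1.5])

# Composable transformation: grad returns a new function computing d(loss)/dw.
g = mx.grad(loss)(w)

# Arrays live in unified memory, so the same op can run on either device
# without copying data; stream/device selection is per operation.
cpu_sum = mx.add(g, g, stream=mx.cpu)
gpu_sum = mx.add(g, g, stream=mx.gpu)

# Materialize the lazy graphs.
mx.eval(cpu_sum, gpu_sum)
print(cpu_sum, gpu_sum)
```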

MLX is designed by machine learning researchers for machine learning researchers. The framework is intended to be user-friendly while remaining efficient for training and deploying models. The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX, with the goal of quickly exploring new ideas.

The design of MLX is inspired by frameworks like NumPy, PyTorch, JAX, and ArrayFire.

Examples

The MLX examples repo has a variety of examples, including transformer language model training, text generation with LLaMA, image generation with Stable Diffusion, and speech recognition with Whisper.

Quickstart

See the quick start guide in the documentation.

Installation

MLX is available on PyPI. To install MLX on macOS, run:

pip install mlx

To install the CUDA backend on Linux, run:

pip install mlx[cuda]

To install a CPU-only Linux package, run:

pip install mlx[cpu]

Check out the documentation for more information on building the C++ and Python APIs from source.

Contributing

Check out the contribution guidelines for more information on contributing to MLX. See the docs for more information on building from source and running tests.

We are grateful for all of our contributors. If you contribute to MLX and wish to be acknowledged, please add your name to the list in your pull request.

Citing MLX

The MLX software suite was initially developed with equal contribution by Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find MLX useful in your research and wish to cite it, please use the following BibTeX entry:

@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}