diff --git a/README.md b/README.md
index 8ca186dcf..fb30a24f2 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 [**Quickstart**](#quickstart) | [**Installation**](#installation) |
 [**Documentation**](https://ml-explore.github.io/mlx/build/html/index.html) |
-[**Examples**](#examples) 
+[**Examples**](#examples)
 
 [![CircleCI](https://circleci.com/gh/ml-explore/mlx.svg?style=svg)](https://circleci.com/gh/ml-explore/mlx)
 
@@ -11,37 +11,37 @@ brought to you by Apple machine learning research.
 
 Some key features of MLX include:
 
- - **Familiar APIs**: MLX has a Python API that closely follows NumPy. MLX
+- **Familiar APIs**: MLX has a Python API that closely follows NumPy. MLX
   also has fully featured C++, [C](https://github.com/ml-explore/mlx-c), and
   [Swift](https://github.com/ml-explore/mlx-swift/) APIs, which closely mirror
   the Python API. MLX has higher-level packages like `mlx.nn` and
   `mlx.optimizers` with APIs that closely follow PyTorch to simplify building
   more complex models.
 
- - **Composable function transformations**: MLX supports composable function
-   transformations for automatic differentiation, automatic vectorization,
-   and computation graph optimization.
+- **Composable function transformations**: MLX supports composable function
+  transformations for automatic differentiation, automatic vectorization,
+  and computation graph optimization.
 
- - **Lazy computation**: Computations in MLX are lazy. Arrays are only
-   materialized when needed.
+- **Lazy computation**: Computations in MLX are lazy. Arrays are only
+  materialized when needed.
 
- - **Dynamic graph construction**: Computation graphs in MLX are constructed
-   dynamically. Changing the shapes of function arguments does not trigger
-   slow compilations, and debugging is simple and intuitive.
+- **Dynamic graph construction**: Computation graphs in MLX are constructed
+  dynamically. Changing the shapes of function arguments does not trigger
+  slow compilations, and debugging is simple and intuitive.
 
- - **Multi-device**: Operations can run on any of the supported devices
-   (currently the CPU and the GPU).
+- **Multi-device**: Operations can run on any of the supported devices
+  (currently the CPU and the GPU).
 
- - **Unified memory**: A notable difference from MLX and other frameworks
-   is the *unified memory model*. Arrays in MLX live in shared memory.
-   Operations on MLX arrays can be performed on any of the supported
-   device types without transferring data.
+- **Unified memory**: A notable difference from MLX and other frameworks
+  is the *unified memory model*. Arrays in MLX live in shared memory.
+  Operations on MLX arrays can be performed on any of the supported
+  device types without transferring data.
 
 MLX is designed by machine learning researchers for machine learning
 researchers. The framework is intended to be user-friendly, but still efficient
 to train and deploy models. The design of the framework itself is also
 conceptually simple. We intend to make it easy for researchers to extend and
-improve MLX with the goal of quickly exploring new ideas. 
+improve MLX with the goal of quickly exploring new ideas.
 
 The design of MLX is inspired by frameworks like
 [NumPy](https://numpy.org/doc/stable/index.html),
@@ -91,7 +91,7 @@ Checkout the
 [documentation](https://ml-explore.github.io/mlx/build/html/install.html#) for
 more information on building the C++ and Python APIs from source.
 
-## Contributing 
+## Contributing
 
 Check out the [contribution guidelines](https://github.com/ml-explore/mlx/tree/main/CONTRIBUTING.md) for
 more information on contributing to MLX. See the
@@ -110,7 +110,7 @@ Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert.
 
 If you find MLX useful in your research and wish to cite it, please use the following
 BibTex entry:
 
-```
+```text
 @software{mlx2023,
   author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
   title = {{MLX}: Efficient and flexible machine learning on Apple silicon},