From d1926c4752a28ef7ac41dd04d0a868534da1e25d Mon Sep 17 00:00:00 2001
From: Awni Hannun
Date: Wed, 29 Nov 2023 16:23:42 -0800
Subject: [PATCH] Readme (#2)

* readme wip

* more readme

* examples

* spell

* comments + nits
---
 CONTRIBUTING.md          |  2 +-
 README.md                | 94 ++++++++++++++++++++++++++++------------
 docs/src/install.rst     | 13 +++++-
 docs/src/quick_start.rst |  2 +-
 4 files changed, 80 insertions(+), 31 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 072e12bf1..f1531bb88 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -11,7 +11,7 @@ possible.
    and after the change. Examples of benchmarks can be found in
    `benchmarks/python/`.
 4. If you've changed APIs, update the documentation.
 5. Every PR should have passing tests and at least one review.
-6. For code formating install `pre-commit` using something like `pip install pre-commit` and run `pre-commit install`.
+6. For code formatting install `pre-commit` using something like `pip install pre-commit` and run `pre-commit install`.
    This should install hooks for running `black` and `clang-format` to
    ensure consistent style for C++ and python code.

diff --git a/README.md b/README.md
index e0fd296e6..1dd409d51 100644
--- a/README.md
+++ b/README.md
@@ -1,39 +1,77 @@
 # MLX

-MLX is an array framework for machine learning specifically targeting Apple
-Silicon. MLX is designed with inspiration from Jax, PyTorch, ArrayFire.
+[**Quickstart**](#quickstart) | [**Installation**](#installation) |
+[**Documentation**](https://ml-explore.github.io/mlx/build/html/index.html) |
+[**Examples**](#examples)

-[Documentation](https://ml-explore.github.io/mlx/build/html/index.html)
+MLX is an array framework for machine learning on Apple silicon.

-## Build
+Some key features of MLX include:
+
+ - **Familiar APIs**: MLX has a Python API which closely follows NumPy.
+   MLX also has a fully featured C++ API which closely mirrors the Python API.
+   MLX has higher-level packages like `mlx.nn` and `mlx.optimizers` with APIs
+   that closely follow PyTorch to simplify building more complex models.
+
+ - **Composable function transformations**: MLX has composable function
+   transformations for automatic differentiation, automatic vectorization,
+   and computation graph optimization.
+
+ - **Lazy computation**: Computations in MLX are lazy. Arrays are only
+   materialized when needed.
+
+ - **Dynamic graph construction**: Computation graphs in MLX are built
+   dynamically. Changing the shapes of function arguments does not trigger
+   slow compilations, and debugging is simple and intuitive.
+
+ - **Multi-device**: Operations can run on any of the supported devices
+   (currently the CPU and GPU).
+
+ - **Unified memory**: A notable difference between MLX and other frameworks
+   is the *unified memory model*. Arrays in MLX live in shared memory.
+   Operations on MLX arrays can be performed on any of the supported
+   device types without moving data.
+
+MLX is designed by machine learning researchers for machine learning
+researchers. The framework is intended to be user-friendly, but still
+efficient to train and deploy models. The design of the framework itself is
+also conceptually simple. We intend to make it easy for researchers to extend
+and improve MLX with the goal of quickly exploring new ideas.
+
+The design of MLX is inspired by frameworks like
+[NumPy](https://numpy.org/doc/stable/index.html),
+[PyTorch](https://pytorch.org/), [Jax](https://github.com/google/jax), and
+[ArrayFire](https://arrayfire.org/).
+
+## Examples
+
+The [MLX examples repo](https://github.com/ml-explore/mlx-examples) has a
+variety of examples including:
+
+- [Transformer language model](https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm) training.
+- Large-scale text generation with
+  [LLaMA](https://github.com/ml-explore/mlx-examples/tree/main/llama) and
+  finetuning with [LoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora).
+- Generating images with [Stable Diffusion](https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion).
+- Speech recognition with [OpenAI's Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper).
+
+## Quickstart
+
+See the [quick start
+guide](https://ml-explore.github.io/mlx/build/html/quick_start.html)
+in the documentation.
+
+## Installation
+
+MLX is available on [PyPI](https://pypi.org/project/mlx/). To install the Python API, run:

 ```
-mkdir -p build && cd build
-cmake .. && make -j
-```
-
-Run the C++ tests with `make test` (or `./tests/tests` for more detailed output).
-
-### Python bidings
-
-To install run:
-
-`
-env CMAKE_BUILD_PARALLEL_LEVEL="" pip install .
-`
-
-For developing use an editable install:
-
-```
-env CMAKE_BUILD_PARALLEL_LEVEL="" pip install -e .
-```
-
-To make sure the install is working run the tests with:
-
-```
-python -m unittest discover python/tests
+pip install mlx
 ```

+Check out the
+[documentation](https://ml-explore.github.io/mlx/build/html/install.html#)
+for more information on building the C++ and Python APIs from source.
+
 ## Contributing

diff --git a/docs/src/install.rst b/docs/src/install.rst
index fd4f9becd..aa36afa03 100644
--- a/docs/src/install.rst
+++ b/docs/src/install.rst
@@ -9,7 +9,7 @@ MLX with your own Apple silicon computer is

 .. code-block:: shell

-   pip install apple-mlx -i https://pypi.apple.com/simple
+   pip install mlx

 Build from source
 -----------------
@@ -46,6 +46,17 @@ Then simply build and install it using pip:

    env CMAKE_BUILD_PARALLEL_LEVEL="" pip install .

+For developing, use an editable install:
+
+.. code-block:: shell
+
+   env CMAKE_BUILD_PARALLEL_LEVEL="" pip install -e .
+
+To make sure the install is working, run the tests with:
+
+.. code-block:: shell
+
+   python -m unittest discover python/tests

 C++ API
 ^^^^^^^

diff --git a/docs/src/quick_start.rst b/docs/src/quick_start.rst
index c3e2b678b..3439580ba 100644
--- a/docs/src/quick_start.rst
+++ b/docs/src/quick_start.rst
@@ -13,7 +13,7 @@ The main differences between MLX and NumPy are:
      and computation graph optimization.
    - **Lazy computation**: Computations in MLX are lazy. Arrays are only
      materialized when needed.
-   - **Multi-device**: Operations can run on any of the suppoorted devices (CPU,
+   - **Multi-device**: Operations can run on any of the supported devices (CPU,
      GPU, ...)

 The design of MLX is strongly inspired by frameworks like `PyTorch
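The README text added by this patch says that computations in MLX are lazy, with arrays "only materialized when needed". As a rough, framework-free sketch of what that means (this is not MLX code or its implementation; the `LazyArray` class and every name in it are invented purely for illustration), deferred evaluation can be modeled in plain Python like this:

```python
# Toy sketch of deferred ("lazy") evaluation: arithmetic operators build a
# small computation graph, and values are only computed ("materialized")
# when eval() is called. Invented names; not MLX's actual API.

class LazyArray:
    def __init__(self, data=None, op=None, inputs=()):
        self.data = data      # materialized value, or None while pending
        self.op = op          # function that computes this node
        self.inputs = inputs  # upstream LazyArray nodes

    def __add__(self, other):
        # No arithmetic happens here; we only record the operation.
        return LazyArray(op=lambda a, b: a + b, inputs=(self, other))

    def __mul__(self, other):
        return LazyArray(op=lambda a, b: a * b, inputs=(self, other))

    def eval(self):
        # Materialize on demand by recursively evaluating inputs,
        # caching the result so each node is computed at most once.
        if self.data is None:
            args = [x.eval() for x in self.inputs]
            self.data = self.op(*args)
        return self.data

a = LazyArray(2.0)
b = LazyArray(3.0)
c = (a + b) * b        # builds a graph; nothing is computed yet
print(c.data is None)  # True: c is not yet materialized
print(c.eval())        # 15.0: computed only when requested
```

A real framework with this design can fuse or reorder graph nodes before anything runs, which is one motivation the README gives for combining laziness with graph optimization.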
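The patch also advertises "composable function transformations" for automatic differentiation. The essential idea, independent of MLX, is that a transformation takes a function and returns another function, so transformations nest. A minimal sketch in plain Python, using central finite differences instead of MLX's actual autodiff (the `grad` helper here is invented for illustration and is not MLX's `mlx.core.grad`):

```python
# Toy sketch of a composable transformation: grad() maps a scalar function
# to a new function approximating its derivative by central differences.
# Since the output is just another function, grad(grad(f)) also works.
# Invented helper; not MLX's automatic differentiation.

def grad(f, h=1e-4):
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return df

f = lambda x: x ** 3
df = grad(f)         # approximates 3*x**2
ddf = grad(grad(f))  # approximates 6*x

print(round(df(3.0), 3))   # 27.0
print(round(ddf(3.0), 3))  # 18.0
```

In MLX itself the transformation is exact (it differentiates the computation graph rather than sampling the function), but the composition property sketched above is the same.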