Update README.md (#121)

Author: Yiyang(Steven) Yu (committed by GitHub)
Date: 2023-12-10 17:47:37 -05:00
Parent: 2d0130f80f
Commit: 0cfbfc9904

@@ -16,24 +16,24 @@ Some key features of MLX include:
MLX has higher-level packages like `mlx.nn` and `mlx.optimizers` with APIs
that closely follow PyTorch to simplify building more complex models.
-- **Composable function transformations**: MLX has composable function
+- **Composable function transformations**: MLX supports composable function
transformations for automatic differentiation, automatic vectorization,
and computation graph optimization.
- **Lazy computation**: Computations in MLX are lazy. Arrays are only
materialized when needed.
-- **Dynamic graph construction**: Computation graphs in MLX are built
+- **Dynamic graph construction**: Computation graphs in MLX are constructed
dynamically. Changing the shapes of function arguments does not trigger
slow compilations, and debugging is simple and intuitive.
- **Multi-device**: Operations can run on any of the supported devices
-  (currently, the CPU and GPU).
+  (currently the CPU and the GPU).
- **Unified memory**: A notable difference from MLX and other frameworks
is the *unified memory model*. Arrays in MLX live in shared memory.
Operations on MLX arrays can be performed on any of the supported
-  device types without moving data.
+  device types without transferring data.
MLX is designed by machine learning researchers for machine learning
researchers. The framework is intended to be user-friendly, but still efficient
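To make the feature list above concrete, here is a minimal sketch of what composable transformations, lazy evaluation, and device selection look like in user code. It is an illustration, not part of this diff, and it assumes the `mlx.core` Python API names `mx.grad`, `mx.vmap`, `mx.eval`, `mx.random.normal`, and the `stream=` argument on ops.

```python
import mlx.core as mx

# Composable function transformations: grad and vmap wrap plain Python functions.
def loss(w):
    return (w * w).sum()

grad_fn = mx.grad(loss)          # gradient of `loss` with respect to `w`
batched_grad = mx.vmap(grad_fn)  # the same gradient, mapped over a leading batch axis

w = mx.array([1.0, 2.0, 3.0])
g = grad_fn(w)                   # lazy: builds the graph, nothing is computed yet
mx.eval(g)                       # explicitly materialize the result
print(g)                         # printing would also force evaluation

W = mx.random.normal((8, 3))
G = batched_grad(W)              # per-row gradients, shape (8, 3)

# Unified memory / multi-device: the same arrays can be passed to ops on either device.
a = mx.random.normal((4, 4))
b = mx.random.normal((4, 4))
c_cpu = mx.add(a, b, stream=mx.cpu)
c_gpu = mx.add(a, b, stream=mx.gpu)  # no copy of `a` or `b` is needed
mx.eval(c_cpu, c_gpu)
```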
@@ -66,7 +66,7 @@ in the documentation.
## Installation
-MLX is available on [PyPi](https://pypi.org/project/mlx/). To install the Python API, run:
+MLX is available on [PyPI](https://pypi.org/project/mlx/). To install the Python API, run:
```
pip install mlx
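# Optional sanity check after installation (a hypothetical one-liner; it assumes a
# Python 3 interpreter on PATH and that `mlx.core` imports successfully):
# python -c "import mlx.core as mx; print(mx.array([1.0, 2.0]).sum())"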