<h1>Quick Start Guide<a class="headerlink" href="#quick-start-guide" title="Permalink to this heading"></a></h1>
<p>MLX is a NumPy-like array framework designed for efficient and flexible
machine learning on Apple silicon. The Python API closely follows NumPy with
a few exceptions. MLX also has a fully featured C++ API which closely follows
the Python API.</p>
<p>The main differences between MLX and NumPy are:</p>
<blockquote>
<div><ul class="simple">
<li><p><strong>Composable function transformations</strong>: MLX has composable function
transformations for automatic differentiation, automatic vectorization,
and computation graph optimization.</p></li>
<li><p><strong>Lazy computation</strong>: Computations in MLX are lazy. Arrays are only
materialized when needed.</p></li>
<li><p><strong>Multi-device</strong>: Operations can run on any of the supported devices (CPU,
GPU, …)</p></li>
</ul>
</div></blockquote>
<p>The design of MLX is strongly inspired by frameworks like <a class="reference external" href="https://pytorch.org/">PyTorch</a>, <a class="reference external" href="https://github.com/google/jax">Jax</a>, and
<a class="reference external" href="https://arrayfire.org/">ArrayFire</a>. A notable difference between these
frameworks and MLX is the <em>unified memory model</em>. Arrays in MLX live in shared
memory. Operations on MLX arrays can be performed on any of the supported
device types without performing data copies. Currently supported device types
are the CPU and GPU.</p>
<section id="basics">
<h2>Basics<a class="headerlink" href="#basics" title="Permalink to this heading"></a></h2>
<p>Import <code class="docutils literal notranslate"><span class="pre">mlx.core</span></code> and make an <a class="reference internal" href="python/_autosummary/mlx.core.array.html#mlx.core.array" title="mlx.core.array"><code class="xref py py-class docutils literal notranslate"><span class="pre">array</span></code></a>:</p>
<p>Operations in MLX are lazy. The outputs of MLX operations are not computed
until they are needed. To force an array to be evaluated use
<a class="reference internal" href="python/_autosummary/mlx.core.eval.html#mlx.core.eval" title="mlx.core.eval"><code class="xref py py-func docutils literal notranslate"><span class="pre">eval()</span></code></a>. Arrays will automatically be evaluated in a few cases. For
example, inspecting a scalar with <a class="reference internal" href="python/_autosummary/mlx.core.array.item.html#mlx.core.array.item" title="mlx.core.array.item"><code class="xref py py-meth docutils literal notranslate"><span class="pre">array.item()</span></code></a>, printing an array,
or converting an array from <a class="reference internal" href="python/_autosummary/mlx.core.array.html#mlx.core.array" title="mlx.core.array"><code class="xref py py-class docutils literal notranslate"><span class="pre">array</span></code></a> to <a class="reference external" href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray" title="(in NumPy v1.26)"><code class="xref py py-class docutils literal notranslate"><span class="pre">numpy.ndarray</span></code></a> all
automatically evaluate the array.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="o">&gt;&gt;&gt;</span> <span class="n">c</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span>  <span class="c1"># c not yet evaluated</span>
<span class="o">&gt;&gt;&gt;</span> <span class="n">mx</span><span class="o">.</span><span class="n">eval</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>  <span class="c1"># evaluates c</span>
<span class="o">&gt;&gt;&gt;</span> <span class="nb">print</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>  <span class="c1"># Also evaluates c</span>
<span class="o">&gt;&gt;&gt;</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>  <span class="c1"># Also evaluates c</span>
</pre></div></div>
</section>
<section id="function-and-graph-transformations">
<h2>Function and Graph Transformations<a class="headerlink" href="#function-and-graph-transformations" title="Permalink to this heading"></a></h2>
<p>MLX has standard function transformations like <a class="reference internal" href="python/_autosummary/mlx.core.grad.html#mlx.core.grad" title="mlx.core.grad"><code class="xref py py-func docutils literal notranslate"><span class="pre">grad()</span></code></a> and <a class="reference internal" href="python/_autosummary/mlx.core.vmap.html#mlx.core.vmap" title="mlx.core.vmap"><code class="xref py py-func docutils literal notranslate"><span class="pre">vmap()</span></code></a>.
Transformations can be composed arbitrarily. For example
<code class="docutils literal notranslate"><span class="pre">grad(vmap(grad(fn)))</span></code> (or any other composition) is allowed.</p>
<p>Other gradient transformations include <a class="reference internal" href="python/_autosummary/mlx.core.vjp.html#mlx.core.vjp" title="mlx.core.vjp"><code class="xref py py-func docutils literal notranslate"><span class="pre">vjp()</span></code></a> for vector-Jacobian products
and <a class="reference internal" href="python/_autosummary/mlx.core.jvp.html#mlx.core.jvp" title="mlx.core.jvp"><code class="xref py py-func docutils literal notranslate"><span class="pre">jvp()</span></code></a> for Jacobian-vector products.</p>
<p>Use <a class="reference internal" href="python/_autosummary/mlx.core.value_and_grad.html#mlx.core.value_and_grad" title="mlx.core.value_and_grad"><code class="xref py py-func docutils literal notranslate"><span class="pre">value_and_grad()</span></code></a> to efficiently compute both a function’s output and
gradient with respect to the function’s input.</p>
</section>
<section id="devices-and-streams">
<h2>Devices and Streams<a class="headerlink" href="#devices-and-streams" title="Permalink to this heading"></a></h2>