Add mode parameter for quantization (#2499)

* add mode parameter for quantization

* mxfp4 quantize/dequantize + start of optional biases

* mxfp4 works

* speedup

* cpu mxfp4

* fix

* fix test tol

* fix

* refactor

* add quant mode enum
Awni Hannun
2025-08-28 06:45:26 -07:00
committed by GitHub
parent 7ef8a6f2d5
commit 70560b6bd5
28 changed files with 3635 additions and 757 deletions


@@ -4153,14 +4153,15 @@ void init_ops(nb::module_& m) {
nb::arg(),
nb::arg(),
"scales"_a,
"biases"_a,
"biases"_a = nb::none(),
"transpose"_a = true,
"group_size"_a = 64,
"bits"_a = 4,
"mode"_a = "affine",
nb::kw_only(),
"stream"_a = nb::none(),
nb::sig(
"def quantized_matmul(x: array, w: array, /, scales: array, biases: array, transpose: bool = True, group_size: int = 64, bits: int = 4, *, stream: Union[None, Stream, Device] = None) -> array"),
"def quantized_matmul(x: array, w: array, /, scales: array, biases: Optional[array] = None, transpose: bool = True, group_size: int = 64, bits: int = 4, mode: str = 'affine', *, stream: Union[None, Stream, Device] = None) -> array"),
R"pbdoc(
Perform the matrix multiplication with the quantized matrix ``w``. The
quantization uses one floating point scale and bias per ``group_size`` of
@@ -4171,7 +4172,8 @@ void init_ops(nb::module_& m) {
x (array): Input array
w (array): Quantized matrix packed in unsigned integers
scales (array): The scales to use per ``group_size`` elements of ``w``
biases (array): The biases to use per ``group_size`` elements of ``w``
biases (array, optional): The biases to use per ``group_size``
elements of ``w``. Default: ``None``.
transpose (bool, optional): Defines whether to multiply with the
transposed ``w`` or not, namely whether we are performing
``x @ w.T`` or ``x @ w``. Default: ``True``.
@@ -4179,6 +4181,7 @@ void init_ops(nb::module_& m) {
shares a scale and bias. Default: ``64``.
bits (int, optional): The number of bits occupied by each element in
``w``. Default: ``4``.
mode (str, optional): The quantization mode. Default: ``"affine"``.
Returns:
array: The result of the multiplication of ``x`` with ``w``.
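
For illustration, a minimal usage sketch against the new signature (assuming a build of MLX that includes this change; the shapes are arbitrary):

import mlx.core as mx

# Quantize a weight matrix with the default affine mode.
x = mx.random.normal((2, 512))
w = mx.random.normal((256, 512))
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4, mode="affine")

# Compute x @ w.T from the packed weights; the quantization arguments
# must match those used in quantize.
y = mx.quantized_matmul(x, w_q, scales=scales, biases=biases,
                        transpose=True, group_size=64, bits=4, mode="affine")
print(y.shape)  # (2, 256)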
@@ -4189,10 +4192,11 @@ void init_ops(nb::module_& m) {
nb::arg(),
"group_size"_a = 64,
"bits"_a = 4,
"mode"_a = "affine",
nb::kw_only(),
"stream"_a = nb::none(),
nb::sig(
"def quantize(w: array, /, group_size: int = 64, bits : int = 4, *, stream: Union[None, Stream, Device] = None) -> tuple[array, array, array]"),
"def quantize(w: array, /, group_size: int = 64, bits: int = 4, mode: str = 'affine', *, stream: Union[None, Stream, Device] = None) -> tuple[array, array, array]"),
R"pbdoc(
Quantize the matrix ``w`` using ``bits`` bits per element.
@@ -4203,30 +4207,11 @@ void init_ops(nb::module_& m) {
.. warning::
``quantize`` currently only supports 2D inputs with dimensions which are multiples of 32
``quantize`` currently only supports 2D inputs with the second
dimension divisible by ``group_size``
Formally, for a group of :math:`g` consecutive elements :math:`w_1` to
:math:`w_g` in a row of ``w`` we compute the quantized representation
of each element :math:`\hat{w_i}` as follows
.. math::
\begin{aligned}
\alpha &= \max_i w_i \\
\beta &= \min_i w_i \\
s &= \frac{\alpha - \beta}{2^b - 1} \\
\hat{w_i} &= \textrm{round}\left( \frac{w_i - \beta}{s}\right).
\end{aligned}
After the above computation, :math:`\hat{w_i}` fits in :math:`b` bits
and is packed in an unsigned 32-bit integer from the lower to upper
bits. For instance, for 4-bit quantization we fit 8 elements in an
unsigned 32 bit integer where the 1st element occupies the 4 least
significant bits, the 2nd bits 4-7 etc.
In order to be able to dequantize the elements of ``w`` we also need to
save :math:`s` and :math:`\beta` which are the returned ``scales`` and
``biases`` respectively.
The supported quantization modes are ``"affine"`` and ``"mxfp4"``. They
are described in more detail below.
Args:
w (array): Matrix to be quantized
@@ -4234,49 +4219,86 @@ void init_ops(nb::module_& m) {
scale and bias. Default: ``64``.
bits (int, optional): The number of bits occupied by each element of
``w`` in the returned quantized matrix. Default: ``4``.
mode (str, optional): The quantization mode. Default: ``"affine"``.
Returns:
tuple: A tuple containing
tuple: A tuple with either two or three elements containing:
* w_q (array): The quantized version of ``w``
* scales (array): The scale to multiply each element with, namely :math:`s`
* biases (array): The biases to add to each element, namely :math:`\beta`
* scales (array): The quantization scales
* biases (array): The quantization biases (returned for ``mode=="affine"``).
Notes:
The ``affine`` mode quantizes groups of :math:`g` consecutive
elements in a row of ``w``. For each group the quantized
representation of each element :math:`\hat{w_i}` is computed as follows:
.. math::
\begin{aligned}
\alpha &= \max_i w_i \\
\beta &= \min_i w_i \\
s &= \frac{\alpha - \beta}{2^b - 1} \\
\hat{w_i} &= \textrm{round}\left( \frac{w_i - \beta}{s}\right).
\end{aligned}
After the above computation, :math:`\hat{w_i}` fits in :math:`b` bits
and is packed in an unsigned 32-bit integer from the lower to upper
bits. For instance, for 4-bit quantization we fit 8 elements in an
unsigned 32 bit integer where the 1st element occupies the 4 least
significant bits, the 2nd bits 4-7 etc.
To dequantize the elements of ``w``, we also save :math:`s` and
:math:`\beta` which are the returned ``scales`` and
``biases`` respectively.
The ``mxfp4`` mode similarly quantizes groups of :math:`g` elements
of ``w``. For ``mxfp4`` the group size must be ``32``. The elements
are quantized to 4-bit precision floating-point values (E2M1) with a
shared 8-bit scale per group. Unlike ``affine`` quantization,
``mxfp4`` does not have a bias value. More details on the format can
be found in the `specification <https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf>`_.
)pbdoc");
m.def(
"dequantize",
&mx::dequantize,
nb::arg(),
"scales"_a,
"biases"_a,
"biases"_a = nb::none(),
"group_size"_a = 64,
"bits"_a = 4,
"mode"_a = "affine",
nb::kw_only(),
"stream"_a = nb::none(),
nb::sig(
"def dequantize(w: array, /, scales: array, biases: array, group_size: int = 64, bits: int = 4, *, stream: Union[None, Stream, Device] = None) -> array"),
"def dequantize(w: array, /, scales: array, biases: Optional[array] = = None, group_size: int = 64, bits: int = 4, mode: str = 'affine', *, stream: Union[None, Stream, Device] = None) -> array"),
R"pbdoc(
Dequantize the matrix ``w`` using the provided ``scales`` and
``biases`` and the ``group_size`` and ``bits`` configuration.
Formally, given the notation in :func:`quantize`, we compute
:math:`w_i` from :math:`\hat{w_i}` and corresponding :math:`s` and
:math:`\beta` as follows
.. math::
w_i = s \hat{w_i} + \beta
Dequantize the matrix ``w`` using quantization parameters.
Args:
w (array): Matrix to be quantized
scales (array): The scales to use per ``group_size`` elements of ``w``
biases (array): The biases to use per ``group_size`` elements of ``w``
w (array): Matrix to be dequantized
scales (array): The scales to use per ``group_size`` elements of ``w``.
biases (array, optional): The biases to use per ``group_size``
elements of ``w``. Default: ``None``.
group_size (int, optional): The size of the group in ``w`` that shares a
scale and bias. Default: ``64``.
bits (int, optional): The number of bits occupied by each element in
``w``. Default: ``4``.
mode (str, optional): The quantization mode. Default: ``"affine"``.
Returns:
array: The dequantized version of ``w``
Notes:
The currently supported quantization modes are ``"affine"`` and ``"mxfp4"``.
For ``affine`` quantization, given the notation in :func:`quantize`,
we compute :math:`w_i` from :math:`\hat{w_i}` and corresponding :math:`s`
and :math:`\beta` as follows
.. math::
w_i = s \hat{w_i} + \beta
)pbdoc");
m.def(
"gather_qmm",
@@ -4284,17 +4306,18 @@ void init_ops(nb::module_& m) {
nb::arg(),
nb::arg(),
"scales"_a,
"biases"_a,
"biases"_a = nb::none(),
"lhs_indices"_a = nb::none(),
"rhs_indices"_a = nb::none(),
"transpose"_a = true,
"group_size"_a = 64,
"bits"_a = 4,
"mode"_a = "affine",
nb::kw_only(),
"sorted_indices"_a = false,
"stream"_a = nb::none(),
nb::sig(
"def gather_qmm(x: array, w: array, /, scales: array, biases: array, lhs_indices: Optional[array] = None, rhs_indices: Optional[array] = None, transpose: bool = True, group_size: int = 64, bits: int = 4, *, sorted_indices: bool = False, stream: Union[None, Stream, Device] = None) -> array"),
"def gather_qmm(x: array, w: array, /, scales: array, biases: Optional[array] = None, lhs_indices: Optional[array] = None, rhs_indices: Optional[array] = None, transpose: bool = True, group_size: int = 64, bits: int = 4, mode: str = 'affine', *, sorted_indices: bool = False, stream: Union[None, Stream, Device] = None) -> array"),
R"pbdoc(
Perform quantized matrix multiplication with matrix-level gather.
@@ -4310,7 +4333,8 @@ void init_ops(nb::module_& m) {
x (array): Input array
w (array): Quantized matrix packed in unsigned integers
scales (array): The scales to use per ``group_size`` elements of ``w``
biases (array): The biases to use per ``group_size`` elements of ``w``
biases (array, optional): The biases to use per ``group_size``
elements of ``w``. Default: ``None``.
lhs_indices (array, optional): Integer indices for ``x``. Default: ``None``.
rhs_indices (array, optional): Integer indices for ``w``. Default: ``None``.
transpose (bool, optional): Defines whether to multiply with the
@@ -4320,6 +4344,7 @@ void init_ops(nb::module_& m) {
shares a scale and bias. Default: ``64``.
bits (int, optional): The number of bits occupied by each element in
``w``. Default: ``4``.
mode (str, optional): The quantization mode. Default: ``"affine"``.
sorted_indices (bool, optional): May allow a faster implementation
if the passed indices are sorted. Default: ``False``.
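
To close, a sketch of gather_qmm with the new parameters (assuming this build of MLX; shapes and index values are illustrative, and since quantize is documented above as 2D-only, each expert matrix is quantized separately):

import mlx.core as mx

# Quantize two "expert" weight matrices and stack the results.
experts = [mx.random.normal((256, 512)) for _ in range(2)]
parts = [mx.quantize(e, group_size=64, bits=4, mode="affine") for e in experts]
w_q = mx.stack([p[0] for p in parts])
scales = mx.stack([p[1] for p in parts])
biases = mx.stack([p[2] for p in parts])

# One row of x per token; rhs_indices picks the expert for each token.
x = mx.random.normal((4, 1, 512))
rhs_indices = mx.array([0, 1, 1, 0])
y = mx.gather_qmm(x, w_q, scales=scales, biases=biases,
                  rhs_indices=rhs_indices, transpose=True,
                  group_size=64, bits=4, mode="affine")
print(y.shape)  # expected (4, 1, 256) for these inputs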