Awni Hannun
2025-10-24 09:19:43 -07:00
parent 6286e528e4
commit aa44e3c45d
2 changed files with 19 additions and 12 deletions

View File

@@ -273,7 +273,7 @@ class ConvertFP8 : public Primitive {
 };
 bool is_equivalent(const Primitive& other) const override;
-DEFINE_INPUT_OUTPUT_SHAPE()
+DEFINE_INPUT_OUTPUT_SHAPE();
 private:
 bool to_fp8_;

View File

@@ -4249,8 +4249,8 @@ void init_ops(nb::module_& m) {
 ``quantize`` currently only supports 2D inputs with the second
 dimension divisible by ``group_size``
-The supported quantization modes are ``"affine"`` and ``"mxfp4"``. They
-are described in more detail below.
+The supported quantization modes are ``"affine"``, ``"mxfp4"``,
+``"mxfp8"``, and ``"nvfp4"``. They are described in more detail below.
 Args:
 w (array): Matrix to be quantized
@@ -4268,7 +4268,7 @@ void init_ops(nb::module_& m) {
 * biases (array): The quantization biases (returned for ``mode=="affine"``).
 Notes:
-The ``affine`` mode quantizes groups of :math:`g` consecutive
+The ``"affine"`` mode quantizes groups of :math:`g` consecutive
 elements in a row of ``w``. For each group the quantized
 representation of each element :math:`\hat{w_i}` is computed as follows:
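
For context, a minimal NumPy sketch of a per-group affine round trip. This assumes the usual min/max scheme in which the scale and bias are derived from the group extrema; the exact formula lives in the surrounding docstring (not shown in this hunk) and may differ in details such as rounding.

import numpy as np

def affine_quantize_group(w, bits=4):
    # Assumed scheme: scale s and bias beta are chosen so the integer
    # codes span [0, 2**bits - 1] over the group's value range.
    alpha, beta = w.max(), w.min()
    s = (alpha - beta) / (2**bits - 1)
    w_hat = np.round((w - beta) / s)
    return w_hat, s, beta

def affine_dequantize_group(w_hat, s, beta):
    # Reconstruction: w_i is approximately s * w_hat_i + beta
    return s * w_hat + beta

group = np.random.randn(64).astype(np.float32)   # one group of g = 64 elements
codes, s, beta = affine_quantize_group(group, bits=4)
print(np.abs(affine_dequantize_group(codes, s, beta) - group).max())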
@@ -4291,11 +4291,17 @@ void init_ops(nb::module_& m) {
 :math:`\beta` which are the returned ``scales`` and
 ``biases`` respectively.
-The ``mxfp4`` mode similarly quantizes groups of :math:`g` elements
-of ``w``. For ``mxfp4`` the group size must be ``32``. The elements
-are quantized to 4-bit precision floating-point values (E2M1) with a
-shared 8-bit scale per group. Unlike ``affine`` quantization,
-``mxfp4`` does not have a bias value. More details on the format can
+The ``"mxfp4"``, ``"mxfp8"``, and ``"nvfp4"`` modes similarly
+quantize groups of :math:`g` elements of ``w``. For the ``"mx"``
+modes, the group size must be ``32``. For ``"nvfp4"`` the group
+size must be ``16``. The elements are quantized to 4-bit or 8-bit
+precision floating-point values: E2M1 for ``"fp4"`` and E4M3 for
+``"fp8"``. There is a shared 8-bit scale per group. The ``"mx"``
+modes use an E8M0 scale and the ``"nv"`` mode uses an E4M3 scale.
+Unlike ``affine`` quantization, these modes do not have a bias
+value.
+More details on the ``"mx"`` formats can
 be found in the `specification <https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf>`_.
 )pbdoc");
 m.def(
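
A hedged usage sketch of the quantize binding with the modes documented above, assuming the Python API is exposed as ``mx.quantize``; exact keyword handling for the fp4/fp8 modes (e.g. whether ``bits`` must be passed) may differ.

import mlx.core as mx

w = mx.random.normal((128, 256))  # 2D input, second dim divisible by group_size

# Affine mode returns quantized weights, scales, and biases.
wq, scales, biases = mx.quantize(w, group_size=64, bits=4, mode="affine")

# The "mx" modes use a fixed group size of 32 and return no biases.
wq4, scales4 = mx.quantize(w, group_size=32, bits=4, mode="mxfp4")

# "nvfp4" uses a group size of 16 with an E4M3 scale per group.
wq_nv, scales_nv = mx.quantize(w, group_size=16, bits=4, mode="nvfp4")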
@@ -4326,15 +4332,16 @@ void init_ops(nb::module_& m) {
 ``w``. Default: ``4``.
 dtype (Dtype, optional): The data type of the dequantized output. If
 ``None`` the return type is inferred from the scales and biases
 when possible and otherwise defaults to ``bfloat16``.
 Default: ``None``.
 mode (str, optional): The quantization mode. Default: ``"affine"``.
 Returns:
 array: The dequantized version of ``w``
 Notes:
-The currently supported quantization modes are ``"affine"`` and ``mxfp4``.
+The currently supported quantization modes are ``"affine"``,
+``"mxfp4"``, ``"mxfp8"``, and ``"nvfp4"``.
 For ``affine`` quantization, given the notation in :func:`quantize`,
 we compute :math:`w_i` from :math:`\hat{w_i}` and corresponding :math:`s`
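
And a matching dequantize sketch showing the ``dtype`` behaviour described above, again assuming the ``mx.dequantize`` spelling; the ``biases`` argument is presumably omitted for the non-affine modes.

import mlx.core as mx

w = mx.random.normal((128, 256))
wq, scales = mx.quantize(w, group_size=32, bits=4, mode="mxfp4")

# dtype picks the output precision; when left as None it is inferred from
# the scales/biases where possible and otherwise defaults to bfloat16.
w_hat = mx.dequantize(
    wq, scales, group_size=32, bits=4, mode="mxfp4", dtype=mx.float16
)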