Improve names of quantization arguments (#235)

* Change the default quantization group_size to 64
* Rename groups to group_size and width to bits
Author: Angelos Katharopoulos
Date: 2023-12-20 16:53:53 -08:00
Committed via GitHub
Parent: 57fe918cf8
Commit: b3916cbf2b
11 changed files with 184 additions and 180 deletions

@@ -3011,26 +3011,27 @@ void init_ops(py::module_& m) {
       py::pos_only(),
       "scales"_a,
       "biases"_a,
-      "groups"_a = 128,
-      "width"_a = 4,
+      "group_size"_a = 64,
+      "bits"_a = 4,
       py::kw_only(),
       "stream"_a = none,
       R"pbdoc(
-        quantized_matmul(x: array, w: array, scales: array, biases: array, /, groups: int = 128, width: int = 4, *, stream: Union[None, Stream, Device] = None) -> array
+        quantized_matmul(x: array, w: array, scales: array, biases: array, /, group_size: int = 64, bits: int = 4, *, stream: Union[None, Stream, Device] = None) -> array
         Perform the matrix multiplication with the quantized matrix ``w``. The
-        quantization uses one floating point scale and bias per ``groups`` of
-        elements. Each element in ``w`` takes ``width`` bits and is packed in an
+        quantization uses one floating point scale and bias per ``group_size`` of
+        elements. Each element in ``w`` takes ``bits`` bits and is packed in an
         unsigned 32 bit integer.
         Args:
           x (array): Input array
           w (array): Quantized matrix packed in unsigned integers
-          scales (array): The scales to use per ``groups`` elements of ``w``
-          biases (array): The biases to use per ``groups`` elements of ``w``
-          groups (int): The size of the group in ``w`` that shares a scale and
-            bias. (default: 128)
-          width (int): The bitwidth of the elements in ``w``. (default: 4)
+          scales (array): The scales to use per ``group_size`` elements of ``w``
+          biases (array): The biases to use per ``group_size`` elements of ``w``
+          group_size (int, optional): The size of the group in ``w`` that
+            shares a scale and bias. (default: 64)
+          bits (int, optional): The number of bits occupied by each element in
+            ``w``. (default: 4)
         Returns:
           result (array): The result of the multiplication of ``x`` with ``w``.
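The renamed docstring above describes the semantics of ``quantized_matmul``: multiply ``x`` by a matrix stored as small integers, where each group of ``group_size`` elements shares one floating point scale and bias. A plain-Python reference sketch of that computation (not the MLX kernel, and without the uint32 packing) might look like:

```python
# Reference semantics of a quantized matrix-vector product: dequantize each
# group of `group_size` elements with its (scale, bias) pair, then take the
# ordinary dot product. Function names here are illustrative, not MLX API.

def dequantize_row(w_q, scales, biases, group_size):
    """Recover a float row: each group of group_size ints shares one scale/bias."""
    out = []
    for g in range(len(w_q) // group_size):
        s, b = scales[g], biases[g]
        out.extend(s * q + b for q in w_q[g * group_size:(g + 1) * group_size])
    return out

def quantized_matvec(x, w_rows, scales, biases, group_size):
    """x (length K) times a quantized matrix given as rows of length K."""
    result = []
    for row, s_row, b_row in zip(w_rows, scales, biases):
        w = dequantize_row(row, s_row, b_row, group_size)
        result.append(sum(xi * wi for xi, wi in zip(x, w)))
    return result
```

For example, with ``group_size=2``, a row stored as ``[0, 3]`` with scale ``0.5`` and bias ``1.0`` dequantizes to ``[1.0, 2.5]``.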
@@ -3040,19 +3041,19 @@ void init_ops(py::module_& m) {
       &quantize,
       "w"_a,
       py::pos_only(),
-      "groups"_a = 128,
-      "width"_a = 4,
+      "group_size"_a = 64,
+      "bits"_a = 4,
       py::kw_only(),
       "stream"_a = none,
       R"pbdoc(
-        quantize(w: array, /, groups: int = 128, width: int = 4, *, stream: Union[None, Stream, Device] = None) -> Tuple[array, array, array]
+        quantize(w: array, /, group_size: int = 64, bits: int = 4, *, stream: Union[None, Stream, Device] = None) -> Tuple[array, array, array]
-        Quantize the matrix ``w`` using ``width`` bits per element.
+        Quantize the matrix ``w`` using ``bits`` bits per element.
-        Note, every ``groups`` elements in a row of ``w`` are quantized
+        Note, every ``group_size`` elements in a row of ``w`` are quantized
         together. Hence, number of columns of ``w`` should be divisible by
-        ``groups``. In particular, the rows of ``w`` are divided into groups of
-        size ``groups`` which are quantized together.
+        ``group_size``. In particular, the rows of ``w`` are divided into groups of
+        size ``group_size`` which are quantized together.
         .. warning::
@@ -3083,10 +3084,10 @@ void init_ops(py::module_& m) {
         Args:
           w (array): Matrix to be quantized
-          groups (int, optional): The size of the group in ``w`` that shares a
-            scale and bias. (default: 128)
-          width (int, optional): The bitwidth of the elements in ``w``.
-            (default: 4)
+          group_size (int, optional): The size of the group in ``w`` that shares a
+            scale and bias. (default: 64)
+          bits (int, optional): The number of bits occupied by each element of
+            ``w`` in the returned quantized matrix. (default: 4)
         Returns:
           (tuple): A tuple containing
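To make the ``quantize`` scheme above concrete, here is a hypothetical plain-Python sketch of group-wise affine quantization: each group of ``group_size`` values shares one scale and bias, and each value is rounded to a ``bits``-bit integer. MLX additionally packs the integers into unsigned 32-bit words, which is omitted here.

```python
# Illustrative group-wise affine quantization (not the MLX implementation).
# For each group: bias = min of the group, scale = range / (2**bits - 1),
# and each value is stored as round((v - bias) / scale).

def quantize_row(w, group_size=64, bits=4):
    levels = (1 << bits) - 1              # e.g. 15 steps for 4-bit elements
    q, scales, biases = [], [], []
    for g in range(0, len(w), group_size):
        group = w[g:g + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / levels or 1.0  # avoid div-by-zero on flat groups
        q.extend(round((v - lo) / scale) for v in group)
        scales.append(scale)
        biases.append(lo)
    return q, scales, biases
```

Note how the divisibility requirement in the docstring falls out naturally: every group must be exactly ``group_size`` elements wide for the per-group scale and bias arrays to line up.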
@@ -3102,15 +3103,15 @@ void init_ops(py::module_& m) {
       py::pos_only(),
       "scales"_a,
       "biases"_a,
-      "groups"_a = 128,
-      "width"_a = 4,
+      "group_size"_a = 64,
+      "bits"_a = 4,
       py::kw_only(),
       "stream"_a = none,
       R"pbdoc(
-        dequantize(w: array, /, scales: array, biases: array, groups: int = 128, width: int = 4, *, stream: Union[None, Stream, Device] = None) -> array
+        dequantize(w: array, /, scales: array, biases: array, group_size: int = 64, bits: int = 4, *, stream: Union[None, Stream, Device] = None) -> array
         Dequantize the matrix ``w`` using the provided ``scales`` and
-        ``biases`` and the ``groups`` and ``width`` configuration.
+        ``biases`` and the ``group_size`` and ``bits`` configuration.
         Formally, given the notation in :func:`quantize`, we compute
         :math:`w_i` from :math:`\hat{w_i}` and corresponding :math:`s` and
@@ -3122,14 +3123,14 @@ void init_ops(py::module_& m) {
         Args:
           w (array): Matrix to be quantized
-          scales (array): The scales to use per ``groups`` elements of ``w``
-          biases (array): The biases to use per ``groups`` elements of ``w``
-          groups (int, optional): The size of the group in ``w`` that shares a
-            scale and bias. (default: 128)
-          width (int, optional): The bitwidth of the elements in ``w``.
-            (default: 4)
+          scales (array): The scales to use per ``group_size`` elements of ``w``
+          biases (array): The biases to use per ``group_size`` elements of ``w``
+          group_size (int, optional): The size of the group in ``w`` that shares a
+            scale and bias. (default: 64)
+          bits (int, optional): The number of bits occupied by each element in
+            ``w``. (default: 4)
         Returns:
-          result (array): The dequantized version of w
+          result (array): The dequantized version of ``w``
       )pbdoc");
 }
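The ``dequantize`` docstring's formula, in the notation of :func:`quantize`, recovers each element as :math:`w_i = s \hat{w_i} + \beta` with one :math:`(s, \beta)` pair per ``group_size`` elements. A minimal standalone sketch of that formula (assumed names, not the MLX implementation):

```python
# Apply w_i = s * w_hat_i + beta element-wise, where each run of group_size
# elements shares one (scale, bias) pair. Illustrative only.

def dequantize(w_hat, scales, biases, group_size=64):
    out = []
    for i, q in enumerate(w_hat):
        g = i // group_size           # index of the group this element is in
        out.append(scales[g] * q + biases[g])
    return out
```

With the default ``bits=4``, the stored integers lie in ``[0, 15]``, so each dequantized group spans the interval from ``beta`` to ``beta + 15 * s``.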