mlx-examples/llms/mlx_lm/models
Kevin Wang c0019c4908 Pad mask with zeros for non-square attention matrices (#715)
* Pad mask with zeros for non-square attention matrices

The current implementation of the mask assumes the attention matrix is square, which holds only when there is no cache. When producing multiple tokens at a time, as in speculative decoding implementations, the new queries attend to both the cached keys and the new keys, so the attention matrix is rectangular and the mask must be rectangular as well.

This change pads the mask with zeros over the cached positions so that multi-token decoding with a cache works correctly (see the sketch after the notes below).

* Directly create mask instead of padding

* Update llama.py
2024-05-04 16:32:25 -07:00
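
For reference, here is a minimal sketch of how such a rectangular additive causal mask can be created directly, per the second commit note above. It assumes MLX's `mlx.core` API; the helper name, signature, and the `-1e9` fill value are illustrative rather than guaranteed to match the merged code exactly.

```python
import mlx.core as mx

def create_additive_causal_mask(N: int, offset: int = 0) -> mx.array:
    # Rows index the N new queries; columns index all offset + N keys,
    # i.e. the cached positions followed by the new ones. With a cache
    # (offset > 0) the mask is rectangular: shape (N, offset + N).
    rinds = mx.arange(offset + N)
    linds = mx.arange(offset, offset + N) if offset else rinds
    # A query at absolute position offset + i may attend to keys at
    # positions <= offset + i. Cached columns (< offset) are always
    # visible, so they stay at zero; future columns get -1e9.
    mask = linds[:, None] < rinds[None]
    return mask * -1e9
```

With a cache of length L and N freshly sampled tokens, `create_additive_causal_mask(N, offset=L)` yields an (N, L + N) mask whose first L columns are zero, matching the shape of the rectangular attention scores.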