Mirror of https://github.com/ml-explore/mlx-examples.git, synced 2025-06-24 09:21:18 +08:00
* Pad mask with zeros for non-square attention matrices

  The current implementation of the mask assumes the attention matrix is square, which holds only when there is no cache. However, producing multiple tokens at a time, as in speculative decoding implementations, requires a rectangular mask. This change pads the bottom of the mask with zeros so that multi-token decoding with a cache works correctly.

* Directly create mask instead of padding

* Update llama.py
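For context, here is a minimal sketch of directly creating the rectangular mask, assuming MLX's `mlx.core` API; the function name, signature, and `-1e9` fill value are illustrative, not necessarily the repository's exact code. With `offset` cached tokens, the `N` new query rows attend over `offset + N` key columns:

```python
import mlx.core as mx

def create_additive_causal_mask(N: int, offset: int = 0) -> mx.array:
    # Absolute positions of the keys: cached tokens plus the new ones.
    rinds = mx.arange(offset + N)
    # Absolute positions of the N new query tokens.
    linds = mx.arange(offset, offset + N)
    # Query i may not attend to keys at strictly later positions.
    mask = linds[:, None] < rinds[None]
    # Additive mask: 0 where attention is allowed, -1e9 where blocked.
    return mask * -1e9

# Example: 3 new tokens with 5 cached tokens gives a (3, 8) mask whose
# first 5 columns are zero in every row, so new tokens see the full cache.
print(create_additive_causal_mask(3, offset=5))
```

Building the `(N, offset + N)` mask in one step reflects the "directly create mask instead of padding" change: the all-zero columns over the cached positions are exactly what padding a square causal mask would otherwise have produced.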
Files in this directory:

- __init__.py
- base.py
- cohere.py
- dbrx.py
- gemma.py
- llama.py
- minicpm.py
- mixtral.py
- olmo.py
- openelm.py
- phi3.py
- phi.py
- phixtral.py
- plamo.py
- qwen2_moe.py
- qwen2.py
- qwen.py
- stablelm.py
- starcoder2.py