Attention Mechanism Optimizations
Sparse Attention
Monarch Mixer
Flash Attention
Dilated Attention
PagedAttention
Grouped Query Attention
Multi Query Attention
Clustered attention
Layer Selective Rank Reduction
KV Cache
FAVOR+
Chunk Attention
Memory-efficient Attention
Gated Attention
FlexAttention
Selective Attention
Fire Attention
FNet
Instead of maintaining unique key and value matrices for each attention head, MQA shares a single key and value matrix across all heads. However, this modification does hurt model quality. GQA, rather than forcing all attention heads in a layer to share the same key and value matrices (as Multi-Query Attention does), creates multiple groups of attention heads, with the heads in each group sharing one set of key and value matrices. MLA reduces the KV cache size while at the same time improving quality. The idea: what if the model could learn to compress its own keys and values efficiently? MLA adds an extra step between each attention head's input and its key and value matrices: the input is first projected down into a compressed, shared latent space, and that latent representation is then projected back up into keys and values using another set of learned weights for each head. This works because the attention heads produce similar keys and values, so caching only the shared latent representation is an efficient way to shrink the KV cache. Furthermore, the shared latent space yields better quality than the non-shared baseline, which may be due to a noise-reduction effect of sharing the latent space.
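A minimal PyTorch sketch of the MLA-style down/up projection described above: only the small shared latent is cached, and per-head keys and values are rebuilt from it at attention time. This is illustrative only; it ignores details of real MLA implementations such as RoPE handling and query-side compression, and names like kv_down, k_up, and latent_dim are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentKVAttention(nn.Module):
    """Sketch of MLA-style KV compression: cache one small latent per token
    instead of full per-head keys and values. Sizes and names are illustrative."""
    def __init__(self, d_model=512, n_heads=8, latent_dim=64):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        # Down-projection: one shared latent for all heads (this is what gets cached).
        self.kv_down = nn.Linear(d_model, latent_dim)
        # Up-projections: learned weights that rebuild per-head keys and values from the latent.
        self.k_up = nn.Linear(latent_dim, d_model)
        self.v_up = nn.Linear(latent_dim, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, D = x.shape
        # Compress the input into the shared latent space (the KV cache stores only this).
        latent = self.kv_down(x)                               # (B, T, latent_dim)
        if latent_cache is not None:
            latent = torch.cat([latent_cache, latent], dim=1)  # append to cached latents
        S = latent.size(1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        # Project the shared latent back up into per-head keys and values.
        k = self.k_up(latent).view(B, S, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v_up(latent).view(B, S, self.n_heads, self.head_dim).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v)          # standard attention on rebuilt K/V
        out = out.transpose(1, 2).reshape(B, T, D)
        return self.out_proj(out), latent                      # latent doubles as the new cache

The cache entry per token is latent_dim values instead of n_heads * head_dim keys plus the same number of values, which is where the memory saving comes from.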
Multi-head Attention Optimization
Sigmoid Attention: replaces the traditional row-wise softmax over attention scores with an elementwise sigmoid plus a constant bias term
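A small sketch of the idea, assuming PyTorch; the constant bias of -log(sequence length) follows the choice reported in the sigmoid-attention literature, and the function name is illustrative.

import math
import torch

def sigmoid_attention(q, k, v):
    """q, k, v: (batch, heads, seq_len, head_dim).
    Softmax is replaced by an elementwise sigmoid plus a constant bias."""
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)
    bias = -math.log(seq_len)               # constant bias keeps total row weight roughly in range
    weights = torch.sigmoid(scores + bias)  # no row-wise normalization, unlike softmax
    return weights @ v

Because each score is squashed independently, there is no normalization across the row, which removes the row-wise reduction that softmax requires.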