Attention Mechanism Optimization

Creator
Seonglae Cho
Created
2023 Oct 6 8:00
Edited
2025 May 9 19:42
https://arxiv.org/pdf/1706.03762.pdf
Attention Mechanism Optimizations
 
 
 
Instead of giving each attention head its own key and value matrices, MQA (Multi-query Attention) shares a single key and value matrix across all heads. This shrinks the KV cache, but the modification does hurt model quality. GQA is the middle ground: rather than forcing every attention head in a layer to share the same key and value matrices, it partitions the heads into groups, and only the heads within a group share keys and values. MLA reduces the KV cache size while at the same time improving performance, by letting the model learn to compress its own keys and values. MLA adds an extra step between each attention head’s input and its key and value matrices: the input is first projected down into a compressed latent space shared by all heads, and that latent is then projected back up into per-head keys and values with another set of learned weights. This works because the heads’ keys and values are largely similar, so caching a single shared latent per token is both sufficient and much cheaper than caching full per-head keys and values. Moreover, the shared latent space outperforms the non-shared variant, which may come from a noise-reduction effect of the shared bottleneck.
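As a rough illustration only, here is a minimal PyTorch-style sketch of both ideas (the class names, dimensions, and the omission of masking and actual KV caching are assumptions for brevity, not the implementations from the original papers): n_kv_heads controls how many heads share keys and values, so n_kv_heads = 1 recovers MQA and n_kv_heads = n_heads recovers standard multi-head attention, while the second module caches only a small shared latent per token and re-expands it into per-head keys and values in the spirit of MLA.

```python
# Minimal sketch (assumed names/shapes; no masking or KV caching shown).
import torch
import torch.nn as nn


class GroupedQueryAttention(nn.Module):
    """n_kv_heads = n_heads -> standard MHA, n_kv_heads = 1 -> MQA."""

    def __init__(self, d_model: int, n_heads: int, n_kv_heads: int):
        super().__init__()
        assert d_model % n_heads == 0 and n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, n_heads * self.d_head)
        # Only n_kv_heads sets of K/V projections -> smaller KV cache.
        self.k_proj = nn.Linear(d_model, n_kv_heads * self.d_head)
        self.v_proj = nn.Linear(d_model, n_kv_heads * self.d_head)
        self.o_proj = nn.Linear(n_heads * self.d_head, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_proj(x).view(B, T, self.n_kv_heads, self.d_head).transpose(1, 2)
        v = self.v_proj(x).view(B, T, self.n_kv_heads, self.d_head).transpose(1, 2)
        # Each group of query heads reuses the same K/V head.
        group = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(group, dim=1)
        v = v.repeat_interleave(group, dim=1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        return self.o_proj((attn @ v).transpose(1, 2).reshape(B, T, -1))


class LatentKVAttention(nn.Module):
    """MLA-flavoured: cache one small latent per token instead of full K/V."""

    def __init__(self, d_model: int, n_heads: int, d_latent: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # shared compression step
        self.k_up = nn.Linear(d_latent, d_model)     # per-head up-projections, stacked
        self.v_up = nn.Linear(d_latent, d_model)
        self.o_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        c = self.kv_down(x)  # (B, T, d_latent): this latent is what would be cached
        k = self.k_up(c).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        return self.o_proj((attn @ v).transpose(1, 2).reshape(B, T, -1))
```

For example, GroupedQueryAttention(d_model=512, n_heads=8, n_kv_heads=2) stores only 2 of the 8 K/V heads in the cache, while LatentKVAttention(512, 8, d_latent=64) caches a 64-dimensional latent per token regardless of the head count.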
Multi-head Attention Optimization
 
 
 
 
 
Sigmoid Attention replaces the traditional softmax with an elementwise sigmoid plus a constant bias, so each attention weight is computed independently rather than normalized across keys.
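A minimal sketch of the idea, assuming a bias of roughly -log(sequence length) (the commonly suggested default); the function name and the exact bias are illustrative, not prescriptive:

```python
import math
import torch


def sigmoid_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Elementwise sigmoid with a constant bias instead of softmax.
    A bias of about -log(seq_len) keeps the initial per-row attention mass
    comparable to softmax's uniform 1/n weighting (assumed default)."""
    n, d = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    weights = torch.sigmoid(scores - math.log(n))  # no normalization across keys
    return weights @ v
```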

Optimization

 
 
 
