Attention Mechanism Optimization

Creator: Seonglae Cho
Created: 2023 Oct 6 8:00
Edited: 2025 May 9 19:42
https://arxiv.org/pdf/1706.03762.pdf
Attention Mechanism Optimizations
 
 
 
Instead of giving each attention head its own key and value matrices, MQA (Multi-query Attention) shares a single key and value matrix across all heads. This shrinks the KV cache, but the modification does hurt model quality. GQA is a middle ground: rather than forcing every head in a layer to share one key/value pair, it partitions the heads into groups, and the heads within each group share the same key and value matrices. MLA goes further and reduces the KV cache size while improving quality: what if the model could learn to efficiently compress its own keys and values? MLA inserts an extra step between each attention head's input and its key and value matrices. The input is first projected down into a compressed latent space that is shared across heads, and that latent representation is then projected back up into per-head keys and values using another set of learned weights. This works because the heads' keys and values are largely similar, so caching only the shared latent representation is enough to reconstruct them while keeping the KV cache small. Interestingly, the shared latent space also yields better performance than the non-shared baseline, possibly because the shared bottleneck has a noise-reducing effect.
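A minimal PyTorch sketch of the two ideas above. The dimensions and weight names (wq, wk, w_down, w_up_k, d_latent, etc.) are illustrative assumptions, not the reference implementations; real MLA as used in DeepSeek-V2 also compresses queries and handles rotary embeddings with a separate decoupled key, which is omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, n_heads, d_head = 512, 8, 64
n_kv_heads = 2          # 8 = standard MHA, 1 = MQA, anything in between = GQA
d_latent = 128          # MLA compressed KV dimension (illustrative)
seq = 16

x = torch.randn(1, seq, d_model)   # (batch, seq, d_model)

# --- MQA / GQA: fewer KV heads than query heads ---
wq = nn.Linear(d_model, n_heads * d_head, bias=False)
wk = nn.Linear(d_model, n_kv_heads * d_head, bias=False)   # smaller K projection
wv = nn.Linear(d_model, n_kv_heads * d_head, bias=False)   # smaller V projection

q = wq(x).view(1, seq, n_heads, d_head).transpose(1, 2)       # (1, 8, 16, 64)
k = wk(x).view(1, seq, n_kv_heads, d_head).transpose(1, 2)    # (1, 2, 16, 64) -> this is what gets cached
v = wv(x).view(1, seq, n_kv_heads, d_head).transpose(1, 2)

# each group of n_heads // n_kv_heads query heads reuses the same K/V head
k = k.repeat_interleave(n_heads // n_kv_heads, dim=1)
v = v.repeat_interleave(n_heads // n_kv_heads, dim=1)
out_gqa = F.scaled_dot_product_attention(q, k, v)

# --- MLA: cache one shared latent, expand to per-head K/V on the fly ---
w_down = nn.Linear(d_model, d_latent, bias=False)             # shared compression
w_up_k = nn.Linear(d_latent, n_heads * d_head, bias=False)    # per-head expansion for K
w_up_v = nn.Linear(d_latent, n_heads * d_head, bias=False)    # per-head expansion for V

c = w_down(x)                      # (1, 16, 128) -> the KV cache stores only this latent
k = w_up_k(c).view(1, seq, n_heads, d_head).transpose(1, 2)
v = w_up_v(c).view(1, seq, n_heads, d_head).transpose(1, 2)
q = wq(x).view(1, seq, n_heads, d_head).transpose(1, 2)
out_mla = F.scaled_dot_product_attention(q, k, v)

Per token and layer, MHA caches 2 * n_heads * d_head values, GQA/MQA cache 2 * n_kv_heads * d_head, and this MLA sketch caches only d_latent (DeepSeek's actual design additionally caches a small decoupled RoPE key).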
Multi-head Attention Optimization
 
 
 
 
 
Sigmoid Attention, replacing the traditional softmax with a sigmoid and a constant bias (see the sketch after the link below)
Theory, Analysis, and Best Practices for Sigmoid Self-Attention
Attention is a key part of the transformer architecture. It is a sequence-to-sequence mapping that transforms each sequence element into a weighted sum of values. The weights are typically...
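A rough sketch of the idea, assuming the bias is a single scalar added before the sigmoid; the paper motivates a value around -log(n) for sequence length n, but treat the exact choice and the function name here as assumptions.

import math
import torch

def sigmoid_attention(q, k, v, bias=None):
    # q, k, v: (batch, heads, seq, d_head)
    n = q.size(-2)
    if bias is None:
        bias = -math.log(n)                      # constant bias, roughly -log(seq_len)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.sigmoid(scores + bias)       # elementwise; no normalization across keys as in softmax
    return weights @ v

q = k = v = torch.randn(1, 8, 16, 64)
out = sigmoid_attention(q, k, v)

Because each weight is computed independently, the rows no longer need to sum to 1, which is what the bias term compensates for at typical sequence lengths.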
How to make LLMs go fast
A guide to LLM inference and performance
To attain the full power of a GPU during LLM inference, you have to know if the inference is compute bound or memory bound. Learn how to better utilize GPU resources.

Optimization

Hugging Face Reads, Feb. 2021 - Long-range Transformers
 
 
 
