Lost in the middle

Creator: Seonglae Cho
Created: 2024 Mar 2 3:45
Edited: 2024 May 1 3:35
Long-context support first appeared as a hack on Positional Embedding. The trigonometric absolute positional encoding worked well by separating positions across frequencies, but it fails to extrapolate: it only works well on sequence lengths the model saw often during training.
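The frequency-separation idea can be sketched as follows; this is a minimal NumPy version of the standard sinusoidal encoding (the `10000` base follows the original Transformer formulation):

```python
import numpy as np

def sinusoidal_pe(seq_len: int, d_model: int) -> np.ndarray:
    """Absolute positional encoding: each dimension pair rotates at a
    different frequency, so positions are distinguishable at many scales."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]      # (1, d_model / 2)
    angles = pos / (10000 ** (i / d_model))    # slower frequency as i grows
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dims: sine
    pe[:, 1::2] = np.cos(angles)               # odd dims: cosine
    return pe
```

The extrapolation problem is visible here: positions beyond the training length produce angle combinations the model has never seen.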
What appeared next was Relative Positional Encoding, which adjusts the attention score according to the relative distance between tokens.
RoPE is the representative example. Its distinguishing feature is that it encodes position with a vector rotation, where the angle between two tokens is determined by their distance and the max context window size. So, couldn't we train on short data first, then enlarge the model's context window while proportionally slowing the rotation speed, and fine-tune on long data, processing long inputs while preserving what was learned on short ones? Models trained at lengths of 2k and 4k worked well even when extended to 16k and 32k, without a significant drop in perplexity. Various position-interpolation methods exploiting this property of RoPE have been studied. There was a bright prospect that, instead of fine-tuning the model, it could be applied with RAG to any desired service as long as there was enough data, leveraging the transformer's in-context learning ability.
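The "proportionally slow the rotation" idea above can be sketched in a few lines. This is a simplified single-head version, not any particular library's implementation; the `scale` parameter is the interpolation factor (train length / target length):

```python
import numpy as np

def apply_rope(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Rotate each (even, odd) dimension pair of x by a position-dependent
    angle. scale < 1 implements position interpolation: position p rotates
    as position p * scale did during training."""
    seq_len, d = x.shape
    inv_freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))
    theta = np.outer(np.arange(seq_len) * scale, inv_freq)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Extending a 2k-trained model to 8k: scale = 2048 / 8192 = 0.25, so
# position 8191 rotates roughly as position 2048 did during training,
# keeping all angles inside the trained range.
```

With scale = 0.25, position 4 lands on exactly the same rotation angle position 1 had at training time, which is why the short-context weights transfer.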
Lost in the middle poured cold water on RoPE's prospects. It showed that models which extended their context via RoPE interpolation reference the beginning and end of the prompt well but fail to capture the middle. Performance could even be worse than when no data was given at all.
Google's Gemini is seeking a solution with Ring Attention.
Starting with Reformer and Performer, many efficient transformer architectures have poured out to support long context while cutting attention's O(N^2) cost, and some looked promising, but in the end it turned out that ordinary vanilla attention is best as data grows. If there is one lesson the transformer has kept teaching us since it first appeared, it is to follow the rules without tricks.
Could this be solved with middle-position interpolation in RoPE?

Dataset Approach FILM-7B

Lost in the Middle: How Language Models Use Long Contexts

