RoPE

Creator: Seonglae Cho
Created: 2023 Jul 9 12:44
Edited: 2024 Jul 8 9:53

Rotary Positional Embedding

It is used in the LLaMA, GPT-NeoX, Mistral, and PaLM model families.
Positional interpolation between the start and end of the sentence (which can cause the Lost in the Middle problem)
Effectively encodes positional information in transformer-based language models (see the sketch below)
Fails to generalize past the sequence length it was trained on
The belief that LLMs with RoPE have a natural ability to handle long texts, even if they haven't encountered very long ones during training.
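
As a minimal sketch of this encoding (NumPy; the function name, head dimension, and positions are illustrative assumptions, not taken from a specific library), each 2D pair of a query/key vector is rotated by an angle proportional to its position, and the resulting dot product, i.e. the attention logit, ends up depending only on the offset between the two positions.

```python
import numpy as np

def rope_rotate(x: np.ndarray, pos: float, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to one query/key vector at position `pos`."""
    d = x.shape[-1]
    half = d // 2
    # One frequency per 2D pair, as in the RoPE formulation: theta_i = base^(-2i/d).
    freqs = base ** (-np.arange(half) * 2.0 / d)
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:half], x[half:]
    # Rotate each (x1[i], x2[i]) pair by its angle.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

rng = np.random.default_rng(0)
q, k = rng.standard_normal(64), rng.standard_normal(64)

# Same relative offset (8) at different absolute positions gives the same score.
s1 = rope_rotate(q, 10) @ rope_rotate(k, 2)
s2 = rope_rotate(q, 110) @ rope_rotate(k, 102)
print(np.isclose(s1, s2))  # True: the attention score only sees the relative distance
```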
What emerged was relative positional encoding, which changes the attention score calculation according to the relative distance between tokens. RoPE is the representative example: it indicates position with a vector rotation, so the attention between two tokens depends on their relative distance, and the angle rotated across the maximum context window size is fixed by the rotation speed.

So, wouldn't it be possible to handle long data while preserving what was learned on short data by first training on short data, then enlarging the model's context window size and proportionally reducing the rotation speed while fine-tuning on long data? A model trained at lengths of 2k and 4k worked well, without a significant drop in perplexity, even when extended to 16k and 32k. Various position interpolation methods exploiting this characteristic of RoPE have been studied. Instead of fine-tuning the model, there was a bright prospect that, given enough data, it could be combined with RAG and applied to any desired service by leveraging the transformer's in-context learning ability.
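
Below is a minimal sketch of that linear position interpolation idea, assuming a model trained at 4k context being extended to 16k (the lengths, function name, and dimensions are illustrative, not from a specific implementation): positions are rescaled by train_ctx / target_ctx, which is the same as rotating proportionally more slowly, so every angle stays within the range seen during training.

```python
import numpy as np

def rope_angles(pos: float, dim: int = 64, base: float = 10000.0) -> np.ndarray:
    """Rotation angle of each 2D pair for a token at a (possibly fractional) position."""
    freqs = base ** (-np.arange(dim // 2) * 2.0 / dim)
    return pos * freqs

TRAIN_CTX, TARGET_CTX = 4096, 16384   # trained on 4k, extended to 16k
scale = TRAIN_CTX / TARGET_CTX        # 0.25: rotate four times more slowly

# A token at position 12000 (far outside the 4k training range) now gets the
# same angles a token at position 3000 had during training.
interpolated = rope_angles(12000 * scale)
print(np.allclose(interpolated, rope_angles(3000)))  # True
```

After this rescaling, the recipe described above applies: shrink the rotation speed in proportion to the new window, then fine-tune briefly on long data so the model adapts to the denser positions.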
RoPE Extensions

LongRoPE
