LLMLingua

In terms of information entropy, tokens with lower perplexity (PPL) contribute less to the overall information content of the prompt: a token the language model predicts with high confidence carries little self-information. In other words, removing tokens with lower perplexity has a relatively minor impact on the LLM's comprehension of the context.
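
As a rough sketch of this idea (not LLMLingua's actual budget controller or iterative compression pipeline), the snippet below scores each token's surprisal with GPT-2 via Hugging Face transformers and drops the most predictable tokens. The choice of GPT-2, the `keep_ratio` parameter, and the `token_surprisal`/`compress` helpers are all illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical setup: any small causal LM can serve as the scoring model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisal(text: str) -> list[tuple[str, float]]:
    """Score each token with -log p(token | prefix); higher = more informative."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, so shift by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.numel()), targets]
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), nll.tolist()))

def compress(text: str, keep_ratio: float = 0.5) -> str:
    """Keep only the highest-surprisal tokens (the first token is never scored, so it is dropped)."""
    scored = token_surprisal(text)
    k = max(1, int(len(scored) * keep_ratio))
    cutoff = sorted(s for _, s in scored)[-k]  # k-th largest surprisal
    kept = [tok for tok, s in scored if s >= cutoff]
    return tokenizer.convert_tokens_to_string(kept)

print(compress("The quick brown fox jumps over the lazy dog because it is bored."))
```

Predictable function words ("the", "over") tend to get low surprisal scores and are pruned first, while content-bearing tokens survive, which is the intuition behind PPL-based prompt compression.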
