In-context Representation

Creator: Seonglae Cho
Created: 2025 Jan 12 14:38
Edited: 2025 Jan 12 14:54
Refs
The authors explored whether semantics can be reconstructed in a new context, specifically whether the model can reorganize its representational structure when the context requires concepts to play roles different from their pre-trained meanings.
Each graph node is mapped to a concept learned during pre-training (e.g. apple, bird), and the model must predict the next node according to the graph structure given in context. As the context length increases, the model's internal representations shift from the pre-trained semantic structure to the graph structure defined by the context, as sketched below.
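A minimal sketch of this setup, as an illustrative reconstruction rather than the paper's actual code: concept tokens are assigned to nodes of a ring graph, and the in-context prompt is a random walk over valid edges, so the next-node prediction task depends only on the context-defined structure. The concept list, ring topology, and the `random_walk_context` helper are assumptions made for illustration.

```python
import random

# Hypothetical setup (not the paper's code): nodes are ordinary pre-trained
# concept tokens, but the context redefines them as positions on a ring graph,
# so their in-context role differs from their pre-trained semantics.
concepts = ["apple", "bird", "car", "moon", "sand", "wolf", "ice", "rope"]
n = len(concepts)
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring adjacency

def random_walk_context(steps: int = 512, seed: int = 0) -> str:
    """Build an in-context prompt by random-walking the ring graph.

    The prompt contains only valid edge transitions, so predicting the next
    concept token is solvable purely from the graph structure given in
    context rather than from pre-trained word meanings.
    """
    rng = random.Random(seed)
    node = rng.randrange(n)
    walk = [node]
    for _ in range(steps - 1):
        node = rng.choice(neighbors[node])
        walk.append(node)
    return " ".join(concepts[i] for i in walk)

prompt = random_walk_context()
# `prompt` would be fed to an LLM; next-token accuracy (and the geometry of
# the node tokens' hidden states) indicates whether the model has adopted
# the context-defined graph structure.
```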
Dirichlet energy, computed from the distances between node representations and weighted by the graph's adjacency matrix, was used to measure how closely the representations align with the graph's connectivity. The representational reorganization can be described as an energy-optimization process (Dirichlet energy minimization): as the context size increases, the representations of graph nodes form increasingly connectivity-based structure while the energy decreases.
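As a rough illustration of the energy measure, assuming Dirichlet energy in its standard graph form (the sum of squared distances between representations of adjacent nodes), the sketch below computes it from a matrix of per-node hidden states and the context graph's adjacency matrix. The `dirichlet_energy` function and the choice of representation (e.g. the mean residual-stream activation at each concept token) are assumptions, not the paper's exact procedure.

```python
import numpy as np

def dirichlet_energy(node_reps: np.ndarray, adjacency: np.ndarray) -> float:
    """Dirichlet energy of node representations over a graph.

    node_reps: (n_nodes, d) matrix of per-node hidden states, e.g. the mean
               activation of each concept token extracted from the model.
    adjacency: (n_nodes, n_nodes) symmetric 0/1 adjacency matrix of the
               context graph.

    The energy sums squared distances between representations of adjacent
    nodes; low energy means connected nodes have similar representations,
    i.e. the internal geometry mirrors the graph's connectivity.
    """
    diffs = node_reps[:, None, :] - node_reps[None, :, :]   # pairwise h_i - h_j
    sq_dists = np.sum(diffs ** 2, axis=-1)                  # ||h_i - h_j||^2
    return 0.5 * float(np.sum(adjacency * sq_dists))        # each edge counted once

# Hypothetical usage: recompute the energy from freshly extracted node
# representations at increasing context lengths; the reported finding
# corresponds to this value decreasing as graph-aligned structure emerges.
```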
Scaling context size can flexibly reorganize model representations, possibly unlocking novel capabilities. This suggests that LLMs can achieve scaling through Language Model Context scaling and LM Context Extending, and that such scaling improves the connectivity and reconstruction of concepts in in-context learning and representation.