Weight Interpretability

Creator: Seonglae Cho
Created: 2024 Nov 18 22:24
Edited: 2025 Dec 22 23:39

Parameter Interpretability

Weights are vectors in parameter space. Attribution captures the effect of a weight, whereas a feature captures the effect of a representation. The motivation for weight similarity is to avoid distinct components sharing parameters.
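As a concrete illustration, one way to quantify weight similarity is to treat each component as a flattened vector in parameter space and compare directions. A minimal sketch in PyTorch (the function name and the nn.Linear components are illustrative, not from a specific paper):

```python
import torch
import torch.nn as nn

def weight_cosine_similarity(a: nn.Module, b: nn.Module) -> float:
    """Cosine similarity between two components viewed as vectors in
    parameter space (requires identically shaped parameters)."""
    va = torch.cat([p.detach().flatten() for p in a.parameters()])
    vb = torch.cat([p.detach().flatten() for p in b.parameters()])
    return torch.nn.functional.cosine_similarity(va, vb, dim=0).item()

# Example: compare two attention-head projections; near-zero similarity
# suggests the components occupy distinct parameter directions rather
# than sharing them.
head_a = nn.Linear(64, 64, bias=False)
head_b = nn.Linear(64, 64, bias=False)
print(weight_cosine_similarity(head_a, head_b))
```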
Weight Interpretability Notion

Weight Interpretability Methods
Bilinear MLPs (arxiv.org)
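A minimal sketch of a bilinear layer, assuming the formulation out = P((Wx) ⊙ (Vx)): because the elementwise product of two linear maps replaces the usual nonlinearity, each output coordinate is an exact quadratic form in the input, computable from the weights alone (class and dimension names are illustrative):

```python
import torch
import torch.nn as nn

class BilinearMLP(nn.Module):
    """Bilinear MLP: out = P((W x) * (V x)). With no elementwise
    nonlinearity, the whole layer is a third-order tensor in the weights
    and can be analyzed without collecting activations."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.W = nn.Linear(d_model, d_hidden, bias=False)
        self.V = nn.Linear(d_model, d_hidden, bias=False)
        self.P = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.P(self.W(x) * self.V(x))

# For any output direction u, the layer's output along u is x^T B x,
# where the interaction matrix B is built purely from W, V, P.
layer = BilinearMLP(16, 64)
u = torch.randn(16)
B = torch.einsum("h,hi,hj->ij",
                 layer.P.weight.T @ u,   # per-hidden-unit readout weight
                 layer.W.weight, layer.V.weight)
x = torch.randn(16)
assert torch.allclose(layer(x) @ u, x @ B @ x, atol=1e-4)
```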
Achille and Soatto (2018) studied the amount of information stored in the weights of deep networks.
Emergence of Invariance and Disentanglement in Deep Representations (arxiv.org)
Using established principles from statistics and information theory, the authors show that invariance to nuisance factors in a deep neural network is equivalent to information minimality of the learned representation.
There is little superposition in parameter space, which makes linearity in parameter space a reasonable working assumption.
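One common way to probe this assumption (a generic linear-interpolation check, not the cited paper's method) is to evaluate the loss along the straight line between two checkpoints in parameter space; model_a, model_b, and loss_fn are assumed inputs:

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def interpolation_losses(model_a: nn.Module, model_b: nn.Module,
                         loss_fn, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Evaluate loss along theta(a) = (1 - a) * theta_a + a * theta_b.
    A flat, low-loss path is evidence that linear structure in weights
    is a reasonable assumption. Both models must share an architecture;
    loss_fn is any callable mapping a model to a scalar loss."""
    model = copy.deepcopy(model_a)
    pa = dict(model_a.named_parameters())
    pb = dict(model_b.named_parameters())
    losses = []
    for a in alphas:
        for name, p in model.named_parameters():
            p.copy_((1 - a) * pa[name] + a * pb[name])
        losses.append(loss_fn(model))
    return losses
```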
Transformers contain a core subnetwork with very few parameters (≈10 million) that almost perfectly performs bigram (previous-token-only) next-token prediction, achieving r > 0.95 bigram reproduction even in models up to 1B parameters. These parameters, concentrated primarily in the first MLP layer, are essential to model performance: ablating them causes performance to collapse dramatically.
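A hypothetical way to measure such bigram reproduction: correlate the model's next-token distribution given only the previous token with empirical bigram frequencies from a corpus. The callable next_token_probs is an assumption standing in for a real model:

```python
import numpy as np

def bigram_reproduction_r(corpus_ids: np.ndarray, vocab_size: int,
                          next_token_probs) -> float:
    """Pearson correlation between empirical bigram probabilities and the
    model's next-token probabilities under a single-token context.
    next_token_probs(t) is assumed to return a (vocab_size,) probability
    vector for the one-token prompt [t]."""
    counts = np.zeros((vocab_size, vocab_size))
    for prev, nxt in zip(corpus_ids[:-1], corpus_ids[1:]):
        counts[prev, nxt] += 1
    seen = counts.sum(axis=1) > 0  # only tokens observed as a context
    empirical = counts[seen] / counts[seen].sum(axis=1, keepdims=True)
    model = np.stack([next_token_probs(t) for t in np.flatnonzero(seen)])
    return float(np.corrcoef(empirical.ravel(), model.ravel())[0, 1])
```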
The first layer induces a sharp rotation from current-token space to next-token space: it simply reorients activations from a coordinate system that describes the current token to one that describes the next token. This serves as a minimal starting point for complex circuit analysis (a minimal circuit).
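A hypothetical probe of this rotation, using GPT-2 through Hugging Face transformers (an assumption; the cited work's models and methodology may differ): compare each position's residual vector, before and after the first block, against the embedding of the current token and of the actual next token:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The quick brown fox jumps over the lazy dog",
          return_tensors="pt").input_ids
with torch.no_grad():
    # hidden_states[0] = embeddings, hidden_states[1] = after block 0
    hs = model(ids, output_hidden_states=True).hidden_states

E = model.transformer.wte.weight  # GPT-2 ties wte with the unembedding
cos = torch.nn.functional.cosine_similarity
cur, nxt = ids[0, :-1], ids[0, 1:]

# If the first layer rotates toward next-token space, alignment with the
# next token's direction should rise after block 0.
for name, h in (("before block 0", hs[0][0, :-1]),
                ("after block 0 ", hs[1][0, :-1])):
    print(name,
          "cos(current):", cos(h, E[cur]).mean().item(),
          "cos(next):", cos(h, E[nxt]).mean().item())
```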
 

Recommendations