Dense SAE Feature

Creator
Seonglae Cho
Created
2025 Dec 22 18:20
Edited
2025 Dec 24 0:05
A high activation density could mean either that sparsity was not properly learned, or that the feature is genuinely important and active in many contexts. In the Feature Browser, SAE features show higher interpretability in the upper activation
Quantile
, which exposes a limitation: SAE features are poorly interpretable at low activations, and their activation distributions are noticeably skewed.
However, the features with the highest
Activation Density
in the
Activation Distribution
are less interpretable, mainly because their activations are not high in absolute value (as opposed to quantile). A well-separated, highly interpretable SAE feature should not have a density that simply decays as the activation value grows; rather, after an initial decrease, its density should cluster again at high activation levels (a bimodal shape).
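
A minimal sketch of this shape check (assuming a 1-D NumPy array of one feature's activations; the `activation_modes` helper, bin count, and prominence threshold are illustrative, not from the source):

```python
import numpy as np
from scipy.signal import find_peaks

def activation_modes(acts, n_bins=50, smooth=3, min_prominence=0.05):
    """Return activation values of interior modes in a feature's
    nonzero-activation histogram. A density that only decays yields no
    interior mode; the interpretable shape described above yields a
    mode at a high activation value."""
    nonzero = acts[acts > 0]                  # SAE activations are mostly exact zeros
    counts, edges = np.histogram(nonzero, bins=n_bins, density=True)
    kernel = np.ones(smooth) / smooth         # moving-average smoothing against bin noise
    smoothed = np.convolve(counts, kernel, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=min_prominence * smoothed.max())
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[peaks]

# Toy check: a purely decaying density vs. one with a high-activation cluster
rng = np.random.default_rng(0)
decaying = rng.exponential(0.5, 10_000)
bimodal = np.concatenate([rng.exponential(0.3, 9_000),
                          rng.normal(4.0, 0.3, 1_000)])
print(activation_modes(decaying))  # usually empty: density only decreases
print(activation_modes(bimodal))   # a mode near 4: clustering at high activations
```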

Dense SAE Latents Are Features, Not Bugs

The claim: the residual stream contains directions that change next-token semantics (which word will appear), as well as directions that barely change semantics and instead modulate confidence/entropy (the sharpness of the output distribution). The paper shows that the latter (confidence control) is predominantly captured by dense SAE latents.
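
A toy illustration of that distinction (not from the paper): scaling the logits leaves the argmax token, the semantics, unchanged while lowering the entropy, i.e., the confidence channel:

```python
import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> torch.Tensor:
    p = F.softmax(logits, dim=-1)
    return -(p * p.log()).sum(dim=-1)

logits = torch.randn(1000)                    # toy next-token logits
sharpened = 2.0 * logits                      # confidence-only change
print(logits.argmax() == sharpened.argmax())  # tensor(True): same predicted token
print(entropy(logits), entropy(sharpened))    # entropy drops: distribution sharpens
```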
Dense latents capture a dense subspace that exists intrinsically in the residual stream: when an SAE is retrained on activations with that subspace projected out, almost no dense latents emerge, so they are not a training artifact. Dense latents also appear as antipodal pairs (± directions spanning a single axis); both checks are sketched below.
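
A minimal sketch of both checks (tensor names, shapes, and thresholds are assumptions, not the paper's code): projecting the dense-latent subspace out of residual-stream activations, and detecting antipodal decoder pairs among dense latents:

```python
import torch
import torch.nn.functional as F

def remove_dense_subspace(acts: torch.Tensor, dense_dirs: torch.Tensor) -> torch.Tensor:
    """Project (n_tokens, d_model) activations onto the orthogonal complement
    of the subspace spanned by the (k, d_model) dense-latent decoder directions."""
    Q, _ = torch.linalg.qr(dense_dirs.T)  # orthonormal basis of the dense subspace
    return acts - (acts @ Q) @ Q.T        # subtract the dense-subspace component

def antipodal_pairs(W_dec: torch.Tensor, density: torch.Tensor,
                    density_thresh: float = 0.1, cos_thresh: float = -0.9):
    """List index pairs of dense latents whose decoder directions are
    near-antipodal (cosine similarity below cos_thresh)."""
    dense = torch.where(density > density_thresh)[0]  # latents firing on >10% of tokens
    dirs = F.normalize(W_dec[dense], dim=-1)
    cos = dirs @ dirs.T                               # pairwise cosine similarities
    upper = torch.triu(torch.ones_like(cos, dtype=torch.bool), diagonal=1)
    i, j = torch.where(upper & (cos < cos_thresh))
    return [(dense[a].item(), dense[b].item()) for a, b in zip(i, j)]
```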
Role classification: position tracking, context binding, entropy regulation (nullspace,
Kernel
), alphabet/output signals, part-of-speech and semantic words, and PCA reconstruction. The nullspace was previously assumed to consist of meaningless garbage dimensions, but this result shows it serves as control channels the model deliberately uses.
One potential cause of dense features comes from the
Adam Optimizer
 
 
