Gated SAE
While the L1 penalty enforces sparsity, it also systematically shrinks feature activations (shrinkage), distorting important features and making it hard to reconstruct the data faithfully. The Gated SAE addresses this by splitting the encoder into a gating path and a magnitude path:
- The L1 penalty is applied only to the gating path, which selects which features to activate
- The activation magnitude of the selected features is computed by a separate path with no L1 penalty
In other words, feature activations stay sparse while the L1 penalty no longer shrinks their actual magnitudes (see the sketch below).
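A minimal PyTorch sketch of this two-path idea (the module and names here are illustrative assumptions; the original paper additionally ties the magnitude weights to the gating weights via a learned per-feature rescaling, which is omitted for brevity):

```python
# Minimal sketch of a Gated SAE forward pass and loss (assumed names/shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedSAE(nn.Module):
    def __init__(self, d_model: int, d_sae: int, l1_coeff: float = 1e-3):
        super().__init__()
        # Untied W_gate / W_mag for clarity; the paper shares them up to a rescale.
        self.W_gate = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.W_mag = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_gate = nn.Parameter(torch.zeros(d_sae))
        self.b_mag = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.l1_coeff = l1_coeff

    def forward(self, x: torch.Tensor):
        x_centered = x - self.b_dec

        # Gating path: decides WHICH features are active (hard 0/1 mask).
        pi_gate = x_centered @ self.W_gate + self.b_gate
        gate = (pi_gate > 0).float()

        # Magnitude path: decides HOW STRONGLY the selected features fire.
        pi_mag = x_centered @ self.W_mag + self.b_mag
        mag = F.relu(pi_mag)

        feats = gate * mag                      # sparse feature activations
        x_hat = feats @ self.W_dec + self.b_dec

        # Sparsity: L1 is applied only to the gating pre-activations,
        # so it does not shrink the magnitudes used for reconstruction.
        recon_loss = F.mse_loss(x_hat, x)
        sparsity_loss = self.l1_coeff * F.relu(pi_gate).sum(dim=-1).mean()

        # Auxiliary loss: trains the gating path to reconstruct x through a
        # frozen copy of the decoder (the hard gate itself passes no gradient).
        x_hat_gate = F.relu(pi_gate) @ self.W_dec.detach() + self.b_dec.detach()
        aux_loss = F.mse_loss(x_hat_gate, x)

        return x_hat, feats, recon_loss + sparsity_loss + aux_loss
```

Usage would look like `x_hat, feats, loss = GatedSAE(d_model=768, d_sae=24576)(activations)`, with `loss.backward()` driving training as usual.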
Gated SAEs

Anthropic analysis
Circuits Updates - June 2024
We report a number of developing ideas on the Anthropic interpretability team, which might be of interest to researchers working actively in this space. Some of these are emerging strands of research where we expect to publish more on in the coming months. Others are minor points we wish to share, since we're unlikely to ever write a paper about them.
https://transformer-circuits.pub/2024/june-update/index.html#topk-gated-comparison
Gated SAE
Comparison against OpenAI's TopK SAE

Seonglae Cho