Gated SAE

Creator: Seonglae Cho
Created: 2024 Oct 24 0:13
Edited: 2025 Mar 28 12:00
Refs


While the L1 loss enforces sparsity, it also shrinks the magnitudes of important feature activations (shrinkage), making it harder to faithfully reconstruct the data. Gated SAEs address this by splitting the encoder into two paths:
  • The L1 penalty is applied only to the gating path, which selects which features activate
  • The activation magnitude of each selected feature is computed by a separate path with no L1 penalty
In other words, feature activations stay sparse while the L1 penalty no longer shrinks their actual magnitudes.
Gated SAEs
https://www.lesswrong.com/posts/EWhA4pyfrbdSkCd4G/evaluating-sparse-autoencoders-with-board-game-models
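The two-path split above can be sketched as a forward pass. This is a minimal NumPy sketch: the parameterization (gate and magnitude paths sharing an encoder matrix via a per-feature rescaling `r_mag`) follows the Gated SAE setup, but all parameter values here are random/illustrative, not trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32

# Illustrative (untrained) parameters
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
r_mag = np.zeros(d_sae)        # weight sharing: magnitude path uses W_enc * exp(r_mag)
b_gate = np.zeros(d_sae)
b_mag = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
b_dec = np.zeros(d_model)

def gated_sae_forward(x):
    x_cent = x - b_dec
    # Gate path: decides WHICH features fire (this is where the L1 penalty acts,
    # on ReLU(pi_gate), during training)
    pi_gate = x_cent @ W_enc + b_gate
    f_gate = (pi_gate > 0).astype(x.dtype)   # Heaviside step: binary on/off
    # Magnitude path: decides HOW MUCH each selected feature fires (no L1 penalty)
    f_mag = np.maximum(x_cent @ (W_enc * np.exp(r_mag)) + b_mag, 0.0)
    f = f_gate * f_mag                       # gated feature activations
    x_hat = f @ W_dec + b_dec                # reconstruction
    return x_hat, f

x = rng.normal(size=(4, d_model))
x_hat, f = gated_sae_forward(x)
```

Because the gate only passes a binary mask forward, the L1 pressure on the gate pre-activations cannot shrink the magnitudes produced by the second path.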
 

JumpReLU SAE with a unit step (Heaviside) function

Does this mean the gating mechanism is efficiently implemented using a JumpReLU activation?
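A minimal sketch of that connection (hedged: `theta` here is a per-feature learned threshold, and reading the Gated SAE as a JumpReLU assumes the gate and magnitude paths share encoder weights, so the step function and the magnitude act on the same pre-activation):

```python
import numpy as np

def jumprelu(z, theta):
    # JumpReLU: identity above a learned threshold theta, zero at or below it.
    # The Heaviside step (z > theta) plays the role of the Gated SAE's gate,
    # while z itself carries the unshrunk magnitude.
    return z * (z > theta)

z = np.array([-1.0, 0.2, 0.5, 2.0])
out = jumprelu(z, theta=0.4)  # only entries above the threshold survive
```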
 
 
 
 
Anthropic analysis of Gated SAE
OpenAI comparison against TopK
 
 
