Gated SAE
While the L1 loss enforces sparsity, it also causes shrinkage: it pulls feature activations toward zero, systematically underestimating their true magnitudes and making it harder to faithfully reconstruct the data. Gated SAEs address this by splitting the encoder into two paths:
- The L1 penalty is applied only to the gating path, which selects which features to activate
- The magnitudes of the selected features come from a separate path that carries no L1 penalty
In other words, the feature activations stay sparse, but the L1 penalty can no longer shrink the values that actually enter the reconstruction.
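A minimal sketch of the two-path forward pass, assuming PyTorch. The dimension names (`d_model`, `d_sae`), the per-feature rescale `r_mag` used to share encoder weights between the two paths, and the omission of the paper's auxiliary reconstruction loss are simplifications of mine, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSAE(nn.Module):
    """Sketch of a Gated SAE: gate path picks features, magnitude path scales them."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.r_mag = nn.Parameter(torch.zeros(d_sae))   # per-feature rescale for magnitude path
        self.b_gate = nn.Parameter(torch.zeros(d_sae))
        self.b_mag = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        x_cent = x - self.b_dec
        # Gate path: decides WHICH features are active; the L1 penalty lands here.
        pi_gate = x_cent @ self.W_enc + self.b_gate
        gate = (pi_gate > 0).float()                    # binary on/off decision
        # Magnitude path: decides HOW STRONG each active feature is; no L1 here.
        mag = F.relu(x_cent @ (self.W_enc * self.r_mag.exp()) + self.b_mag)
        f = gate * mag                                  # sparse, unshrunken activations
        x_hat = f @ self.W_dec + self.b_dec
        # Sparsity penalty only on the gate pre-activations, so it cannot
        # shrink the magnitudes that enter the reconstruction.
        l1 = F.relu(pi_gate).sum(-1).mean()
        return x_hat, f, l1
```

Note that the binary gate is non-differentiable; in the paper the gate path is trained through the L1 term plus an auxiliary reconstruction loss, which this sketch leaves out.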
JumpReLU SAE with Unit step function
Does this mean the gating mechanism can be implemented efficiently as a JumpReLU activation, i.e. a ReLU composed with a unit step at a learned threshold? Apparently yes: when the gate and magnitude paths share tied weights, the gated encoder is equivalent to a JumpReLU, z * H(z - θ), which passes values above the threshold through unshrunk.
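A tiny numeric sketch of the JumpReLU activation, assuming PyTorch; the example values and thresholds are made up:

```python
import torch

def jumprelu(pre: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    # JumpReLU(z) = z * H(z - theta): the unit step H gates features on/off,
    # and values above the per-feature threshold pass through without shrinkage.
    return torch.where(pre > theta, pre, torch.zeros_like(pre))

pre = torch.tensor([-0.5, 0.2, 0.8, 1.5])    # pre-activations for 4 features
theta = torch.tensor([0.0, 0.5, 0.5, 0.5])   # learned per-feature thresholds
print(jumprelu(pre, theta))                  # tensor([0.0000, 0.0000, 0.8000, 1.5000])
```

Since the step function has zero gradient almost everywhere, the thresholds cannot be trained by plain backprop; the JumpReLU SAE work trains them with straight-through estimators, which this sketch ignores.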
Related reading
- Anthropic's analysis of Gated SAEs
- OpenAI's comparison against TopK SAEs