SAE Transferability

Creator: Seonglae Cho
Created: 2025 Jan 21 13:28
Edited: 2025 Feb 12 22:38
SAE Transferability Types

Transfer Learning across layers

By leveraging shared representations between adjacent layers, transfer learning can significantly reduce training cost and time compared with training a Sparse Autoencoder (SAE) from scratch. Backward transfer performed better than forward transfer, which can be understood as starting from prior knowledge of the later layer's computation results (see the sketch after the list below).
  • Forward SAE: initialize a later layer's SAE from an earlier layer's trained SAE
  • Backward SAE: initialize an earlier layer's SAE from a later layer's trained SAE
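
As a rough illustration of backward transfer, the sketch below warm-starts an earlier layer's SAE from a later layer's already trained SAE instead of random initialization, then fine-tunes it on the earlier layer's activations. The `SparseAutoencoder` class, `transfer_init` helper, dimensions, and training step are illustrative PyTorch assumptions, not the implementation referenced by the source.

```python
# Minimal sketch of backward SAE transfer across adjacent layers (assumptions noted above).
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Simple tied-shape SAE: linear encoder with ReLU, linear decoder."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        latent = torch.relu(self.encoder(x))      # sparse feature activations
        return self.decoder(latent), latent       # reconstruction and latent codes


def transfer_init(target: SparseAutoencoder, source: SparseAutoencoder) -> None:
    """Warm-start the target-layer SAE from a trained source-layer SAE.

    Backward transfer: source is the SAE of a *later* layer, target the earlier layer.
    """
    target.load_state_dict(source.state_dict())


# Hypothetical dimensions and data; replace with real layer activations.
d_model, d_hidden = 768, 768 * 8
sae_later = SparseAutoencoder(d_model, d_hidden)    # assume already trained on layer L+1
sae_earlier = SparseAutoencoder(d_model, d_hidden)  # to be fine-tuned on layer L
transfer_init(sae_earlier, sae_later)

optimizer = torch.optim.Adam(sae_earlier.parameters(), lr=1e-4)
activations = torch.randn(32, d_model)              # placeholder batch of layer-L activations

# One fine-tuning step: reconstruction loss plus an L1 sparsity penalty on the latents.
recon, latent = sae_earlier(activations)
loss = torch.mean((recon - activations) ** 2) + 1e-3 * latent.abs().mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the warm-started weights already encode features shared between adjacent layers, fine-tuning typically needs fewer steps than training from random initialization, which is the cost saving described above.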

Recommendations