AutoSteer

Creator: Seonglae Cho
Created: 2024 Dec 19 23:57
Edited: 2025 Jan 1 21:55

RL-based SAE activation control

It improves LLM performance at interacting with an environment using PPO-like RL techniques, without fine-tuning. I suspect this kind of automatic policy learning could help an LLM adapt to new tasks (a minimal sketch of the steering mechanism follows the links below).
Sweet Spot of Feature Steering
sae-rl (Jazhyc, updated 2024 Dec 8 20:40)
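
To make this concrete, here is a minimal sketch of the underlying steering primitive, assuming a PyTorch model and a trained SAE whose decoder matrix has shape (n_features, d_model). The layer path, `feature_idx`, and `alpha` in the usage comment are illustrative assumptions, not the repo's actual API.

```python
import torch

def make_steering_hook(sae_decoder: torch.Tensor, feature_idx: int, alpha: float):
    """Forward hook that shifts the residual stream along one SAE feature.

    sae_decoder: (n_features, d_model) decoder matrix of a trained SAE.
    feature_idx: which labeled feature to steer.
    alpha:       steering strength; positive promotes the feature,
                 negative suppresses it.
    """
    direction = sae_decoder[feature_idx]
    direction = direction / direction.norm()  # unit-norm feature direction

    def hook(module, inputs, output):
        # Transformer blocks often return tuples; the hidden states come first.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * direction.to(device=hidden.device, dtype=hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    return hook

# Hypothetical usage on a GPT-2-style model (layer path is an assumption):
# handle = model.transformer.h[8].register_forward_hook(
#     make_steering_hook(sae.W_dec, feature_idx=1337, alpha=4.0))
# ... run generation with the weights untouched ...
# handle.remove()
```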
AutoSteer: Weight-Preserving Reinforcement Learning for Interpretable Model Control
Traditional fine-tuning methods for language models, while effective, often disrupt internal model features that could provide valuable insights into model behavior. We present a novel approach combining Reinforcement Learning (RL) with Activation Steering to modify model behavior while preserving interpretable features discovered through Sparse Autoencoders. Our method automates the typically manual process of activation steering by training an RL agent to manipulate labeled model features, enabling targeted behavior modification without altering model weights. We demonstrate our approach by reprogramming a language model to play Tic Tac Toe, achieving a 3× improvement in performance over the baseline model when playing against an optimal opponent. The method remains agnostic to both the underlying language model and the RL algorithm, offering flexibility for diverse applications. Through visualization tools, we observe interpretable feature-manipulation patterns, such as the suppression of features associated with illegal moves and the promotion of those linked to optimal strategies. Additionally, our approach presents an interesting theoretical complexity trade-off: while potentially increasing complexity for simple tasks, it may simplify action spaces in more complex domains. This work contributes to the growing field of model reprogramming by offering a transparent, automated method for behavioral modification that maintains model interpretability and stability.
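
As a hedged sketch of how the automated loop could look: suppose the RL agent's action is a vector of steering coefficients over K labeled SAE features, applied through hooks like the one above, and the reward is the Tic Tac Toe outcome. The `SteeredLMEnv` interface and network sizes below are assumptions for illustration; only the PPO clipped objective itself is standard.

```python
import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    """Maps an observation (e.g., an encoded board state) to a Gaussian
    over per-feature steering coefficients; the LM's weights stay frozen."""
    def __init__(self, obs_dim: int, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_features),
        )
        self.log_std = nn.Parameter(torch.zeros(n_features))

    def dist(self, obs: torch.Tensor) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())

def ppo_clip_loss(logp_new, logp_old, advantage, eps: float = 0.2):
    """Standard PPO clipped surrogate objective (to be minimized)."""
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()

def collect_episode(env, policy):
    """Roll out one game: each step samples steering coefficients, and the
    env applies them to the frozen LM and returns a reward.
    `env` is a hypothetical SteeredLMEnv with reset()/step() methods."""
    obs, done, transitions = env.reset(), False, []
    while not done:
        obs_t = torch.as_tensor(obs, dtype=torch.float32)
        d = policy.dist(obs_t)
        action = d.sample()               # steering coefficients for this move
        logp = d.log_prob(action).sum()   # joint log-prob over features
        obs, reward, done = env.step(action.numpy())
        transitions.append((obs_t, action, logp.detach(), reward))
    return transitions
```

Acting on a handful of labeled features keeps the policy inspectable: the learned coefficients directly show which features are being suppressed or promoted, which is what makes observations like "illegal-move features get suppressed" possible.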