AI Task vector

Creator
Seonglae Cho
Created
2025 Jan 26 18:30
Edited
2026 Jan 3 22:34
Refs

Function Vector, Task feature

sparse SAE task vector fine-tuning (gradient-based cleanup)

https://www.lesswrong.com/posts/5FGXmJ3wqgGRcbyH7/extracting-sae-task-features-for-in-context-learning
The steering vector read off from the SAE decoder carries reconstruction error, since it is only a linear combination of SAE features; gradient-based cleanup refines it into a more accurate steering vector.

Gradient-based cleanup

The target vector is fine-tuned so that it efficiently reconstructs the neuron activation pattern present in the residual stream using the SAE basis. During this gradient-based cleanup, features with small gradients are removed, leaving a compact set of SAE features. The result outperforms the original task vector while also providing interpretability.
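A minimal sketch of the idea in PyTorch, not the post's actual code: the decoder matrix, target vector, learning rate, and pruning threshold are all illustrative stand-ins. Sparse coefficients over SAE decoder directions are fitted to a raw steering vector, and features whose coefficients stay near zero are pruned.

```python
# Hypothetical sketch of gradient-based cleanup, assuming a trained SAE
# whose decoder rows are feature directions. W_dec, target, and all
# hyperparameters here are illustrative, not from the referenced post.
import torch

torch.manual_seed(0)
d_model, n_features = 64, 512

W_dec = torch.randn(n_features, d_model)   # SAE decoder feature directions
target = torch.randn(d_model)              # raw steering vector (e.g. mean residual)

coeffs = torch.zeros(n_features, requires_grad=True)
opt = torch.optim.Adam([coeffs], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    recon = coeffs @ W_dec                 # linear combination of SAE features
    # L1 penalty keeps the combination sparse so small features can be pruned
    loss = (recon - target).pow(2).sum() + 1e-3 * coeffs.abs().sum()
    loss.backward()
    opt.step()

# Prune features whose coefficients stayed near zero -> compact task vector
mask = coeffs.abs() > 1e-2
clean_vector = ((coeffs * mask) @ W_dec).detach()
```

The L1 term stands in for the "remove features with small gradients" step: both drive unimportant feature coefficients toward zero so the surviving features form a compact, interpretable basis for the task vector.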
AI Task vectors
TaskVec 2023 EMNLP 2023

ICL compresses a set of examples S into a single task vector θ, then generates the answer to a query x using only θ, without directly referencing S. In other words, ICL decomposes into a learning stage (S → θ) and an application stage (x, θ → answer). θ is extracted from intermediate-layer representations and is a stable, distinguishable vector for each task. When θ is patched in, the task encoded by θ dominates, even when the in-context examples belong to a different task. This decomposition still maintains 80–90% of standard ICL accuracy.
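The extract-then-patch mechanics can be illustrated with forward hooks on a toy stand-in model; the real experiments use a transformer LM, so everything here (the tiny MLP, the layer choice) is a simplification for shape and plumbing only.

```python
# Toy illustration of task-vector patching, NOT the paper's model:
# capture an intermediate activation as theta on the demo context,
# then inject theta into a separate zero-shot forward pass.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 32
model = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # stand-in LM

captured = {}
def capture(module, inp, out):
    captured["theta"] = out.detach().clone()   # "learning stage": S -> theta

h = model[0].register_forward_hook(capture)
model(torch.randn(1, d))                       # forward pass on the demo context S
h.remove()

def patch(module, inp, out):
    return captured["theta"]                   # "application stage": replace activation

h = model[0].register_forward_hook(patch)
out_patched = model(torch.randn(1, d))         # zero-shot query x, steered by theta
h.remove()
```

Returning a non-None value from a forward hook replaces that module's output, which is exactly the activation-patching operation: the downstream layers see θ instead of the query's own intermediate representation.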

Function Vector 2023.10 ICLR 2024

A small number of intermediate-layer attention heads causally transmit task information in ICL. Even when the FV is inserted into zero-shot or natural-language contexts, the task is still executed, demonstrating transferability. The FV cannot be explained by the output word distribution alone; it triggers nonlinear computation.

Visual task vector using Policy Gradient Learning

SAE TaskVector 2024

SAE features mimic task-vector steering: task-detector features are identified, and the task feature, taken from the residual-stream mean at separator tokens, serves as the task vector after gradient-based cleanup (2024)
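The separator-token averaging can be sketched as follows; the sequence length, separator positions, and the random SAE encoder are all illustrative stand-ins, not values from the work.

```python
# Illustrative sketch: take the task vector as the mean residual-stream
# activation at separator tokens across ICL examples, then express it in
# the SAE basis to find candidate task-detector features.
import torch

torch.manual_seed(0)
seq_len, d_model, n_feat = 20, 64, 256
resid = torch.randn(seq_len, d_model)     # residual stream for one ICL prompt
sep_positions = [4, 9, 14, 19]            # separator token indices (made up)

task_vec = resid[sep_positions].mean(dim=0)   # separator-token residual mean

W_enc = torch.randn(d_model, n_feat)      # SAE encoder (random stand-in)
acts = torch.relu(task_vec @ W_enc)       # SAE feature activations
top_features = acts.topk(5).indices       # candidate task-detector features
```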

Top-down (In-context Vector) vs Bottom-up (Feature Vector)

In-context learning

TVP loss to make the ICL task vector emerge at a specific layer (2025)

Instruct Vector from base model to instruction-tuned model

