Activation Patching

Creator: Seonglae Cho
Created: 2024 Oct 14 1:10
Edited: 2025 Feb 1 22:42

Activation Probing

Monkey Patching for LLM

Activation patching is a technique used to understand how different parts of a model contribute to its behavior. In an activation patching experiment, we modify or “patch” the activations of certain model components and observe the impact on model output.
For example, extracting a Steering Vector for CAA. An activation patching experiment consists of three runs:
  1. Clean run
  2. Corrupted run
  3. Patched run

Example

  1. Clean input “What city is the Eiffel Tower in?” → Save clean activation
  2. Corrupted input “What city is the Colosseum in?” → Save corrupted output
  3. Patch activation on the corrupted input from the clean activation → observe which activation layer or attention head is important for producing the correct answer, “Paris”
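Below is a minimal sketch of these three runs using TransformerLens. The model choice, prompts, and the decision to patch the residual stream at the final token position are assumptions for illustration, and the two prompts must tokenize to the same length so positions line up.

```python
# A minimal sketch of the three runs with TransformerLens (model, prompts, and
# the choice to patch the residual stream at the final position are assumptions).
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")

clean_tokens = model.to_tokens("What city is the Eiffel Tower in?")
corrupted_tokens = model.to_tokens("What city is the Colosseum in?")
answer = model.to_single_token(" Paris")
# Positions must line up for patching; rephrase or pad the prompts if they differ.
assert clean_tokens.shape == corrupted_tokens.shape

# 1. Clean run: cache every activation on the clean prompt
_, clean_cache = model.run_with_cache(clean_tokens)

# 2. Corrupted run: record the baseline logit for the correct answer
baseline = model(corrupted_tokens)[0, -1, answer].item()

# 3. Patched runs: overwrite one layer's residual stream at the final position
#    with the clean activation and measure how much of the "Paris" logit returns.
#    (A fuller sweep would also iterate over token positions and attention heads.)
def patch_last_position(resid, hook):
    resid[:, -1, :] = clean_cache[hook.name][:, -1, :]
    return resid

for layer in range(model.cfg.n_layers):
    hook_name = utils.get_act_name("resid_pre", layer)
    patched_logits = model.run_with_hooks(
        corrupted_tokens, fwd_hooks=[(hook_name, patch_last_position)]
    )
    recovery = patched_logits[0, -1, answer].item() - baseline
    print(f"layer {layer:2d}: logit recovery {recovery:+.3f}")
```

A large recovery at a given layer suggests that layer carries the information needed to produce “Paris”.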
Activation Patching Methods

Hands-on

Neel Nanda's method

Contrastive Activation Addition (CAA) steering vectors behave unstably on certain inputs even within the training distribution and generalize poorly to out-of-distribution (OOD) data.
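For context, here is a minimal sketch of how a CAA-style steering vector is commonly extracted: the mean residual-stream difference between contrastive prompt pairs at one layer, added back with a multiplier at inference. The model, prompts, layer, and multiplier below are placeholder assumptions, not CAA's published setup.

```python
# A minimal sketch of CAA-style steering-vector extraction (placeholder prompts,
# layer, and multiplier; not the published CAA implementation).
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")
layer = 6  # assumed layer; in practice one sweeps layers and picks the best
hook_name = utils.get_act_name("resid_pre", layer)

positive_prompts = ["Sure, I am happy to help with that.", "Yes, here is the answer."]
negative_prompts = ["No, I refuse to help with that.", "No, I will not answer."]

def last_token_resid(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the final token position."""
    _, cache = model.run_with_cache(model.to_tokens(prompt))
    return cache[hook_name][0, -1]

# Steering vector = mean(positive activations) - mean(negative activations)
pos = torch.stack([last_token_resid(p) for p in positive_prompts]).mean(dim=0)
neg = torch.stack([last_token_resid(p) for p in negative_prompts]).mean(dim=0)
steering_vector = pos - neg

# At inference time the vector is added back into the residual stream
def add_steering(resid, hook, alpha=4.0):
    return resid + alpha * steering_vector

steered_logits = model.run_with_hooks(
    model.to_tokens("Can you help me with this?"),
    fwd_hooks=[(hook_name, add_steering)],
)
```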

linear probing

runtime monitor

Recommendations