AI Neuron Activation

Created: 2023 Nov 12 18:44
Creator: Seonglae Cho
Edited: 2025 Dec 24 0:42
Unfortunately, the neuron, the most natural computational unit of a neural network, turns out not to be a natural unit for human understanding. This is because many neurons are polysemantic (see the
Superposition Hypothesis
). Superposition can arise naturally during neural network training if the set of features useful to a model is sparse in the training data.
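A toy numerical sketch of this idea (illustrative only, not taken from the cited write-ups): squeeze more sparse features than dimensions into a single layer by assigning each feature a non-orthogonal direction. Because features are sparse, the active one can still be read out, at the cost of small interference on the inactive ones.

```python
import numpy as np

# Toy superposition: 6 features squeezed into a 2-dimensional "layer".
n_feat, d = 6, 2

# Each feature gets a direction on the unit circle; with 6 > 2 the
# directions cannot be orthogonal, so features interfere.
angles = np.linspace(0, np.pi, n_feat, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (6, 2)

# Sparse input: only one feature active, so interference stays tolerable.
features = np.zeros(n_feat)
features[2] = 1.0
hidden = features @ dirs  # the 2-d representation

# Reading out with the same directions: the active feature scores highest,
# but inactive features pick up nonzero "interference" from overlap.
readout = hidden @ dirs.T
best = int(np.argmax(readout))  # → 2, the feature we activated
```

The interference terms are exactly the cosines between feature directions, which is why sparsity (few features active at once) makes superposition viable.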
Safety features

Just as there are features representing more abstract properties of the input, might there also be more abstract, higher-level actions that trigger behaviors over the span of multiple tokens?
  • High-Level Actions
  • Planning
  • Social Reasoning
  • Personas
Circuits Updates - July 2023
We report a number of developing ideas from the Anthropic interpretability team, which might be of interest to researchers working actively in this space. Some of these are emerging strands of research on which we expect to publish more in the coming months. Others are minor points we wish to share, since we're unlikely to ever write a paper about them.

Monosemanticity

Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
Using a sparse autoencoder, we extract a large number of interpretable features from a one-layer transformer.
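A minimal sketch of the dictionary-learning setup described above, assuming a numpy environment. The shapes, sparsity coefficient, and random weights are illustrative, not the paper's: an overcomplete ReLU encoder maps activations to many sparse features, a linear decoder reconstructs them, and the loss is reconstruction error plus an L1 penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "MLP activations": 8 samples of a 16-dimensional hidden state.
acts = rng.normal(size=(8, 16))

# Overcomplete dictionary: 64 candidate features for 16 dimensions.
d_model, d_feat = 16, 64
W_enc = rng.normal(scale=0.1, size=(d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(scale=0.1, size=(d_feat, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    # Encoder: ReLU keeps feature activations non-negative and sparse.
    f = np.maximum(x @ W_enc + b_enc, 0.0)
    # Decoder: reconstruct the original activations from the features.
    x_hat = f @ W_dec + b_dec
    return f, x_hat

f, x_hat = sae_forward(acts)

# Training objective: reconstruction error plus an L1 sparsity penalty.
l1_coeff = 1e-3
loss = np.mean((acts - x_hat) ** 2) + l1_coeff * np.abs(f).mean()
```

In training, both the weights and the L1 coefficient are tuned so that each learned feature tends to fire for one interpretable pattern, which is what makes the dictionary more monosemantic than raw neurons.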

Runtime monitoring

Runtime Monitoring Neuron Activation Patterns
For using neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities in training. We propose runtime neuron...
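A hedged sketch of the idea in this abstract: record the binary on/off activation pattern of a layer over the training set, then flag inputs at runtime whose pattern was never observed. The paper stores patterns compactly (e.g. in decision diagrams); a plain Python set suffices for illustration, and the class and function names here are hypothetical.

```python
def pattern(activations, threshold=0.0):
    # Binarize a layer: which neurons fired above the threshold.
    return tuple(a > threshold for a in activations)

class ActivationMonitor:
    """Runtime monitor over binarized neuron activation patterns."""

    def __init__(self):
        self.seen = set()  # patterns observed on training data

    def record(self, activations):
        # Call during a pass over the training set.
        self.seen.add(pattern(activations))

    def is_familiar(self, activations):
        # At inference: True if this on/off pattern occurred in training.
        return pattern(activations) in self.seen

monitor = ActivationMonitor()
monitor.record([0.9, -0.2, 0.4])   # training example, pattern (T, F, T)
monitor.record([0.1, 0.3, -0.5])   # training example, pattern (T, T, F)

monitor.is_familiar([2.0, -1.0, 0.7])    # same on/off pattern → True
monitor.is_familiar([-1.0, -1.0, -1.0])  # all-off, never seen → False
```

A decision whose pattern is unfamiliar is not necessarily wrong, but it lacks support from training-time behavior, which is the warning signal the monitor provides.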
