To correctly use in-context information, language models (LMs) must bind entities to their attributes.
We show that LMs’ internal activations represent binding information by attaching binding ID vectors to the corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernibility.
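As a toy illustration of the binding ID idea, the sketch below (a hypothetical construction, not the models' actual mechanism) adds the same random vector to an entity's and its attribute's base representations; the bound attribute can then be recovered by comparing binding components. All names, dimensions, and the additive scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical activation dimensionality

# Hypothetical base representations for entities and attributes.
entities = {name: rng.normal(size=d) for name in ["Alice", "Bob"]}
attributes = {name: rng.normal(size=d) for name in ["cup", "box"]}

# One binding ID vector per (entity, attribute) pair in context.
binding_ids = [rng.normal(size=d) for _ in range(2)]

# "Bind" by adding the same binding ID vector to an entity and its attribute.
context = {
    "Alice": entities["Alice"] + binding_ids[0],
    "cup": attributes["cup"] + binding_ids[0],
    "Bob": entities["Bob"] + binding_ids[1],
    "box": attributes["box"] + binding_ids[1],
}

def lookup_attribute(entity_name):
    """Recover the attribute sharing the entity's binding ID by
    comparing the binding components (in-context minus base vector)."""
    ent_binding = context[entity_name] - entities[entity_name]
    best, best_sim = None, -np.inf
    for attr_name, base in attributes.items():
        attr_binding = context[attr_name] - base
        sim = ent_binding @ attr_binding  # large when binding IDs match
        if sim > best_sim:
            best, best_sim = attr_name, sim
    return best

print(lookup_attribute("Alice"))  # cup
print(lookup_attribute("Bob"))    # box
```

In this construction, two binding ID vectors that are nearly parallel would make the pairs hard to tell apart, which mirrors the claim that distances in the binding subspace reflect discernibility.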