arxiv.org
https://arxiv.org/pdf/2601.11516
Attentive probe
Takes the entire token sequence (feature map) output by the encoder as input, applies cross-attention over it with a single learnable query to select and gather task-relevant information from the tokens, and passes the result through an MLP and a classifier.
The attentive probe can concentrate attention on regions with motion, hand-object interactions, and patches important for action classification.
Attentive probe = a lightweight evaluation head that uses learnable-query cross-attention to read and gather important information from the token features of a frozen encoder for classification.
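The mechanism above can be sketched as a small PyTorch module (a minimal illustration, not the exact implementation from the linked papers; the class name, head count, and MLP width are assumptions):

```python
import torch
import torch.nn as nn

class AttentiveProbe(nn.Module):
    """Hypothetical minimal attentive probe: a single learnable query
    cross-attends over frozen encoder tokens, then an MLP classifies."""
    def __init__(self, dim: int, num_classes: int, num_heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))  # learnable query
        nn.init.trunc_normal_(self.query, std=0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim * 4),
            nn.GELU(),
            nn.Linear(dim * 4, num_classes),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) output of a frozen encoder
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)   # cross-attention pooling
        return self.head(pooled.squeeze(1))        # (B, num_classes)

probe = AttentiveProbe(dim=768, num_classes=400)
feats = torch.randn(2, 196, 768)  # e.g. frozen ViT patch tokens
logits = probe(feats)
print(logits.shape)  # torch.Size([2, 400])
```

Only the probe's parameters are trained; the encoder stays frozen, which is what makes this a cheap evaluation protocol.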
V-JEPA: The next step toward advanced machine intelligence
We’re releasing the Video Joint Embedding Predictive Architecture (V-JEPA) model, a crucial step in advancing machine intelligence with a more grounded understanding of the world.
https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/


Seonglae Cho