AI Hallucination Detection

Creator: Seonglae Cho
Created: 2025 Oct 5 0:08
Edited: 2025 Dec 24 0:57

Instead of relying on short-form QA or external validation, this approach identifies hallucinations at the token level rather than the sentence level. By attaching linear probes or LoRA probes to the hidden states of models such as Llama, it predicts a hallucination probability for each token. This significantly outperforms existing uncertainty-based baselines such as semantic entropy (0.71). Detecting reasoning errors beyond entity hallucinations, however, remains challenging.
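A minimal sketch of what such a token-level probe could look like with Hugging Face Transformers. The model name, probe layer, and probe weights below are illustrative assumptions, not the paper's released artifacts; in practice the probe would be trained on token-level hallucination labels before use.

```python
# Minimal sketch: score each token with a linear probe on LLM hidden states.
# Assumptions: model name, probe layer, and untrained probe weights are placeholders;
# a real probe is trained on token-level hallucination labels first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # any Llama-family model works in principle
LAYER = 20                                   # assumed intermediate layer for the probe

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

# Linear probe: one hallucination logit per token from its hidden state.
probe = torch.nn.Linear(model.config.hidden_size, 1)
# probe.load_state_dict(torch.load("probe.pt"))  # hypothetical trained weights

text = "The Eiffel Tower was completed in 1889 by Gustave Eiffel."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
    hidden = out.hidden_states[LAYER]                               # (1, seq_len, hidden_size)
    token_probs = torch.sigmoid(probe(hidden.float())).squeeze(-1)  # (1, seq_len)

for tok, p in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), token_probs[0]):
    print(f"{tok:>12}  p(hallucination) = {p.item():.2f}")
```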

HHEM (Hughes Hallucination Evaluation Model)

vectara/hallucination_evaluation_model · Hugging Face
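HHEM is a cross-encoder that scores (source, generated text) pairs for factual consistency, with scores near 1 indicating the generation is supported by the source. A hedged usage sketch based on the Hugging Face model card; the exact interface has changed between HHEM releases, so check the card before relying on it.

```python
# Sketch of scoring generated claims against their source with Vectara's HHEM.
# Interface follows the Hugging Face model card for HHEM-2.1-open; verify there,
# as older HHEM versions exposed a different (sentence-transformers) API.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each pair is (premise/source, hypothesis/generated text).
pairs = [
    ("The Eiffel Tower was completed in 1889.", "The Eiffel Tower opened in 1889."),
    ("The Eiffel Tower was completed in 1889.", "The Eiffel Tower opened in 1920."),
]

scores = model.predict(pairs)  # consistency scores in [0, 1]; lower means more likely hallucinated
print(scores)
```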
