AI Hallucination

Creator: Seonglae Cho
Created: 2023 Mar 20 7:41
Edited: 2025 Jun 9 15:24

Uncertainty of AI (Epsilon Greedy, Creativity, Extrapolation)

Hallucinations in robotics models, for example, pose significant physical dangers, whereas language models' hallucinations merely produce incorrect information. Mechanistic interpretability provides a promising and explicit method for controlling such behavior.
LLMs hallucinate more after being fine-tuned on new factual knowledge, because they learn facts absent from pre-training more slowly than facts consistent with it.
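A minimal sketch of how this effect can be measured, assuming a Hugging Face causal LM: split the fine-tuning facts into Known vs Unknown by checking whether the base model already produces the answer, then track each group separately during training. The model name and QA pairs below are placeholders, not from the source.

```python
# Split fine-tuning facts into Known vs Unknown before training:
# LLMs fit already-known facts faster, and fitting Unknown facts
# is where hallucination risk grows.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def is_known(question: str, answer: str) -> bool:
    """Greedy-decode the base model; treat the fact as Known if the
    gold answer appears in the completion."""
    inputs = tok(question, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=16,
                             do_sample=False,
                             pad_token_id=tok.eos_token_id)
    completion = tok.decode(out[0][inputs["input_ids"].shape[1]:])
    return answer.lower() in completion.lower()

qa_pairs = [("Q: What is the capital of France? A:", "Paris"),
            ("Q: What is the capital of Atlantis? A:", "Poseidonia")]
known = [qa for qa in qa_pairs if is_known(*qa)]
unknown = [qa for qa in qa_pairs if not is_known(*qa)]
# Fine-tune on both groups and track per-group loss per epoch:
# Unknown facts converge more slowly.
```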

Triggering Prompts

  • Questions about non-existent terms or concepts
  • Prompts from domains the model handles inconsistently, such as numbers and dates

The Internal State of an LLM Knows When It’s Lying
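The paper's core idea is that a small classifier trained on an LLM's hidden activations can predict whether a statement is true. A hedged sketch, with a logistic regression standing in for the paper's feedforward classifier; the model choice, layer index, and statements are illustrative:

```python
# Train a truthfulness probe on hidden-state activations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
    "gpt2", output_hidden_states=True).eval()
LAYER = 6  # a middle layer; the paper probes several layers

def activation(statement: str):
    ids = tok(statement, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids).hidden_states[LAYER]
    return hs[0, -1].numpy()  # last-token activation

statements = [("The capital of France is Paris.", 1),
              ("The capital of France is Rome.", 0),
              ("Water boils at 100 degrees Celsius at sea level.", 1),
              ("Water boils at 10 degrees Celsius at sea level.", 0)]
X = [activation(s) for s, _ in statements]
y = [label for _, label in statements]
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict([activation("The sun rises in the west.")]))
```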

Bigger AI chatbots more inclined to spew nonsense
Masking a Retrieval Head or a relevant Induction Head could induce hallucinations.
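A minimal sketch of such an ablation on a GPT-2-style model: zero out one attention head's output with a forward pre-hook, then compare generations with and without it. The layer/head indices here are arbitrary placeholders; real retrieval heads are located by their copy behavior, not guessed.

```python
# Zero-ablate one attention head and observe the effect on generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER, HEAD = 5, 1  # illustrative indices
head_dim = model.config.n_embd // model.config.n_head

def ablate(module, args):
    # The input to c_proj is the per-head outputs concatenated along
    # the embedding dim; zero the slice belonging to HEAD.
    hidden = args[0].clone()
    hidden[..., HEAD * head_dim:(HEAD + 1) * head_dim] = 0.0
    return (hidden,)

handle = model.transformer.h[LAYER].attn.c_proj \
    .register_forward_pre_hook(ablate)
ids = tok("The Eiffel Tower is located in", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=8,
                                do_sample=False,
                                pad_token_id=tok.eos_token_id)[0]))
handle.remove()  # restore the unablated model
```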
The "be concise" instruction reduces counter-explanations, decreasing accuracy by up to 20%. When questions are posed with high confidence, the model is up to 15% more likely to agree with false claims.

Recommendations