AI Evaluation

Creator: Seonglae Cho
Created: 2023 Jun 2 12:28
Edited: 2024 Dec 21 1:41

Good benchmark principles

  • Make sure you can detect a 1% improvement (see the sample-size sketch after this list)
  • Results are easy to understand
  • Hard enough that SOTA models cannot solve it
  • Use a standard metric and keep it comparable over time (do not update the benchmark often)

Extension

  • Can include a human baseline
  • Includes vetting by others
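
The 1% detectability principle above is easiest to see with a quick power calculation. A minimal sketch in Python, with illustrative numbers and a hypothetical helper: the required question count grows with the inverse square of the gap you want to resolve.

```python
# Rough sample-size check: how many eval questions do you need before a
# 1-percentage-point accuracy gap between two models is detectable?
# A minimal sketch assuming i.i.d. questions, a two-sided z-test at 95%
# confidence, and 80% power; all numbers here are illustrative.
import math

def questions_needed(p: float = 0.8, delta: float = 0.01,
                     z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-model question count to detect a gap of `delta`
    around a baseline accuracy `p` when comparing two models."""
    variance = 2 * p * (1 - p)  # variance of the difference of two proportions
    return math.ceil(variance * ((z_alpha + z_beta) / delta) ** 2)

print(questions_needed())  # roughly 25,000 questions at 80% baseline accuracy
```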
LLM Benchmarks
 
  • Monotonicity (see the sketch below)
  • Low variance
https://www.youtube.com/watch?v=2-SPH9hIKT8
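
A quick sketch of how these two properties might be checked on hypothetical scores: order models by expected strength, verify that scores are non-decreasing, and look at the run-to-run spread.

```python
# Checking two desirable benchmark properties: monotonicity (scores improve
# as models get stronger) and low variance (repeated runs agree).
from statistics import pstdev

def is_monotonic(scores: list[float]) -> bool:
    """True if scores are non-decreasing across models ordered by expected strength."""
    return all(a <= b for a, b in zip(scores, scores[1:]))

# Hypothetical scores: small -> medium -> large model, three runs each.
runs = {
    "small":  [0.41, 0.43, 0.42],
    "medium": [0.55, 0.54, 0.56],
    "large":  [0.68, 0.67, 0.69],
}
means = [sum(v) / len(v) for v in runs.values()]
print("monotonic:", is_monotonic(means))
print("run-to-run std dev:", {k: round(pstdev(v), 3) for k, v in runs.items()})
```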
Language Model Metrics
 
 
 
Model Evaluation Tools
 
 
 

Central Limit Theorem
Use the Central Limit Theorem to fix the lack of statistical rigor in eval reporting (recommendation from Anthropic).
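
A minimal sketch of CLT-based error bars in that spirit: treat per-question scores as approximately independent samples and report the mean with a 95% confidence interval derived from the standard error of the mean. The scores below are hypothetical.

```python
# CLT-based error bars for an eval score: mean ± 1.96 × standard error
# of the mean, treating per-question scores as (approximately) i.i.d.
import math

def mean_with_ci(scores: list[float], z: float = 1.96) -> tuple[float, float, float]:
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    sem = math.sqrt(variance / n)                              # standard error of the mean
    return mean, mean - z * sem, mean + z * sem                # point estimate, 95% CI bounds

# Hypothetical 0/1 correctness scores from a single benchmark run.
scores = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
print(mean_with_ci(scores))  # ≈ (0.7, 0.40, 1.00): wide interval because n is tiny
```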

Benchmarks are unreliable; check arena results or evaluations from a trustworthy third party.

LLM Leaderboard

Evaluating LLMs is complex, so more comprehensive and purpose-specific evaluation methods are needed to assess their capabilities for real-world applications.
Types
 
 

Recommendations