Good benchmark principles
- Make sure you can detect a 1% improvement (see the sample-size sketch after this list)
- Easy to understand the result
- Hard enough that SOTA models cannot yet solve it
- Use a standard metric and keep it comparable over time (do not update the benchmark often)
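A rough sketch of what "detect a 1% improvement" implies for benchmark size, assuming independent questions, independent samples for the two models compared, and an illustrative 70% baseline accuracy; the function name n_per_model and the 80%-power / 5%-significance settings are assumptions for illustration, not from the source.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_model(p_base: float, delta: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Questions needed per model to detect an accuracy gain of `delta`
    over baseline accuracy `p_base` with a two-sided two-proportion z-test."""
    p_new = p_base + delta
    p_bar = (p_base + p_new) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_b = NormalDist().inv_cdf(power)          # critical value for the desired power
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_base * (1 - p_base) + p_new * (1 - p_new))) ** 2
    return ceil(numerator / delta ** 2)

# Illustrative: detecting a 1-point gain over a 70%-accuracy baseline
print(n_per_model(0.70, 0.01))  # roughly 33,000 questions per model
```

The takeaway is that a benchmark with only a few hundred questions cannot reliably separate models that differ by 1%.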
Extension
- Can include a human baseline
- Includes vetting by others
LLM Benchmarks
- Monotonicity
- Low variance (see the sketch after this list)
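A minimal sketch of checking both properties over repeated evaluation runs; the model names and score arrays below are made-up illustrative numbers, not real results.

```python
from statistics import mean, stdev

# Hypothetical scores: each model evaluated over several runs
# (different seeds / prompt orderings); models listed weakest to strongest.
scores_by_model = {
    "model-small":  [0.412, 0.405, 0.418],
    "model-medium": [0.537, 0.542, 0.529],
    "model-large":  [0.631, 0.640, 0.628],
}

# Low variance: run-to-run spread should be small relative to the gaps between models
for name, runs in scores_by_model.items():
    print(f"{name}: mean={mean(runs):.3f}  stdev={stdev(runs):.3f}")

# Monotonicity: mean score should increase as models get stronger
means = [mean(runs) for runs in scores_by_model.values()]
print("monotonic across model sizes:", all(a < b for a, b in zip(means, means[1:])))
```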
Language Model Metrics
Model Evaluation Tools
Use the Central Limit Theorem to fix the lack of statistical rigor in evals (recommendation from Anthropic)
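A minimal sketch of a CLT-based error bar, assuming each question's score is an i.i.d. draw: report the mean plus or minus a multiple of the standard error of the mean. The helper name mean_and_ci and the 0/1 score list are illustrative assumptions.

```python
from math import sqrt

def mean_and_ci(scores: list[float], z: float = 1.96) -> tuple[float, float, float]:
    """CLT-based ~95% confidence interval for a benchmark score:
    mean +/- z * standard error of the mean."""
    n = len(scores)
    m = sum(scores) / n
    var = sum((s - m) ** 2 for s in scores) / (n - 1)  # sample variance
    sem = sqrt(var / n)                                # standard error of the mean
    return m, m - z * sem, m + z * sem

# Illustrative: 0/1 correctness over 1,000 questions, ~62% correct
scores = [1.0] * 620 + [0.0] * 380
m, lo, hi = mean_and_ci(scores)
print(f"score = {m:.3f}  (95% CI: {lo:.3f} to {hi:.3f})")
```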
Benchmarks are unreliable; check results from an arena or a trustworthy third party
LLM Leaderboard
Evaluating LLMs is complex, so more comprehensive and purpose-specific evaluation methods are needed to assess their capabilities for various real-world applications
Types