Providing an LLM with real-world data and simple summary statistics substantially improves its performance
Evaluating and enhancing probabilistic reasoning in language models
Language models are capable of remarkably complex linguistic tasks. However, numerical reasoning is an area in which they frequently struggle. We systematically evaluate the probabilistic reasoning capabilities of LLMs and show that they can make more accurate inferences about distributions when provided with real-world context and simplified distributional assumptions.
https://research.google/blog/evaluating-and-enhancing-probabilistic-reasoning-in-language-models/
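A minimal sketch of the idea behind the paper's "simplified assumptions": compute cheap summary statistics (mean, standard deviation) from a small real-world sample, assume the distribution is roughly normal, and use those two numbers to answer a percentile question. The sample values and the 175 cm threshold below are hypothetical, chosen only for illustration; in practice the statistics and the normality assumption would be inserted into the LLM's prompt as grounding context.

```python
import math
import statistics

def normal_cdf(x: float, mean: float, std: float) -> float:
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

# Hypothetical real-world sample: adult heights in cm (illustrative only).
heights = [158, 162, 165, 167, 170, 172, 174, 176, 180, 186]
mean = statistics.mean(heights)
std = statistics.stdev(heights)

# Simplified assumption: treat the distribution as normal, then estimate
# "what fraction of the population is shorter than 175 cm?"
p_below_175 = normal_cdf(175.0, mean, std)

# These statistics could ground an LLM prompt, e.g.:
# f"Assume heights are normal with mean {mean:.1f} cm and std {std:.1f} cm. ..."
print(f"mean={mean:.1f}, std={std:.1f}, P(height < 175) = {p_below_175:.2f}")
```

The point is that two numbers plus a named parametric assumption compress the whole sample into context an LLM can reason over, which is what drives the accuracy gains reported in the post.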


Seonglae Cho