Papers with Code - Reading Comprehension
Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span of that document. Specific variants include multi-modal machine reading comprehension and textual machine reading comprehension, among others.

In the literature, machine reading comprehension is divided into four categories: **cloze style**, **multiple choice**, **span prediction**, and **free-form answer**. Read more about each category [here](https://paperswithcode.com/paper/a-survey-on-machine-reading-comprehension-1).

Benchmark datasets used for testing a model's reading comprehension abilities include [MovieQA](/dataset/movieqa), [ReCoRD](/dataset/record), and [RACE](/dataset/race), among others. The Machine Reading group at UCL also provides an [overview of reading comprehension tasks](https://uclnlp.github.io/ai4exams/data.html).

Figure source: [A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets](https://arxiv.org/pdf/2006.11880.pdf)
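To make the span-prediction category concrete, here is a minimal sketch of the SQuAD-style data format, where the gold answer is stored as character offsets into the context. The example text and the `find_answer_span` helper are hypothetical, for illustration only:

```python
def find_answer_span(context: str, answer: str):
    """Return (start, end) character offsets of `answer` in `context`,
    or None if the answer string does not occur in the context."""
    start = context.find(answer)
    if start == -1:
        return None
    return start, start + len(answer)

# Toy span-prediction example (hypothetical, SQuAD-style):
context = ("Machine reading comprehension asks a model to answer a question "
           "about a given passage.")
question = "What does machine reading comprehension ask a model to do?"
answer = "answer a question about a given passage"

span = find_answer_span(context, answer)
# A span-prediction model would be trained to predict these offsets,
# rather than generating the answer text freely.
print(span, context[span[0]:span[1]])
```

In contrast, cloze-style datasets ask the model to fill a blanked-out token, multiple-choice datasets supply candidate answers, and free-form datasets allow any generated answer string.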
https://paperswithcode.com/task/reading-comprehension