stats
ICLR 2026 Statistics - Paper Copilot
How to interpret the columns above:
- Count: The total number of submissions is calculated as #Total = #Accept + #Reject + #Withdraw + #Desk Reject - #Post Decision Withdraw (when applicable).
- Rates: Each status rate is computed as #Status Occurrence / #Total, where the status can be Accept, Reject, etc. For example, if there are 100 total submissions and 27 were accepted, the Accept Rate = 27 / 100 = 27%.
- min / max / mean / std: Statistical summaries of the reviewer average scores per submission within each decision tier (e.g., the Accept tier). For example, if a paper received ratings of 3, 4, and 5, its average score is 4, and this average is what enters the distribution. Suppose the Accept tier contains submissions with reviewer averages {4.0, 3.1, 3.6}. Then min = 3.1, max = 4.0, mean ≈ 3.57, and std ≈ 0.37 (reproduced in the sketch below this entry). If rating scores are publicly available, the statistics are based on submissions that opted in for release. If scores are collected from the community via the Google Form, the statistics reflect only those samples. When both community-collected and officially released scores are available, only the official scores are used for the displayed statistics.
- Reject: Refers only to submissions that opted in for public release. Note that the number of opt-in records may be smaller than the actual number of rejections.
- Withdraw: Includes papers withdrawn by the authors, including those withdrawn after acceptance. Post-decision withdrawals are specifically marked as such.
https://papercopilot.com/statistics/iclr-statistics/iclr-2026-statistics/
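The rate and score columns above are straightforward to recompute. Below is a minimal Python sketch under assumed inputs: the record layout and the Reject/Withdraw ratings are hypothetical, not Paper Copilot's actual schema, while the Accept tier reuses the worked example with per-paper averages of 4.0, 3.1, and 3.6.

```python
from statistics import mean, pstdev

# Hypothetical submission records: decision tier plus the individual reviewer
# ratings for each paper. Values are made up for illustration only.
submissions = [
    {"decision": "Accept",   "ratings": [3, 4, 5]},    # per-paper average 4.0
    {"decision": "Accept",   "ratings": [2, 3, 4.3]},  # per-paper average 3.1
    {"decision": "Accept",   "ratings": [3, 4, 3.8]},  # per-paper average 3.6
    {"decision": "Reject",   "ratings": [2, 2, 3]},
    {"decision": "Withdraw", "ratings": [1, 3]},
]

# Accept Rate = #Accept / #Total (the page also subtracts post-decision
# withdrawals from #Total when applicable; omitted here for brevity).
total = len(submissions)
accepted = sum(1 for s in submissions if s["decision"] == "Accept")
print(f"Accept rate: {accepted / total:.0%}")

# min / max / mean / std are taken over the per-submission *average* rating
# within each decision tier, matching the column definitions above.
tiers: dict[str, list[float]] = {}
for s in submissions:
    tiers.setdefault(s["decision"], []).append(mean(s["ratings"]))

for tier, avgs in sorted(tiers.items()):
    print(
        f"{tier}: min={min(avgs):.2f} max={max(avgs):.2f} "
        f"mean={mean(avgs):.2f} std={pstdev(avgs):.2f}"
    )
```

For the Accept tier this prints min=3.10, max=4.00, mean=3.57, std=0.37, matching the worked example. The population standard deviation (pstdev) is assumed here because it reproduces the quoted std ≈ 0.37; the sample standard deviation would give ≈ 0.45 instead.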
leak
Academic Circle in Uproar: ICLR Reviewers Reveal Identities, Low Scores Given by Friends
True Open Review: Unveiling Transparent and Authentic Evaluations
https://eu.36kr.com/en/p/3572028126116993

The OpenReview / ICLR 2026 Identity Leak: What Really Happened, Why It Matters, and What Comes Next
Unpack the OpenReview ICLR 2026 data leak: a critical security incident that exposed reviewer, author, and AC identities. Understand the timeline, technical cause, and its impact on peer review in AI/ML.
https://mgx.dev/blog/openreview-leak-2025
ICLR 2026 Response to LLM-Generated Papers and Reviews
https://blog.iclr.cc/2025/11/19/iclr-2026-response-to-llm-generated-papers-and-reviews/
Jiaxuan You on Twitter / X
The ICLR leak is a disaster, but let’s talk about how to actually save the Peer Review process. The Fix: Ending anonymity for irresponsible reviewers. ICLR maintains high submission quality (at least in early years) because rejected papers are public—authors fear the reputation… https://t.co/NFbiJOE2G7 — Jiaxuan You (@youjiaxuan) November 27, 2025
https://x.com/youjiaxuan/status/1994123748698427621
[D] ICLR reviewers being doxed on OpenReview
121 votes, 29 comments. A quick warning to everyone: we've just found out that we were doxed by a public comment as reviewers. Someone posted a…
https://www.reddit.com/r/MachineLearning/comments/1p8qru0/d_iclr_reviewers_being_doxed_on_openreview/
[D] ICLR reverts score to pre-rebuttal and kicked all reviewers
67 votes, 88 comments. The newly assigned AC will determine the results. Authors can still add comments.
https://www.reddit.com/r/MachineLearning/comments/1p8x463/d_iclr_reverts_score_to_prerebuttal_and_kicked/
workshop
ICLR 2026 Workshop
Welcome to the OpenReview homepage for ICLR 2026 Workshop
https://openreview.net/group?id=ICLR.cc/2026/Workshop

Seonglae Cho