Özer Tanrısever, Aligning Reviewer Guidelines and Reviewer Feedback: A Data-Driven Study

M.S. Candidate: Özer Tanrısever
Program: Data Informatics
Date: 14.01.2026 / 14:30
Place: A-212

Abstract: Academic venues have established reviewer guidelines to enable standardized evaluation. This study proposes a framework that categorizes reviewer inquiries and systematically measures their alignment with institutional guidelines. The analysis uses peer review data from the 2024 International Conference on Learning Representations (ICLR), collected from the OpenReview platform. The framework employs a question extraction pipeline that combines Large Language Models (LLMs) with embedding similarity to decompose unstructured review inquiry text into semantically coherent chunks. A stratified sample of 427 reviews, drawn from a pool of 22,358, yielded 760 question chunks after multi-agent consensus validation. Generative topic modeling identified 13 topics within this dataset, including \textit{Methodology} and \textit{Computational Efficiency}. Finally, these empirically derived topics were mapped against evaluation criteria extracted from eight unique reviewer guidelines representing ten top-tier AI venues. Topic generation and mapping underwent manual and LLM-based verification, with an additional cross-check by a senior academic to reduce subjectivity. The results reveal a divergence between prescribed and practiced criteria. \textit{Ethical Considerations} appears in six of the eight guidelines yet ranks as the least frequent topic in the dataset. In contrast, \textit{Presentation and Figures} appears in only two guidelines but accounts for 8.29\% of reviewer inquiries. The findings offer actionable insights that help conference organizers refine guidelines and help authors understand which criteria reviewers prioritize in practice, ultimately enhancing the reliability of the peer review ecosystem.
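
The sketch below illustrates the general idea of embedding-similarity chunking mentioned in the abstract: consecutive sentences of a review's question text are grouped into a chunk while their embeddings remain similar, and a new chunk starts when similarity drops. It is a minimal, hypothetical example, not the study's pipeline; the sentence-transformers model name, the similarity threshold, and the sample questions are all assumptions made for illustration.

```python
# Minimal sketch of embedding-similarity chunking (illustrative only).
# Assumptions: the sentence-transformers library, the all-MiniLM-L6-v2 model,
# and a similarity threshold of 0.55 are placeholders, not the study's settings.
import numpy as np
from sentence_transformers import SentenceTransformer


def chunk_questions(sentences, threshold=0.55):
    """Group consecutive sentences into semantically coherent chunks."""
    if not sentences:
        return []
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = model.encode(sentences, normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        # Cosine similarity of normalized vectors reduces to a dot product.
        sim = float(np.dot(embeddings[i - 1], embeddings[i]))
        if sim >= threshold:
            current.append(sentences[i])
        else:
            chunks.append(" ".join(current))
            current = [sentences[i]]
    chunks.append(" ".join(current))
    return chunks


if __name__ == "__main__":
    # Hypothetical reviewer questions, not drawn from the ICLR 2024 dataset.
    review_questions = [
        "How were the hyperparameters selected?",
        "Did you tune them on the validation set?",
        "What is the wall-clock training cost on a single GPU?",
    ]
    for chunk in chunk_questions(review_questions):
        print(chunk)
```

In this toy run, the first two questions would likely be merged into one methodology-related chunk, while the cost question would form its own chunk, mirroring the kind of semantically coherent units the abstract describes.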