Hasty Briefsbeta

Bilingual

2% of ICML papers desk rejected because the authors used LLM in their reviews

8 hours ago
  • #AI Ethics
  • #LLM Policies
  • #Peer Review
  • ICML 2026 implemented two policies regarding LLM use in peer review: Policy A (no LLM use) and Policy B (permissive LLM use).
  • 497 papers were desk-rejected due to violations of Policy A by 506 reviewers who agreed not to use LLMs but were detected doing so.
  • 795 reviews (~1% of all reviews) were found to violate Policy A, with 51 reviewers using LLMs in more than half of their reviews.
  • Detection method involved watermarking submission PDFs with hidden LLM instructions, influencing reviews if LLMs were used.
  • The watermarking technique had a success rate of over 80% for most LLM models, though it could be circumvented if known.
  • False positives were minimized by manual verification of flagged reviews, ensuring only clear violations were acted upon.
  • The initiative aimed to uphold trust in the peer review process by enforcing agreed-upon policies, despite the challenges of rapid AI advancements.