How Should We Peer Review Software?
8 days ago
- #scientific-publishing
- #research-software
- #peer-review
- Peer review is a prerequisite for publishing in scientific journals and conferences, and the prestige of a publication varies with the venue.
- In machine learning, top conferences like AAAI and NeurIPS are more prestigious than many journals.
- Author order matters, and conventions vary by field: earlier positions typically signal larger contributions, the last slot is often reserved for the PI, and some fields such as cybersecurity list authors alphabetically.
- Peer review involves scientists vetting each other's work, with possible outcomes: reject, accept with revisions, or accept.
- Some academics criticize peer review, citing rejected papers that later became seminal works.
- Reviewing specialized scientific work requires expertise in the specific subfield, making peer review inherently challenging.
- Proposals to improve peer review include naming reviewers, on the theory that accountability encourages more careful feedback.
- Reviewing research software is difficult because the code quality is often poor; it is typically written by scientists who are not trained software engineers.
- Requiring that software be submitted alongside a paper is impractical: reviewers rarely have the time or expertise to inspect a complex codebase, though lightweight smoke tests can catch gross errors without a full inspection (first sketch after this list).
- Simulation code is especially hard to verify without deep inspection, which raises concerns about undetected errors or outright falsification; checking physical invariants is one cheap partial defense (second sketch below).
- Expecting scientists to also write high-quality code is unrealistic, given how much training scientific research already demands.
- Hiring software engineers for research labs could help, but funding constraints make this difficult.
- How to improve software peer review remains an open question; any workable answer will likely need new incentives or alternative review models.
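
As a rough illustration of the smoke-test idea: the snippet below is a hypothetical check a reviewer could run without reading the full codebase. `estimate_slope` is a made-up stand-in for a submission's analysis entry point; the test feeds it synthetic data with a known ground truth and checks that the answer is recovered.

```python
# Hypothetical smoke test; estimate_slope stands in for a submission's analysis code.
import random

def estimate_slope(xs, ys):
    """Ordinary least squares slope; placeholder for the code under review."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def test_recovers_known_slope():
    # Synthetic data with a known answer: y = 2x + small Gaussian noise.
    rng = random.Random(0)  # fixed seed so the check is reproducible
    xs = [i / 10 for i in range(100)]
    ys = [2.0 * x + rng.gauss(0, 0.05) for x in xs]
    slope = estimate_slope(xs, ys)
    assert abs(slope - 2.0) < 0.1, f"expected slope ~2, got {slope:.3f}"

if __name__ == "__main__":
    test_recovers_known_slope()
    print("smoke test passed")
```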
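And a minimal sketch of an invariant check for simulation code, assuming a toy undamped harmonic oscillator: the `step` function stands in for a submission's integrator, and the test asserts that total energy stays within a narrow band over many steps. A sign error or unit mistake in the force term fails this check almost immediately, with no need to read the rest of the codebase.

```python
# Hypothetical invariant check: energy of an undamped harmonic oscillator
# must stay (nearly) constant, whatever the implementation details.

def step(x, v, dt, k=1.0):
    """Semi-implicit (symplectic) Euler update; stand-in for the integrator under review."""
    v = v - k * x * dt
    x = x + v * dt
    return x, v

def energy(x, v, k=1.0):
    """Total mechanical energy: kinetic plus spring potential."""
    return 0.5 * v * v + 0.5 * k * x * x

def test_energy_is_conserved():
    x, v, dt = 1.0, 0.0, 0.01
    e0 = energy(x, v)
    for _ in range(100_000):
        x, v = step(x, v, dt)
        # A symplectic integrator keeps energy in a narrow band around e0;
        # a broken force term produces unbounded drift within a few steps.
        assert abs(energy(x, v) - e0) < 0.02 * e0

if __name__ == "__main__":
    test_energy_is_conserved()
    print("invariant check passed")
```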