Responsible use of large language models in manuscript authorship, peer review, and editorial processes: a Delphi consensus among editors-in-chief of anaesthesia and pain medicine journals (RULE-AP)
- #academic publishing
- #research integrity
- #large language models
- Delphi consensus on responsible use of large language models (LLMs) in academic publishing by anaesthesia and pain medicine journal editors.
- LLMs can assist with language editing, summarization, translation, and information organization, but all output requires human verification.
- Ethical and transparent use of LLMs is emphasized, including full disclosure of their use in academic tasks.
- LLMs should not generate original content, data, references, conclusions, or entire manuscripts.
- Key concerns include 'hallucinations', confidentiality breaches, and erosion of critical skills through LLM misuse.
- Guidelines stress human accountability and verification to maintain research integrity in scholarly workflows.