Responsible use of AI in peer review
What is this about?
Why is this important?
For whom is this important?
What are the best practices?
Ethical guidelines and recent literature recommend the following practices:
Establish clear policies. Journals should define and publish explicit rules on AI use in peer review – for example, specifying whether reviewers may use AI for language editing only, and requiring disclosure of any AI assistance. Review guidelines and submission systems can prompt reviewers to declare AI use.
Preserve confidentiality. Reviewers must protect manuscript data. Do not upload any confidential or unpublished content into public AI platforms, as this can breach privacy rules. Some publishers recommend using only approved (e.g. in-house or licensed) AI tools, since free online services may reuse data.
Ensure transparency. If AI is used to help write a review (even for minor tasks like grammar or summaries), reviewers should disclose it. For instance, end reviewers’ reports with a statement like “This review was prepared with the assistance of [AI tool and version]”. Editors may request details (such as prompts and AI outputs) to verify how AI was used.
Maintain accountability. The human reviewer is fully responsible for the review content. Any AI-generated text must be carefully checked, corrected and contextualized. Reviewers should ensure that factual statements, critiques and citations are accurate and unbiased. Reliance on AI should never replace critical judgment; humans must verify all recommendations and conclusions.
Limit AI’s role. Use AI only for non-substantive tasks: language polishing, spelling/grammar, basic consistency checks or literature suggestions. Do not use AI to evaluate scientific merit, interpret novel results, or make editorial decisions. In other words, AI can assist with “the heavy lifting” of routine checks, but the core intellectual judgments belong to human experts.
Guard against bias and errors. Be aware that AI models can perpetuate biases in their training data and can “hallucinate” (provide plausible but false information). Reviewers should critique AI outputs as they would any source, and not assume the AI is infallible.
Use detection and oversight. Editors should remain vigilant for signs of undisclosed AI use. Some recommend using AI-text detectors (analogous to plagiarism checkers) to flag suspicious reviews. Journals may require reviewers to sign confidentiality agreements explicitly banning unauthorized AI use.
Provide training and guidance. Publishers and editors should educate reviewers on AI’s capabilities, limitations, and the ethics of its use. Many researchers seek guidance on “safe and responsible AI” – for instance, surveys show ~70% of scholars want publishers to offer AI literacy training. Training should emphasize when (and when not) to use AI, how to assess its output, and how to manage privacy concerns (L-O-C-A-D framework: limitations, ownership, confidentiality, accuracy, disclosure).
Follow ethical codes. All AI use should align with broader publication ethics. For example, WAME recommends that reviewers “specify any use of chatbots” in their reviews, and COPE stresses that any misconduct (whether AI-related or not) be investigated under existing ethics rules. In practice, this means avoiding plagiarism, unauthorized content sharing, and misrepresentation of authorship in AI-assisted reviews.
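The disclosure and scope-limiting practices above lend themselves to automation in a journal’s submission system. The sketch below is a hypothetical Python check (all names and task categories are illustrative assumptions, not any publisher’s actual schema) that validates a reviewer’s AI-use declaration against the kinds of rules described here: tool and version must be disclosed, AI may only assist with non-substantive tasks, and confidential content must not be shared with public AI tools.

```python
from dataclasses import dataclass, field

# Illustrative task categories, loosely following the practices above.
ALLOWED_TASKS = {"language_polishing", "grammar_check",
                 "consistency_check", "literature_suggestion"}
PROHIBITED_TASKS = {"merit_evaluation", "novel_result_interpretation",
                    "editorial_decision"}

@dataclass
class AIUseDeclaration:
    """A reviewer's self-declaration of AI assistance (hypothetical schema)."""
    used_ai: bool
    tool: str = ""
    version: str = ""
    tasks: set = field(default_factory=set)
    confidential_data_shared: bool = False

def check_declaration(d: AIUseDeclaration) -> list[str]:
    """Return a list of policy issues; an empty list means the declaration passes."""
    issues: list[str] = []
    if not d.used_ai:
        return issues  # nothing to disclose
    if not d.tool or not d.version:
        issues.append("Disclose the AI tool and version used.")
    prohibited = d.tasks & PROHIBITED_TASKS
    if prohibited:
        issues.append("AI must not be used for: "
                      + ", ".join(sorted(prohibited)) + ".")
    unknown = d.tasks - ALLOWED_TASKS - PROHIBITED_TASKS
    if unknown:
        issues.append("Unrecognised AI tasks need editor approval: "
                      + ", ".join(sorted(unknown)) + ".")
    if d.confidential_data_shared:
        issues.append("Confidential manuscript content must not be entered "
                      "into public AI tools.")
    return issues
```

A declaration limited to disclosed, non-substantive use (e.g. a named tool used only for grammar checks) passes cleanly, while an undisclosed tool used to evaluate scientific merit is flagged on multiple counts. A real system would of course encode its own journal’s policy, not this illustrative one.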
1. Kemal Ö. Artificial Intelligence in Peer Review: Ethical Risks and Practical Limits. Turkish Archives of Otorhinolaryngology. 2025;63(3): 108–109. https://doi.org/10.4274/tao.2025.2025-8-12.
2. Peer Review in the Era of AI: Risks, Rewards, and Responsibilities. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2025/09/17/peer-review-in-the-era-of-ai-risks-rewards-and-responsibilities/ [Accessed 6th March 2026].
3. Banik GM, Baysinger G, Kamat PV, Pienta N, eds. The ACS Guide to Scholarly Communication. Washington, DC: American Chemical Society; 2020. https://doi.org/10.1021/acsguide [Accessed 6th March 2026].
4. Recommendations on the Use of AI in Scholarly Communication. EASE. https://ease.org.uk/communities/peer-review-committee/peer-review-toolkit/recommendations-on-the-use-of-ai-in-scholarly-communication/ [Accessed 6th March 2026].
5. NOT-OD-23-149: The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html [Accessed 6th March 2026].
6. Leung TI, de Azevedo Cardoso T, Mavragani A, Eysenbach G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. Journal of Medical Internet Research. 2023;25: e51584. https://doi.org/10.2196/51584.
Ivona Škrinjar and Jelena Međaković contributed to this theme on March 09, 2026.
