AI use in scientific writing
What is this about?
Why is this important?
AI use is becoming part of many research processes, including the writing and refinement of scientific publications. AI tools can be used for basic technical assistance, linguistic enhancement, substantial content involvement, or even extensive content creation (2). However, good scientific practice requires accountability, objectivity, reproducibility, transparency, integrity, and honesty (3).
Research should remain original even when AI tools are used. Scientific journals have their own policies on AI-generated content, and authors are advised to review these before using AI in a manuscript. Authors should carefully consider the extent of AI use and, where AI is used, ethical rigor warrants acknowledgment of this in the manuscript (4). Nondisclosure of AI use can be classified as misconduct in some circumstances (5).
Another issue is confidentiality, as some AI tools do not guarantee that submitted content will not be stored or reused (5).
For whom is this important?
What are the best practices?
The International Committee of Medical Journal Editors (ICMJE) has laid out recommendations for the use of AI by authors (5):
· “The journals should require authors to disclose the use of AI-assisted technologies;
· Level of AI use should be described;
· Specific AI tools should not be listed as authors;
· Authors should carefully review and edit the AI-generated content;
· Authors should be able to claim that there is no plagiarism in their work.”
Use of AI tools for grammar and linguistic refinement
A doctoral student uses AI tools for language polishing and grammar correction while keeping full authorship responsibility for the scientific content. Non-native English-speaking researchers utilize AI-assisted editing to enhance clarity and readability before journal submission. Researchers use AI tools to structure abstracts and improve coherence without creating original scientific data or conclusions.
Transparent disclosure of AI assistance
Authors include an AI disclosure statement in the acknowledgments section that explains the use of AI for linguistic editing. A research team specifies the level and purpose of AI use (such as language refinement or summarization) in accordance with journal policies. Manuscripts submitted to journals following ICMJE recommendations explicitly disclose AI-assisted technologies used during writing.
Ethical debates on authorship and responsibility
Discussions in academic publishing about whether AI-generated text challenges traditional ideas of authorship and intellectual responsibility. Editorial debates focus on the responsibility of human authors for the accuracy and originality of AI-assisted content. Cases where journals clarified that AI tools cannot be listed as authors due to a lack of accountability and scientific responsibility.
Institutional recommendations on responsible AI use in doctoral research
Universities issuing guidelines for ethical AI use in thesis writing and doctoral research. Doctoral programs promoting supervised and transparent AI assistance for academic writing. Research institutions developing policies that mandate critical human oversight of AI-supported academic content.
Journal policies requiring acknowledgment of AI-assisted technologies
Major publishers, such as Elsevier and Springer Nature, are implementing mandatory AI disclosure policies for manuscript submissions. Editorial guidelines specify that AI can assist in writing but cannot replace human intellectual contribution. Journal instructions require authors to verify the originality and absence of plagiarism in manuscripts that utilize AI assistance.
In Detail
Related guidelines
Cochrane – Setting the standards for responsible AI use in evidence synthesis (6)
Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence have published a joint position statement on the responsible use of artificial intelligence (AI) in evidence synthesis. Link: https://www.cochrane.org/about-us/news/setting-standards-responsible-ai-use-evidence-synthesis
COPE (Committee on Publication Ethics) – AI and Publication Ethics (7)
COPE stresses transparency, accountability, and ethical oversight in the use of AI tools in scholarly writing. It warns about the risks of undisclosed AI usage, such as potential misconduct, plagiarism, and lack of author responsibility. Link: https://publicationethics.org/topic-discussions/emerging-ai-dilemmas-scholarly-publishing
EQUATOR Network – Reporting and Research Transparency Standards (8) (9)
The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network promotes transparency, completeness, and integrity in research reporting, including responsible disclosure of any AI-assisted methodologies in manuscript preparation. This supports reproducibility and scientific credibility. The CLAIM guideline offers a standardized reporting framework for artificial intelligence studies in medical imaging, emphasizing transparency, reproducibility, and methodological clarity. It encourages responsible AI use by requiring detailed reporting of data, models, validation, and analytical processes in scientific manuscripts. Links: https://www.equator-network.org/ , https://pubs.rsna.org/page/ai/claim
ICMJE Recommendations on the Use of AI in Scientific Publishing (5) (10) (11)
The ICMJE states that AI-assisted technologies must be transparently disclosed and cannot be listed as authors, as they do not meet authorship criteria or accountability standards. Human authors remain fully responsible for the accuracy, integrity, and originality of AI-assisted content. Link: https://icmje.org/recommendations/browse/artificial-intelligence
UNESCO Recommendation on the Ethics of Artificial Intelligence (12)
UNESCO (United Nations Educational, Scientific and Cultural Organization) offers a global ethical framework for responsible AI use, highlighting transparency, human oversight, accountability, and the protection of research integrity. These principles are directly relevant to AI-assisted academic writing and research practices. Link: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
WAME Recommendations on AI and Chatbots in Scholarly Publications (13)
WAME (World Association of Medical Editors) clearly states that AI tools cannot qualify as authors because they cannot take responsibility for the content or ensure scientific integrity. The organization recommends explicit disclosure and careful human review of all AI-assisted text. Link: https://wame.org/page3.php?id=106
