AI Generated Content

What is this about?

Artificial Intelligence is a broad term, and not all of its applications are relevant to research and research integrity.

AI Generated Content refers here to texts, academic or otherwise, that are produced by or in collaboration with AI tools such as ChatGPT, which use Large Language Models (LLMs) to generate text. These tools can be used in research, and they raise difficult questions about authorship.

Why is this important?

AI generated content raises important questions for conceptions of integrity in research and authorship, because it creates a grey area when deciding attribution. AI tools like ChatGPT are already capable of producing essays that are indistinguishable from those written by students, and these developments have led to calls for a reevaluation of the role of writing in assessment.[1] These tools are also being used by researchers in the production of research, and have been credited with formal authorship in several articles.[2] Policy-makers have already sought expert advice on how science should accommodate these changes.[3]

This is a developing issue, and it raises questions about attribution and authorship for which there are not yet clear answers. Nature recently issued a policy on the ethical use of LLM tools like ChatGPT, stating that these tools will not be credited with authorship on future research papers because they cannot take accountability for the research they produce.[2] This is just one policy, however, and there are not yet clear guidelines on best practices for research produced using these tools. The Chief Scientific Advisors to the European Commission have highlighted the ability of AI tools to generate and spread fraudulent content at scale as a significant risk to scientific communication, and have emphasised the need for greater AI literacy and competency among researchers to meet these challenges.[3]

The use of these tools also raises further questions about the notion of plagiarism. For example, an LLM might be used to produce an outline for the structure rather than the content of an article. If a researcher then makes use of this structure without disclosing that an LLM was used, would that constitute plagiarism? And if the answer is yes, how should we accommodate the fact that many researchers effectively do the same thing, using existing research produced by others as structural inspiration for their own work, without attribution? These are questions for which we do not yet have clear, widely agreed answers.

  1. Lancaster, Thomas. ‘The Past and Future of Contract Cheating’. In Cheating Academic Integrity: Lessons from 30 Years of Research, edited by David A. Rettinger and Tricia Bertram Gallant, First Edition, 45–64. Hoboken, NJ: Jossey-Bass, 2022.
  2. ‘Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use’. Nature 613, no. 7945 (26 January 2023): 612. https://doi.org/10.1038/d41586-023-00191-1.
  3. European Commission’s Group of Chief Scientific Advisors. Successful and Timely Uptake of Artificial Intelligence in Science in the EU: Scientific Opinion. Brussels: European Commission, 2024.
