Recommendations on the Use of Generative AI in Research and Scholarly Activity

Date Issued: December 18, 2023

  • Generative AI is a powerful new tool that is rapidly evolving
  • When incorporating AI into thesis, dissertation, capstone, and comprehensive exams, always align with your committee’s guidelines on AI usage
  • Be cautious when using AI in research due to concerns about potential inaccuracies
  • Always protect confidential data when using AI tools, and make sure your data is not being used to train or improve the model powering the tool you are using
  • AI tools are excellent for text editing where privacy is not a concern
  • Remember, you are responsible for any inaccuracies produced by any AI tool you use
  • The use of AI for peer-reviewed materials is generally not advisable

The public release and widespread availability of generative artificial intelligence (generative AI) tools represents a significant moment for the research and scholarly enterprise. Since late 2022, the capabilities of generative AI have proliferated and advanced at a rapid pace. Machine learning and artificial intelligence are established research methods, but generative AI is novel in that it produces content—text, code, image, sound, video—based on user input and dialogue, and this raises additional questions. There is currently no University Policy governing the use of generative AI in research and scholarship; this guidance serves to assist scholars on our campus with this rapidly evolving technology.

Emerging research has explored the potential for innovation and efficiency that generative AI presents. ISU has foundational research strengths across many disciplines that use and benefit from AI, machine learning, and deep learning, and there are tremendous opportunities for good that can be harnessed from the field for innovation. At present, however, most questions surround the technology's concerns rather than its promise. Documented concerns with significant implications for research and scholarly activity include the accuracy and bias of generated information, as well as issues around authorship, transparency, intellectual property rights such as copyright, and data privacy. A growing number of lawsuits are currently associated with generative AI data-gathering practices, training, services, and other related issues.

ISU promotes and expects a culture of research integrity, including responsible and ethical conduct of research. Research integrity depends on the reliability and trustworthiness of research. Responsible conduct of research and scholarly activity (RCR) is founded on core values such as honesty, accuracy, efficiency, and impartiality. Generative AI tools have the potential to advance and enhance research and scholarly activity when used responsibly. These recommendations are offered for all faculty, staff, students, and others (visiting scholars, postdoctoral fellows, etc.) who participate in research as well as scholarly or creative activity. Guidance for classroom instruction is available from the ISU Center for Integrated Professional Development.

Regarding the use of generative AI tools, groups of individuals across campus have conducted an initial review of emerging U.S. federal agency rules,1 examined guidelines from UNESCO,2 several professional organizations, and journals, and consulted with colleagues on other campuses, among other sources. One especially concise set of guidance was issued by the University of Kentucky and the UK ADVANCE AI Initiative,3 which we adapted, with permission, as a starting point for the following guidance in response to frequently asked questions from our ISU community of scholars.


  • But wait, haven’t we been using AI for research already?
Yes, you likely have been using AI in your research or creative scholarship! Predictive artificial intelligence has been used in research and creative scholarship for many years through tools such as spell check, grammar check, and image analysis. The current discussion of AI is largely concerned with generative AI, that is, the creation of new content that is similar to, but not exactly the same as, existing content, rather than the analysis or processing of already existing content.
  • Should generative AI be used for research?

Potentially. Generative AI tools have the ability to enhance research outputs and contribute meaningfully to knowledge creation. There are, however, considerations when using generative AI tools for research and creative scholarship, including the potential for AI to generate data that is inaccurate, inappropriate, not novel, or biased. Specifically, text generation tools such as ChatGPT are vulnerable to confabulations called “hallucinations”: generating new “facts” that are not true in the real world. So, although such tools may be used for organizing text, users should verify all statements made by generative AI for correctness before retaining them in the final output. Since these tools are trained on a large corpus of knowledge, encompassing much of the publicly available internet, generative AI tools will likely paraphrase from various sources, which may raise issues with plagiarism and intellectual property. These tools have also been found to reference incorrect sources or provide false references in some cases.

When using a generative AI tool, it is best practice to verify or validate all generated content against reliable sources.

The appropriate use of generative AI for research and creative scholarship will differ by discipline. Check with your disciplinary authorities, organizations, funding agencies, and publications for a more context-specific understanding of how generative AI may be used in your area.
  • Can generative AI be used for theses, dissertations, capstones, or comprehensive exams?

Potentially. The appropriate use of generative AI in theses, dissertations, capstones, and comprehensive exams (and the research underlying them) will differ by discipline. ISU’s Graduate Council has issued a statement on the matter that encourages a discussion between the emerging scholar/student and their advisor to confirm that usage is aligned with the guidelines and expectations of the advisor and/or committee. The latest guidance can be found on the Graduate School website. As of this writing, the Graduate School statement reads:

The use of generative artificial intelligence (AI) can support innovative and creative scholarship when used within appropriate guidelines, which may vary by discipline. It is vital that students uphold the core principles of academic and research integrity. As such, transparency around the usage of generative AI is required between the student, their committee and school/department, as well as the student and the audience of their completed thesis/dissertation, capstone, and/or comprehensive exams. Students need to take responsibility for their work, including using their own words and proper citations.

All use of generative AI in the thesis/dissertation/capstone/comprehensive exam process must be disclosed to the student’s committee by the student. Generative AI use should be verified to be within the standards of the discipline and school/department by the student and the student’s committee. Unauthorized use of generative AI may be considered a violation of Policy 1.8 Integrity in Research, Scholarly, and Creative Activities. Usage of generative AI must be appropriately cited following the guidelines of the style manual used by the discipline.

While these guidelines serve as the Graduate School’s position on the use of generative AI in the thesis/dissertation/capstone/comprehensive exam process, individual schools/departments may have additional policies, guidance, or restrictions. It is the student’s responsibility to understand and follow these standards and contact their school/department or the Graduate School with any questions or concerns. As generative AI is relatively new and evolving, additional or more detailed guidance may be issued in the future.
  • Can generative AI be used to edit text for research or scholarship?
Potentially. Generative AI can be a powerful editing aid with many applications (e.g., a tool such as ChatGPT can easily shorten an abstract to a required word length). However, the use of these tools must be permissible in your field of scholarship, and appropriate attribution must be made for any and all AI-generated content. Additionally, be aware that data submitted to an AI tool may be used to train future versions of the model, resulting in a loss of privacy and confidentiality for your data. This topic is addressed in more detail below.
  • Can generative AI be listed as an author of research work?
No. Generative AI cannot be designated as an author because it cannot be held accountable for issues such as research misconduct, plagiarism, or intellectual property misuse. Most journals’ criteria for authorship would not qualify generative AI as an author, and the Committee on Publication Ethics (COPE) has asserted that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.” The International Committee of Medical Journal Editors (ICMJE) also lists criteria for authorship. Please consult with your department/school and college for any local authorship criteria.
  • Who is responsible for content generated by AI?
Researchers are responsible for the content and accuracy of all aspects of their work, including source material. Human authors must take responsibility for the content and the accuracy, factualness, and veracity of the data and analysis presented in the research. Generative AI enables potentially transformative research but “it does not excuse our judgment or accountability.”4  Generative AI cannot be held accountable for issues such as research misconduct, plagiarism, or intellectual property misuse and cannot hold a copyright in the work.
  • How should generative AI use be described in reported research results?
Journals have different rules for reporting the use of generative AI in manuscripts. Generally, journals, including those of the publishing houses Taylor and Francis and Springer, have stated that input from AI must be detailed in the Materials and Methods section, Acknowledgements section, or a similar section for transparency. Any publication of reported results should disclose that a generative AI tool was used, which tool, for what parts of the publication, and how it was used. Other best practices include indicating the specific language model in addition to the generative AI tool, as well as the date(s) of use, e.g., “ChatGPT Plus, GPT-4, 19-20 September 2023.”
  • Is it permissible to use generative AI for grant writing?
Potentially. Funding agencies and other sponsors expect original ideas and concepts from grant applicants. Concerns with using generative AI for grant writing include the potential for AI to generate data that is inaccurate, outdated, or biased. AI tools paraphrase from various sources, which could raise questions regarding plagiarism, research misconduct, or other intellectual property issues. AI tools have also been found to reference incorrect sources or create false references. When using a generative AI tool, it is best practice to verify or validate the generated content against reliable sources. As with all questions of grant-writing protocol, check each individual agency’s guidelines for regulations regarding the use of generative AI in writing proposals for its programs.
  • Can generative AI be used in peer review of manuscripts or grant applications?

One federal agency, the National Institutes of Health (NIH), has stated that AI cannot be used in peer review. “NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. NIH is revising its Security, Confidentiality, and Non-disclosure Agreements for Peer Reviewers to clarify this prohibition. Reviewers should be aware that uploading or sharing content or original concepts from an NIH grant application, contract proposal, or critique to online generative AI tools violates the NIH peer review confidentiality and integrity requirements.”5 The National Science Foundation (NSF) has issued very similar language.6

For information on other agencies or sponsors’ policies on generative AI use in peer review of grant applications please contact the agencies or sponsors directly.

For manuscript review, the same confidentiality and privacy concerns exist as for grant application review.
  • What privacy concerns arise in using generative AI in research?

Depending on the tool, inputting any research data into an AI tool could potentially allow that tool to use the data for future training purposes (i.e., your data could inform queries made by other users).

Generative AI tools may not have appropriate privacy policies to protect data such as research data, in particular protected health information (PHI), personally identifiable information, other personal information protected by law (e.g., under FERPA), and any proprietary information. These types of data should not be entered into a generative AI tool without consulting appropriate University authorities in Purchasing.
  • What Patent and Copyright considerations arise when using a generative AI tool?

Referring to numerous statutes applicable to patents, as well as case law and regulations, the United States Patent and Trademark Office (USPTO) has determined that only natural persons can be named as inventors, which precludes generative AI from being named as an inventor. The sole or significant use of generative AI to produce or contribute to an invention could potentially preclude the ability to gain patent protection or to be named as an inventor, as inventorship requires the material intellectual contribution of a human inventor.

The US Copyright Office (USCO) has published a statement in the Federal Register regarding generative AI. In policy guidance issued in March 2023, the USCO wrote that it would extend copyright protection only to material that is the product of human creativity, and that a non-human (e.g., a generative AI tool) cannot hold a copyright.7 For works that combine human-authored elements and AI-generated content, the Office wrote that “Based on the Office’s understanding of generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material.” It goes on to state that each case depends on the extent of creative control the human had over the work, including the traditional elements of authorship.8
  • Is there a reliable generative AI detection tool?
No, there is currently no reliable generative AI detection tool. In addition to strong concerns about accuracy, keep in mind that entering content into a detection tool or other LLM without consent may have intellectual property ramifications. For guidance on mitigating the use of AI in a classroom setting, please consult the Center for Integrated Professional Development’s guide on generative AI in the classroom.
  • How can generative AI augment research and scholarly activity?

This link includes some examples of areas where generative AI could augment research and scholarly activity. All examples come with the caveat that some use cases could present intellectual property challenges, such as copyright infringement. For example, there are currently legal cases pending that center on how a generative AI tool uses copyrighted text or images in generating outputs. If an output closely resembles a particular owner’s work, that use could be challenged by the original owner of the work.

Adapted with permission from the University of Kentucky ADVANCE Initiative, 2023.