Recommendations on the Use of Generative AI in Research and Scholarly Activity
Date Issued: December 18, 2023
- Generative AI is a powerful new tool that is rapidly evolving
- When incorporating AI into theses, dissertations, capstones, and comprehensive exams, always align with your committee’s guidelines on AI usage
- Be cautious when using AI in research due to concerns about potential inaccuracies
- Always protect confidential data when using AI tools, and make sure your data is not being used to train or improve the model powering the tool you are using
- AI tools are excellent for text editing where privacy is not a concern
- Remember, you are responsible for any inaccuracies produced by any AI tool you use
- The use of AI for peer-reviewed materials is generally not advisable
The public release and widespread availability of generative artificial intelligence (generative AI) tools represents a significant moment for the research and scholarly enterprise. Since late 2022, the capabilities of generative AI have proliferated and advanced at a rapid pace. Machine learning and artificial intelligence are established methods in research, but generative AI is novel in that it produces content—text, code, images, sound, video—based on user input and dialogue, which raises additional questions. There is currently no University Policy governing the use of generative AI in research and scholarship, and this guidance serves to assist scholars on our campus with this rapidly evolving technology.
Emerging research has explored the potential for innovation and efficiency that generative AI presents. ISU has foundational research strengths across many disciplines that use and benefit from AI, machine learning, and deep learning. There are tremendous opportunities for good that can come from the field and be harnessed for innovation. However, most questions at present surround the concerns with the technology, not its promise. Documented concerns bear significant implications for research and scholarly activity, including the accuracy and bias of generated information, data privacy, and issues around authorship, transparency, and intellectual property rights such as copyright. A growing number of lawsuits are currently associated with generative AI data-gathering practices, training, services, and other related issues.
ISU promotes and expects a culture of research integrity including responsible and ethical conduct of research. Research integrity depends on the reliability and trustworthiness of research. Responsible conduct of research and scholarly activity (RCR) is founded on core values such as honesty, accuracy, efficiency, and impartiality. The availability of generative AI tools has the potential of advancing and enhancing research and scholarly activity when used responsibly. These recommendations are offered for all faculty, staff, students and others (visiting scholars, postdoctoral fellows, etc.) who participate in research as well as scholarly or creative activity. Guidance for classroom instruction is available from the ISU Center for Integrated Professional Development: https://prodev.illinoisstate.edu/pedagogy/ai/.
Regarding the use of generative AI tools, groups of individuals across campus have conducted an initial review of emerging U.S. federal agency rules,1 examined guidelines from UNESCO2 and from several professional organizations and journals, and consulted with colleagues on other campuses, among other sources. One especially concise set of guidance was issued by the University of Kentucky’s UK ADVANCE AI Initiative,3 which we adapted, with permission, as a starting point for the following guidance in response to frequently asked questions from our ISU community of scholars.
- But wait, haven’t we been using AI for research already?
- Should generative AI be used for research?
Potentially. Generative AI tools have the ability to enhance research outputs and contribute meaningfully to knowledge creation. There are, however, considerations to using generative AI tools for research and creative scholarship, including the potential for AI to generate data that is inaccurate, inappropriate, not novel, or biased. Specifically, text-generation tools such as ChatGPT are vulnerable to confabulations, commonly called “hallucinations”: generating “facts” that are not true in the real world. So, although such tools may be used for organizing text, all statements made by generative AI should be verified for correctness by the user before being retained in the final output. Since these tools are trained on a large corpus of knowledge encompassing much of the publicly available internet, generative AI tools will likely paraphrase from various sources, which may raise issues of plagiarism and intellectual property. These tools have also been found to reference incorrect sources or provide false references in some cases.
When using a generative AI tool, it is best practice to verify or validate all generated content against reliable, independent sources.
The appropriate use of generative AI for research and creative scholarship will differ by discipline. Check with your disciplinary authorities, organizations, funding agencies, and publications for a more context-specific understanding of how generative AI may be used in your area.
- Can generative AI be used for theses, dissertations, capstones, or comprehensive exams?
Potentially. The appropriate use of generative AI in theses, dissertations, capstones, and comprehensive exams (and the research underlying them) will differ by discipline. ISU’s Graduate Council has issued a statement on the matter that encourages a discussion between the emerging scholar/student and their advisor to confirm that usage is aligned with the guidelines and expectations of the advisor and/or committee. The latest guidance can be found on the Graduate School website: https://grad.illinoisstate.edu/students/thesis-dissertation/ethical-ai/. As of this writing, the Graduate School statement reads:
The use of generative artificial intelligence (AI) can support innovative and creative scholarship when used within appropriate guidelines, which may vary by discipline. It is vital that students uphold the core principles of academic and research integrity. As such, transparency around the usage of generative AI is required between the student, their committee and school/department, as well as the student and the audience of their completed thesis/dissertation, capstone, and/or comprehensive exams. Students need to take responsibility for their work, including using their own words and proper citations.
All use of generative AI in the thesis/dissertation/capstone/comprehensive exam process must be disclosed to the student’s committee by the student. Generative AI use should be verified to be within the standards of the discipline and school/department by the student and the student’s committee. Unauthorized use of generative AI may be considered a violation of Policy 1.8 Integrity in Research, Scholarly, and Creative Activities. Usage of generative AI must be appropriately cited following the guidelines of the style manual used by the discipline.
While these guidelines serve as the Graduate School’s position on the use of generative AI in the thesis/dissertation/capstone/comprehensive exam process, individual schools/departments may have additional policies, guidance, or restrictions. It is the student’s responsibility to understand and follow these standards and contact their school/department or the Graduate School with any questions or concerns. As generative AI is relatively new and evolving, additional or more detailed guidance may be issued in the future.
- Can generative AI be used to edit text for research or scholarship?
- Can generative AI be listed as an author of research work?
- Who is responsible for content generated by AI?
- How should generative AI use be described in reported research results?
- Is it permissible to use generative AI for grant writing?
- Can generative AI be used in peer review of manuscripts or grant applications?
One federal agency, the National Institutes of Health (NIH), has stated that AI cannot be used in peer review. “NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. NIH is revising its Security, Confidentiality, and Non-disclosure Agreements for Peer Reviewers to clarify this prohibition. Reviewers should be aware that uploading or sharing content or original concepts from an NIH grant application, contract proposal, or critique to online generative AI tools violates the NIH peer review confidentiality and integrity requirements.”5 The National Science Foundation (NSF) has issued very similar language.6
For information on other agencies or sponsors’ policies on generative AI use in peer review of grant applications please contact the agencies or sponsors directly.
For manuscript review, the same confidentiality and privacy concerns apply as for grant application review.
- What privacy concerns arise in using generative AI in research?
Depending on the tool, inputting any research data into the AI tool could potentially allow that tool to use the data for future training purposes (i.e., your data could inform responses to queries made by other users).
Generative AI tools may not have appropriate privacy policies to protect data such as research data—in particular, protected health information (PHI), personally identifiable information, other personal information protected by law (e.g., under FERPA), and any proprietary information. These types of data should not be entered into a generative AI tool without consulting the appropriate University authorities in Purchasing.
- What Patent and Copyright considerations arise when using a generative AI tool?
Referring to numerous statutes applicable to patents, as well as case law and regulations, the United States Patent and Trademark Office (USPTO) has determined that only natural persons can be named as inventors, which precludes generative AI from being named as an inventor. The sole or significant use of generative AI to produce or contribute to an invention could potentially preclude the ability to gain patent protection or be named as an inventor, as inventorship requires a material intellectual contribution from a human inventor.
The US Copyright Office (USCO) has published a statement in the Federal Register regarding generative AI. In policy guidance issued in March 2023, the USCO wrote that it would extend copyright protection only to material that is the product of human creativity, and that a non-human (e.g., a generative AI tool) cannot hold a copyright.7 For works that include both human-authored elements and AI-generated content, the Office wrote: “Based on the Office’s understanding of generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material.” It goes on to state that each case depends on the extent of creative control the human had over the work, including the traditional elements of authorship.8
- Is there a reliable generative AI detection tool?
- How can generative AI augment research and scholarly activity?
This link includes some examples of areas where generative AI could augment research and scholarly activity. All examples come with the caveat that some use cases could present intellectual property challenges, such as copyright infringement. There are currently legal cases pending that center on how a generative AI tool uses copyrighted text or images in generating outputs. If, for example, an output closely resembles a particular owner’s work, that use could be challenged by the original owner of the work.
Adapted from University of Kentucky, Advance Initiative https://advance.uky.edu/ with permission, 2023.
Notes