Title: Generative artificial intelligence for academic research: evidence from guidance issued for researchers by higher education institutions in the United States
Abstract: The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance that is being provided, we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1 or "very high research activity." We found that guidance provided to researchers: (1) asks them to refer to external sources of information such as funding agencies and publishers to keep updated, and to use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research, such as predictive modeling, knowledge cutoff date, data provenance, and model limitations, and to educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI and how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as over-reliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.
Award ID(s): 2319137, 1954556
PAR ID: 10575643
Publisher / Repository: Springer Science + Business Media
Journal Name: AI and Ethics
Volume: 5
Issue: 4
ISSN: 2730-5953
Pages: 3917-3933
Sponsoring Org: National Science Foundation
More Like this
  1. The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs focused on regulating its use, particularly among students, before shifting towards advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI use. This paper presents an analysis of documents produced by 116 US universities classified as very high research activity or R1 institutions, providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found that a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half the institutions provided sample syllabi (N = 65, 56%) and half (N = 58, 50%) provided sample GenAI curricula and activities that would help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities, whereas references to code and STEM-related activities were infrequent, and often vague even when mentioned (N = 58, 50%). Based on our findings, we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.
  2. This column explores the practical considerations and institutional strategies for adopting Generative Artificial Intelligence (GenAI) tools in academic libraries. As higher education institutions increasingly integrate AI into teaching, research, and student support, libraries play a pivotal role in guiding ethical, inclusive, and pedagogically sound implementation. Drawing on case studies from Clemson University, Wake Forest University, and Goldey-Beacom College, the column examines key areas of GenAI integration: contract negotiations, licensing models, trial and pilot program design, data privacy, accessibility, authentication, analytics, training, and ethical use. The article emphasizes the importance of aligning AI adoption with institutional missions, user agency, and evolving frameworks of AI literacy. Recommendations are provided for libraries of all sizes to navigate the dynamic GenAI landscape responsibly and equitably, ensuring that academic integrity and student-centered values remain at the core of AI integration. 
  3. Since the release of ChatGPT in 2022, Generative AI (GenAI) is increasingly being used in higher education computing classrooms across the United States. While scholars have looked at overall institutional guidance for the use of GenAI, and reports have documented the response from schools in the form of broad guidance to instructors, we do not know what policies and practices instructors are actually adopting and how they are being communicated to students through course syllabi. To study instructors' policy guidance, we collected 98 computing course syllabi from 54 R1 institutions in the U.S. and studied the GenAI policies they adopted and the surrounding discourse. Our analysis shows that 1) most instructions related to GenAI use appeared as part of the academic integrity policy for the course, and 2) most syllabi prohibited or restricted GenAI use, often warning students about the broader implications of using GenAI, e.g., lack of veracity, privacy risks, and hindering learning. Beyond this, there was wide variation in how instructors approached GenAI, including a focus on how to cite GenAI use, conceptualizing GenAI as an assistant (often in an anthropomorphic manner), and mentioning specific GenAI tools for use. We discuss the implications of our findings and conclude with current best practices for instructors.
  4. Background: Despite the rise of big-team science and multi-institutional, multidisciplinary research networks, little research has explored the unique challenges that large, distributed research networks face in ensuring the ethical and responsible conduct of research (RCR) at the network level. Methods: This qualitative case study explored the views of the scientists, engineers, clinicians, and trainees within a large Engineering Research Center (ERC) on ethical and RCR issues arising at the network level. Results: Semi-structured interviews of 26 ERC members were analyzed and revealed five major themes: (1) data sharing, (2) authorship or inventorship credit, (3) ethics and regulation, (4) collaboration, and (5) network leadership, norms, and policy. Interviews revealed cross-laboratory differences and disciplinary differences as sources of challenge. Conclusions: This study illuminates ethical challenges that a large, multi-institutional research network is likely to face. Research collaboration across disciplines, laboratories, and institutions invites conflict over norms and practices. Network leadership requires anticipating, monitoring, and addressing these ethical challenges in order to ensure the network's ethical and responsible conduct of research and to optimize research collaboration. Studying perceived ethical issues that arise at the meso-level of a research network is essential for understanding how to advance network ethics.
  5. National legislation, such as the America COMPETES Act and the more recent CHIPS and Science Act, emphasizes that research integrity is essential to the competitiveness and innovation of the U.S. economy. Various stakeholders, particularly research universities, have been developing interventions and programs to foster an ethical culture in STEM (science, technology, engineering, and mathematics) research and practice among faculty and students. Dominant approaches to research ethics education have historically been shaped by biomedical ethics and the broader ethics of science, placing significant emphasis on the misconduct of individual researchers, including the falsification, fabrication, and plagiarism (FFP) of research results. Although these approaches have contributed to promoting ethical conduct among individual researchers, we argue that they still face several challenges. Most notably, due to their narrow scope, traditional research ethics education approaches fail to consider the role of disciplinary cultures in shaping research ethics issues. Additionally, they do not leverage the agency of STEM researchers to identify and address these issues or to generate scalable and sustainable impacts within institutions. To address these issues, this paper introduces the IREI (Innovative Research and Ethical Impact) project, which provides an institutional transformation approach to research ethics education for faculty in STEM fields. This approach aims to transform the institutional culture for ethical STEM research by helping faculty develop and enhance their capacity to identify and address ethical issues in their daily work, while generating scalable and sustainable impacts by leveraging their social networks. More specifically, this paper introduces the curriculum design for a professional development workshop for STEM faculty, which is a key component of the IREI project.
This faculty development workshop begins by broadening the understanding of ethics, shifting the focus from aligning the conduct of individual researchers with predetermined ethical principles to the impacts of their actions on the lives of others, as well as on the broader environment and society. This expanded definition is used for two main reasons. First, it emphasizes that it is the actions themselves that ultimately affect others, rather than merely a researcher’s intent or the ethical justification of their behavior. Second, it highlights that future potential impacts are as crucial in research as present, actual impacts—if not more so—since research is intrinsically novel and often future-oriented. Based on this definition, researchers are introduced to steps in the research process, from formulating questions to disseminating results. Participants are then provided with reflective tools and hands-on activities to enhance their ethical sensitivity and expertise throughout the entire research process. This enables them to identify (1) who is affected by their research at various stages and how they are impacted, and (2) strategies to maximize positive effects while minimizing any negative consequences. Finally, faculty are provided with mentoring opportunities to incorporate these reflective insights into broader impacts statements of their own research proposals and projects. Given that these statements directly pertain to their research, we hope that participants will view this workshop as both significant and relevant, as they have a natural interest in making their statements as clear and compelling as possible. 