Title: Generative artificial intelligence for academic research: evidence from guidance issued for researchers by higher education institutions in the United States
Abstract: The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance that is being provided, we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1 or "very high research activity." We found that guidance provided to researchers: (1) asks them to refer to external sources of information such as funding agencies and publishers to keep updated, and to use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research, such as predictive modeling, knowledge cutoff date, data provenance, and model limitations, and to educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI and how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as over-reliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.
Award ID(s):
2319137 1954556
PAR ID:
10589954
Publisher / Repository:
Springer
Journal Name:
AI and Ethics
ISSN:
2730-5953
Sponsoring Org:
National Science Foundation
More Like This
  1. The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs focused on regulating its use, particularly among students, before shifting towards advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI use. This paper presents an analysis of documents produced by 116 US universities classified as R1, or "very high research activity," institutions, providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found that a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half of the institutions provided sample syllabi (N = 65, 56%), and half (N = 58, 50%) provided sample GenAI curriculum and activities that would help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities, whereas references to code and STEM-related activities were infrequent, and often vague, even when mentioned (N = 58, 50%). Based on our findings, we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.
  2. This column explores the practical considerations and institutional strategies for adopting Generative Artificial Intelligence (GenAI) tools in academic libraries. As higher education institutions increasingly integrate AI into teaching, research, and student support, libraries play a pivotal role in guiding ethical, inclusive, and pedagogically sound implementation. Drawing on case studies from Clemson University, Wake Forest University, and Goldey-Beacom College, the column examines key areas of GenAI integration: contract negotiations, licensing models, trial and pilot program design, data privacy, accessibility, authentication, analytics, training, and ethical use. The article emphasizes the importance of aligning AI adoption with institutional missions, user agency, and evolving frameworks of AI literacy. Recommendations are provided for libraries of all sizes to navigate the dynamic GenAI landscape responsibly and equitably, ensuring that academic integrity and student-centered values remain at the core of AI integration. 
  3. Since the release of ChatGPT in 2022, Generative AI (GenAI) has been increasingly used in higher education computing classrooms across the United States. While scholars have looked at overall institutional guidance for the use of GenAI, and reports have documented the response from schools in the form of broad guidance to instructors, we do not know what policies and practices instructors are actually adopting and how these are being communicated to students through course syllabi. To study instructors' policy guidance, we collected 98 computing course syllabi from 54 R1 institutions in the U.S. and studied the GenAI policies they adopted and the surrounding discourse. Our analysis shows that 1) most instructions related to GenAI use appeared as part of the academic integrity policy for the course, and 2) most syllabi prohibited or restricted GenAI use, often warning students about the broader implications of using GenAI, e.g., a lack of veracity, privacy risks, and hindered learning. Beyond this, there was wide variation in how instructors approached GenAI, including a focus on how to cite GenAI use, conceptualizing GenAI as an assistant, often in an anthropomorphic manner, and mentioning specific GenAI tools for use. We discuss the implications of our findings and conclude with current best practices for instructors.
  4. Authorship practices in collaborative research teams are often complex and influence perceptions surrounding fairness, responsibility, and accountability in scholarly work. This study investigates prevailing authorship norms, the frequency of authorship disagreements, and differences in authorship perceptions across faculty and student roles, genders, and disciplinary contexts. Survey results reveal systematic differences, particularly between faculty and students, in how authorship distribution is perceived and how different types of researcher contributions are valued towards authorship credit. We further assess changes in authorship norms and ethical perceptions through a follow-up survey after a three-year effort to improve authorship ethics on our campus, which included training on ethical authorship practices and adoption of a formal institutional authorship policy. The results show notable shifts in researchers’ awareness, expectations, and attitudes toward authorship ethics and responsibilities. This suggests proactive education and policymaking can promote integrity in collaborative scholarly work and recalibrate local norms. Based on these insights, we offer recommendations for supporting transparent authorship communication and fostering more ethical research environments. 
  5. Background: Despite the rise of big-team science and multi-institutional, multidisciplinary research networks, little research has explored the unique challenges that large, distributed research networks face in ensuring the ethical and responsible conduct of research (RCR) at the network level. Methods: This qualitative case study explored the views of the scientists, engineers, clinicians, and trainees within a large Engineering Research Center (ERC) on ethical and RCR issues arising at the network level. Results: Semi-structured interviews of 26 ERC members were analyzed and revealed five major themes: (1) data sharing, (2) authorship or inventorship credit, (3) ethics and regulation, (4) collaboration, and (5) network leadership, norms, and policy. Interviews revealed cross-laboratory differences and disciplinary differences as sources of challenge. Conclusions: This study illuminates ethical challenges that a large, multi-institutional research network is likely to face. Research collaboration across disciplines, laboratories, and institutions invites conflict over norms and practices. Network leadership requires anticipating, monitoring, and addressing these ethical challenges in order to ensure the network's ethical and responsible conduct of research and to optimize research collaboration. Studying perceived ethical issues that arise at the meso-level of a research network is essential for understanding how to advance network ethics.