This content will become publicly available on October 3, 2026

Title: AI Policies in U.S. Universities: A Critical Analysis of Policy Gaps and Library Involvement
As generative AI technologies proliferate across higher education, many U.S. universities are still developing institutional policies to address their ethical, pedagogical, and accessibility implications. This posIT column critically examines AI policies and resources at 50 four-year universities (one from each U.S. state) to assess alignment with the Association of Research Libraries' (ARL) Guiding Principles for Artificial Intelligence. Through content analysis of LibGuides, AI taskforce membership, campus events, and public-facing policies, the study reveals widespread adoption of AI resources but a significant lack of clarity, consistency, and librarian involvement in policy development. While most institutions meet baseline criteria related to privacy, plagiarism, and algorithmic transparency, fewer address AI's potential harms to marginalized communities or its impact on accessibility for students with disabilities. Notably, fewer than half of the AI taskforces surveyed included library staff, despite librarians' expertise in digital literacy and ethical information use. This column urges academic librarians to actively seek leadership roles in institutional AI governance to help shape inclusive, responsible, and human-centered AI policy frameworks.
Award ID(s): 2438144
PAR ID: 10648738
Author(s) / Creator(s): ;
Publisher / Repository: Routledge, Taylor & Francis Group
Date Published:
Journal Name: Journal of Library Administration
Volume: 65
Issue: 6-7
ISSN: 0193-0826
Page Range / eLocation ID: 808 to 824
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This column explores the practical considerations and institutional strategies for adopting Generative Artificial Intelligence (GenAI) tools in academic libraries. As higher education institutions increasingly integrate AI into teaching, research, and student support, libraries play a pivotal role in guiding ethical, inclusive, and pedagogically sound implementation. Drawing on case studies from Clemson University, Wake Forest University, and Goldey-Beacom College, the column examines key areas of GenAI integration: contract negotiations, licensing models, trial and pilot program design, data privacy, accessibility, authentication, analytics, training, and ethical use. The article emphasizes the importance of aligning AI adoption with institutional missions, user agency, and evolving frameworks of AI literacy. Recommendations are provided for libraries of all sizes to navigate the dynamic GenAI landscape responsibly and equitably, ensuring that academic integrity and student-centered values remain at the core of AI integration. 
  2. As artificial intelligence (AI) tools become integral to education, fostering ethical engagement and academic integrity is paramount. This column explores the integration of AI tools, including Grammarly, Scholarcy, and ImageFX, into a first-year writing program to promote ethical AI literacy. Grounded in Lo’s CLEAR framework, structured exercises emphasized critical thinking, intellectual agency, and human-AI collaboration. The study highlights the potential for AI to enhance learning while addressing challenges such as equity, accessibility, and ethical considerations. Findings demonstrate how intentional integration of AI tools fosters creativity, self-reflection, and responsible decision-making, providing a scalable model for incorporating AI into curricula while preserving student agency and integrity. 
  3. Abstract: The explosive growth of artificial intelligence (AI) over the past few years has focused attention on how diverse stakeholders regulate these technologies to ensure their safe and ethical use. Increasingly, governmental bodies, corporations, and nonprofit organizations are developing strategies and policies for AI governance. While existing literature on ethical AI has focused on the various principles and guidelines that have emerged as a result of these efforts, just how these principles are operationalized and translated into broader policy is still the subject of current research. Specifically, there is a gap in our understanding of how policy practitioners actively engage with, contextualize, or reflect on existing AI ethics policies in their daily professional activities. More broadly, the perspectives of these policy experts towards AI regulation are not fully understood. To this end, this paper explores the perceptions of scientists and engineers in policy-related roles in the US public and nonprofit sectors towards AI ethics policy, both in the US and abroad. We interviewed 15 policy experts and found that although these experts were generally familiar with AI governance efforts within their domains, overall knowledge of guiding frameworks and critical regulatory policies was still limited. There was also a general perception among the experts we interviewed that the US lagged behind other comparable countries in regulating AI, a finding that supports the conclusions of existing literature. Lastly, we conducted a preliminary comparison between the AI ethics policies identified by the policy experts in our study and those emphasized in existing literature, identifying both commonalities and areas of divergence.
  4. Maslej, Nestor; Fattorini, Loredana; Perrault, Raymond; Gil, Yolanda; Parli, Vanessa; Kariuki, Njenga; Capstick, Emily; Reuel, Anka; Brynjolfsson, Erik; Etchemendy, John (Ed.)
    AI has entered the public consciousness through generative AI’s impact on work—enhancing efficiency and automating tasks—but it has also driven innovation in education and personalized learning. Still, while AI promises benefits, it also poses risks—from hallucinating false outputs to reinforcing biases and diminishing critical thinking. With the AI education market expected to grow substantially, ethical concerns about the technology’s misuse—AI tools have already falsely accused marginalized students of cheating—are mounting, highlighting the need for responsible creation and deployment. Addressing these challenges requires both technical literacy and critical engagement with AI’s societal impact. Expanding AI expertise must begin in K–12 and higher education in order to ensure that students are prepared to be responsible users and developers. AI education cannot exist in isolation—it must align with broader computer science (CS) education efforts. This chapter examines the global state of AI and CS education, access disparities, and policies shaping AI’s role in learning. This chapter was a collaboration prepared by the Kapor Foundation, CSTA, PIT-UN and the AI Index. The Kapor Foundation works at the intersection of racial equity and technology to build equitable and inclusive computing education pathways, advance tech policies that mitigate harms and promote equitable opportunity, and deploy capital to support responsible, ethical, and equitable tech solutions. The CSTA is a global membership organization that unites, supports, and empowers educators to enhance the quality, accessibility, and inclusivity of computer science education. The Public Interest Technology University Network (PIT-UN) fosters collaboration between universities and colleges to build the PIT field and nurture a new generation of civic-minded technologists. 
  5. Miller, Eva (Ed.)
The recent outbreak of COVID-19, considered a lethal pandemic by the World Health Organization, has caused profound changes in the educational system within the U.S. and across the world. Overnight, universities and their educators had to switch to a largely online teaching format, which challenged their capacity to deliver learning content effectively to STEM students. Students were forced to adapt to a new learning environment in the midst of challenges in their own lives due to the COVID-19 effects on society and professional expectations. The main purpose of this paper is to investigate faculty perceptions of STEM student experiences during COVID-19. Through a qualitative methodology consisting of one-hour Zoom interviews conducted with 32 STEM faculty members from six U.S. universities nationwide, faculty narratives regarding student and faculty experiences during COVID-19 were obtained. The qualitative research approach involved identifying common themes across faculty experiences and views in these narratives. Categories of emerging themes associated with faculty perceptions of student and faculty experiences included: student struggles and challenges, student cheating and the online environment, faculty and student adaptability, faculty and student needs and support, and university resources and support. Best practices to facilitate online teaching and learning employed by STEM faculty were also discussed. Key findings revealed that students and faculty had both positive and negative experiences during COVID-19. Additionally, there was a greater need for consistent policies to improve the online student learning experience. Recommendations to improve STEM student experiences include increased institutional resources and collaboration between faculty and university administrators to provide a coherent online learning environment. Preliminary findings also provide insights to enhance institutional adaptability and resilience for improving STEM student experiences during future pandemics. Future research should continue to explore institutional adaptation strategies that enhance STEM student learning during pandemics.