As generative AI technologies proliferate across higher education, many U.S. universities are still developing institutional policies to address their ethical, pedagogical, and accessibility implications. This posIT column critically examines AI policies and resources at 50 four-year universities—one from each U.S. state—to assess alignment with the Association of Research Libraries' (ARL) Guiding Principles for Artificial Intelligence. Through content analysis of LibGuides, AI taskforce membership, campus events, and public-facing policies, the study reveals widespread adoption of AI resources but a significant lack of clarity, consistency, and librarian involvement in policy development. While most institutions meet baseline criteria related to privacy, plagiarism, and algorithmic transparency, fewer address AI's potential harms to marginalized communities or its impact on accessibility for students with disabilities. Notably, fewer than half of the AI taskforces surveyed included library staff, despite librarians' expertise in digital literacy and ethical information use. This column urges academic librarians to actively seek leadership roles in institutional AI governance to help shape inclusive, responsible, and human-centered AI policy frameworks.
Practical Considerations for Adopting Generative AI Tools in Academic Libraries
This column explores the practical considerations and institutional strategies for adopting Generative Artificial Intelligence (GenAI) tools in academic libraries. As higher education institutions increasingly integrate AI into teaching, research, and student support, libraries play a pivotal role in guiding ethical, inclusive, and pedagogically sound implementation. Drawing on case studies from Clemson University, Wake Forest University, and Goldey-Beacom College, the column examines key areas of GenAI integration: contract negotiations, licensing models, trial and pilot program design, data privacy, accessibility, authentication, analytics, training, and ethical use. The article emphasizes the importance of aligning AI adoption with institutional missions, user agency, and evolving frameworks of AI literacy. Recommendations are provided for libraries of all sizes to navigate the dynamic GenAI landscape responsibly and equitably, ensuring that academic integrity and student-centered values remain at the core of AI integration.
- Award ID(s):
- 2438144
- PAR ID:
- 10648739
- Publisher / Repository:
- Routledge, Taylor & Francis Group
- Date Published:
- Journal Name:
- Journal of Library Administration
- Volume:
- 65
- Issue:
- 5
- ISSN:
- 0193-0826
- Page Range / eLocation ID:
- 596 to 615
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
As artificial intelligence (AI) tools become integral to education, fostering ethical engagement and academic integrity is paramount. This column explores the integration of AI tools, including Grammarly, Scholarcy, and ImageFX, into a first-year writing program to promote ethical AI literacy. Grounded in Lo's CLEAR framework, structured exercises emphasized critical thinking, intellectual agency, and human-AI collaboration. The study highlights the potential for AI to enhance learning while addressing challenges such as equity, accessibility, and ethical considerations. Findings demonstrate how intentional integration of AI tools fosters creativity, self-reflection, and responsible decision-making, providing a scalable model for incorporating AI into curricula while preserving student agency and integrity.
-
Abstract This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the CTSA Program led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States. Key findings indicate a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. The results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers. Our analysis reveals that 53% of institutions identified data security as a primary concern, followed by lack of clinician trust (50%) and AI bias (44%), which must be addressed to ensure the ethical and effective implementation of GenAI technologies.
-
Gill, Karamjit (Ed.) Research on Artificial Intelligence (AI) is lined with moral considerations. In healthcare, a high-risk field, sub-fields have emerged to mitigate AI-specific ethical issues such as fairness and transparency. However, similar considerations remain unaddressed beyond healthcare, and as generative AI tools ('GenAI') reach lay audiences, this neglect yields ethical concerns. The present work focuses on learning from ethical considerations in healthcare to mitigate challenges of GenAI. We structure our proposed mitigation strategies around three of the five established biomedical and AI ethics principles (autonomy, transparency, beneficence), highlighting the risks GenAI poses for intellectual property owners (scraping of copyrighted data, unpaid labor, plagiarism, fraud). We propose concrete ways to affirm these principles for GenAI, using biomedical AI examples and the emerging frameworks they have sparked in the domains of data ownership, federated learning, and data provenance. This article comes at a pivotal time for AI, generalizing ethics-aware principles to GenAI to open new research avenues toward responsible AI.
-
Abstract The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance that is being provided, we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1, or "very high research activity." We found that guidance provided to researchers: (1) asks them to refer to external sources of information such as funding agencies and publishers to stay updated, and to use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research, such as predictive modeling, knowledge cutoff date, data provenance, and model limitations, and to educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI, how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as over-reliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, the guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.