The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance being provided, we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1 or “very high research activity.” We found that guidance provided to researchers: (1) asks them to refer to external sources of information, such as funding agencies and publishers, to stay updated, and to use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research, such as predictive modeling, knowledge cutoff dates, data provenance, and model limitations, and to educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI and how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as over-reliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, the guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.
-
The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs focused on regulating its use, particularly among students, before shifting toward advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI use. This paper presents an analysis of documents produced by 116 US universities classified as very high research activity, or R1, institutions, providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found that a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half the institutions provided sample syllabi (N = 65, 56%) and half (N = 58, 50%) provided sample GenAI curricula and activities that would help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities, whereas references to code and STEM-related activities were infrequent and often vague, even when mentioned (N = 58, 50%). Based on our findings, we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.
-
Since the release of ChatGPT in 2022, Generative AI (GenAI) is increasingly being used in higher education computing classrooms across the United States. While scholars have looked at overall institutional guidance for the use of GenAI, and reports have documented the response from schools in the form of broad guidance to instructors, we do not know what policies and practices instructors are actually adopting and how they are being communicated to students through course syllabi. To study instructors' policy guidance, we collected 98 computing course syllabi from 54 R1 institutions in the U.S. and studied the GenAI policies they adopted and the surrounding discourse. Our analysis shows that 1) most instructions related to GenAI use appeared as part of the academic integrity policy for the course, and 2) most syllabi prohibited or restricted GenAI use, often warning students about the broader implications of using GenAI, e.g., lack of veracity, privacy risks, and hindering learning. Beyond this, there was wide variation in how instructors approached GenAI, including a focus on how to cite GenAI use, conceptualizing GenAI as an assistant (often in an anthropomorphic manner), and mentioning specific GenAI tools for use. We discuss the implications of our findings and conclude with current best practices for instructors.
-
Although internet access and affordability are increasingly at the center of policy decisions around issues of the “digital divide” in the US, the complex nature of usage as it relates to structural inequality is not well understood. We partnered with Project Waves, a community internet provider, to set up connectivity across the urban landscape of a city in the Eastern United States to study factors that impact the rollout of affordable broadband internet connectivity to low-income communities during the COVID-19 pandemic. The organization endeavored to meet structural challenges, provide community support for adoption, and stave off attendant privacy concerns. We present three dimensions of equitable use prioritized by the community internet provider: safety from COVID-19 through social distancing enabled by remote access, trusted connectivity, and private internet access. We use employee interviews and a phone survey of internet recipients to investigate how the provider prioritized these dimensions and who uses their service.
-
Older adults face unique risks in trying to secure their online activities. They are not only frequent targets of scams and fraud; they are also the targets of a barrage of cybersafety communiqués whose impact is unclear. AARP, the United States advocacy group focusing on issues facing adults over the age of 50, is among those educators whose strategies remain underexplored, yet their reach makes it imperative that we understand what they are saying, to whom, and to what effect. Drawing on an analysis of AARP publications about cybersafety and privacy, we sought to better understand their discourse on the topic. We report findings that AARP's language may have the effect of portraying bad actors (fraudsters) as individuals rather than enterprises, which, at the target end, personalizes interactions and places too much onus on individual users to assess and deflect threats. AARP's positioning of, and guidance about, threats may sometimes prompt a thought process that puts users at the center of the narrative and may encourage engagement. Instructing older Americans, or anyone, in the forensics of cyber-sleuthing is enormously difficult. We conclude with a discussion of different approaches to cybersafety, including one that involves educating older adults about the rudiments of surveillance capitalism.
-
Artificial intelligence (AI) underpins virtually every experience that we have—from search and social media to generative AI and immersive social virtual reality (SVR). For Generation Z, there is no before AI. As adults, we must humble ourselves to the notion that AI is shaping youths’ world in ways that we don’t understand, and we need to listen to them about their lived experiences. We invite researchers from academia and industry to participate in a workshop with youth activists to set the agenda for research into how AI-driven emerging technologies affect youth and how to address these challenges. This reflective workshop will amplify youth voices and empower youth and researchers to set an agenda together. As part of the workshop, youth activists will participate in a panel and steer the conversation around the agenda for future research. All participants will take part in group agenda-setting activities to reflect on their experiences with AI technologies and consider ways to tackle these challenges.
-
This qualitative study examines the privacy challenges perceived by librarians, who afford access to physical and electronic spaces and are in a unique position to safeguard the privacy of their patrons. As internet “service providers,” librarians represent a bridge between the physical and internet worlds, and thus offer a unique sight line to the convergence of privacy, identity, and social disadvantage. Drawing on interviews with 16 librarians, we describe how they often interpret or define their own rules when it comes to privacy in order to protect patrons who face challenges that stem from structures of inequality outside their walls. We adopt the term “intersectional thinking” to describe how librarians reported thinking about privacy solutions, which is focused on identity and threats of structural discrimination (the rules, norms, and other determinants of discrimination embedded in institutions and other societal structures that present barriers to certain groups or individuals), and we examine the role that low/no-tech strategies play in ameliorating these threats. We then discuss how librarians act as privacy intermediaries for patrons, the potential analogue of this role for developers of systems, the power of low/no-tech strategies, and implications for the design and research of privacy-enhancing technologies (PETs).
-
Social media companies wield power over their users through design, policy, and their participation in public discourse. We set out to understand how companies leverage public relations to influence expectations of privacy and privacy-related norms. To interrogate the discourse produced by companies in relation to privacy, we examine the blogs associated with three major social media platforms: Facebook, Instagram (both owned by Facebook Inc.), and Snapchat. We analyze privacy-related posts using critical discourse analysis to demonstrate how these powerful entities construct narratives about users and their privacy expectations. We find that each of these platforms often makes use of discourse about "vulnerable" identities to invoke relations of power, while at the same time advancing interpretations and values that favor data capitalism. Finally, we discuss how these public narratives might influence the construction of users' own interpretations of appropriate privacy norms and conceptions of self. We contend that expectations of privacy and social norms are not simply artifacts of users' own needs and desires, but co-constructions that reflect the influence of social media companies themselves.
-
Designing technologies that support the cybersecurity of older adults with memory concerns involves wrestling with an uncomfortable paradox between surveillance and independence, and with the close collaboration of couples. This research captures the interactions between older adult couples where one or both have memory concerns—a primary feature of cognitive decline—as they make decisions on how to safeguard their online activities using a Safety Setting probe we designed, over the course of several informal interviews and a diary study. Throughout, couples demonstrated a collaborative mentality to which we apply a frame of citizenship in open-source collaboration, specifically (a) histories of participation, (b) lower barriers to participation, and (c) maintaining ongoing contribution. In this metaphor of collaborative enterprise, one partner (or member of the couple) may be the service provider and the other the participant, but at varying moments they may switch roles while still maintaining a collaborative focus on preserving shared assets and freedom on the internet. We conclude with a discussion of what this service provider–contributor mentality means for empowerment through citizenship, and implications for vulnerable populations’ cybersecurity.
-
Designing technologies that support the mutual cybersecurity and autonomy of older adults facing cognitive challenges requires close collaboration between partners. As part of research to design a Safety Setting application for older adults with memory loss or mild cognitive impairment (MCI), we use scenario-based participatory design. Our study builds on previous findings that couples’ approach to memory loss was characterized by a desire for flexibility and choice, and an embrace of role uncertainty. We find that couples don't want a system that fundamentally alters their relationship; they look to maximize self-surveillance competence and minimize loss of autonomy for their partners. All desire Safety Settings that maintain their mutual safety rather than designating one partner as the target of oversight. Couples are open to more rigorous surveillance if they have control over what types of activities trigger various levels of oversight.