-
People attempting to immigrate to the U.S. (through a port of entry or other means) may be required to accept various forms of surveillance technologies after interacting with immigration officials. In March 2025, around 160,000 people in the U.S. were required to use a smartphone application—BI SmartLINK—that uses facial recognition, voice recognition, and location tracking; others were assigned an ankle monitor or a smartwatch. These compulsory surveillance technologies exist under Immigration and Customs Enforcement (ICE)’s Alternatives to Detention (ATD) program, a combination of surveillance technologies, home visits, and in-person meetings with ICE officials and third-party “case specialists.” For migrants in the U.S. who are already facing multiple other challenges, such as securing housing, work, or healthcare, the surveillance technologies administered under ATD introduce new challenges. To understand the challenges facing migrants using BI SmartLINK under ATD, their questions about the app, and what role technologists might play (if any) in addressing these challenges, we conducted an interview study (n=9) with immigrant rights advocates. These advocates have collectively supported thousands of migrants over their careers and witnessed firsthand their struggles with surveillance tech under ATD. Among other things, our findings highlight how surveillance tech exacerbates the power imbalance between migrants and ICE officials (or their proxies), how these technologies (negatively) impact migrants, and how migrants and their advocates struggle to understand how the technologies that surveil them function. Our findings regarding the harms experienced by migrants lead us to believe that BI SmartLINK should not be used, and these harms fundamentally cannot be addressed by improvements to the app’s functionality or design. However, as this technology is currently deployed, we end by highlighting intervention opportunities for technologists to use our findings to make these high-stakes technologies less opaque for migrants and their advocates.
-
Synthetic nonconsensual explicit imagery, also referred to as “deepfake nudes”, is becoming faster and easier to generate. In the last year, synthetic nonconsensual explicit imagery was reported in at least ten US middle and high schools, generated by students depicting other students. Teachers are at the front lines of this new form of image abuse and have a valuable perspective on threat models in this context. We interviewed 17 US teachers to understand their opinions and concerns about synthetic nonconsensual explicit imagery in schools. No teachers knew of it happening at their schools, but most expected it to be a growing issue. Teachers proposed many interventions, such as improving reporting mechanisms, focusing on consent in sex education, and updating technology policies. However, teachers disagreed about appropriate consequences for students who create such images. We unpack our findings relative to differing models of justice, sexual violence, and sociopolitical challenges within schools.
-
The present and future transition of lives and activities into virtual worlds --- worlds in which people interact using avatars --- creates novel privacy challenges and opportunities. Avatars present an opportunity for people to control the way they are represented to other users and the information shared or implied by that representation. Importantly, users with marginalized identities may have a unique set of concerns when choosing what information about themselves (and their identities) to conceal or expose in an avatar. We present a theoretical basis, supported by two empirical studies, to understand how marginalization impacts the ways in which people create avatars and perceive others' avatars: what information do people choose to reveal or conceal, and how do others react to these choices? In Study 1, participants from historically marginalized backgrounds felt more concerned about being devalued based on their identities in virtual worlds, which related to a lower desire to reveal their identities in an avatar, compared to non-marginalized participants. However, in Study 2 participants were often uncomfortable with others changing visible characteristics in an avatar, weighing concerns about others' anonymity with possible threats to their own safety and security online. Our findings demonstrate asymmetries in what information people prefer the self vs. others to reveal in their online representations: participants want privacy for themselves but to feel informed about others. Although avatars allow people to choose what information to reveal about themselves, people from marginalized backgrounds may still face backlash for concealing components of their identities to avoid harm.
-
The Heilmeier Catechism consists of a set of questions that researchers and practitioners can consider when formulating research and applied engineering projects. In this article, we suggest explicitly asking who is included and who is left out of consideration.
-
We applied techniques from psychology --- typically used to visualize human bias --- to facial analysis systems, providing novel approaches for diagnosing and communicating algorithmic bias. First, we aggregated a diverse corpus of human facial images (N=1492) with self-identified gender and race. We tested four automated gender recognition (AGR) systems and found that some exhibited intersectional gender-by-race biases. Employing a technique developed by psychologists --- face averaging --- we created composite images to visualize these systems' outputs. For example, we visualized what an average woman looks like, according to a system's output. Second, we conducted two online experiments wherein participants judged the bias of hypothetical AGR systems. The first experiment involved participants (N=228) from a convenience sample. When depicting the same results in different formats, facial visualizations communicated bias to the same magnitude as statistics. In the second experiment with only Black participants (N=223), facial visualizations communicated bias significantly more than statistics, suggesting that face averages are meaningful for communicating algorithmic bias.
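The face-averaging step described in this abstract can be illustrated with a minimal sketch. The code below is a simplified assumption-laden illustration, not the study's actual pipeline: it assumes a directory of face images that are already landmark-aligned and cropped to a common size, plus a hypothetical dictionary mapping each filename to an AGR system's predicted label, and it builds one pixel-averaged composite per predicted label (e.g., a system's "average woman"). Psychological face averaging typically also warps each face to a common landmark shape, which is omitted here.

```python
# Minimal sketch of pixel-wise face averaging (illustrative only).
# Assumes all images are landmark-aligned and share the same dimensions.
from pathlib import Path

import numpy as np
from PIL import Image


def composite_face(image_paths):
    """Average a list of aligned face images into one composite image."""
    stack = np.stack(
        [np.asarray(Image.open(p).convert("RGB"), dtype=np.float64) for p in image_paths]
    )
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))


def composites_by_prediction(image_dir, predictions):
    """Build one composite per predicted label.

    `predictions` is a hypothetical mapping from filename to an AGR system's
    predicted label, e.g. {"img_0001.png": "woman", "img_0002.png": "man"}.
    """
    groups = {}
    for name, label in predictions.items():
        groups.setdefault(label, []).append(Path(image_dir) / name)
    # One composite per predicted label, i.e. what the system's "average"
    # member of that category looks like under its own outputs.
    return {label: composite_face(paths) for label, paths in groups.items()}
```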