Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- People attempting to immigrate to the U.S. (through a port of entry or other means) may be required to accept various forms of surveillance technology after interacting with immigration officials. In March 2025, around 160,000 people in the U.S. were required to use a smartphone application, BI SmartLINK, that uses facial recognition, voice recognition, and location tracking; others were assigned an ankle monitor or a smartwatch. These compulsory surveillance technologies exist under Immigration and Customs Enforcement's (ICE) Alternatives to Detention (ATD) program, a combination of surveillance technologies, home visits, and in-person meetings with ICE officials and third-party "case specialists." For migrants in the U.S., who already face challenges such as securing housing, work, and healthcare, the surveillance technologies administered under ATD introduce new ones. To understand the challenges facing migrants using BI SmartLINK under ATD, their questions about the app, and what role technologists might play (if any) in addressing these challenges, we conducted an interview study (n=9) with immigrant rights advocates. These advocates have collectively supported thousands of migrants over their careers and witnessed firsthand their struggles with surveillance tech under ATD. Among other things, our findings highlight how surveillance tech exacerbates the power imbalance between migrants and ICE officials (or their proxies), how these technologies negatively impact migrants, and how migrants and their advocates struggle to understand how the technologies that surveil them function. The harms we document lead us to believe that BI SmartLINK should not be used, and that these harms fundamentally cannot be addressed by improvements to the app's functionality or design. However, given that the technology is currently deployed, we close by highlighting opportunities for technologists to use our findings to make these high-stakes technologies less opaque for migrants and their advocates.
  Free, publicly-accessible full text available June 23, 2026.
- Immersive, interactive virtual reality (VR) experiences rely on eye tracking data for a variety of applications. However, eye trackers assume that the user's eyes move in a coordinated way. We investigate how the violation of this assumption impacts the performance and subjective experience of users with strabismus and amblyopia. Our investigation follows a case study approach, analyzing in depth the qualitative and quantitative data collected during an interactive VR game from a small number of users with these visual impairments. Our findings reveal the ways in which assumptions about the default functioning of the eye can discourage or even exclude otherwise enthusiastic users from immersive VR. This study thus opens a new frontier for eye tracking research and practice.
  Free, publicly-accessible full text available May 20, 2026.
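  The binocular-coordination assumption above can be made concrete. The sketch below is not from the paper; it is a minimal Python illustration of how a tracker might fuse the two eyes' gaze rays into a single 3D gaze point via their closest approach. The function name and parameters are hypothetical. When the eyes verge on a common target the rays nearly intersect; under strabismus they may not, and the fused point stops corresponding to anything the user is looking at.

      import numpy as np

      def combined_gaze_point(origin_l, dir_l, origin_r, dir_r, eps=1e-9):
          """Midpoint of the closest approach between two gaze rays.

          Encodes the usual binocular-coordination assumption: the result is
          only meaningful when both rays point at roughly the same target.
          """
          w = origin_l - origin_r
          a, b, c = dir_l @ dir_l, dir_l @ dir_r, dir_r @ dir_r
          d, e = dir_l @ w, dir_r @ w
          denom = a * c - b * b
          if abs(denom) < eps:             # (near-)parallel rays: no vergence point
              return None
          t = (b * e - c * d) / denom      # parameter along the left-eye ray
          s = (a * e - b * d) / denom      # parameter along the right-eye ray
          p_l, p_r = origin_l + t * dir_l, origin_r + s * dir_r
          return (p_l + p_r) / 2.0

  A real pipeline would also check the residual distance between p_l and p_r before trusting the point; with strabismus that residual can stay large even during steady fixation, which is one way the violated assumption shows up in the data.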
- Sharing high-quality research data specifically for reuse in future work helps the scientific community progress by enabling researchers to build upon existing work and explore new research questions without duplicating data collection efforts. Because current discussions about research artifacts in Computer Security focus on reproducibility and availability of source code, the reusability of data is unclear. We examine data sharing practices in Computer Security and Measurement to provide resources and recommendations for sharing reusable data. Our study covers five years (2019–2023) and seven conferences in Computer Security and Measurement, identifying 948 papers that create a dataset as one of their contributions. We analyze the 265 accessible datasets, evaluating their understandability and level of reuse. Our findings reveal inconsistent practices in data sharing structure and documentation, causing some datasets to not be shared effectively. Additionally, reuse of datasets is low, especially in fields where the nature of the data does not lend itself to reuse. Based on our findings, we offer data-driven recommendations and resources for improving data sharing practices in our community. Furthermore, we encourage authors to be intentional about their data sharing goals and align their sharing strategies with those goals.
  Free, publicly-accessible full text available May 12, 2026.
- The Heilmeier Catechism consists of a set of questions that researchers and practitioners can consider when formulating research and applied engineering projects. In this article, we suggest explicitly asking who is included and who is left out of consideration.
  Free, publicly-accessible full text available May 1, 2026.
- A rapidly emerging research community at the intersection of sport and human-computer interaction (SportsHCI) explores how technology can support physically active humans, such as athletes. At highly competitive levels, coaching staff play a central role in the athlete experience by using data to enhance performance, reduce injuries, and foster team success. However, little is known about the practices and needs of these coaching staff. We conducted five focus groups with 17 collegiate coaching staff across three women's teams and two men's teams at an elite U.S. university. Our findings show that coaching staff use data selectively, aiming to balance performance goals, athlete emotional well-being, and privacy. This paper contributes design recommendations to support coaching staff across the data life cycle (gathering, sharing, deciding, acting, and assessing) as they aim to support team success and foster the well-being of student-athletes.
  Free, publicly-accessible full text available April 25, 2026.
- Synthetic nonconsensual explicit imagery, also referred to as "deepfake nudes", is becoming faster and easier to generate. In the last year, synthetic nonconsensual explicit imagery was reported in at least ten US middle and high schools, created by students and depicting other students. Teachers are at the front lines of this new form of image abuse and have a valuable perspective on threat models in this context. We interviewed 17 US teachers to understand their opinions and concerns about synthetic nonconsensual explicit imagery in schools. None of the teachers knew of it happening at their own schools, but most expected it to become a growing issue. Teachers proposed many interventions, such as improving reporting mechanisms, focusing on consent in sex education, and updating technology policies. However, teachers disagreed about appropriate consequences for students who create such images. We unpack our findings relative to differing models of justice, sexual violence, and sociopolitical challenges within schools.
  Free, publicly-accessible full text available April 25, 2026.
- The present and future transition of lives and activities into virtual worlds (worlds in which people interact using avatars) creates novel privacy challenges and opportunities. Avatars present an opportunity for people to control the way they are represented to other users and the information shared or implied by that representation. Importantly, users with marginalized identities may have a unique set of concerns when choosing what information about themselves (and their identities) to conceal or expose in an avatar. We present a theoretical basis, supported by two empirical studies, to understand how marginalization impacts the ways in which people create avatars and perceive others' avatars: what information do people choose to reveal or conceal, and how do others react to these choices? In Study 1, participants from historically marginalized backgrounds felt more concerned than non-marginalized participants about being devalued based on their identities in virtual worlds, which related to a lower desire to reveal their identities in an avatar. However, in Study 2, participants were often uncomfortable with others changing visible characteristics in an avatar, weighing concerns about others' anonymity against possible threats to their own safety and security online. Our findings demonstrate asymmetries in what information people prefer the self vs. others to reveal in their online representations: participants want privacy for themselves but to feel informed about others. Although avatars allow people to choose what information to reveal about themselves, people from marginalized backgrounds may still face backlash for concealing components of their identities to avoid harm.
  Free, publicly-accessible full text available April 1, 2026.
- Cochlear implants (CIs) allow deaf and hard-of-hearing individuals to use audio devices, such as phones or voice assistants. However, the advent of increasingly sophisticated synthetic audio (i.e., deepfakes) potentially threatens these users, yet this population's susceptibility to such attacks is unclear. In this paper, we perform the first study of the impact of audio deepfakes on CI populations. We examine the use of CI-simulated audio within deepfake detectors. Based on these results, we conduct a user study with 35 CI users and 87 hearing persons (HPs) to determine differences in how CI users perceive deepfake audio. We show that CI users can, similarly to HPs, identify text-to-speech generated deepfakes. Yet they perform substantially worse on voice conversion deepfake generation algorithms, achieving only 67% correct audio classification. We also evaluate how detection models trained on CI-simulated audio compare to CI users and investigate whether they can effectively act as proxies for CI users. This work begins an investigation into the intersection between adversarial audio and CI users, with the aim of identifying and mitigating threats against this marginalized group.
  Free, publicly-accessible full text available January 1, 2026.
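  The abstract does not detail how the CI-simulated audio was produced. The sketch below shows one standard acoustic simulation of cochlear-implant processing, a noise vocoder that discards fine spectral structure and keeps only per-band amplitude envelopes; it is an illustration under that assumption, not the paper's code, and the function name, channel count, and band edges are hypothetical defaults.

      import numpy as np
      from scipy.signal import butter, sosfilt, hilbert

      def ci_simulate(audio, sr, n_channels=8, fmin=100.0, fmax=8000.0):
          """Noise-vocode `audio` to roughly approximate what a cochlear
          implant conveys: per-band envelopes, no fine structure."""
          assert sr > 2 * fmax, "sampling rate must exceed twice the top band edge"
          edges = np.geomspace(fmin, fmax, n_channels + 1)   # log-spaced band edges
          out = np.zeros(len(audio))
          rng = np.random.default_rng(0)
          for lo, hi in zip(edges[:-1], edges[1:]):
              sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
              band = sosfilt(sos, audio)                     # isolate one analysis band
              envelope = np.abs(hilbert(band))               # extract amplitude envelope
              carrier = sosfilt(sos, rng.standard_normal(len(audio)))  # band-limited noise
              out += envelope * carrier                      # re-synthesize the band
          return out / (np.max(np.abs(out)) + 1e-9)          # peak-normalize

  Running a detector on clips transformed this way is one plausible reading of "CI-simulated audio within deepfake detectors", though real devices differ in channel count, envelope extraction, and compression.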