

Search for: All records

Award ID contains: 1704369

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available free of charge during the embargo (the administrative interval before public release).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Baym, Nancy; Ellison, Nicole (Eds.)
    The future of work increasingly focuses on the collection and analysis of worker data to monitor communication, ensure productivity, reduce security threats, and assist in decision-making. The COVID-19 pandemic increased employer reliance on these technologies; however, the blurring of home and work boundaries meant these monitoring tools might also surveil private spaces. To explore workers’ attitudes toward increased monitoring practices, we present findings from a factorial vignette survey of 645 U.S. adults who worked from home during the early months of the pandemic. Using the theory of privacy as contextual integrity to guide the survey design and analysis, we unpack the types of workplace surveillance practices that violate privacy norms and consider attitudinal differences between male and female workers. Our findings highlight that the acceptability of workplace surveillance practices is highly contextual, and that reductions in privacy and autonomy at work may further exacerbate power imbalances, especially for vulnerable employees.
    Free, publicly-accessible full text available June 12, 2024
  2. In widely used sociological descriptions of how accountability is structured through institutions, an “actor” (e.g., the developer) is accountable to a “forum” (e.g., regulatory agencies) empowered to pass judgements on and demand changes from the actor or enforce sanctions. However, questions about structuring accountability persist: why and how is a forum compelled to keep making demands of the actor when such demands are called for? To whom is a forum accountable in the performance of its responsibilities, and how can its practices and decisions be contested? In the context of algorithmic accountability, we contend that a robust accountability regime requires a triadic relationship, wherein the forum is also accountable to another entity: the public(s). Typically, as is the case with environmental impact assessments, public(s) make demands upon the forum's judgements and procedures through the courts, thereby establishing a minimum standard of due diligence. However, core challenges relating to: (1) lack of documentation, (2) difficulties in claiming standing, and (3) struggles around admissibility of expert evidence on and achieving consensus over the workings of algorithmic systems in adversarial proceedings prevent the public from approaching the courts when faced with algorithmic harms. In this paper, we demonstrate that the courts are the primary route—and the primary roadblock—in the pursuit of redress for algorithmic harms. Courts often find algorithmic harms non-cognizable and rarely require developers to address material claims of harm. To address the core challenges of taking algorithms to court, we develop a relational approach to algorithmic accountability that emphasizes not what the actors do nor the results of their actions, but rather how interlocking relationships of accountability are constituted in a triadic relationship between actors, forums, and public(s). 
As is the case in other regulatory domains, we believe that impact assessments (and similar accountability documentation) can provide the grounds for contestation between these parties, but only when that triad is structured such that the public(s) are able to cohere around shared experiences and interests, contest the outcomes of algorithmic systems that affect their lives, and make demands upon the other parties. Where courts now find algorithmic harms non-cognizable, an impact assessment regime can potentially create procedural rights to protect substantive rights of the public(s). This would require algorithmic accountability policies currently under consideration to provide the public(s) with adequate standing in courts, and opportunities to access and contest the actor's documentation and the forum's judgments. 
    Free, publicly-accessible full text available June 12, 2024
  3. Applied machine learning (ML) has not yet coalesced on standard practices for research ethics. For ML that predicts mental illness using social media data, ambiguous ethical standards can impact people's lives because of the area's sensitivity and material consequences on health. Transparency of current ethics practices in research is important to document decision-making and improve research practice. We present a systematic literature review of 129 studies that predict mental illness using social media data and ML, and the ethics disclosures they make in research publications. Rates of disclosure are going up over time, but this trend is slow moving – it will take another eight years for the average paper to have coverage on 75% of studied ethics categories. Certain practices are more readily adopted, or "stickier", over time, though we found prioritization of data-driven disclosures rather than human-centered ones. These inconsistently reported ethical considerations indicate a gap between what ML ethicists believe ought to be and what actually is done. We advocate for closing this gap through increased transparency of practice and formal mechanisms to support disclosure.
    Free, publicly-accessible full text available June 12, 2024
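The eight-year projection in the abstract above follows from extrapolating a fitted trend in average disclosure coverage per publication year. A minimal sketch of that kind of extrapolation, using made-up yearly coverage figures (the numbers and the linear-trend assumption are illustrative, not the study's data):

```python
# Hypothetical average fraction of studied ethics categories disclosed
# per publication year; illustrative only, not the paper's actual data.
years    = [2016, 2017, 2018, 2019, 2020, 2021]
coverage = [0.35, 0.38, 0.42, 0.45, 0.49, 0.52]

# Ordinary least-squares fit of coverage against year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(coverage) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, coverage))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Year at which the fitted line first reaches 75% coverage.
year_at_75 = (0.75 - intercept) / slope
print(f"projected year for 75% coverage: {year_at_75:.1f}")
```

With these invented figures the fitted line reaches 75% several years out, mirroring the abstract's point that slow disclosure growth delays broad coverage.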
  4. In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence. While different research communities may use different terminology to discuss similar topics, all of this work is ultimately aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI. In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends to advance this important area of research by fostering collaboration and sharing ideas.
  5. Computer vision is a "data hungry" field. Researchers and practitioners who work on human-centric computer vision, like facial recognition, emphasize the necessity of vast amounts of data for more robust and accurate models. Humans are seen as a data resource which can be converted into datasets. The necessity of data has led to a proliferation of gathering data from easily available sources, including "public" data from the web. Yet the use of public data has significant ethical implications for the human subjects in datasets. We bridge academic conversations on the ethics of using publicly obtained data with concerns about privacy and agency associated with computer vision applications. Specifically, we examine how practices of dataset construction from public data (not only from websites, but also from public settings and public records) make it extremely difficult for human subjects to trace their images as they are collected, converted into datasets, distributed for use, and, in some cases, retracted. We discuss two interconnected barriers current data practices present to providing an ethics of traceability for human subjects: awareness and control. We conclude with key intervention points for enabling traceability for data subjects. We also offer suggestions for an improved ethics of traceability to enable both awareness and control for individual subjects in dataset curation practices.
  6. There is a rich literature on technology’s role in facilitating employee monitoring in the workplace. The COVID-19 pandemic created many challenges for employers, and many companies turned to new forms of monitoring to ensure remote workers remained productive; however, these technologies raise important privacy concerns as the boundaries between work and home are further blurred. In this paper, we present findings from a study of 645 US workers who spent at least part of 2020 working remotely due to the pandemic. We explore how their work experiences (job satisfaction, stress, and security) changed between January and November 2020, as well as their attitudes toward and concerns about being monitored. Findings support anecdotal evidence that the pandemic has had an uneven effect on workers, with women reporting more negative effects on their work experiences. In addition, while nearly 40% of workers reported their employer began using new surveillance tools during the pandemic, a significant percentage were unsure, suggesting there is confusion or a lack of transparency regarding how new policies are communicated to staff. We consider these findings in light of prior research and discuss the benefits and drawbacks of various approaches to minimize surveillance-related worker harms. 
  7. Social media provides unique opportunities for researchers to learn about a variety of phenomena—it is often publicly available, highly accessible, and affords more naturalistic observation. However, as research using social media data has increased, so too has public scrutiny, highlighting the need to develop ethical approaches to social media data use. Prior work in this area has explored users’ perceptions of researchers’ use of social media data in the context of a single platform. In this paper, we expand on that work, exploring how platforms and their affordances impact how users feel about social media data reuse. We present results from three factorial vignette surveys, each focusing on a different platform—dating apps, Instagram, and Reddit—to assess users’ comfort with research data use scenarios across a variety of contexts. Although our results highlight different expectations between platforms depending on the research domain, purpose of research, and content collected, we find that the factor with the greatest impact across all platforms is consent—a finding which presents challenges for big data researchers. We conclude by offering a sociotechnical approach to ethical decision-making. This approach provides recommendations on how researchers can interpret and respond to platform norms and affordances to predict potential data use sensitivities. The approach also recommends that researchers respond to the predominant expectation of notification and consent for research participation by bolstering awareness of data collection on digital platforms. 
  8. Around the world, people increasingly generate data through their everyday activities. Much of this happens unwittingly through sensors, cameras, and other surveillance tools on roads, in cities, and at the workplace. However, how individuals and governments think about privacy varies significantly around the world. In this article, we explore differences between people’s attitudes toward privacy and data collection practices in the United States and the Netherlands, two countries with very different regulatory approaches to governing consumer privacy. Through a factorial vignette survey deployed in the two countries, we identify specific contextual factors associated with concerns regarding how personal data are being used. Using Nissenbaum’s framework of privacy as contextual integrity to guide our analysis, we consider the role that five factors play in this assessment: actors (those using data), data type, amount of data collected, reported purpose of data use, and inferences drawn from the data. Findings indicate nationally bound differences as well as shared concerns and indicate future directions for cross-cultural privacy research. 
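The five contextual-integrity factors named in the entry above (actor, data type, amount collected, purpose, inference drawn) lend themselves to a full-factorial vignette design, where each survey vignette is one combination of factor levels. A minimal sketch of generating such a design; the factor levels below are hypothetical, not the instrument actually fielded:

```python
from itertools import product

# Hypothetical levels for the five contextual-integrity factors named in
# the abstract; the real survey's levels are not reproduced here.
factors = {
    "actor": ["employer", "government", "advertiser"],
    "data_type": ["location", "browsing history"],
    "amount": ["one week", "one year"],
    "purpose": ["safety", "marketing"],
    "inference": ["none stated", "health status"],
}

# Each vignette is one combination of levels (full factorial design).
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(vignettes))  # 3 * 2 * 2 * 2 * 2 = 48 distinct vignettes
```

In practice, factorial vignette surveys show each respondent a random subset of these combinations, letting the analysis estimate each factor's independent effect on privacy judgments.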
  9. While research has been conducted with and in marginalized or vulnerable groups, explicit guidelines and best practices centering on specific communities are nascent. Black Twitter offers an excellent case study for engaging with this aspect of research. This research project considers the history of research with Black communities, combined with empirical work that explores how people who engage with Black Twitter think about research and researchers, in order to suggest potential good practices and what researchers should know when studying Black Twitter or other digital traces from marginalized or vulnerable online communities. From our interviews, we gleaned that Black Twitter users feel differently about their content contributing to a research study depending on, for example, the type of content and the positionality of the researcher. Much of the advice participants shared for researchers involved an encouragement to cultivate cultural competency, get to know the community before researching it, and conduct research transparently. Aiming to improve the experience of research for both Black Twitter and researchers, this project is a stepping stone toward future work that further establishes and expands user perceptions of research ethics for online communities composed of vulnerable populations.