

Search results: Creators/Authors contains "Andalibi, Nazanin"


  1. Emotion recognition technologies, though critiqued for bias, validity, and privacy invasion, continue to be developed and applied in a range of domains, including high-stakes settings like the workplace. We set out to examine emotion recognition technologies proposed for use in the workplace, describing the input data and training, the outputs, and the actions these systems take or prompt. We use these design features to reflect on the technologies' implications through the lens of ethical speculation. We analyzed patent applications describing emotion recognition technologies intended for workplace use (N=86). We found that these technologies scope data collection broadly; claim to reveal not only targets' emotional expressions but also their internal states; and take or prompt a wide range of actions, many of which impact workers' employment and livelihoods. The technologies described in patent applications frequently violated existing guidelines for ethical automated emotion recognition. We demonstrate the utility of using patent applications for ethical speculation. In doing so, we suggest that 1) increasing the visibility of claimed emotional states has the potential to create additional emotional labor for workers (a burden disproportionately borne by low-power and marginalized workers), to contribute to a larger pattern of blurring boundaries between workplace expectations and worker autonomy, and, more broadly, to feed the data colonialism regime; and 2) emotion recognition technology's failures can be invisible, may inappropriately influence high-stakes workplace decisions, and can exacerbate inequity. We discuss the implications of making emotions and emotional data visible in the workplace and offer implications for designers of emotion recognition technologies, employers who use them, and policymakers.
  2. Despite debates about emotion artificial intelligence's (EAI) validity, legality, and social consequences, EAI is increasingly present in the high-stakes context of hiring, with the potential to shape the future of work and the workforce. The values embedded in a technology play a significant role in its societal impact. We conducted qualitative content analysis of the public-facing websites (N=229) of EAI hiring services. We identify the organizational problems that EAI hiring services claim to solve and reveal the values embedded in the desired uses of EAI that these services promote as solutions to those problems. Our findings show that EAI hiring services market their technologies as technosolutions to three purported organizational hiring problems: 1) hiring (in)accuracy, 2) hiring (mis)fit, and 3) hiring (in)authenticity. We unpack these problems to expose how these desired uses of EAI are legitimized by the corporate ideals of data-driven decision making, continuous improvement, precision, loyalty, and stability. We identify the unfair and deceptive mechanisms by which EAI hiring services claim to solve these purported problems, suggesting that they unfairly exclude and exploit job candidates through EAI's creation, extraction, and commodification of a candidate's affective value via pseudoscientific approaches. Lastly, we interrogate EAI hiring services' claims to reveal the core values that underpin their stated desired use: techno-omnipresence, techno-omnipotence, and techno-omniscience. We show how EAI hiring services position the desired use of their technology as a moral imperative for hiring organizations, with supreme capabilities to solve organizational hiring problems, and then discuss implications for fairness, ethics, and policy in EAI-enabled hiring within the US policy landscape.
  3. The workplace has experienced extensive digital transformation, in part due to artificial intelligence's commercial availability. Though still an emerging technology, emotion artificial intelligence (EAI) is increasingly incorporated into enterprise systems to augment and automate organizational decisions and to monitor and manage workers. EAI use is often celebrated for its potential to improve workers' wellbeing and performance and to address organizational problems such as bias and safety. Workers subject to EAI in the workplace are data subjects whose data make EAI possible and who are most impacted by it. However, we lack empirical knowledge about data subjects' perspectives on EAI, including in the workplace. To this end, using a relational ethics lens, we qualitatively analyzed open-ended responses from a partly representative survey of 395 U.S. adults regarding the benefits and risks they associate with being subjected to EAI in the workplace. While participants acknowledged potential benefits of being subject to EAI (e.g., employers using EAI to aid their wellbeing, enhance their work environment, or reduce bias), a myriad of potential risks overshadowed these perceived benefits. Participants expressed concern that EAI use could harm their wellbeing, work environment, and employment status, and could create and amplify bias and stigma against them, especially the most marginalized (e.g., along dimensions of race, gender, mental health status, and disability). Distrustful of EAI and its potential risks, participants anticipated conforming to EAI implementation in practice (e.g., partaking in emotional labor) or refusing it (e.g., quitting a job). We argue that EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace, and suggest that some EAI-inflicted harms would persist even if concerns about EAI's accuracy and bias were addressed.
  4. Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with 15 US adult workers addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of the privacy of workers' sensitive emotional information; (2) emotion AI may function to enforce workers' compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. These findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, and raise important research and policy questions about how to protect and preserve emotional privacy within and beyond the workplace.
  5. The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate stated attitudes towards these notions and related folk theories. We find that data subjects see accurate inferences as uncomfortable and as threatening their agency, pointing to privacy and ambiguity as desired design principles for social media platforms. While some participants argued that contemporary emotion recognition must be accurate, others raised concerns about possibilities for contesting the technology and called for better transparency. Furthermore, some challenged the technology altogether, highlighting that emotions are complex, relational, performative, and situated. In interpreting our findings, we identify new folk theories about accuracy and meaningful transparency in emotion recognition. Overall, our analysis shows an unsatisfactory status quo for data subjects that is shaped by power imbalances and a lack of reflexivity and democratic deliberation within platform governance. 
  6. Automatic emotion recognition (ER)-enabled wellbeing interventions use ER algorithms to infer the emotions of a data subject (i.e., a person about whom data is collected or processed to enable ER) based on data generated from their online interactions, such as social media activity, and to intervene accordingly. The potential commercial applications of this technology are widely acknowledged, particularly in the context of social media. Yet little is known about data subjects' conceptualizations of and attitudes toward automatic ER-enabled wellbeing interventions. To address this gap, we interviewed 13 US adult social media data subjects about social media-based automatic ER-enabled wellbeing interventions. We found that participants' attitudes toward these interventions were predominantly negative, largely shaped by how participants compared their conceptualizations of artificial intelligence (AI) to the humans who traditionally deliver wellbeing support. Comparisons between AI and human wellbeing interventions rested on human attributes participants doubted AI could hold: 1) helpfulness and authentic care; 2) personal and professional expertise; 3) morality; and 4) benevolence through shared humanity. In some cases, participants' attitudes shifted when they conceptualized the interventions' impact on others rather than themselves. Though reluctantly, a minority of participants held more positive attitudes toward their conceptualizations of these interventions, citing their potential to benefit others: 1) by supporting academic research; 2) by increasing access to wellbeing support; and 3) by preventing egregious harm. However, most participants anticipated harms to others, such as re-traumatization, the spread of inaccurate health information, inappropriate surveillance, and interventions informed by inaccurate predictions. Lastly, while participants had qualms about automatic ER-enabled wellbeing interventions, we identified three development and delivery qualities on which their attitudes depended: 1) accuracy; 2) contextual sensitivity; and 3) positive outcomes. Our study does not aim to make normative statements about whether or how automatic ER-enabled wellbeing interventions should exist, but rather to center the voices of the data subjects affected by this technology. We argue for including data subjects in the development of requirements for ethical and trustworthy ER applications. To that end, we discuss the ethical, social, and policy implications of our findings, suggesting that the automatic ER-enabled wellbeing interventions participants imagined are incompatible with aims to promote trustworthy, socially aware, and responsible AI technologies in the current practical and regulatory landscape of the US.
  7. HCI researchers increasingly conduct emotionally demanding research in a variety of contexts. Though scholarship has begun to address the experiences of HCI researchers conducting this work, guidelines and best practices for researcher wellbeing have yet to be developed. In this one-day CHI workshop, we will bring together HCI researchers across sectors and career levels who conduct emotionally demanding research to discuss their experiences, self-care practices, and strategies for conducting this research. Based on these discussions, we will work with workshop attendees to develop best practices and guidelines for researcher wellbeing in the context of emotionally demanding HCI research; launch a repository of community-sourced resources for researcher wellbeing; document the experiences of HCI researchers conducting emotionally demanding research; and establish a community of HCI researchers conducting this type of work.
  8. Emotion recognition algorithms recognize, infer, and harvest emotions using data sources such as social media behavior, streaming service use, voice, facial expressions, and biometrics, in ways often opaque to the people providing these data. People's attitudes towards emotion recognition, and the harms and outcomes they associate with it, are important yet unknown. Focusing on social media, we interviewed 13 adult U.S. social media users to fill this gap. We find that people view emotions as insights into behavior, prone to manipulation, intimate, vulnerable, and complex. Many find emotion recognition invasive and scary, associating it with loss of autonomy and control. We identify two categories of emotion recognition's risks: individual and societal. We discuss our findings' implications for algorithmic accountability and argue for considering emotion data as sensitive. Using a Science and Technology Studies lens, we advocate that technology users be considered a relevant social group in emotion recognition advancements.