

Title: Attitudes and Folk Theories of Data Subjects on Transparency and Accuracy in Emotion Recognition
The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate stated attitudes towards these notions and related folk theories. We find that data subjects see accurate inferences as uncomfortable and as threatening their agency, pointing to privacy and ambiguity as desired design principles for social media platforms. While some participants argued that contemporary emotion recognition must be accurate, others raised concerns about possibilities for contesting the technology and called for better transparency. Furthermore, some challenged the technology altogether, highlighting that emotions are complex, relational, performative, and situated. In interpreting our findings, we identify new folk theories about accuracy and meaningful transparency in emotion recognition. Overall, our analysis shows an unsatisfactory status quo for data subjects that is shaped by power imbalances and a lack of reflexivity and democratic deliberation within platform governance.
Award ID(s):
2020872
NSF-PAR ID:
10321699
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
6
Issue:
CSCW1
ISSN:
2573-0142
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Automatic emotion recognition (ER)-enabled wellbeing interventions use ER algorithms to infer the emotions of a data subject (i.e., a person about whom data is collected or processed to enable ER) based on data generated from their online interactions, such as social media activity, and intervene accordingly. The potential commercial applications of this technology are widely acknowledged, particularly in the context of social media. Yet, little is known about data subjects' conceptualizations of and attitudes toward automatic ER-enabled wellbeing interventions. To address this gap, we interviewed 13 US adult social media data subjects regarding social media-based automatic ER-enabled wellbeing interventions. We found that participants' attitudes toward automatic ER-enabled wellbeing interventions were predominantly negative. Negative attitudes were largely shaped by how participants compared their conceptualizations of Artificial Intelligence (AI) to the humans who traditionally deliver wellbeing support. Comparisons between AI and human wellbeing interventions were based upon human attributes that participants doubted AI could possess: 1) helpfulness and authentic care; 2) personal and professional expertise; 3) morality; and 4) benevolence through shared humanity. In some cases, participants' attitudes toward automatic ER-enabled wellbeing interventions shifted when they conceptualized the interventions' impact on others rather than on themselves. Though with reluctance, a minority of participants held more positive attitudes toward their conceptualizations of automatic ER-enabled wellbeing interventions, citing their potential to benefit others: 1) by supporting academic research; 2) by increasing access to wellbeing support; and 3) by preventing egregious harm. However, most participants anticipated harms associated with their conceptualizations of automatic ER-enabled wellbeing interventions for others, such as re-traumatization, the spread of inaccurate health information, inappropriate surveillance, and interventions informed by inaccurate predictions. Lastly, while participants had qualms about automatic ER-enabled wellbeing interventions, we identified three development and delivery qualities upon which their attitudes toward such interventions depended: 1) accuracy; 2) contextual sensitivity; and 3) positive outcomes. Our study is not motivated to make normative statements about whether or how automatic ER-enabled wellbeing interventions should exist, but to center the voices of the data subjects affected by this technology. We argue for the inclusion of data subjects in the development of requirements for ethical and trustworthy ER applications. To that end, we discuss the ethical, social, and policy implications of our findings, suggesting that the automatic ER-enabled wellbeing interventions imagined by participants are incompatible with aims to promote trustworthy, socially aware, and responsible AI technologies in the current practical and regulatory landscape in the US.
  2. Emotion recognition algorithms recognize, infer, and harvest emotions using data sources such as social media behavior, streaming service use, voice, facial expressions, and biometrics, in ways often opaque to the people providing these data. People's attitudes towards emotion recognition, and the harms and outcomes they associate with it, are important yet unknown. Focusing on social media, we interviewed 13 adult U.S. social media users to fill this gap. We find that people view emotions as insights into behavior, prone to manipulation, intimate, vulnerable, and complex. Many find emotion recognition invasive and scary, associating it with a loss of autonomy and control. We identify two categories of risks posed by emotion recognition: individual and societal. We discuss our findings' implications for algorithmic accountability and argue for considering emotion data as sensitive. Using a Science and Technology Studies lens, we advocate that technology users should be considered a relevant social group in emotion recognition advancements.
  3. Background: As a number of vaccines for COVID-19 have been given emergency use authorization by local health agencies and are being administered in multiple countries, it is crucial to gain public trust in these vaccines to ensure herd immunity through vaccination. One way to gauge public sentiment regarding vaccines, with the goal of increasing vaccination rates, is by analyzing social media such as Twitter. Objective: The goal of this research was to understand public sentiment toward COVID-19 vaccines by analyzing discussions about the vaccines on social media during the first 60 days of vaccine administration in the United States. Using a combination of topic detection and sentiment analysis, we identified different types of concerns regarding vaccines that were expressed by different groups of the public on social media. Methods: To better understand public sentiment, we collected tweets for exactly 60 days, starting from December 16, 2020, that contained hashtags or keywords related to COVID-19 vaccines. We detected and analyzed the different topics of discussion in these tweets as well as their emotional content. Vaccine topics were identified by nonnegative matrix factorization, and emotional content was identified using the Valence Aware Dictionary and sEntiment Reasoner (VADER) sentiment analysis library as well as by computing sentence bidirectional encoder representations from transformers (BERT) embeddings and comparing them to embeddings of different emotions using cosine similarity. Results: After removing all duplicates and retweets, 7,948,886 tweets were collected during the 60-day period. Topic modeling resulted in 50 topics; of those, we selected the 12 topics with the highest volume of tweets for analysis. Administration of and access to vaccines were among the major concerns of the public. Additionally, we classified the tweets in each topic into 1 of 5 emotions and found fear to be the leading emotion, followed by joy. Conclusions: This research focused not only on negative emotions that may have led to vaccine hesitancy but also on positive emotions toward the vaccines. By identifying both positive and negative emotions, we were able to characterize the public's response to the vaccines overall and to news events related to the vaccines. These results are useful for developing plans for disseminating authoritative health information and for better communication to build understanding and trust.
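     A minimal sketch of the pipeline described above, assuming scikit-learn for TF-IDF and NMF, the vaderSentiment package for VADER scores, and the sentence-transformers library for sentence-BERT embeddings; the model name, the emotion label set, and the sample tweets are illustrative assumptions, not the authors' exact configuration:

      # Illustrative sketch (not the authors' code): NMF topics over TF-IDF,
      # VADER sentiment scoring, and emotion labeling via sentence-BERT
      # embeddings compared to emotion words with cosine similarity.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import NMF
      from sklearn.metrics.pairwise import cosine_similarity
      from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
      from sentence_transformers import SentenceTransformer

      tweets = [
          "Got my first dose today, feeling hopeful about the rollout.",
          "Still no appointments available in my county, access is a mess.",
          "Worried about side effects, has anyone had a bad reaction?",
          "So relieved my parents finally got vaccinated this week.",
      ]

      # 1) Topic detection: non-negative matrix factorization on TF-IDF vectors
      #    (the paper extracted 50 topics; 2 suffice for this toy example).
      tfidf = TfidfVectorizer(stop_words="english")
      X = tfidf.fit_transform(tweets)
      nmf = NMF(n_components=2, init="nndsvd", random_state=0)
      topic_weights = nmf.fit_transform(X)          # shape: (n_tweets, n_topics)
      dominant_topic = topic_weights.argmax(axis=1)

      # 2) Sentiment: VADER compound score in [-1, 1] for each tweet.
      vader = SentimentIntensityAnalyzer()
      compound = [vader.polarity_scores(t)["compound"] for t in tweets]

      # 3) Emotion: embed tweets and candidate emotion words with a sentence-BERT
      #    model, then assign each tweet its most cosine-similar emotion.
      emotions = ["fear", "joy", "anger", "sadness", "surprise"]  # assumed label set
      encoder = SentenceTransformer("all-MiniLM-L6-v2")           # assumed model
      sims = cosine_similarity(encoder.encode(tweets), encoder.encode(emotions))
      labels = [emotions[i] for i in sims.argmax(axis=1)]

      for tweet, topic, score, label in zip(tweets, dominant_topic, compound, labels):
          print(f"topic={topic} sentiment={score:+.2f} emotion={label} | {tweet[:45]}")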
  4. Friends and therapists often encourage people in distress to say how they feel (i.e., name their emotions) with the hope that identifying their emotions will help them cope. Although lay and some psychological theories posit that emotion naming should facilitate subsequent emotion regulation, there is little research directly testing this question. Here, we report on two experimental studies that test how naming the emotions evoked by aversive images impacts subsequent regulation of those emotions. In study 1 (N = 80), participants were randomly assigned into one of four between-subjects conditions in which they either (i) passively observed aversive images, (ii) named the emotions that these images made them feel, (iii) regulated their emotions by reappraising the meaning of the images, or (iv) both named and regulated their emotions. Analyses of self-reported negative affect revealed that emotion naming impeded emotion regulation via reappraisal. Participants who named their emotions before reappraising reported feeling worse than those who regulated without naming. Study 2 (N = 60) replicated these findings in a within-participants design, demonstrated that emotion naming also impeded regulation via mindful acceptance, and showed that the observed effects were unrelated to a measure of social desirability, thereby mitigating the concern of experimenter demand. Together, these studies show that the impact of emotion naming on emotion regulation opposes common intuitions: instead of facilitating emotion regulation via reappraisal or acceptance, constructing an instance of a specific emotion category by giving it a name may “crystalize” one’s affective experience and make it more resistant to modification.

     
  5. In general, people tend to identify the emotions of others from their facial expressions; however, recent findings suggest that we may be more accurate when we hear someone’s voice than when we look only at their facial expression. The study reported in this paper examined whether these findings hold true for animated agents. A total of 37 subjects participated in the study: 19 males, 14 females, and 4 of non-specified gender. Subjects were asked to view 18 video stimuli; 9 clips featured a male agent and 9 clips a female agent. Each agent showed 3 different facial expressions (happy, angry, neutral), each one paired with 3 different voice lines spoken in 3 different tones (happy, angry, neutral). Hence, in some clips the agent’s tone of voice and facial expression were congruent, while in others they were not. Subjects answered questions regarding the emotion they believed the agent was feeling and rated the emotion’s intensity, typicality, and sincerity. Findings showed that emotion recognition rates and ratings of emotion intensity, typicality, and sincerity were highest when the agent’s face and voice were congruent. However, when the channels were incongruent, subjects identified the emotion more accurately from the agent’s facial expression than from the tone of voice.