Abstract: Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI's negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions ("smart replies"), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships: it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected of using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits when it is used overtly.
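To make the mechanism concrete, a smart-reply system of the kind studied ranks a set of candidate responses against an incoming message. The sketch below is purely illustrative and is not the system from the paper: it ranks a few invented canned replies by word overlap with the message (real systems use learned models).

```python
# Illustrative toy "smart reply" recommender (NOT the system studied in
# the paper): rank hypothetical canned replies by word overlap with the
# incoming message and return the top k suggestions.

CANNED_REPLIES = [
    "Sounds good to me!",
    "Sorry, I can't make it.",
    "Thanks so much!",
    "Can we talk about this tomorrow?",
]

def suggest_replies(message, candidates=CANNED_REPLIES, k=3):
    """Return the k candidate replies sharing the most words with `message`."""
    msg_words = set(message.lower().split())
    def overlap(reply):
        # Number of shared lowercase tokens between message and reply.
        return len(msg_words & set(reply.lower().split()))
    # Python's sort is stable, so ties keep their original order.
    return sorted(candidates, key=overlap, reverse=True)[:k]

print(suggest_replies("thanks for the help"))
# → ['Thanks so much!', 'Sounds good to me!', "Sorry, I can't make it."]
```

Even this crude ranking illustrates the paper's point: the suggested replies skew toward short, positive phrasings, which is one plausible route to the increased positive emotional language the experiments observed.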
This content will become publicly available on May 11, 2025
Making Transparency Influencers: A Case Study of an Educational Approach to Improve Responsible AI Practices in News and Media
Concerns about the risks posed by artificial intelligence (AI) have resulted in growing interest in algorithmic transparency. While algorithmic transparency is well-studied, there is evidence that many organizations place little value on implementing it. In this case study, we test a ground-up approach to ensuring better real-world algorithmic transparency by creating transparency influencers: motivated individuals within organizations who advocate for transparency. We held an interactive online workshop on algorithmic transparency and advocacy for 15 professionals from news, media, and journalism. We reflect on workshop design choices and present insights from participant interviews. We found positive evidence for our approach: in the days following the workshop, three participants had engaged in pro-transparency advocacy. Notably, one of them advocated for algorithmic transparency at an organization-wide AI strategy meeting. In the words of a participant: "if you are questioning whether or not you need to tell people [about AI], you need to tell people."
- PAR ID: 10514477
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
- ISBN: 9798400703317
- Page Range / eLocation ID: 1 to 8
- Subject(s) / Keyword(s): responsible AI; transparency; explainability; artificial intelligence; machine learning; tempered radicals
- Format(s): Medium: X
- Location: Honolulu, HI, USA
- Sponsoring Org: National Science Foundation
More Like This
-
In this session, we will consider how to use place-based data to build your case to sponsors for funding research at your institution, particularly for sponsors who operate on the national or international scale. The setting of your institution—the communities it developed in, the region where it operates, and the people it reaches and serves—is key for conveying its unique capacities and potentials, and for making sponsors eager to bring you into their funding portfolio. How can data help you introduce yourself as an institution and tell your story in geographical and economic context? In this session we will explore US Census data, other federal data hubs, and research and reporting from organizations such as the Pew Research Center or the National Bureau of Economic Research (NBER). We will cover the benefits and challenges of working with raw data; identify suitable data types for certain purposes, such as diversity and equity issues; and consider what kinds of data and presentations are most compelling to different types of funders. With greater awareness of what data and tools are available, you can "put yourself on the map" and paint a vivid picture of your community for prospective funders. Presented at the 2024 Research Analytics Summit in Albuquerque, NM.
-
Artificial intelligence (AI) underpins virtually every experience that we have—from search and social media to generative AI and immersive social virtual reality (SVR). For Generation Z, there is no "before AI." As adults, we must humble ourselves to the notion that AI is shaping youths' world in ways that we don't understand, and we need to listen to them about their lived experiences. We invite researchers from academia and industry to participate in a workshop with youth activists to set the agenda for research into how AI-driven emerging technologies affect youth and how to address these challenges. This reflective workshop will amplify youth voices and empower youth and researchers to set an agenda together. As part of the workshop, youth activists will participate in a panel and steer the conversation around the agenda for future research. All will participate in group research agenda-setting activities to reflect on their experiences with AI technologies and consider ways to tackle these challenges.
-
An initial exploratory study examined basic parameters of the sustainability mindset in a historically underrepresented group within engineering. An NSF water quality engineering research project engaged citizen scientists from vulnerable Latinx families in the design, construction, and use of acrylic concrete structures for rainwater harvesting. At the start, middle, and end of the project, participants were asked to share their perceptions of sustainability through a series of exploratory focus group questions: "How do you feel about droughts in the region? Can you please tell me what you know about drought resiliency? Do you know ways a person might be able to conserve water during a drought? Can you please tell me what you know about water quality testing?" Three coders (an environmental engineer, a civil engineer, and a sociologist) conducted a domain analysis of the focus groups to determine emergent themes reflecting the sustainability mindset of the citizen scientists. Preliminary results show that between the onset and conclusion of the rainwater harvesting project, participants increasingly articulated their thoughts on sustainability in a future-oriented context requiring collective action in a broader, community sense. The preliminary findings have implications for sustainability-focused engineering outreach and crowdsourcing efforts.
-
People form perceptions and interpretations of AI through external sources prior to their interaction with new technology. For example, shared anecdotes and media stories influence prior beliefs that may or may not accurately represent the true nature of AI systems. We hypothesize that people's prior perceptions and beliefs will affect human-AI interactions and usage behaviors when using new applications. This paper presents a user experiment exploring the interplay between users' pre-existing beliefs about AI technology, individual differences, and previously established sources of cognitive bias from first impressions with an interactive AI application. We employed questionnaire measures as features to categorize users into profiles based on their prior beliefs and attitudes about technology. In addition, participants were assigned to one of two controlled conditions designed to evoke either positive or negative first impressions during an AI-assisted judgment task using an interactive application. The experiment and results provide empirical evidence that profiling users by surveying them on their prior beliefs and differences can be a beneficial approach for mitigating bias (and/or unanticipated usage), instead of seeking one-size-fits-all solutions.
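The profiling idea described above can be sketched very simply: questionnaire responses become numeric features, and users are bucketed into coarse belief profiles. The code below is a hypothetical illustration only; the thresholds, labels, and data are invented and are not taken from the paper, which does not specify its profiling procedure at this level.

```python
# Hypothetical sketch of questionnaire-based user profiling: bucket users
# into coarse prior-belief profiles from Likert-scale (1-5) agreement
# scores about AI. Thresholds and profile labels are invented for
# illustration and are NOT from the paper.

def assign_profile(likert_scores):
    """Map a list of 1-5 agreement scores to a coarse belief profile."""
    mean = sum(likert_scores) / len(likert_scores)
    if mean >= 4.0:
        return "high-trust"   # strongly positive prior beliefs about AI
    if mean <= 2.0:
        return "skeptical"    # strongly negative prior beliefs about AI
    return "mixed"            # ambivalent or middling responses

# Invented example data: three users' responses to three survey items.
users = {
    "u1": [5, 4, 5],
    "u2": [1, 2, 2],
    "u3": [3, 3, 4],
}
profiles = {uid: assign_profile(scores) for uid, scores in users.items()}
print(profiles)
# → {'u1': 'high-trust', 'u2': 'skeptical', 'u3': 'mixed'}
```

In practice such profiles would feed into per-group interventions or analyses (e.g., comparing how "high-trust" vs. "skeptical" users respond to a negative first impression), which is the kind of tailored mitigation the abstract contrasts with one-size-fits-all solutions.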