Abstract Artificially intelligent communication technologies (AICTs) that operate autonomously with high degrees of conversational fluency can make communication decisions on behalf of their principal users and communicate with those principals’ audiences on their behalf. In this study, we explore how the involvement of AICTs in communication activities shapes how principals engage in impression management and how their communication partners form impressions of them. Through an inductive, comparative field study of users of two AI scheduling technologies, we uncover three communicative practices through which principals engaged in impression management when AICTs communicated on their behalf: interpretation, diplomacy, and staging politeness. We also uncover three processes through which communication partners formed impressions of principals when communicating with them via AICTs: confirmation, transference, and compartmentalization. We show that communication partners can transfer impressions of AICTs to the principals themselves and outline the conditions under which such transference is and is not likely. We discuss the implications of these findings for the study of technological mediation of impression management and formation in the age of artificial intelligence and present propositions to guide future empirical research.
Enacting machine agency when AI makes one’s day: understanding how users relate to AI communication technologies for scheduling
Abstract AI Communication Technologies (AICTs) make decisions about users’ communication on their behalf. Users’ implementation of AICTs that act autonomously may both enable and constrain how they accomplish their work and interact with others. Drawing on interviews with users of two AICTs with differing levels of autonomy designed for work-related scheduling, this study investigated how users enacted AICTs in practice. Users of both tools drew on AICTs’ autonomous capabilities to enact machine agency, a structure that assigns AICTs the power to allocate resources, which helped them increase scheduling efficiency and guide how others interacted with them. Users of the tool that autonomously implemented decisions described a process of enactment in which they used the tool to control their work, perceived the tool as exhibiting too much control, and acted to regain control. I present implications for understanding how people enact machine agency with AICTs that make decisions about their work.
- Award ID(s):
- 1922266
- PAR ID:
- 10528423
- Publisher / Repository:
- Oxford University Press
- Date Published:
- Journal Name:
- Journal of Computer-Mediated Communication
- Volume:
- 29
- Issue:
- 4
- ISSN:
- 1083-6101
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Laboratory experimentation is a key component of the development of professional engineers. However, experiments conducted in chemical engineering laboratory classes are commonly more prescriptive than the problems faced by practicing engineers, who have the agency to make consequential decisions across the experiment and the communication of results. Understanding how experiments in laboratory courses vary in offering students opportunities to make such decisions, and how students navigate higher-agency learning experiences, is therefore important for preparing graduates ready to direct these practices. In this study, we sought to answer the following research questions: How do students perceive their agency in course-based undergraduate research experiences? What factors are measured by the Consequential Agency in Laboratory Experiments survey? To better understand student perceptions of their agency in relation to laboratory experiments, we first conducted a case study of a course-based undergraduate research experience (CURE) in a senior-level chemical engineering laboratory course. We then surveyed six upper-division laboratory courses across two universities using an initial version of the Consequential Agency in Laboratory Experiments survey. We used exploratory factor analysis to investigate the validity of the survey data for measuring the relevant constructs of authenticity, agency in specific domains, responsibility, and opportunity to make decisions. We found that, with instructional support, students in the CURE recognized that failure could itself provide opportunities for learning. They valued having the agency to make consequential decisions, even when they also found the experience challenging. We also found strong support for items measuring agency as responsibility, authenticity, agency in the communication domain, agency in the experimental-design domain, and opportunity to make decisions.
  These findings give us insight into the value of higher-agency laboratory experiments, and they provide a foundation for developing a more precise survey capable of measuring agency across various laboratory experiment practices. Such a survey will enable future studies that investigate the impacts of increasing agency in just one domain versus in several. In turn, this can aid faculty in developing higher-agency learning experiences that are more feasible to implement than CUREs.
- Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach to improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a people-selection context, having stakeholders, both decision makers (faculty) and decision subjects (students), use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that served as boundary objects for deliberating over organizational decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
- Abstract Artificial intelligence and machine learning (AI/ML) have attracted a great deal of attention from the atmospheric science community. The explosion of attention on AI/ML development carries implications for the operational community, prompting questions about how novel AI/ML advancements will translate from research into operations. However, the field lacks empirical evidence on how National Weather Service (NWS) forecasters, as key intended users, perceive AI/ML and its use in operational forecasting. This study addresses this crucial gap through structured interviews conducted with 29 NWS forecasters from October 2021 through July 2023, in which we explored their perceptions of AI/ML in forecasting. We found that forecasters generally prefer the term "machine learning" over "artificial intelligence" and that labeling a product as AI/ML did not hurt perceptions of the product; it even made some forecasters more excited about it. Forecasters also had a wide range of familiarity with AI/ML, and overall they were (tentatively) open to the use of AI/ML in forecasting. We also provide examples of specific areas related to AI/ML that forecasters are excited or hopeful about, and areas that they are concerned or worried about. One concern raised in several ways was that AI/ML could replace forecasters or remove them from the forecasting process. However, forecasters expressed a widespread and deep commitment to providing the best possible forecasts and services to uphold the agency mission, using whatever tools or products are available to assist them. Last, we note how forecasters' perceptions evolved over the course of the study.
- People often rely on their friends, family, and other loved ones to help them make decisions about digital privacy and security. However, these social processes are rarely supported by technology. To address this gap, we developed an Android-based mobile application ("app") prototype that helps individuals collaborate with people they know to make informed decisions about their app privacy permissions. To evaluate our design, we conducted an interview study with 10 college students while they interacted with our prototype. Overall, participants responded positively to the novel idea of using social collaboration as a means for making better privacy decisions. Yet we also found that users are less inclined to help others and may only be willing to partake in conversations that directly affect themselves. We discuss the potential for embedding social processes in the design of systems that support privacy decision-making, as well as some of the challenges of this approach.
