Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these autonomous systems evolve from reactive (acting on user input) to proactive (acting without requiring user intervention), there is a need to explore how explanations of these agents' actions should evolve as well. In this work, we explore the design of explanations through participatory design methods for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. We qualitatively analyzed the data collected through these sessions and found that participants' reasoning about agent actions led them to speculate heavily about its design. These speculations significantly influenced participants' desire for explanations and the controls they sought over the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, its explanation design must also account for social norms. Finally, because users are experts on their own habits and behaviors, the agent can learn their preferences directly from them when justifying its actions.
Smart voice assistants such as Amazon Alexa and Google Home are becoming increasingly pervasive in our everyday environments. Despite their benefits, their miniaturized and embedded cameras and microphones raise important privacy concerns related to surveillance and eavesdropping. Recent work on the privacy concerns of people in the vicinity of these devices has highlighted the need for 'tangible privacy', where control and feedback mechanisms can provide a more assured sense of whether the camera or microphone is 'on' or 'off'. However, current designs of these devices lack adequate mechanisms to provide such assurances. To address this gap in the design of smart voice assistants, especially for disabling microphones, we evaluate several designs that do or do not incorporate tangible control and feedback mechanisms. By comparing people's perceptions of risk, trust, reliability, usability, and control for these designs in a between-subjects online experiment (N=261), we find that devices with tangible, built-in physical controls are perceived as more trustworthy and usable than those with non-tangible mechanisms. Our findings point toward an approach for tangible, assured privacy, especially in the context of embedded microphones.
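To make the comparison concrete, the sketch below shows the kind of between-subjects analysis such a study involves: each participant rates a single design, and mean trust ratings are compared across designs. Everything here is hypothetical; the condition names, group sizes, and simulated 7-point ratings are ours, not the study's data or analysis code, and the sketch only illustrates the experimental logic.

```python
# A minimal, hypothetical sketch of a between-subjects comparison of
# trust ratings across microphone-control designs. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 7-point trust ratings; each participant saw one design.
ratings = {
    "physical_switch":   rng.integers(4, 8, size=87),  # tangible control
    "software_toggle":   rng.integers(2, 7, size=87),  # non-tangible control
    "voice_command_off": rng.integers(1, 6, size=87),
}

# One-way ANOVA: do mean trust ratings differ across the three designs?
f_stat, p_value = stats.f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```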
Identifying instances when a user will not be able to attend to an incoming message, and constructing an auto-response with relevant contextual information, may help reduce the social pressure many users face to respond immediately. Mobile messaging behavior often varies from one person to another. As a result, a personalized model can capture a user's messaging behavior, and thus predict their inattentive states, more accurately than a generic model built from the profiles of several users. However, creating accurate personalized models requires a non-trivial amount of individual data, which is often not available for new users. In this work, we investigate a weighted hybrid approach to model users' attention to messaging. Through dynamic performance-based weighting, we combine the predictions of three types of models (a general model, a group model, and a personalized model) to create an approach that works despite a lack of initial data while adapting to the user's behavior. We present the details of our modeling approach and the evaluation of the model with over three weeks of data from 274 users. Our results highlight the value of hybrid weighted modeling for predicting when a user cannot attend to their messages.
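A minimal sketch of what such dynamic performance-based weighting could look like appears below. It assumes each component model exposes a predict_proba method returning P(inattentive); the class names, the exponential-decay update, and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
class ConstantModel:
    """Stand-in for a trained classifier; returns a fixed P(inattentive)."""
    def __init__(self, p):
        self.p = p

    def predict_proba(self, x):
        return self.p


class WeightedHybrid:
    """Blend general, group, and personalized models by recent accuracy."""
    def __init__(self, models, decay=0.9):
        self.models = models                          # name -> model
        self.scores = {name: 1.0 for name in models}  # running accuracy estimates
        self.decay = decay                            # weight on past performance

    def predict_proba(self, x):
        total = sum(self.scores.values())
        # Each model's vote is scaled by its share of recent accuracy.
        return sum((self.scores[n] / total) * m.predict_proba(x)
                   for n, m in self.models.items())

    def update(self, x, y_true):
        # Once the true (in)attentiveness outcome is known, nudge each
        # model's score toward 1 if it was right and toward 0 if it was wrong.
        for n, m in self.models.items():
            correct = float((m.predict_proba(x) >= 0.5) == y_true)
            self.scores[n] = self.decay * self.scores[n] + (1 - self.decay) * correct


hybrid = WeightedHybrid({
    "general":  ConstantModel(0.40),
    "group":    ConstantModel(0.60),
    "personal": ConstantModel(0.85),
})
print(hybrid.predict_proba(x=None))  # blended P(inattentive)
hybrid.update(x=None, y_true=True)   # shifts weight toward the correct models
```

Under this scheme, the blend leans on whichever model has been accurate recently, so a new user would initially be served mostly by the general and group models and shift toward the personalized model as its accuracy improves.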
Delays in responding to mobile messages can cause negative emotions in message senders and can affect an individual's social relationships. Recipients, too, feel pressure to respond even during inopportune moments. A messaging assistant that could respond with relevant contextual information on behalf of individuals while they are unavailable might reduce the pressure to respond immediately and help put the sender at ease. By modelling attentiveness to messaging, we aim to (1) predict instances when a user is not able to attend to an incoming message within a reasonable time and (2) identify what contextual factors can explain the user's attentiveness, or lack thereof, to messaging. In this work, we investigate two approaches to modelling attentiveness: a general approach, in which data from a group of users is combined to form a single model for all users; and a personalized approach, in which an individual model is created for each user. Evaluating both models, we observed that on average, with just seven days of training data, the personalized model outperforms the generalized model in both accuracy and F-measure for predicting inattentiveness. Further, we observed that in the majority of cases, the messaging patterns identified by the attentiveness models varied widely across users. For example, the top feature in the generalized model appeared in the top five features of only 41% of the individual personalized models.
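The closing statistic suggests a simple feature-overlap check: how often does the generalized model's top feature rank among each personalized model's top five? A minimal sketch follows, with entirely hypothetical feature names and randomly generated importance scores standing in for the real models' feature importances.

```python
# Hypothetical feature-overlap check between a generalized model and
# per-user personalized models. All names and scores are toy data.
import numpy as np


def top_k(importances, k):
    """Names of the k highest-importance features."""
    return {name for name, _ in sorted(importances.items(),
                                       key=lambda kv: kv[1], reverse=True)[:k]}


features = ["screen_on", "hour_of_day", "ringer_mode", "location",
            "app_in_use", "day_of_week", "battery_level"]

rng = np.random.default_rng(1)
# One importance dict for the generalized model and one per user.
general = dict(zip(features, rng.random(len(features))))
personalized = [dict(zip(features, rng.random(len(features))))
                for _ in range(50)]

top_general = next(iter(top_k(general, 1)))
overlap = sum(top_general in top_k(p, 5) for p in personalized) / len(personalized)
print(f"Generalized top feature is in the top five of {overlap:.0%} of personalized models")
```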