Regardless of how much data artificial intelligence agents have available, agents will inevitably encounter previously unseen situations in real-world deployments. Reacting to novel situations by acquiring new information from other people—socially situated learning—is a core faculty of human development. Unfortunately, socially situated learning remains an open challenge for artificial intelligence agents because they must learn how to interact with people to seek out the information that they lack. In this article, we formalize the task of socially situated artificial intelligence—agents that seek out new information through social interactions with people—as a reinforcement learning problem where the agent learns to identify meaningful and informative questions via rewards observed through social interaction. We manifest our framework as an interactive agent that learns how to ask natural language questions about photos as it broadens its visual intelligence on a large photo-sharing social network. Unlike active-learning methods, which implicitly assume that humans are oracles willing to answer any question, our agent adapts its behavior based on observed norms of which questions people are or are not interested in answering. Through an 8-month deployment where our agent interacted with 236,000 social media users, our agent improved its performance at recognizing new visual information by 112%. A controlled field experiment confirmed that our agent outperformed an active-learning baseline by 25.6%. This work advances opportunities for continuously improving artificial intelligence (AI) agents that better respect norms in open social environments.
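The abstract above frames question-asking as a reinforcement learning problem in which rewards come from social interaction: informative answers are rewarding, while ignored questions are not. A minimal sketch of that idea, assuming a simple epsilon-greedy bandit over question types (the class and parameter names here are illustrative, not the paper's actual architecture):

```python
import random

class QuestionPolicy:
    """Hypothetical sketch: choose among question types, learning from whether
    people answer (social acceptance) and how informative the answer is."""

    def __init__(self, question_types, epsilon=0.1):
        self.q = {t: 0.0 for t in question_types}  # running value estimate per type
        self.n = {t: 0 for t in question_types}    # times each type was asked
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))     # explore a random question type
        return max(self.q, key=self.q.get)         # exploit the best-valued type

    def update(self, qtype, answered, info_gain):
        # An unanswered question signals a norm violation and earns zero reward;
        # an answered one is rewarded by how much new information it yielded.
        reward = info_gain if answered else 0.0
        self.n[qtype] += 1
        self.q[qtype] += (reward - self.q[qtype]) / self.n[qtype]
```

Under this sketch, question types that people routinely ignore drift toward zero value and stop being asked, which is the norm-adaptation behavior the abstract contrasts with oracle-style active learning.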
Invoking Principles of Groupware to Develop and Evaluate Present and Future Human-Agent Teams
Advances in artificial intelligence are constantly increasing its viability as a team member, enabling it to work effectively alongside humans and other artificial teammates. Unfortunately, the digital nature of artificial teammates and their restrictive communication and coordination requirements complicate the interaction patterns that exist. In light of this challenge, we create a theoretical framework, grounded in the literature on groupware and human-agent teamwork, that details the possible interactions in human-agent teams, emphasizing interactions through groupware. As artificial intelligence changes and advances, interaction in human-agent teams will advance with it, meaning interaction frameworks and groupware must adapt to these changes. We provide examples and a discussion of the framework's ability to adapt based on advancements in relevant research areas such as natural language processing and artificial general intelligence. The result is a framework that details human-agent interaction over the coming years and can be used to guide groupware development.
- Award ID(s):
- 1829008
- PAR ID:
- 10284451
- Date Published:
- Journal Name:
- Proceedings of the 8th International Conference on Human-Agent Interaction
- Page Range / eLocation ID:
- 15 to 24
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Improving our understanding of how humans perceive AI teammates is an important foundation for our general understanding of human-AI teams. Extending relevant work from cognitive science, we propose a framework based on item response theory for modeling these perceptions. We apply this framework to real-world experiments, in which each participant works alongside another person or an AI agent in a question-answering setting, repeatedly assessing their teammate’s performance. Using this experimental data, we demonstrate the use of our framework for testing research questions about people’s perceptions of both AI agents and other people. We contrast mental models of AI teammates with those of human teammates as we characterize the dimensionality of these mental models, their development over time, and the influence of the participants’ own self-perception. Our results indicate that people expect AI agents’ performance to be significantly better on average than the performance of other humans, with less variation across different types of problems. We conclude with a discussion of the implications of these findings for human-AI interaction.
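Item response theory, which the abstract above builds on, relates a latent ability parameter to the probability of success on an item. A minimal sketch using the standard two-parameter logistic (2PL) form, with generic parameter names that are not necessarily the paper's formulation:

```python
import math

def p_success(theta, a, b):
    """2PL IRT: probability that a teammate with perceived ability `theta`
    succeeds on an item with discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

In a perception-modeling setting like the one described, `theta` would be fit from a participant's repeated assessments of their teammate, so a higher expected `theta` for AI teammates corresponds to the abstract's finding that people expect AI agents to perform better on average.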
-
Technological advancement goes hand in hand with economic advancement, meaning applied industries like manufacturing, medicine, and retail are set to leverage new practices like human-autonomy teams. These human-autonomy teams call for deep integration between artificial intelligence and the human workers who make up a majority of the workforce. Given this large-scale implementation, this paper identifies the core principles of the human-autonomy teaming literature relevant to integrating human-autonomy teams in applied contexts and research. A framework is built and defined from these fundamental concepts, with specific examples of its use in applied contexts and of the interactions between various components of the framework. This framework can be utilized by practitioners of human-autonomy teams, allowing them to make informed decisions regarding the integration and training of human-autonomy teams.
-
Teamwork is a set of interrelated reasoning, actions, and behaviors of team members that facilitate common objectives. Teamwork theory and experiments have resulted in a set of states and processes for team effectiveness in both human-human and agent-agent teams. However, human-agent teaming is less well studied because it is so new and involves asymmetry in policy and intent not present in human teams. To optimize team performance in human-agent teaming, it is critical that agents infer human intent and adapt their policies for smooth coordination. Most literature in human-agent teaming builds agents that reference a learned human model. Though these agents are guaranteed to perform well with the learned model, they place heavy assumptions on human policy, such as optimality and consistency, which are unlikely to hold in many real-world scenarios. In this paper, we propose a novel adaptive agent architecture in a human-model-free setting on a two-player cooperative game, namely Team Space Fortress (TSF). Previous human-human team research has shown complementary policies in the TSF game and diversity in human players’ skill, which encourages us to relax the assumptions on human policy. Therefore, we discard learning human models from human data and instead use an adaptation strategy on a pre-trained library of exemplar policies composed of RL algorithms or rule-based methods with minimal assumptions about human behavior. The adaptation strategy relies on a novel similarity metric to infer human policy and then selects the most complementary policy in our library to maximize team performance. The adaptive agent architecture can be deployed in real time and generalizes to any off-the-shelf static agents. We conducted human-agent experiments to evaluate the proposed adaptive agent framework and demonstrated the suboptimality, diversity, and adaptability of human policies in human-agent teams.
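The adaptation strategy described above has two steps: match the observed human behavior to the nearest exemplar policy via a similarity metric, then deploy the library policy known to complement that exemplar. A minimal sketch under stated assumptions (the similarity metric, exemplar representation, and complement table here are placeholders, not the paper's actual metric or library):

```python
def infer_human_policy(observed, exemplars, similarity):
    """Return the name of the exemplar policy most similar to the
    observed human behavior."""
    return max(exemplars, key=lambda name: similarity(observed, exemplars[name]))

def select_agent_policy(observed, exemplars, complements, similarity):
    """Pick the library policy recorded as most complementary to the
    inferred human policy."""
    closest = infer_human_policy(observed, exemplars, similarity)
    return complements[closest]
```

Because selection only compares observed behavior against a fixed library, this scheme needs no learned human model and can run online, which matches the abstract's claim of real-time, model-free deployment.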
-
As artificial agents proliferate, there will be more and more situations in which they must communicate their capabilities to humans, including what they can “see.” Artificial agents have existed for decades in the form of computer-controlled agents in videogames. We analyze videogames not only to inspire the design of better agents, but also to keep agent designers from replicating research that has already been theorized, designed, and tested in depth. We present a qualitative thematic analysis of sight cues in videogames and develop a framework to support human-agent interaction design. The framework identifies the different locations and stimulus types – both visualizations and sonifications – available to designers and the types of information they can convey as sight cues. Insights from several other cue properties are also presented. We close with suggestions for implementing such cues with existing technologies to improve the safety, privacy, and efficiency of human-agent interactions.