Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
- Abstract: Why have core knowledge? Standard answers typically emphasize the difficulty of learning core knowledge from experience, or the benefits it confers for learning about the world. Here, we suggest a complementary reason: core knowledge is critical for learning not just about the external world, but about the mind itself.
- Abstract: The discovery of a new kind of experience can teach an agent what that kind of experience is like. Such a discovery can be epistemically transformative, teaching an agent something they could not have learned without having that kind of experience. However, learning something new does not always require new experience. In some cases, an agent can merely expand their existing knowledge using, e.g., inference or imagination that draws on prior knowledge. We present a computational framework, grounded in the language of partially observable Markov decision processes (POMDPs), to formalize this distinction. We propose that epistemically transformative experiences leave a measurable “signature” distinguishing them from experiences that are not epistemically transformative. For epistemically transformative experiences, learning in a new environment may be comparable to “learning from scratch” (since prior knowledge has become obsolete). In contrast, for experiences that are not transformative, learning in a new environment can be facilitated by prior knowledge of that same kind (since new knowledge can be built upon the old). We demonstrate this in a synthetic experiment inspired by Edwin Abbott’s Flatland, where an agent learns to navigate a 2D world and is subsequently transferred either to a 3D world (epistemically transformative change) or to an expanded 2D world (epistemically non-transformative change). Beyond the contribution to understanding epistemic change, our work shows how tools in computational cognitive science can formalize and evaluate philosophical intuitions in new ways.
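The transfer-versus-scratch signature described in this abstract lends itself to a small simulation. Below is a minimal, hypothetical sketch, not the paper's code: a tabular Q-learning agent is trained in a small 2D grid, and its value table is then reused both in a 3D grid and in a larger 2D grid. The environment sizes, hyperparameters, and identifiers are all illustrative assumptions.

```python
# Minimal, hypothetical sketch (not the paper's code): grid worlds and tabular
# Q-learning stand in for the POMDP setup to expose the transfer-vs-scratch signature.
import random
from collections import defaultdict

class GridWorld:
    """N-dimensional grid; reaching the far corner ends the episode with reward 1."""
    def __init__(self, size, dims):
        self.size, self.dims = size, dims
        self.n_actions = 2 * dims  # one step forward/backward along each axis
        self.goal = tuple(size - 1 for _ in range(dims))

    def reset(self):
        self.pos = tuple(0 for _ in range(self.dims))
        return self.pos

    def step(self, action):
        axis, sign = divmod(action, 2)
        p = list(self.pos)
        p[axis] = min(self.size - 1, max(0, p[axis] + (1 if sign == 0 else -1)))
        self.pos = tuple(p)
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

def q_learning(env, episodes, q=None, alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning; returns the value table and a per-episode steps-to-goal curve."""
    q = defaultdict(float) if q is None else q
    curve = []
    for _ in range(episodes):
        s, steps, done = env.reset(), 0, False
        while not done and steps < 500:
            if random.random() < eps:
                a = random.randrange(env.n_actions)
            else:
                a = max(range(env.n_actions), key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            best_next = max(q[(s2, act)] for act in range(env.n_actions))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s, steps = s2, steps + 1
        curve.append(steps)
    return q, curve

random.seed(0)
q2d, _ = q_learning(GridWorld(5, 2), episodes=300)        # prior 2D experience
_, scratch3d = q_learning(GridWorld(5, 3), episodes=300)  # 3D, no prior
_, transfer3d = q_learning(GridWorld(5, 3), episodes=300, q=defaultdict(float, q2d))
_, transfer2d = q_learning(GridWorld(9, 2), episodes=300, q=defaultdict(float, q2d))
print(sum(scratch3d[:50]) / 50, sum(transfer3d[:50]) / 50, sum(transfer2d[:50]) / 50)
```

In this toy version, 2D and 3D states never share table entries, so the 2D prior cannot help in 3D and the transfer curve should track the from-scratch curve; in the expanded 2D world, overlapping states let old values seed new learning, mirroring the two signatures the abstract describes.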
- Understanding human perceptions of robot performance is crucial for designing socially intelligent robots that can adapt to human expectations. Current approaches often rely on surveys, which can disrupt ongoing human–robot interactions. As an alternative, we explore predicting people’s perceptions of robot performance using non-verbal behavioral cues and machine learning techniques. We contribute the SEAN TOGETHER Dataset, consisting of observations of an interaction between a person and a mobile robot in Virtual Reality, together with perceptions of robot performance provided by users on a 5-point scale. We then analyze how well humans and supervised learning techniques can predict perceived robot performance based on different observation types (such as facial expression and spatial behavior features). Our results suggest that facial expressions alone provide useful information, but in the navigation scenarios that we considered, reasoning about spatial features in context is critical for the prediction task. Also, supervised learning techniques outperformed humans’ predictions in most cases. Further, when predicting robot performance as a binary classification task on unseen users’ data, the F1-score of machine learning models more than doubled that of predictions on a 5-point scale. This suggests good generalization capabilities, particularly in identifying performance directionality over exact ratings. Based on these findings, we conducted a real-world demonstration in which a mobile robot uses a machine learning model to predict how a human who follows it perceives it. Finally, we discuss the implications of our results for implementing these supervised learning models in real-world navigation. Our work paves the path to automatically enhancing robot behavior based on observations of users and inferences about their perceptions of a robot. Free, publicly accessible full text available April 18, 2026.
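As a companion to the unseen-user evaluation this abstract describes, here is a small, hypothetical sketch, not the SEAN TOGETHER pipeline: a classifier predicts a binarized rating from behavioral features, with whole users held out via grouped cross-validation so the F1 scores reflect generalization to people the model has never seen. The feature names and synthetic data are illustrative assumptions.

```python
# Hypothetical sketch of an unseen-user evaluation: predict a binarized
# performance rating from behavioral features, holding out whole users.
# Features and data are illustrative; this is not the SEAN TOGETHER schema.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))          # e.g., facial AUs, robot-person distance, heading, speed
users = rng.integers(0, 20, size=n)  # which participant each observation came from
ratings = np.clip(np.round(3 + X[:, 1] + rng.normal(scale=0.5, size=n)), 1, 5)
y = (ratings >= 3).astype(int)       # binarize the 5-point scale: "good" vs "bad"

# GroupKFold keeps each user's data entirely in train or test, so scores
# reflect generalization to new people rather than memorized individuals.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=users, cv=GroupKFold(n_splits=5), scoring="f1")
print(f"F1 across unseen-user folds: {scores.mean():.2f} +/- {scores.std():.2f}")
```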
- To enable sophisticated interactions between humans and robots in a shared environment, robots must infer the intentions and strategies of their human counterparts. This inference can provide a competitive edge to the robot or enhance human–robot collaboration by reducing the need for explicit communication about task decisions. In this work, we identify specific states within the shared environment, which we refer to as Critical Decision Points, where the actions of a human are especially indicative of their high-level strategy. A robot can significantly reduce its uncertainty about the human’s strategy by observing actions at these points. To demonstrate the practical value of Critical Decision Points, we propose a Receding Horizon Planning (RHP) approach for the robot to influence the movement of a human opponent in a competitive game of hide-and-seek in a partially observable setting. The human plays as the hider and the robot plays as the seeker. We show that the seeker can influence the hider to move towards Critical Decision Points, which facilitates a more accurate estimate of the hider’s strategy. In turn, this helps the seeker catch the hider faster than baselines that estimate the hider’s strategy only when the hider is visible, or that merely minimize the seeker’s distance to the hider.
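One way to make the Critical Decision Point idea concrete is to score each candidate state by how much observing the human's action there would reduce the robot's uncertainty about the human's strategy. The sketch below is a hypothetical illustration using expected entropy reduction (information gain) over a belief; the strategies and action likelihoods are made-up placeholders, not the paper's model.

```python
# Hypothetical sketch: a state is a Critical Decision Point to the extent that
# observing the hider's action there reduces uncertainty over its strategy,
# measured as expected entropy reduction of the belief b(strategy).
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def info_gain(belief, likelihoods):
    """likelihoods[k][a] = P(action a | strategy k) at this state."""
    n_actions = len(likelihoods[0])
    prior_h = entropy(belief)
    gain = 0.0
    for a in range(n_actions):
        # Probability of seeing action a under the current belief.
        p_a = sum(b * likelihoods[k][a] for k, b in enumerate(belief))
        if p_a == 0:
            continue
        posterior = [b * likelihoods[k][a] / p_a for k, b in enumerate(belief)]
        gain += p_a * (prior_h - entropy(posterior))
    return gain

belief = [0.5, 0.5]  # two candidate hider strategies, equally likely a priori
states = {
    # At a corridor fork the strategies act very differently -> informative.
    "corridor_fork": [[0.9, 0.1], [0.1, 0.9]],
    # In open space both strategies behave alike -> observing here tells us little.
    "open_area":     [[0.5, 0.5], [0.6, 0.4]],
}
scores = {s: info_gain(belief, lk) for s, lk in states.items()}
print(max(scores, key=scores.get), scores)  # the fork scores as the Critical Decision Point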
- Work in Human–Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human–robot group interactions. Yet the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this article, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of interaction-shaping robots, including the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human–robot groups to highlight the potential of ISR in different group compositions, and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.
- Recent work in Human-Robot Interaction (HRI) has shown that robots can leverage implicit communicative signals from users to understand how they are being perceived during interactions. For example, these signals can be gaze patterns, facial expressions, or body motions that reflect internal human states. To facilitate future research in this direction, we contribute the REACT database, a collection of two datasets of human-robot interactions that display users’ natural reactions to robots during a collaborative game and a photography scenario. Further, we analyze the datasets to show that interaction history is an important factor that can influence human reactions to robots. As a result, we believe that future models for interpreting implicit feedback in HRI should explicitly account for this history. REACT opens the door to this possibility.
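To illustrate why history matters when interpreting implicit feedback, here is a small, hypothetical sketch, not the REACT pipeline: the same simulated reaction features are classified with and without a running count of the robot's prior errors. All names and data are illustrative assumptions.

```python
# Hypothetical sketch: interpret the same facial-reaction features with and
# without an interaction-history feature (prior robot errors). Illustrative
# names and synthetic data only; this is not the REACT schema.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
reaction = rng.normal(size=(n, 3))         # e.g., smile, brow raise, gaze aversion
prior_errors = rng.integers(0, 5, size=n)  # how many times the robot failed so far
# Simulated label: the same reaction reads more negatively after repeated failures.
y = ((reaction[:, 0] - 0.4 * prior_errors + rng.normal(scale=0.5, size=n)) < 0).astype(int)

X_no_hist = reaction
X_hist = np.column_stack([reaction, prior_errors])
for name, X in [("without history", X_no_hist), ("with history", X_hist)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```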
- Current methods of measuring fairness in human-robot interaction (HRI) research often gauge perceptions of fairness at the conclusion of a task. However, this methodology overlooks the dynamic nature of fairness perceptions, which may shift and evolve as a task progresses. To help address this gap, we introduce a platform designed to help investigate the evolution of fairness over time: the Multiplayer Space Invaders game. This three-player game is structured such that two players work to eliminate as many of their own enemies as possible while a third player makes decisions about which player to support throughout the game. In this paper, we discuss different potential experimental designs facilitated by this platform. A key aspect of these designs is the inclusion of a robot that operates the supporting ship and must make multiple decisions about which player to aid throughout a task. We discuss how capturing fairness perceptions at different points in the game could give deeper insights into how perceptions of fairness fluctuate in response to different variables and decisions made in the game.
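Below is a minimal sketch of the repeated-probe design such a platform enables, under assumptions of ours rather than the platform's actual code: a supporting agent chooses which of two players to aid each round, and fairness perceptions are probed at fixed intervals instead of only at the end.

```python
# Hypothetical sketch of a repeated-measures fairness design: the supporter
# picks which of two players to aid each round, and fairness is probed at
# fixed intervals rather than only at the end. The support policy and probe
# schedule are illustrative, not the platform's implementation.
import random

random.seed(2)
support = {"p1": 0, "p2": 0}
scores = {"p1": 0, "p2": 0}
PROBE_EVERY = 20  # rounds between in-game fairness questionnaires

for t in range(1, 101):
    # Example policy: support whichever player is currently behind.
    target = min(scores, key=scores.get)
    support[target] += 1
    for p in scores:
        scores[p] += random.randint(0, 3) + (2 if p == target else 0)
    if t % PROBE_EVERY == 0:
        share = support["p1"] / (support["p1"] + support["p2"])
        print(f"round {t}: probe fairness now; p1 support share so far = {share:.2f}")
```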