Understanding human perceptions of robot performance is crucial for designing socially intelligent robots that can adapt to human expectations. Current approaches often rely on surveys, which can disrupt ongoing human–robot interactions. As an alternative, we explore predicting people’s perceptions of robot performance using non-verbal behavioral cues and machine learning techniques. We contribute the SEAN TOGETHER Dataset, consisting of observations of an interaction between a person and a mobile robot in Virtual Reality, together with perceptions of robot performance provided by users on a 5-point scale. We then analyze how well humans and supervised learning techniques can predict perceived robot performance based on different observation types (such as facial expression and spatial behavior features). Our results suggest that facial expressions alone provide useful information, but in the navigation scenarios we considered, reasoning about spatial features in context is critical for the prediction task. Moreover, supervised learning techniques outperformed humans’ predictions in most cases. Further, when predicting robot performance as a binary classification task on unseen users’ data, the F1-Score of machine learning models more than doubled that of predictions on a 5-point scale, suggesting good generalization, particularly for identifying the direction of perceived performance rather than exact ratings. Based on these findings, we conducted a real-world demonstration in which a mobile robot used a machine learning model to predict how a human following it perceived its performance. Finally, we discuss the implications of our results for implementing such supervised learning models in real-world navigation. Our work paves the path to automatically enhancing robot behavior based on observations of users and inferences about their perceptions of a robot.
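To make the prediction setup concrete, below is a minimal sketch of the binary classification task on unseen users' data using scikit-learn. The synthetic features (standing in for facial-expression and spatial features), the random-forest model, and the grouped cross-validation are illustrative assumptions, not the dataset's actual schema or the paper's exact pipeline.

```python
# Minimal sketch: binary prediction of perceived robot performance from
# facial-expression and spatial features, evaluated on unseen users.
# Feature layout and synthetic data below are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 600                               # observations
X = rng.normal(size=(n, 6))           # e.g., AU intensities + human-robot distances
y = rng.integers(0, 2, size=n)        # 1 = "good performance", 0 = "poor"
users = rng.integers(0, 30, size=n)   # user ID per observation

# GroupKFold keeps each user's data entirely in train OR test,
# mirroring evaluation on unseen users.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=users,
                         cv=GroupKFold(n_splits=5), scoring="f1")
print(f"F1 across unseen-user folds: {scores.mean():.2f} +/- {scores.std():.2f}")
```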
Let’s move on: Topic Change in Robot-Facilitated Group Discussions
    
Robot-moderated group discussions have the potential to facilitate engaging and productive interactions among human participants. Previous work on topic management in conversational agents has predominantly focused on human engagement and topic personalization, with the agent taking an active role in the discussion. Studies have also shown the usefulness of including robots in groups, yet further work is needed before robots can learn when to change the topic while facilitating discussions. Accordingly, our work investigates the suitability of machine learning models and audiovisual non-verbal features for predicting appropriate topic changes. We annotated interactions between a robot moderator and human participants, and extracted acoustic and body-language features from them. We provide a detailed analysis of the performance of machine learning approaches using sequential and non-sequential data with different sets of features. The results indicate promising performance in classifying inappropriate topic changes, outperforming rule-based approaches. Additionally, acoustic features alone performed comparably to, and as robustly as, the complete set of multimodal features. Our annotated data is publicly available at https://github.com/ghadj/topic-change-robot-discussions-data-2024.
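As a concrete illustration of the non-sequential setting, the sketch below fits a logistic-regression baseline on a handful of acoustic features to classify whether a topic change is appropriate. The feature names, the synthetic labels, and the model choice are assumptions for illustration; the paper's actual feature set and classifiers may differ.

```python
# Minimal sketch: classifying (in)appropriate topic changes from acoustic
# features. Feature names (pitch, energy, pause length) are assumed, not
# the paper's exact feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))        # [mean pitch, mean energy, pause length]
y = (X[:, 2] > 0.5).astype(int)    # toy rule: long pause -> appropriate change

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```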
- Award ID(s): 2143109
- PAR ID: 10566596
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-7502-2
- Page Range / eLocation ID: 2087 to 2094
- Format(s): Medium: X
- Location: Pasadena, CA, USA
- Sponsoring Org: National Science Foundation
More Like this
- Nonverbal interactions are a key component of human communication. As robots increasingly operate in close proximity to people, it is important that they follow the social rules governing the use of space. Prior research has conceptualized personal space as physical zones based on static distances. This work examined how preferred interaction distance can change across different interaction scenarios. We conducted a user study using three different robot heights. We also examined the difference in preferred interaction distance when a robot approaches a human and, conversely, when a human approaches a robot. Factors included in the quantitative analysis were the participants' gender, the robot's height, and the method of approach. Subjective measures included human comfort and perceived safety. The results show that robot height, participant gender, and method of approach significantly influenced the measured proxemic zones and, accordingly, participant comfort. Subjective data showed that participants regarded robots more favorably after taking part in the study. Furthermore, the NAO was perceived most positively by respondents according to various metrics, and the PR2 Tall most negatively.
- Modern robotics heavily relies on machine learning and has a growing need for training data. Advances in and commercialization of virtual reality (VR) present an opportunity to use VR as a tool to gather such data for human-robot interactions. We present the Robot Interaction in VR simulator, which allows human participants to interact with simulated robots and environments in real time. We are particularly interested in spoken interactions between the human and robot, which can be combined with the robot's sensory data for language grounding. To demonstrate the utility of the simulator, we describe a study that investigates whether a user's head pose can serve as a proxy for gaze in a VR object selection task. Participants were asked to describe a series of known objects, providing approximate labels for the focus of attention. We demonstrate that a concept of gaze derived from head pose can effectively narrow the set of objects that are the target of participants' attention and linguistic descriptions. (A sketch of this head-pose filtering idea appears after this list.)
- Social touch provides a rich non-verbal communication channel between humans and robots. Prior work has identified a set of touch gestures for human-robot interaction and described them with natural language labels (e.g., stroking, patting). Yet, no data exists on the semantic relationships between the touch gestures in users' minds. To endow robots with touch intelligence, we investigated how people perceive the similarities of social touch labels from the literature. In an online study, 45 participants grouped 36 social touch labels based on their perceived similarities and annotated their groupings with descriptive names. We derived quantitative similarities of the gestures from these groupings and analyzed the similarities using hierarchical clustering. The analysis resulted in 9 clusters of touch gestures formed around the social, emotional, and contact characteristics of the gestures. We discuss the implications of our results for designing and evaluating touch sensing and interactions with social robots. (A sketch of the grouping-to-clustering pipeline appears after this list.)
- In a future where many robot assistants support human endeavors, interactions with multiple robots will occur either simultaneously or sequentially. This paper highlights an initial exploration into one type of sequential interaction, which we call "transfers" between multiple service robots. We defined the act of transferring between service robots and decomposed it into five stages. Our research was informed by a design workshop investigating the usage of multiple service robots. We also identified open design and research questions on this topic.
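As referenced in the head-pose abstract above, here is a minimal sketch of using head pose as a gaze proxy: keep only the objects whose direction from the head lies within an angular cone around the head's forward vector. The 15-degree half-angle and the toy scene are assumptions, not values from the study.

```python
# Minimal sketch of the head-pose-as-gaze idea: filter candidate objects
# to those inside an angular cone around the head's forward direction.
import numpy as np

def candidates_in_gaze_cone(head_pos, head_forward, objects, half_angle_deg=15.0):
    """Return names of objects within the head-pose cone (assumed threshold)."""
    fwd = head_forward / np.linalg.norm(head_forward)
    cos_thresh = np.cos(np.radians(half_angle_deg))
    hits = []
    for name, pos in objects.items():
        to_obj = np.asarray(pos, dtype=float) - head_pos
        to_obj /= np.linalg.norm(to_obj)
        if np.dot(fwd, to_obj) >= cos_thresh:  # inside the cone
            hits.append(name)
    return hits

# Toy scene: head at the origin, looking down the +x axis.
objects = {"mug": (1.0, 0.0, 0.1), "book": (0.0, 1.0, 0.0), "lamp": (0.9, 0.2, 0.0)}
print(candidates_in_gaze_cone(np.zeros(3), np.array([1.0, 0.0, 0.0]), objects))
# -> ['mug', 'lamp']: the candidate set is narrowed before language grounding.
```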
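And a minimal sketch of the grouping-to-clustering pipeline from the social-touch abstract: derive pairwise similarities from how often participants grouped two labels together, convert them to distances, and cluster hierarchically. The three labels and toy groupings stand in for the study's 36 labels and 45 participants.

```python
# Minimal sketch: co-grouping similarity of touch-gesture labels,
# followed by average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

labels = ["stroke", "pat", "hit"]
# Each participant's grouping: lists of label indices placed together.
groupings = [[[0, 1], [2]], [[0], [1, 2]], [[0, 1], [2]]]

# Similarity = fraction of participants who grouped a pair together.
sim = np.zeros((3, 3))
for g in groupings:
    for group in g:
        for i in group:
            for j in group:
                sim[i, j] += 1
sim /= len(groupings)

# Convert similarity to distance and cluster hierarchically.
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
print(dict(zip(labels, fcluster(Z, t=2, criterion="maxclust"))))
# -> {'stroke': 1, 'pat': 1, 'hit': 2}: frequently co-grouped labels cluster.
```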