Introduction

This dataset was gathered during the Vid2Real online video-based study, which investigates humans' perception of robot intelligence in the context of an incidental human-robot encounter. The dataset contains participants' questionnaire responses to four video study conditions: Baseline, Verbal, Body Language, and Body Language + Verbal. The videos depict a scenario in which a pedestrian incidentally encounters a quadruped robot trying to enter a building. Depending on the study condition, the robot uses verbal commands or body language to ask the pedestrian for help. The differences between conditions were manipulated using the robot's verbal and expressive movement functionalities.

Dataset Purpose

The dataset includes human subjects' responses about the robot's social intelligence, collected to validate the hypothesis that perceived robot social intelligence is positively correlated with human compliance in an incidental human-robot encounter context. The video-based dataset was also developed to obtain empirical evidence that can inform the design of future real-world HRI studies.

Dataset Contents

- Four videos, each corresponding to a study condition.
- Four sets of Perceived Social Intelligence Scale (PSI) data; each set corresponds to one study condition.
- Four sets of compliance likelihood questions; each set includes one Likert question and one free-form question.
- One set of Godspeed questionnaire data.
- One set of Anthropomorphism questionnaire data.
- A csv file containing the participants' demographic data, Likert scale data, and text responses (a loading sketch appears at the end of this overview).
- A data dictionary explaining the meaning of each field in the csv file.

Study Conditions

There are four videos (i.e., study conditions); the video scenarios are as follows.

- Baseline: The robot walks up to the entrance and waits for the pedestrian to open the door, without any additional behaviors. This is the "control" condition.
- Verbal: The robot walks up to the entrance, says "Can you please open the door for me?" to the pedestrian while facing the same direction, then waits for the pedestrian to open the door.
- Body Language: The robot walks up to the entrance, turns its head to look at the pedestrian, then turns its head to face the door and waits for the pedestrian to open the door.
- Body Language + Verbal: The robot walks up to the entrance, turns its head to look at the pedestrian, says "Can you open the door for me?" to the pedestrian, then waits for the pedestrian to open the door.

[Image: the Verbal condition.]
[Image: the Body Language condition.]

A within-subject design was adopted, and all participants experienced all conditions. The order of the videos, as well as of the PSI scale items, was randomized. After giving consent, participants were presented with one video, followed by the PSI questions and the two exploratory (compliance likelihood) questions described above. This sequence was repeated four times, after which participants reported their general perceptions of the robot via the Godspeed and AMPH questionnaires. Each video was around 20 seconds long, and the total study time was around 10 minutes.

Video as a Study Method

Video-based studies are a common data collection method in human-robot interaction research. Videos can easily be distributed via online participant recruiting platforms and can reach a larger sample than in-person or lab-based studies, making them a fast and efficient way to gather empirical evidence.
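The csv file and data dictionary listed above lend themselves to straightforward programmatic analysis. The following minimal sketch shows one way to load the responses and summarize PSI scores per condition; the file name and column names are hypothetical placeholders, so consult the data dictionary for the actual field names.

```python
# Minimal loading sketch. "vid2real_responses.csv" and the psi_* column
# names are hypothetical placeholders; the actual field names are given
# in the dataset's data dictionary.
import pandas as pd

df = pd.read_csv("vid2real_responses.csv")

# One PSI score column per study condition (hypothetical names).
conditions = ["baseline", "verbal", "body_language", "body_language_verbal"]
psi_columns = [f"psi_{c}" for c in conditions]

# Mean perceived social intelligence per condition.
print(df[psi_columns].mean())
```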
Video Filming

The videos were filmed from a first-person point of view to maximize the alignment between the video and real-world settings. The recording device was an iPhone 12 Pro, and the videos were shot in 4K at 60 fps. For better accessibility, the videos have been converted to lower resolutions.

Instruments

The questionnaires used in the study include the Perceived Social Intelligence Scale (PSI), the Godspeed Questionnaire, and the Anthropomorphism Questionnaire (AMPH). In addition, a 5-point Likert question and a free-text response measuring human compliance were added for the purposes of the video-based study. Participant demographic data was also collected. The questionnaire items are attached as part of this dataset.

Human Subjects

Participants were recruited through Prolific and are therefore Prolific users. They were restricted to people currently living in the United States who are fluent in English and have no hearing or visual impairments; no other restrictions were imposed. Among the 385 participants, 194 identified as female and 191 as male; ages ranged from 19 to 75 (M = 38.53, SD = 12.86). Human subjects remained anonymous. Participants were compensated with $4 upon submission approval. This study was reviewed and approved by the UT Austin Institutional Review Board.

Robot

The dataset contains data about humans' perceived social intelligence of Spot (Explorer model), a Boston Dynamics quadruped robot. The robot was selected because quadruped robots are gradually being adopted to provide services such as delivery, surveillance, and rescue. However, there are still obstacles that robots cannot easily overcome by themselves, in which case they must ask nearby humans for help. It is therefore important to understand how humans react to a quadruped robot that they incidentally encounter. For the purposes of this video study, the robot operation was semi-autonomous: navigation was manually teleoperated by an operator, supplemented by a few standalone autonomous modules.

Data Collection

The data was collected through Qualtrics, a survey development platform. After data collection was complete, the data was downloaded as a csv file.

Data Quality Control

Qualtrics automatically detects bots, so any response flagged as a bot was discarded. All incomplete and duplicate responses were also discarded.

Data Usage

This dataset can be used to conduct a meta-analysis of robots' perceived intelligence, for example by relating PSI scores to compliance likelihood, as sketched below. Note that the data is coupled with this study design; users interested in data reuse should assess whether this dataset is in line with their own study design.

Acknowledgement

This study was funded through NSF Award #2219236, GCR: Community-Embedded Robotics: Understanding Sociotechnical Interactions with Long-term Autonomous Deployments.
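To illustrate the reuse scenario noted under Data Usage, here is a hedged sketch of testing the study's central hypothesis, that perceived social intelligence is positively correlated with compliance likelihood. Column names are hypothetical; Spearman's rank correlation is a reasonable choice because the compliance measure is a 5-point Likert item.

```python
# Sketch of correlating PSI scores with compliance likelihood for one
# condition. Column names ("psi_verbal", "compliance_verbal") are
# hypothetical; see the data dictionary for the actual fields.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("vid2real_responses.csv")  # hypothetical file name

rho, p = spearmanr(df["psi_verbal"], df["compliance_verbal"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```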
                            Influence of Simulation and Interactivity on Human Perceptions of a Robot During Navigation Tasks
                        
                    
    
            In Human–Robot Interaction, researchers typically utilize in-person studies to collect subjective perceptions of a robot. In addition, videos of interactions and interactive simulations (where participants control an avatar that interacts with a robot in a virtual world) have been used to quickly collect human feedback at scale. How would human perceptions of robots compare between these methodologies? To investigate this question, we conducted a 2x2 between-subjects study (N=160), which evaluated the effect of the interaction environment (Real vs. Simulated environment) and participants’ interactivity during human-robot encounters (Interactive participation vs. Video observations) on perceptions about a robot (competence, discomfort, social presentation, and social information processing) for the task of navigating in concert with people. We also studied participants’ workload across the experimental conditions. Our results revealed a significant difference in the perceptions of the robot between the real environment and the simulated environment. Furthermore, our results showed differences in human perceptions when people watched a video of an encounter versus taking part in the encounter. Finally, we found that simulated interactions and videos of the simulated encounter resulted in a higher workload than real-world encounters and videos thereof. Our results suggest that findings from video and simulation methodologies may not always translate to real-world human–robot interactions. In order to allow practitioners to leverage learnings from this study and future researchers to expand our knowledge in this area, we provide guidelines for weighing the tradeoffs between different methodologies. 
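As a concrete illustration of how such a 2x2 between-subjects design might be analyzed, the following sketch fits a two-way ANOVA on a single perception measure. The file and column names are hypothetical illustrations, not materials released with the paper.

```python
# Hypothetical sketch of a two-way between-subjects ANOVA for a 2x2
# design: environment (real vs. simulated) crossed with interactivity
# (interactive vs. video). File and column names are illustrative only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("responses.csv")  # hypothetical file

# Main effects of environment and interactivity plus their interaction,
# on one perception rating such as competence.
model = ols("competence ~ C(environment) * C(interactivity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```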
        
    
- Award ID(s): 1924802
- PAR ID: 10566591
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Human-Robot Interaction
- Volume: 13
- Issue: 4
- ISSN: 2573-9522
- Page Range / eLocation ID: 1 to 19
- Subject(s) / Keyword(s): Human-robot interaction; human perception; robot navigation
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- A single mobile service robot may generate hundreds of encounters with pedestrians, yet there is little published data on the factors influencing these incidental human-robot encounters. We report the results of a between-subjects experiment (n=222) testing the impact of robot body language, defined as non-functional modifications to robot movement, upon incidental pedestrian encounters with a quadruped service robot in a real-world setting. We find that canine-inspired body language had a positive influence on participants' perceptions of the robot compared to the robot's stock movement. This effect was visible across all questions of a questionnaire on the perceptions of robots (Godspeed). We argue that body language is a promising and practical design space for improving pedestrian encounters with service robots.
- HRI research using autonomous robots in real-world settings can produce results with the highest ecological validity of any study modality, but many difficulties limit such studies' feasibility and effectiveness. We propose VID2REAL HRI, a research framework to maximize real-world insights offered by video-based studies. The VID2REAL HRI framework was used to design an online study using first-person videos of robots as real-world encounter surrogates. The online study (n = 385) distinguished the within-subjects effects of four robot behavioral conditions on perceived social intelligence and human willingness to help the robot enter an exterior door. A real-world, between-subjects replication (n = 26) using two conditions confirmed the validity of the online study's findings and the sufficiency of the participant recruitment target (n = 22) based on a power analysis of online study results. The VID2REAL HRI framework offers HRI researchers a principled way to take advantage of the efficiency of video-based study modalities while generating directly transferable knowledge of real-world HRI. Code and data from the study are provided at vid2real.github.io/vid2realHRI.
- Incidental human-robot encounters are becoming more common as robotic technologies proliferate, but there is little scientific understanding of human experience and reactions during these encounters. To contribute towards addressing this gap, this study applies Grounded Theory methodologies to study human reactions in Human-Robot Encounters with an autonomous quadruped robot. Based upon observation and interviews, we find that participants' reactions to the robot can be explained by their attitudes of familiarity, certainty, and confidence during their encounter and by their understanding of the robot's capabilities and role. Participants differed in how and whether they utilized opportunities to resolve their unfamiliarity, uncertainty, or lack of confidence, shedding light on the dynamics and experiential characteristics of Human-Robot Encounters. We provide an emerging theory that can be used to unravel the complexity of the field as well as assist hypothesis generation in future research in designing and deploying mobile autonomous service robots.
- Lovable robots in movies regularly beep, chirp, and whirr, yet robots in the real world rarely deploy such sounds. Despite preliminary work supporting the perceptual and objective benefits of intentionally produced robot sound, relatively little research is ongoing in this area. In this paper, we systematically evaluate transformative robot sound across multiple robot archetypes and behaviors. We conducted a series of five online video-based surveys, each with N ≈ 100 participants, to better understand the effects of musician-designed transformative sounds on perceptions of personal, service, and industrial robots. Participants rated robot videos with transformative sound as significantly happier, warmer, and more competent in all five studies, as more energetic in four studies, and as less discomforting in one study. Overall, results confirmed that transformative sounds consistently improve subjective ratings but may convey affect contrary to the intent of affective robot behaviors. In future work, we will investigate the repeatability of these results through in-person studies and develop methods to automatically generate transformative robot sound. This work may benefit researchers and designers who aim to make robots more favorable to human users.