<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/">
  <records count="1" morepages="false" start="1" end="1">
    <record rownumber="1">
      <dc:product_type>Conference Paper</dc:product_type>
      <dc:title>Vid2Real HRI: Align video-based HRI study designs with real-world settings</dc:title>
      <dc:creator>Hauser, Elliott; Chan, Yao-Cheng; Modak, Sadanand; Biswas, Joydeep; Hart, Justin</dc:creator>
      <dc:corporate_author/>
      <dc:editor/>
      <dc:description>HRI research using autonomous robots in real-world settings can produce results with the highest ecological validity of any study modality, but many difficulties limit such studies’ feasibility and effectiveness. We propose VID2REAL HRI, a research framework to maximize real-world insights offered by video-based studies. The VID2REAL HRI framework was used to design an online study using first-person videos of robots as real-world encounter surrogates. The online study (n = 385) distinguished the within-subjects effects of four robot behavioral conditions on perceived social intelligence and human willingness to help the robot enter an exterior door. A real-world, between-subjects replication (n = 26) using two conditions confirmed the validity of the online study’s findings and the sufficiency of the participant recruitment target (n = 22) based on a power analysis of online study results. The VID2REAL HRI framework offers HRI researchers a principled way to take advantage of the efficiency of video-based study modalities while generating directly transferable knowledge of real-world HRI. Code and data from the study are provided at vid2real.github.io/vid2realHRI.</dc:description>
      <dc:publisher>IEEE</dc:publisher>
      <dc:date>2024-08-26</dc:date>
      <dc:nsf_par_id>10636093</dc:nsf_par_id>
      <dc:journal_name/>
      <dc:journal_volume/>
      <dc:journal_issue/>
      <dc:page_range_or_elocation>542 to 548</dc:page_range_or_elocation>
      <dc:issn/>
      <dc:isbn>979-8-3503-7502-2</dc:isbn>
      <dc:doi>https://doi.org/10.1109/RO-MAN60168.2024.10731413</dc:doi>
      <dcq:identifierAwardId>2219236</dcq:identifierAwardId>
      <dc:subject/>
      <dc:version_number/>
      <dc:location>Pasadena, CA, USA</dc:location>
      <dc:rights/>
      <dc:institution/>
      <dc:sponsoring_org>National Science Foundation</dc:sponsoring_org>
    </record>
  </records>
</rdf:RDF>