As human-robot interactions become more social, a robot's personality plays an increasingly vital role in shaping user experience and the robot's overall effectiveness. In this study, we examine the impact of three distinct robot personalities on user experience during well-being exercises: a Baseline Personality that aligns with user expectations, a High Extraversion Personality, and a High Neuroticism Personality. These personalities were manifested through the robot's dialogue, which was generated by a large language model (LLM) guided by key behavioral characteristics of the Big 5 personality traits. In a between-subjects user study (N = 66), in which each participant interacted with one robot personality, we found that both the High Extraversion and High Neuroticism Robot Personalities significantly enhanced participants' emotional states (arousal, control, and valence). The High Extraversion Robot Personality was also rated the most enjoyable to interact with. Additionally, evidence suggested that participants' own personality traits moderated how effectively specific robot personalities elicited positive outcomes from the well-being exercises. Our findings highlight the potential benefits of designing robot personalities that deviate from users' expectations, thereby enriching human-robot interactions.
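One way to condition LLM-generated dialogue on Big 5 trait levels, as the abstract describes, is to compose a system prompt from trait descriptors. The descriptors and wording below are illustrative stand-ins, not the authors' actual prompts:

```python
# Hypothetical sketch: steering a dialogue LLM toward a target robot
# personality via a trait-conditioned system prompt. The descriptor
# phrases are made up for illustration.

TRAIT_DESCRIPTORS = {
    "extraversion": {
        "high": "enthusiastic, talkative, and energetic",
        "baseline": "moderately sociable and attentive",
    },
    "neuroticism": {
        "high": "self-doubting, worried, and emotionally expressive",
        "baseline": "calm and emotionally steady",
    },
}

def build_personality_prompt(extraversion="baseline", neuroticism="baseline"):
    """Compose a system prompt that steers the LLM's dialogue style."""
    return (
        "You are a social robot guiding a user through a well-being "
        "exercise. Speak in a voice that is "
        f"{TRAIT_DESCRIPTORS['extraversion'][extraversion]} and "
        f"{TRAIT_DESCRIPTORS['neuroticism'][neuroticism]}."
    )

prompt = build_personality_prompt(extraversion="high")
```

The resulting string would be passed as the system message to whichever LLM generates the robot's turns, holding the rest of the dialogue pipeline fixed across conditions.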
Augmenting Simulation Data with Sensor Effects for Improved Domain Transfer
Simulation provides vast benefits for the field of robotics and Human-Robot Interaction (HRI). This study investigates how sensor effects seen in the real domain can be modeled in simulation and what role they play in effective Sim2Real domain transfer for learned perception models. The study considers introducing naive noise approaches, such as additive Gaussian and salt-and-pepper noise, as well as data-driven sensor effects models into simulation to represent Microsoft Kinect sensor capabilities and phenomena seen on real-world systems. This study quantifies the benefit of multiple approaches to modeling sensor effects in simulation for Sim2Real domain transfer by their object classification improvements in the real domain. User studies address the hypotheses by training grounded language models under each sensor effects modeling case and evaluating the robot's interaction capabilities in the real domain. In addition to grounded language performance metrics, the user study evaluation includes surveys of human participants' assessments of the robot's capabilities in the real domain. Results from this pilot study show benefits to modeling sensor noise in simulation for Sim2Real domain transfer. This study also begins to explore the effects that such models have on human-robot interactions.
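The two naive noise models named in the abstract are standard image corruptions and can be sketched directly; the parameter values below (sigma, flip fraction) are illustrative defaults, not those used in the study:

```python
# Sketch of the two naive noise models mentioned above, applied to
# float images with values in [0, 1]. Parameter choices are illustrative.
import numpy as np

def add_gaussian_noise(img, sigma=0.05, rng=None):
    """Additive zero-mean Gaussian noise, clipped back to the valid range."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.02, rng=None):
    """Flip roughly a fraction `amount` of pixels to black (pepper) or white (salt)."""
    rng = rng or np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0        # pepper
    noisy[mask > 1.0 - amount / 2] = 1.0  # salt
    return noisy
```

In a Sim2Real pipeline, corruptions like these would be applied to rendered simulator frames before training, so the perception model sees artifacts closer to what a real Kinect produces; the data-driven alternative replaces these hand-tuned models with effects learned from real sensor data.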
- PAR ID: 10382789
- Date Published:
- Journal Name: Tenth International Workshop on Assistive Computer Vision and Robotics (ACVR) at ECCV
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Gonzalez, D. (Ed.)
Today's research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments "in the wild," do not provide the level of detailed ground truth necessary for thorough performance comparison and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed through which real robots and humans in the laboratory can experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, thereby constructing corresponding human/robot avatars that not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time.
New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.
Reward learning as a method for inferring human intent and preferences has been studied extensively. Prior approaches make an implicit assumption that the human maintains a correct belief about the robot's domain dynamics. However, this may not always hold, since the human's belief may be biased, ultimately leading to a misguided estimate of the human's intent and preferences, which are often derived from human feedback on the robot's behaviors. In this paper, we remove this restrictive assumption by considering that the human may have an inaccurate understanding of the robot. We propose a method called Generalized Reward Learning with biased beliefs about domain dynamics (GeReL) to infer both the reward function and the human's belief about the robot in a Bayesian setting based on human ratings. Due to the complex forms of the posteriors, we formulate the problem as variational inference, simultaneously inferring the posteriors of the parameters that govern the reward function and the human's belief about the robot. We evaluate our method in a simulated domain and in a user study where the user's bias is induced by the robot's appearance. The results show that our method can recover the true human preferences even under such biased beliefs, in contrast to prior approaches that could have misinterpreted them completely.
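The core idea, jointly inferring a reward parameter and a belief-bias parameter from ratings, can be illustrated with a deliberately simplified posterior. GeReL itself uses variational inference over richer models; the grid-based posterior, one-dimensional features, and made-up data below are only a didactic stand-in:

```python
# Toy joint inference of a reward weight w and a belief bias b from human
# ratings, in the spirit of (but much simpler than) GeReL. All numbers
# are fabricated for illustration.
import numpy as np

# Each robot behavior has a scalar feature; the human rates it according
# to w * feature, but mis-perceives the feature by a bias factor b.
features = np.array([0.2, 0.5, 0.8, 1.0])
true_w, true_b = 2.0, 0.7
ratings = true_w * (true_b * features)  # noiseless ratings for clarity

# Grid "posterior": Gaussian rating likelihood over a grid of (w, b).
w_grid = np.linspace(0.1, 4.0, 40)
b_grid = np.linspace(0.1, 1.0, 40)
log_post = np.zeros((len(w_grid), len(b_grid)))
for i, w in enumerate(w_grid):
    for j, b in enumerate(b_grid):
        pred = w * (b * features)
        log_post[i, j] = -np.sum((ratings - pred) ** 2) / (2 * 0.05 ** 2)

i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
w_hat, b_hat = w_grid[i], b_grid[j]
```

Note that in this toy model only the product w * b is identified from ratings alone, which is one reason richer feedback and priors over the human's belief matter in the full Bayesian formulation.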
Given a swarm of limited-capability robots, we seek to automatically discover the set of possible emergent behaviors. Prior approaches to behavior discovery rely on human feedback or hand-crafted behavior metrics to represent and evolve behaviors and only discover behaviors in simulation, without testing or considering the deployment of these new behaviors on real robot swarms. In this work, we present Real2Sim2Real Behavior Discovery via Self-Supervised Representation Learning, which combines representation learning and novelty search to discover possible emergent behaviors automatically in simulation and enable direct controller transfer to real robots. First, we evaluate our method in simulation and show that our proposed self-supervised representation learning approach outperforms previous hand-crafted metrics by more accurately representing the space of possible emergent behaviors. Then, we address the reality gap by incorporating recent work in sim2real transfer for swarms into our lightweight simulator design, enabling direct robot deployment of all behaviors discovered in simulation on an open-source and low-cost robot platform.
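The novelty-search half of this pipeline can be sketched as a loop that archives controllers whose behavior embeddings are far from anything seen so far. The swarm simulator and learned encoder are stand-ins here: a controller is a 2-vector and its "behavior embedding" a toy function of it, whereas the real system embeds simulated swarm trajectories with a self-supervised encoder:

```python
# Minimal novelty-search loop over controller parameters. The embed()
# function is a placeholder for the learned behavior representation.
import numpy as np

rng = np.random.default_rng(42)

def embed(controller):
    # Stand-in for the self-supervised embedding of the emergent behavior
    # produced by simulating this controller.
    x, y = controller
    return np.array([np.sin(3 * x) * y, np.cos(2 * y) * x])

def novelty(candidate, archive, k=3):
    # Novelty = mean distance to the k nearest behaviors seen so far.
    if not archive:
        return np.inf
    d = sorted(np.linalg.norm(candidate - a) for a in archive)
    return float(np.mean(d[:k]))

archive = []
for _ in range(200):
    controller = rng.uniform(-1.0, 1.0, size=2)
    z = embed(controller)
    if novelty(z, archive) > 0.2:  # keep only sufficiently novel behaviors
        archive.append(z)
```

Because novelty is measured in the embedding space, the quality of that space determines which behaviors count as distinct, which is why the abstract compares learned embeddings against hand-crafted behavior metrics.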
Perceived social agency, the perception of a robot as an autonomous and intelligent social other, is important for fostering meaningful and engaging human-robot interactions. While end-user programming (EUP) enables users to customize robot behavior, enhancing usability and acceptance, it can also potentially undermine the robot's perceived social agency. This study explores the trade-offs between user control over robot behavior and preserving the robot's perceived social agency, and how these factors jointly impact user experience. We conducted a between-subjects study (N = 57) in which participants customized the robot's behavior using either a High-Granularity Interface with detailed block-based programming, a Low-Granularity Interface with broader input-form customizations, or no EUP at all. Results show that while both EUP interfaces improved alignment with user preferences, the Low-Granularity Interface better preserved the robot's perceived social agency and led to a more engaging interaction. These findings highlight the need to balance user control with perceived social agency, suggesting that moderate customization without excessive granularity may enhance the overall satisfaction and acceptance of robot products.