Human emotions are expressed through multiple modalities, including verbal and non-verbal information. Moreover, a human user's affective state can indicate the level of engagement and the success of an interaction, making it suitable for the robot to use as a reward signal for optimizing its behavior through interaction. This study demonstrates a multimodal human-robot interaction (HRI) framework that uses reinforcement learning to improve the robot's interaction policy and personalize emotional interaction for a human user. The goal is to apply this framework in social scenarios so that robots can produce more natural and engaging interactions.
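To make the idea concrete, here is a minimal, hypothetical sketch of how a detected affective state could serve as the reward in such a loop; the emotion-fusion function, action set, and tabular Q-learning choice are illustrative assumptions, not the framework described in the abstract.

```python
import random
from collections import defaultdict

# Hypothetical sketch: treat the user's detected affect as the reward signal
# that drives a tabular Q-learning update of the robot's interaction policy.
# estimate_affect() stands in for a multimodal (speech + facial expression)
# emotion recognizer and is not part of any published API.

ACTIONS = ["greet", "tell_joke", "ask_question", "offer_help"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = defaultdict(float)  # (state, action) -> expected engagement

def estimate_affect(user_speech: str, facial_valence: float) -> float:
    """Placeholder fusion of verbal and non-verbal cues into [-1, 1]."""
    verbal = 0.5 if "thanks" in user_speech.lower() else 0.0
    return max(-1.0, min(1.0, 0.5 * verbal + 0.5 * facial_valence))

def choose_action(state: str) -> str:
    if random.random() < EPSILON:                            # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])   # exploit

def update(state: str, action: str, reward: float, next_state: str) -> None:
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])

# One interaction step: the robot acts, observes the user's affective
# response, and uses that response directly as the reward.
state = "conversation_start"
action = choose_action(state)
reward = estimate_affect(user_speech="Thanks, that was fun!", facial_valence=0.7)
update(state, action, reward, next_state="conversation_ongoing")
```

Over many interactions, actions that tend to elicit positive affect accumulate higher values, which is the personalization effect the abstract describes at a high level.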
CONFIDANT: A Privacy Controller for Social Robots
As social robots become increasingly prevalent in day-to-day environments, they will participate in conversations and will need to appropriately manage the information shared with them. However, little is known about how robots might appropriately discern the sensitivity of information, which has major implications for human-robot trust. As a first step toward addressing part of this issue, we designed CONFIDANT, a privacy controller for conversational social robots that uses contextual metadata (e.g., sentiment, relationships, topic) from conversations to model privacy boundaries. We then conducted two crowdsourced user studies. The first study (n = 174) examined whether a variety of human-human interaction scenarios were perceived as private/sensitive or non-private/non-sensitive. The findings from this study were used to generate association rules. Our second study (n = 95) evaluated the effectiveness and accuracy of the privacy controller in human-robot interaction scenarios by comparing a robot that used our privacy controller against a baseline robot with no privacy controls. Our results demonstrate that the robot with the privacy controller outperforms the robot without it in privacy-awareness, trustworthiness, and social-awareness. We conclude that integrating privacy controllers into authentic human-robot conversations can allow for more trustworthy robots. This initial privacy controller will serve as a foundation for more complex solutions.
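As an illustration of the general mechanism, the sketch below maps contextual metadata (sentiment, relationship, topic) to a private/non-private decision with hand-written association-style rules; the specific rules and the sharing policy are invented for the example and are not the rules mined in the CONFIDANT studies.

```python
from dataclasses import dataclass

# Illustrative sketch of a rule-based privacy controller: contextual metadata
# extracted from a conversation is checked against association-style rules
# (antecedent -> "private"). The rules below are made up for the example;
# CONFIDANT's actual rules were mined from crowdsourced study data.

@dataclass
class ConversationContext:
    sentiment: str      # e.g. "negative", "neutral", "positive"
    relationship: str   # e.g. "family", "coworker", "stranger"
    topic: str          # e.g. "health", "finances", "weather"

# Each rule: if all key/value pairs match, treat the information as private.
PRIVACY_RULES = [
    {"topic": "health"},
    {"topic": "finances"},
    {"sentiment": "negative", "relationship": "stranger"},
]

def is_private(ctx: ConversationContext) -> bool:
    """Return True if any rule's antecedent matches the context."""
    fields = vars(ctx)
    return any(all(fields[k] == v for k, v in rule.items())
               for rule in PRIVACY_RULES)

def robot_should_share(ctx: ConversationContext, listener: str) -> bool:
    # A robot with the controller withholds information flagged as private
    # unless the listener matches the original relationship (simplified).
    return not is_private(ctx) or listener == ctx.relationship

ctx = ConversationContext(sentiment="negative", relationship="family", topic="health")
print(is_private(ctx))                      # True  -> withhold by default
print(robot_should_share(ctx, "coworker"))  # False -> do not repeat to a coworker
```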
- Award ID(s):
- 1906854
- PAR ID:
- 10384276
- Date Published:
- Journal Name:
- Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction
- Page Range / eLocation ID:
- 205–214
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper explores user preferences for sharing sensitive information via telepresence robots using six input methods: pen & paper, smartphone, robot display, speech, whisper, and silent speech. Through a crowdsourced survey and a follow-up user study, it identifies key differences in effort, convenience, privacy, security, and social acceptability. Speech is perceived as the easiest but least secure method, while pen & paper, initially favored, proves inconvenient in practice. Robot display and smartphone consistently rank as the most secure, private, and socially acceptable. Silent speech emerges as a strong alternative, offering greater privacy than other speech-based methods. These findings highlight the need for telepresence robots to support multiple input methods to accommodate diverse user needs and privacy concerns.
-
Mobile robots must navigate efficiently, reliably, and appropriately around people when acting in shared social environments. For robots to be accepted in such environments, we explore robot navigation for the social contexts of each setting. Navigating through dynamic environments while considering only collision-free paths is a long-solved problem; in human-robot environments, the challenge is no longer simply moving efficiently from one point to another. Autonomously detecting the context and adapting an appropriate social navigation strategy is vital for social robots' long-term applicability in dense human environments. As complex social environments, museums are well suited to studying such behavior because they contain many different navigation contexts in a small space. Our prior Socially-Aware Navigation model considered context classification, object detection, and pre-defined rules to define navigation behavior in more specific contexts, such as a hallway or queue. This work uses environmental context, object information, and more realistic interaction rules for complex social spaces. In the first part of the project, we convert real-world interactions into algorithmic rules for use in a robot's navigation system. We also use context recognition, object detection, and scene data for context-appropriate rule selection. We introduce our methodology for studying social behaviors in complex contexts, present several analyses of our text corpus for museums, and report the extracted social norms. Finally, we demonstrate applying some of the rules in scenarios in the simulation environment.
-
Today's teens will most likely be the first generation to spend a lifetime living and interacting with both mechanical and social robots. Although human-robot interaction has been explored in children, adults, and seniors, examination of teen-robot interaction has been minimal. Using human-centered design, our team is developing a social robot to gather stress and mood data from teens in a public high school. As part of our preliminary design stage, we conducted an in-the-wild interaction pilot study to explore and capture teens' initial interactions with a low-fidelity social robot prototype. We observed strong engagement and expressions of empathy from teens during our qualitative interaction studies.
-
This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated from human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the sensor arrays used in traditional social robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing for robots with surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows that uses Densely Connected Convolutional Networks, and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5-96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications of this method include interactive screens on inflatable robots and privacy-maintaining robots for the home.
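For readers curious what such a shadow-gesture classifier might look like in code, here is a minimal sketch assuming torchvision's DenseNet-121 as the Densely Connected Convolutional Network, six placeholder gesture classes, and an image-folder dataset of shadow frames; the data layout, image size, and hyperparameters are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Sketch of a shadow-based touch-gesture classifier: shadow images captured
# beneath a translucent skin are classified into six gesture classes.
# DenseNet-121 stands in for the Densely Connected Convolutional Network
# mentioned in the abstract; folder layout and hyperparameters are placeholders.

NUM_GESTURES = 6
device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects shadow_images/train/<gesture_name>/*.png
train_set = datasets.ImageFolder("shadow_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.densenet121(weights=None)  # use pretrained=False on older torchvision
model.classifier = nn.Linear(model.classifier.in_features, NUM_GESTURES)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```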