Designing and implementing human-robot interactions requires numerous skills, from having a rich understanding of social interactions and the capacity to articulate their subtle requirements, to the ability to then program a social robot with the many facets of such a complex interaction. Although designers are best suited to develop and implement these interactions due to their inherent understanding of the context and its requirements, the need for these skills is a barrier to rapidly exploring and prototyping ideas: it is impractical for designers to also be experts on social interaction behaviors, and the technical challenges associated with programming a social robot are prohibitive. In this work, we introduce Synthé, which allows designers to act out, or bodystorm, multiple demonstrations of an interaction. These demonstrations are automatically captured and translated into prototypes for the design team using program synthesis. We evaluate Synthé in multiple design sessions involving pairs of designers bodystorming interactions and observing the resulting models on a robot. We build on the findings from these sessions to improve the capabilities of Synthé and demonstrate the use of these capabilities in a second design session.
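Synthé's synthesis procedure is not detailed in this abstract. Purely as an illustration of turning several acted-out demonstrations into an interaction prototype, the hypothetical Python sketch below merges demonstration traces of (actor, action) events into a small state-machine model by sharing common prefixes; the event names and representation are assumptions, not Synthé's actual implementation.

```python
# Hypothetical sketch: merge bodystormed demonstration traces into a
# prefix-tree interaction model. Event names and structure are assumptions.
from collections import defaultdict

def merge_demonstrations(demos):
    """Build {state: {event: next_state}} by merging shared event prefixes."""
    transitions = defaultdict(dict)
    next_state = 1  # state 0 is the initial state
    for demo in demos:
        state = 0
        for event in demo:
            if event not in transitions[state]:
                transitions[state][event] = next_state
                next_state += 1
            state = transitions[state][event]
    return dict(transitions)

# Two acted-out demonstrations of a greeting interaction (made-up events).
demos = [
    [("human", "greet"), ("robot", "greet"), ("human", "ask_direction"), ("robot", "point")],
    [("human", "greet"), ("robot", "greet"), ("human", "leave"), ("robot", "wave")],
]
model = merge_demonstrations(demos)
print(model)  # the shared greeting prefix is merged; the model branches afterwards
```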
Computational Tools for Human-Robot Interaction Design
Robots must exercise socially appropriate behavior when interacting with humans. How can we help interaction designers embed socially appropriate behavior, and avoid socially inappropriate behavior, within human-robot interactions? We propose a multi-faceted interaction-design approach at the intersection of human-robot interaction and formal methods to achieve this goal. At the lowest level, designers create interactions from scratch and receive feedback from formal verification, while higher levels involve automated synthesis and repair of designs. In this extended abstract, we discuss past, present, and future work within each level of our design approach.
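As a minimal sketch of the lowest level of this approach (verification feedback on a hand-authored design), the hypothetical code below searches a toy interaction model for a violation of a simple social-norm property, e.g. "the robot never speaks twice before the human responds," and returns a counterexample trace as feedback. The model, events, and property are assumptions, not the authors' actual tooling.

```python
# Hypothetical sketch of verification feedback on a hand-authored interaction
# model: flag any reachable path where the robot speaks twice in a row
# (a stand-in for a social-norm property; not the authors' actual checker).

def violates_norm(transitions, start=0):
    """Depth-first search for a path with two consecutive 'robot_speak' events."""
    stack = [(start, False, [])]  # (state, previous event was robot_speak, path so far)
    seen = set()
    while stack:
        state, robot_spoke, path = stack.pop()
        if (state, robot_spoke) in seen:
            continue
        seen.add((state, robot_spoke))
        for event, nxt in transitions.get(state, {}).items():
            if event == "robot_speak" and robot_spoke:
                return path + [event]  # counterexample trace returned as feedback
            stack.append((nxt, event == "robot_speak", path + [event]))
    return None

design = {0: {"robot_speak": 1}, 1: {"robot_speak": 2, "human_speak": 0}}
print(violates_norm(design))  # ['robot_speak', 'robot_speak']
```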
- Award ID(s): 1651129
- PAR ID: 10111486
- Date Published:
- Journal Name: Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction
- Page Range / eLocation ID: 733 to 735
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We propose VLM-Social-Nav, a novel Vision-Language Model (VLM) based navigation approach to compute a robot's motion in human-centered environments. Our goal is to make real-time decisions on robot actions that are socially compliant with human expectations. We utilize a perception model to detect important social entities and prompt a VLM to generate guidance for socially compliant robot behavior. VLM-Social-Nav uses a VLM-based scoring module that computes a cost term steering the underlying planner toward socially appropriate and effective robot actions. Our overall approach reduces reliance on large training datasets and enhances adaptability in decision-making; in practice, it results in improved socially compliant navigation in human-shared environments. We demonstrate and evaluate our system in four different real-world social navigation scenarios with a Turtlebot robot. We observe at least a 27.38% improvement in the average success rate and a 19.05% improvement in the average collision rate across the four social navigation scenarios. Our user study scores show that VLM-Social-Nav generates the most socially compliant navigation behavior.
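The abstract does not give the exact cost formulation. As a rough sketch of combining a planner's cost with a VLM-derived social score when selecting an action, the hypothetical code below stubs the VLM query as a caller-supplied function; the weighting and interfaces are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of combining a motion planner's cost with a VLM-derived
# social score when selecting a robot action; the weighting and the query_vlm
# interface are assumptions, not the paper's actual formulation.

def select_action(candidates, planner_cost, query_vlm, scene_description, w_social=1.0):
    """Pick the candidate action minimizing planner cost plus weighted social cost.

    candidates:        list of candidate robot actions (e.g. velocity commands)
    planner_cost:      action -> float, from the underlying planner
    query_vlm:         (scene_description, action) -> float in [0, 1],
                       1 = most socially compliant (caller supplies the VLM prompt)
    """
    def total_cost(action):
        social_score = query_vlm(scene_description, action)
        return planner_cost(action) + w_social * (1.0 - social_score)
    return min(candidates, key=total_cost)

# Toy usage with a stubbed "VLM" that prefers slowing down near a pedestrian.
actions = ["go_fast", "slow_down", "stop"]
chosen = select_action(
    actions,
    planner_cost=lambda a: {"go_fast": 0.2, "slow_down": 0.5, "stop": 1.0}[a],
    query_vlm=lambda scene, a: {"go_fast": 0.1, "slow_down": 0.9, "stop": 0.7}[a],
    scene_description="pedestrian approaching from the right",
)
print(chosen)  # 'slow_down'
```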
Human-robot interaction designers and developers navigate a complex design space, which creates a need for tools that support intuitive design processes and harness the programming capacity of state-of-the-art authoring environments. We introduce Figaro, an expressive tabletop authoring environment for mobile robots, inspired by shadow puppetry, that provides designers with a natural, situated representation of human-robot interactions while exploiting the intuitiveness of tabletop and tangible programming interfaces. On the tabletop, Figaro projects a representation of an environment. Users demonstrate sequences of behaviors, or scenes, of an interaction by manipulating instrumented figurines that represent the robot and the human. During a scene, Figaro records the movement of figurines on the tabletop and the narrations uttered by users. Subsequently, Figaro employs real-time program synthesis to assemble a complete robot program from all scenes provided. Through a user study, we demonstrate the ability of Figaro to support design exploration and development for human-robot interaction.
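As an illustration only (not Figaro's actual implementation), the hypothetical sketch below shows one way a tabletop scene might be captured as timestamped figurine poses plus narrations and then discretized into region-level events for downstream program synthesis; the data structures and region logic are assumptions.

```python
# Hypothetical sketch of capturing a tabletop scene: timestamped figurine poses
# plus user narration, later collapsed into region-level events for synthesis.
from dataclasses import dataclass, field

@dataclass
class Scene:
    samples: list = field(default_factory=list)    # (t, actor, (x, y)) pose samples
    narration: list = field(default_factory=list)  # (t, actor, utterance)

    def log_pose(self, t, actor, xy):
        self.samples.append((t, actor, xy))

    def log_utterance(self, t, actor, text):
        self.narration.append((t, actor, text))

    def to_events(self, regions):
        """Collapse pose samples into 'actor entered region' events, merged with speech."""
        events = []
        last_region = {}
        for t, actor, (x, y) in self.samples:
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1 and last_region.get(actor) != name:
                    events.append((t, actor, f"enter:{name}"))
                    last_region[actor] = name
        events += [(t, actor, f"say:{text}") for t, actor, text in self.narration]
        return sorted(events)

scene = Scene()
scene.log_pose(0.0, "robot", (0.1, 0.1))
scene.log_utterance(1.0, "human", "hello")
scene.log_pose(2.0, "robot", (0.9, 0.9))
print(scene.to_events({"door": (0.0, 0.0, 0.2, 0.2), "desk": (0.8, 0.8, 1.0, 1.0)}))
```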
As Human-Robot Interaction becomes more sophisticated, measuring the performance of a social robot is crucial to gauging the effectiveness of its behavior. However, social behavior does not necessarily admit the strict performance metrics that other autonomous behaviors have. Indeed, when considering robot navigation, a socially appropriate action may be one that is sub-optimal, resulting in longer paths and longer times to reach a goal. Instead, we can rely on subjective assessments of the robot's social performance by a participant in a robot interaction or by a bystander. In this paper, we use the newly validated Perceived Social Intelligence (PSI) scale to examine the perception of non-humanoid robots in non-verbal social scenarios. We show that there are significant differences between the perceived social intelligence of robots exhibiting socially-aware navigation (SAN) behavior and those using a traditional navigation planner in scenarios such as waiting in a queue and group behavior.
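As a rough sketch of this kind of comparison (not the authors' actual data or analysis), the hypothetical code below averages Likert-scale PSI items into per-participant scores and compares two navigation conditions with an independent-samples t-test.

```python
# Hypothetical sketch of comparing PSI ratings between two navigation conditions
# with an independent-samples t-test; the numbers and analysis choices are
# illustrative, not the paper's actual data or statistics.
import numpy as np
from scipy import stats

def psi_scores(item_responses):
    """Average each participant's Likert item responses into one PSI score."""
    return np.asarray(item_responses, dtype=float).mean(axis=1)

# Rows = participants, columns = PSI items (1-5 Likert), made-up numbers.
san_condition = psi_scores([[4, 5, 4, 4], [5, 4, 4, 5], [4, 4, 5, 4]])
baseline      = psi_scores([[3, 2, 3, 3], [2, 3, 3, 2], [3, 3, 2, 3]])

t, p = stats.ttest_ind(san_condition, baseline)
print(f"t = {t:.2f}, p = {p:.3f}")
```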
Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors in two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts, and similarity to them can act as a measure of learning over the course of training for novices. The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
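The abstract does not specify how coordinations are extracted. One common way to quantify joint coordination, used here purely as an assumption and not necessarily the authors' method, is to take the dominant principal component of the joint-angle trajectories and measure a novice's cosine similarity to an expert's component, as in the hypothetical sketch below.

```python
# Hypothetical sketch: extract a dominant joint-coordination pattern from
# joint-angle trajectories via PCA (SVD) and score a novice's similarity to an
# expert's pattern. PCA and cosine similarity are assumptions.
import numpy as np

def dominant_coordination(joint_angles):
    """First principal component of a (timesteps x joints) trajectory."""
    centered = joint_angles - joint_angles.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit vector over joints

def coordination_similarity(expert_traj, novice_traj):
    """Absolute cosine similarity between expert and novice coordination vectors."""
    e = dominant_coordination(expert_traj)
    n = dominant_coordination(novice_traj)
    return abs(float(np.dot(e, n)))  # 1.0 = same coordination pattern

# Toy trajectories: 100 timesteps x 4 exoskeleton joints (made-up data).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
expert = np.column_stack([np.sin(t), 0.5 * np.sin(t), np.cos(t), 0.1 * rng.standard_normal(100)])
novice = expert + 0.3 * rng.standard_normal(expert.shape)
print(coordination_similarity(expert, novice))  # closer to 1.0 as the novice converges
```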