Abstract: Robotics researchers have been focusing on developing autonomous, human-like intelligent robots that can plan, navigate, manipulate objects, and interact with humans in both static and dynamic environments. These capabilities, however, are usually developed for direct interactions with people in controlled environments, and evaluated primarily in terms of human safety. Consequently, human-robot interaction (HRI) in scenarios with no intervention of technical personnel is under-explored. In the future, however, robots will be deployed in unstructured, unsupervised environments where they will be expected to work on tasks that require direct interaction with humans and may not necessarily be collaborative. Developing such robots requires comparing the effectiveness and efficiency of similar design approaches and techniques. Yet issues regarding the reproducibility of results, the comparison of different approaches across research groups, and the creation of challenging milestones to measure performance and development over time make this difficult. Here we discuss the international robotics competition RoboCup as a benchmark for the progress and open challenges in AI and robotics development. RoboCup's long-term goal is to develop a robot soccer team that can win against the world's best human soccer team by 2050. We selected RoboCup because it requires robots to play with and against humans in unstructured environments, such as uneven fields and natural lighting conditions, and because it challenges the currently accepted dynamics of HRI. Given the current state of robotics technology, RoboCup's goal raises several open research questions for roboticists. In this paper, we (a) summarise the current challenges in robotics by using RoboCup development as an evaluation metric, (b) discuss the state-of-the-art approaches to these challenges and how they currently apply to RoboCup, and (c) present a path for future development in the given areas to meet RoboCup's goal of having robots play soccer against and with humans by 2050.
Who Takes the Lead? Automated Scheduling for Human-Robot Teams
Scheduling interactions between humans and robots presents unique challenges: while robots lack humans' natural ability to improvise and adapt to new setbacks, humans cannot work with the same precision as robots. Additionally, hesitation, interruptions, and anticipatory action all influence a human's perception and efficiency in social tasks, but are not inherent features of current algorithms. This paper explores both the challenges and opportunities of automated scheduling as a useful tool for human-robot interaction. We contribute an initial exploratory pilot study suggesting that when a robot takes the lead in dictating a schedule, team efficiency improves without any loss of humans' perceived comfort.
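The paper does not publish an algorithm listing; as a hypothetical illustration of what a robot "taking the lead in dictating a schedule" could look like computationally, the sketch below implements a simple greedy makespan scheduler that fixes all assignments up front. The Agent/Task types, task names, and durations are invented for this example.

```python
# Hypothetical sketch of robot-led scheduling: the robot fixes task order and
# assignments up front (greedy longest-task-first), rather than negotiating
# with the human turn by turn. Names and durations are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    busy_until: float = 0.0                        # time the agent becomes free
    schedule: list = field(default_factory=list)   # (start_time, task_name)

@dataclass
class Task:
    name: str
    duration: float    # estimated duration; assumed equal for either agent

def robot_led_schedule(tasks, agents):
    """Assign each task (longest first) to whichever agent frees up earliest."""
    for task in sorted(tasks, key=lambda t: -t.duration):
        agent = min(agents, key=lambda a: a.busy_until)
        start = agent.busy_until
        agent.busy_until = start + task.duration
        agent.schedule.append((start, task.name))
    return agents

agents = [Agent("robot"), Agent("human")]
tasks = [Task("fetch parts", 3.0), Task("assemble", 5.0), Task("inspect", 2.0)]
for agent in robot_led_schedule(tasks, agents):
    print(agent.name, agent.schedule)
```

In this toy run the robot takes the long assembly task while the human fetches and inspects, with both timelines finishing at t = 5; a human-led or negotiated schedule would interleave these decisions instead.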
- Award ID(s): 1651822
- PAR ID: 10056808
- Journal Name: 2017 AAAI Fall Symposium Series on Artificial Intelligence for Human-Robot Interaction
- Sponsoring Org: National Science Foundation
More Like this
Despite promises about the near-term potential of social robots to share our daily lives, they remain unable to form autonomous, lasting, and engaging relationships with humans. Many companies are deploying social robots into the consumer and commercial market; however, both the companies and their products are relatively short-lived, for many reasons. For example, current social robots succeed in interacting with humans only within controlled environments, such as research labs, and only for short periods, since longer interactions tend to provoke user disengagement. We interviewed 13 roboticists from robot manufacturing companies and research labs to delve deeper into the design process for social robots and unearth the many challenges robot creators face. Our research questions were: (1) What are the different design processes for creating social robots? (2) How are users involved in the design of social robots? (3) How are teams of robot creators constituted? Our qualitative investigation showed that varied design practices are applied when creating social robots, but no consensus exists on an optimal or standard one. Results revealed that users have different degrees of involvement in the robot creation process, from no involvement to being a central part of robot development. Results also uncovered the need for multidisciplinary and international teams to work together to create robots. Drawing upon these insights, we identified implications for the field of Human-Robot Interaction that can shape the creation of best practices for social robot design.
Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot-human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect of modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
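The abstract does not say how "distribute their conversation somewhat more equally" was quantified; one standard choice for turn-taking balance is the Gini coefficient over per-person speaking-turn counts, sketched below under that assumption with invented counts.

```python
# Assumed metric for conversational equality: Gini coefficient over speaking-
# turn counts (0 = perfectly even, near 1 = one speaker dominates). The
# paper's actual measure is not given in the abstract; counts are invented.

def gini(turn_counts):
    """Gini coefficient of a list of non-negative speaking-turn counts."""
    xs = sorted(turn_counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n, with 1-based ranks i.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10]))  # 0.0   -> three humans speaking evenly
print(gini([25, 3, 2]))    # ~0.51 -> one person dominating the group
```

A lower Gini value after the robot's vulnerable statements would correspond to the reported "more equal" conversation distribution.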
Abstract: Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Our focus, amidst the various means of interaction between humans and robots, is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotionally intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance- and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, and brain-state and emotion recognition. Finally, we investigate emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye-tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI are detailed and discussed. We highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.
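Of the collaboration methods listed above, learning from demonstration is the simplest to make concrete. The sketch below is a minimal, hypothetical behavior-cloning example: a linear policy fit by least squares to recorded state-action demonstrations. The dimensions and data are synthetic placeholders, not from any system in the survey; practical LfD uses far richer policy classes (e.g., movement primitives or neural networks).

```python
# Minimal behavior-cloning sketch for learning from demonstration: fit a
# linear map from observed states to demonstrated actions by least squares.
# All data here is synthetic; real HRI demonstrations come from sensors.
import numpy as np

rng = np.random.default_rng(0)

# Fake demonstrations: 100 timesteps of a 4-D state and a 2-D action,
# produced by an unknown "expert" linear policy plus a little noise.
true_policy = rng.normal(size=(4, 2))
states = rng.normal(size=(100, 4))
actions = states @ true_policy + 0.01 * rng.normal(size=(100, 2))

# Behavior cloning: least-squares fit of the relation action = state @ W.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy can now propose an action for an unseen state.
new_state = rng.normal(size=(1, 4))
print("proposed action:", new_state @ W)
```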
Abstract: Humans leverage multiple sensor modalities when interacting with objects and discovering their intrinsic properties. Using the visual modality alone is insufficient for deriving intuition about object properties (e.g., which of two boxes is heavier), making it essential to consider non-visual modalities as well, such as the tactile and auditory. While robots may leverage various modalities to obtain an understanding of object properties via learned exploratory interactions with objects (e.g., grasping, lifting, and shaking behaviors), challenges remain: the implicit knowledge acquired by one robot via object exploration cannot be directly leveraged by another robot with a different morphology, because the sensor models, observed data distributions, and interaction capabilities differ across robot configurations. To avoid the costly process of learning interactive object perception tasks from scratch, we propose a multi-stage projection framework for each new robot, transferring implicit knowledge of object properties across heterogeneous robot morphologies. We evaluate our approach on object-property recognition and object-identity recognition tasks, using a dataset in which two heterogeneous robots perform 7,600 object interactions. Results indicate that knowledge can be transferred across robots, such that a newly deployed robot can bootstrap its recognition models without exhaustively exploring all objects. We also propose a data augmentation technique and show that it improves the generalization of models. We release code, datasets, and additional results here: https://github.com/gtatiya/Implicit-Knowledge-Transfer.
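The abstract does not detail the multi-stage projection framework; as a deliberately simplified, single-stage stand-in, one can learn a ridge-regularized linear map between the two robots' feature spaces from their interactions with shared objects, then project the source robot's features into the target's space. All dimensions and data below are synthetic placeholders.

```python
# Single-stage sketch of cross-robot feature projection (the paper's actual
# framework is multi-stage; this simplification is illustrative only). Given
# paired features from two robots exploring the same objects, learn a ridge-
# regularized linear map from source feature space to target feature space.
import numpy as np

rng = np.random.default_rng(1)

n_pairs, d_src, d_tgt = 200, 16, 12         # illustrative dimensions
X_src = rng.normal(size=(n_pairs, d_src))   # source robot features per interaction
A = rng.normal(size=(d_src, d_tgt))         # unknown "true" cross-robot relation
X_tgt = X_src @ A + 0.05 * rng.normal(size=(n_pairs, d_tgt))  # paired target features

# Ridge regression in closed form: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0
W = np.linalg.solve(X_src.T @ X_src + lam * np.eye(d_src), X_src.T @ X_tgt)

# A new source-robot observation, projected into the target robot's feature
# space, can then feed the target's existing recognition models.
x_new = rng.normal(size=(1, d_src))
print("projected feature shape:", (x_new @ W).shape)
```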