

Title: Human Pose Estimation in UAV-Human Workspace
A 6D human pose estimation method is studied to assist autonomous UAV control in human environments. As autonomous robots and UAVs become increasingly prevalent in the workspace, they must detect and estimate human movement and predict human trajectories in order to plan safe motion paths. Our method utilizes a deep convolutional neural network to compute a 3D torso bounding box that determines the location and orientation of each human subject. Training uses a loss function that combines 3D angular and translation errors. The trained model delivers less than 10 degrees of angular error and outperforms a reference method based on RSN.
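The combined loss described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the geodesic rotation metric and the weights `w_rot`/`w_trans` are assumptions.

```python
import numpy as np

def geodesic_angle_error(R_pred, R_true):
    """Angular distance in radians between two 3x3 rotation matrices.
    (An assumed metric; the paper only states that 3D angle error is used.)"""
    cos = (np.trace(R_pred.T @ R_true) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

def pose_loss(R_pred, t_pred, R_true, t_true, w_rot=1.0, w_trans=1.0):
    """Combined 6D pose loss: rotation error plus L2 translation error.
    The weights are hypothetical; the paper does not specify them."""
    rot_err = geodesic_angle_error(R_pred, R_true)
    trans_err = np.linalg.norm(np.asarray(t_pred, float) - np.asarray(t_true, float))
    return w_rot * rot_err + w_trans * trans_err
```

A perfect prediction yields zero loss; a 90-degree rotation error alone contributes pi/2 to the rotation term.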
Award ID(s):
1818655
NSF-PAR ID:
10329274
Journal Name:
HCI International 2021
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Autonomous robots that understand human instructions can significantly enhance efficiency in human-robot assembly operations where robotic support is needed to handle unknown objects and/or provide on-demand assistance. This paper introduces a vision AI-based method for human-robot collaborative (HRC) assembly, enabled by a large language model (LLM). Upon 3D object reconstruction and pose establishment through neural object field modelling, a visual servoing-based system performs object manipulation and provides navigation guidance to a mobile robot. The LLM provides text-based logic reasoning and high-level control command generation for natural human-robot interactions. The effectiveness of the presented method is experimentally demonstrated.
  2. Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control. These are commonly referred to as Human-on-the-Loop (HotL) systems. Transparency of HotL systems necessitates clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous and mobile, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with information and negatively affects their understanding of the situation. Therefore, explaining the autonomous behavior of multiple robots creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human situational awareness. We conducted multiple user studies with both inexperienced and expert sUAS operators to present our design solution and initial guidelines for designing the HotL multi-sUAS interface.
  3. Li, Changsheng (Ed.)
    An autonomous household robot passed a self-awareness test in 2015, proving that the cognitive capabilities of robots are heading towards those of humans. While this is a milestone in AI, it raises questions about legal implications. If robots are progressively developing cognition, it is important to discuss whether they are entitled to justice pursuant to conventional notions of human rights. This paper offers a comprehensive discussion of this complex question through cross-disciplinary scholarly sources from computer science, ethics, and law. The computer science perspective dissects hardware and software of robots to unveil whether human behavior can be efficiently replicated. The ethics perspective utilizes insights from robot ethics scholars to help decide whether robots can act morally enough to be endowed with human rights. The legal perspective provides an in-depth discussion of human rights with an emphasis on eligibility. The article concludes with recommendations including open research issues. 
  4. Autonomous underwater robots working with teams of human divers may need to distinguish between different divers, e.g., to recognize a lead diver or to follow a specific team member. This paper describes a technique that enables autonomous underwater robots to track divers in real time as well as to re-identify them. The approach is an extension of Simple Online Realtime Tracking (SORT) with an appearance metric (deep SORT). Initial diver detection is performed with a custom CNN designed for real-time diver detection, and appearance features are subsequently extracted for each detected diver. Next, real-time tracking-by-detection is performed with an extension of the deep SORT algorithm. We evaluate this technique on a series of videos of divers performing human-robot collaborative tasks and show that our methods result in more divers being accurately identified during tracking. We also discuss the practical considerations of applying multi-person tracking to on-board autonomous robot operations, and we consider how failure cases can be addressed during on-board tracking.
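The appearance-based association step at the heart of a deep SORT-style tracker can be sketched as below. This is a simplified stand-in: deep SORT uses Hungarian matching combined with Kalman-filter motion gating, whereas this sketch uses greedy cosine-distance matching only, and the `max_dist` threshold is an assumed parameter.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two appearance feature vectors."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match_detections_to_tracks(track_feats, det_feats, max_dist=0.4):
    """Greedy appearance-based association: repeatedly pick the lowest-cost
    (track, detection) pair until no pair under the threshold remains.
    Returns a list of (track_index, detection_index) matches."""
    cost = np.array([[cosine_distance(t, d) for d in det_feats]
                     for t in track_feats])
    matches, used_t, used_d = [], set(), set()
    for idx in np.argsort(cost, axis=None):  # cheapest pairs first
        i, j = divmod(int(idx), cost.shape[1])
        if i in used_t or j in used_d or cost[i, j] > max_dist:
            continue
        matches.append((i, j))
        used_t.add(i)
        used_d.add(j)
    return matches
```

Unmatched detections would then spawn new tracks, and tracks unmatched for several frames would be deleted, mirroring the track-management logic of SORT.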
  5.
    We analyze a human and multi-robot collaboration system and propose a method to optimally schedule the human's attention when a human operator receives collaboration requests from multiple robots at the same time. We formulate the human attention scheduling problem as a binary optimization problem that aims to maximize the overall performance among all the robots, under the constraint that the human has limited attention capacity. We first present the optimal schedule for the human to determine when to collaborate with a robot if no contention occurs among the robots' collaboration requests. For the moments when contentions do occur, we present a contention-resolving Model Predictive Control (MPC) method to dynamically schedule the human's attention and determine which robot the human should collaborate with first. The optimal schedule can then be determined using a sampling-based approach. The effectiveness of the proposed method is validated through simulations showing improved overall performance.
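A binary optimization of the kind described above can be sketched with a brute-force solver over all subsets of requests. This is only an illustration of the problem shape: the per-robot gains, attention times, and budget are hypothetical stand-ins for the paper's formulation, and the paper solves the problem with contention-resolving MPC and sampling rather than enumeration.

```python
def schedule_attention(gains, times, budget):
    """Select the subset of robot collaboration requests that maximizes
    total performance gain, subject to the human's attention budget
    (total time available). Brute-force over all 2^n binary assignments;
    feasible only for small n."""
    n = len(gains)
    best_sel, best_val = [], 0.0
    for mask in range(1 << n):  # each bit = serve robot i or not
        sel = [i for i in range(n) if mask >> i & 1]
        if sum(times[i] for i in sel) <= budget:
            val = sum(gains[i] for i in sel)
            if val > best_val:
                best_val, best_sel = val, sel
    return best_sel, best_val
```

For example, with gains (10, 6, 5), service times (4, 3, 3), and a budget of 6, serving the two cheaper requests beats serving the single high-gain one, which is exactly the kind of contention trade-off the scheduler must resolve.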