Autonomous robots that understand human instructions can significantly enhance efficiency in human-robot assembly operations, where robotic support is needed to handle unknown objects and/or provide on-demand assistance. This paper introduces a vision-AI-based method for human-robot collaborative (HRC) assembly, enabled by a large language model (LLM). After 3D object reconstruction and pose establishment through neural object field modelling, a visual servoing-based system performs object manipulation and provides navigation guidance to a mobile robot. The LLM provides text-based logic reasoning and generates high-level control commands for natural human-robot interaction. The effectiveness of the presented method is demonstrated experimentally.
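The abstract describes the LLM's role only at a high level: logic reasoning over text and generation of high-level control commands. A minimal sketch of turning a structured LLM reply into robot commands might look like the following; the command grammar (`PICK(...)`, `MOVE_TO(...)`) is a hypothetical illustration, not the paper's actual interface.

```python
import re

def parse_llm_commands(reply):
    """Parse a structured LLM reply such as 'PICK(bolt); MOVE_TO(station_2)'
    into (action, argument) tuples a robot controller could dispatch on.
    The grammar here is an assumed example, not the paper's interface."""
    commands = []
    for part in reply.split(";"):
        m = re.match(r"\s*([A-Z_]+)\((\w+)\)\s*", part)
        if m:
            commands.append((m.group(1), m.group(2)))
    return commands
```

In practice the LLM would be prompted to emit only commands from a fixed vocabulary, so unparseable segments can be rejected safely.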
Human Pose Estimation in UAV-Human Workspace
A 6D human pose estimation method is studied to assist autonomous UAV control in human environments. As autonomous robots/UAVs become increasingly prevalent in the workspace of the future, they must detect/estimate human movement and predict human trajectories in order to plan a safe motion path. Our method utilizes a deep Convolutional Neural Network to calculate a 3D torso bounding box that determines the location and orientation of human subjects. Training uses a loss function that includes both 3D angle and translation errors. The trained model delivers less than 10 degrees of angular error and outperforms a reference method based on RSN.
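The training objective is only named, not specified. A minimal sketch of a combined angle-plus-translation loss of the kind described might look like the following; the weighting scheme and wrap-around handling are assumptions, not the paper's actual formulation.

```python
import numpy as np

def pose_loss(pred_angles, true_angles, pred_t, true_t,
              w_angle=1.0, w_trans=1.0):
    """Combined 6D pose loss: mean absolute angular error (degrees,
    wrapped into [-180, 180)) plus Euclidean translation error.
    Weights w_angle/w_trans are assumed hyperparameters."""
    diff = (pred_angles - true_angles + 180.0) % 360.0 - 180.0
    angle_err = np.mean(np.abs(diff))
    trans_err = np.linalg.norm(pred_t - true_t)
    return w_angle * angle_err + w_trans * trans_err
```

The wrap step matters: a naive difference between 359 and 1 degrees would report a 358-degree error instead of 2.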
- Award ID(s):
- 1818655
- PAR ID:
- 10329274
- Date Published:
- Journal Name:
- HCI International 2021
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control. These are commonly referred to as Human-on-the-Loop (HotL) systems. Transparency of HotL systems necessitates clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous and mobile, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with information and negatively affect their understanding of the situation. Therefore, explaining the autonomous behavior of multiple robots creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human awareness of the situation. We conducted multiple user studies with both inexperienced and expert sUAS operators, and we present our design solution and initial guidelines for designing the HotL multi-sUAS interface.
-
Robotic search often involves teleoperating vehicles into unknown environments. In such scenarios, prior knowledge of target location or an environmental map may be a viable resource to tap into and control other autonomous robots in the vicinity towards improved search performance. In this paper, we test the hypothesis that despite having the same skill, prior knowledge of the target or environment affects teleoperator actions, and such knowledge can therefore be inferred through robot movement. To investigate whether prior knowledge can improve human-robot team performance, we next evaluate an adaptive mutual-information blending strategy that admits a time-dependent weighting for steering autonomous robots. Human-subject experiments show that several features, including distance travelled by the teleoperated robot, time spent staying still, speed, and turn rate, all depend on the level of prior knowledge, and that absence of prior knowledge increased workload. Building on these results, we identified distance travelled and time spent staying still as movement cues that can be used to robustly infer prior knowledge. Simulations in which an autonomous robot accompanied a human-teleoperated robot revealed that, whereas time to find the target was similar across all information-based search strategies, adaptive strategies that acted on movement cues found the target sooner than a single human teleoperator more often than non-adaptive strategies did. This gain is diluted with the number of robots, likely due to the limited size of the search environment. Results from this work set the stage for developing knowledge-aware control algorithms for autonomous robots in collaborative human-robot teams.
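The adaptive blending strategy above is described only in outline. A minimal sketch of mutual-information blending with a time-dependent weight might look like the following; the exponential schedule, the time constant `tau`, and the two-map formulation are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def blended_info(mi_map, cue_map, t, tau=30.0):
    """Blend a baseline mutual-information map with a map derived from
    teleoperator movement cues. The weight on the cue map grows over
    time as cues accumulate; the exponential schedule is an assumption."""
    w = 1.0 - np.exp(-t / tau)          # time-dependent weight in [0, 1)
    return (1.0 - w) * mi_map + w * cue_map
```

Early in the search (small `t`) the autonomous robot follows its own information map; as the teleoperator's movement cues become more reliable, the blend shifts toward the cue-derived map.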
-
Li, Changsheng (Ed.) An autonomous household robot passed a self-awareness test in 2015, suggesting that the cognitive capabilities of robots are heading towards those of humans. While this is a milestone in AI, it raises questions about legal implications. If robots are progressively developing cognition, it is important to discuss whether they are entitled to justice pursuant to conventional notions of human rights. This paper offers a comprehensive discussion of this complex question through cross-disciplinary scholarly sources from computer science, ethics, and law. The computer science perspective dissects the hardware and software of robots to unveil whether human behavior can be efficiently replicated. The ethics perspective utilizes insights from robot ethics scholars to help decide whether robots can act morally enough to be endowed with human rights. The legal perspective provides an in-depth discussion of human rights with an emphasis on eligibility. The article concludes with recommendations, including open research issues.
-
Autonomous underwater robots working with teams of human divers may need to distinguish between different divers, e.g., to recognize a lead diver or to follow a specific team member. This paper describes a technique that enables autonomous underwater robots to track divers in real time and to re-identify them. The approach is an extension of Simple Online Realtime Tracking (SORT) with an appearance metric (deep SORT). Initial diver detection is performed with a custom CNN designed for real-time diver detection, and appearance features are subsequently extracted for each detected diver. Next, real-time tracking-by-detection is performed with an extension of the deep SORT algorithm. We evaluate this technique on a series of videos of divers performing human-robot collaborative tasks and show that our methods result in more divers being accurately identified during tracking. We also discuss the practical considerations of applying multi-person tracking to on-board autonomous robot operations, and we consider how failure cases can be addressed during on-board tracking.
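The re-identification step above relies on comparing appearance features, as in deep SORT. A minimal sketch of matching a query embedding against stored diver embeddings by cosine distance might look like the following; the embedding dimensionality, gallery structure, and threshold value are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two appearance embeddings."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def match_diver(query, gallery, threshold=0.2):
    """Return the index of the best-matching stored diver embedding,
    or None if no gallery entry is within the (assumed) threshold."""
    dists = [cosine_distance(query, g) for g in gallery]
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None
```

In deep SORT this appearance distance is combined with a motion gate (Mahalanobis distance from a Kalman-filter prediction) before assignment; only the appearance side is sketched here.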