Search for: All records

Award ID contains: 1942056


  1. When robots operate in real-world off-road environments with unstructured terrains, the ability to adapt their navigational policy is critical for effective and safe navigation. However, off-road terrains introduce several challenges to robot navigation, including dynamic obstacles and terrain uncertainty, leading to inefficient traversal or navigation failures. To address these challenges, we introduce a novel approach for adaptation by negotiation that enables a ground robot to adjust its navigational behaviors through a negotiation process. Our approach first learns prediction models for various navigational policies to function as a terrain-aware joint local controller and planner. Then, through a new negotiation process, our approach learns from the policies' interactions with the environment to agree on the optimal combination of policies online, adapting robot navigation to unstructured off-road terrains on the fly. Additionally, we implement a new optimization algorithm that provides the optimal solution for robot negotiation in real time during execution. Experimental results have validated that our method for adaptation by negotiation outperforms previous methods for robot navigation, especially over unseen and uncertain dynamic terrains.
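The abstract above does not spell out the negotiation formulation, so the following is only a minimal illustrative sketch of one way an online re-weighting of several navigational policies could work: each policy's terrain-aware prediction model is scored against the robot's actual motion, and the weights used to blend the policies' commands are updated accordingly. The class name, the exponentiated-gradient update, and the velocity-blending step are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch only: an online re-weighting of several navigational
# policies based on how well each policy's prediction model matched what the
# robot actually experienced. The update rule and blending step are
# assumptions, not the paper's negotiation formulation.
import numpy as np

class PolicyNegotiator:
    def __init__(self, n_policies, learning_rate=2.0):
        self.weights = np.full(n_policies, 1.0 / n_policies)  # start with a uniform combination
        self.lr = learning_rate

    def update(self, predicted_states, observed_state):
        """Shift weight toward policies whose terrain-aware prediction
        models best explained the robot's most recent motion."""
        errors = np.array([np.linalg.norm(p - observed_state) for p in predicted_states])
        self.weights *= np.exp(-self.lr * errors)   # penalize inaccurate predictors
        self.weights /= self.weights.sum()          # renormalize to a valid combination

    def combine(self, policy_commands):
        """Blend the per-policy velocity commands with the negotiated weights."""
        return np.average(np.asarray(policy_commands), axis=0, weights=self.weights)

# Toy usage with two hypothetical policies (e.g., "cautious" and "aggressive").
negotiator = PolicyNegotiator(n_policies=2)
negotiator.update(predicted_states=[np.array([1.0, 0.1]), np.array([0.4, 0.0])],
                  observed_state=np.array([0.5, 0.05]))
cmd = negotiator.combine([np.array([0.8, 0.2]), np.array([0.3, 0.0])])
print(negotiator.weights, cmd)
```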
  2. Terrain adaptation is a critical ability for a ground robot to effectively traverse unstructured off-road terrain in real-world field environments such as forests. However, the expected or planned maneuvering behaviors cannot always be accurately executed due to setbacks such as reduced tire pressure. This inconsistency negatively affects the robot's ground maneuverability and can cause slower traversal or localization errors. To address this shortcoming, we propose a novel method for consistent behavior generation that enables a ground robot's actual behaviors to more accurately match its expected behaviors while adapting to a variety of complex off-road terrains. Our method learns offset behaviors in a self-supervised fashion to compensate for the inconsistency between actual and expected behaviors, without requiring explicit modeling of the various setbacks. To evaluate the method, we perform extensive experiments using a physical ground robot over diverse complex off-road terrain in real-world field environments. Experimental results show that our method enables a robot to improve its ground maneuverability on complex unstructured off-road terrain with more consistent navigational behaviors, and that it outperforms previous and baseline methods, particularly on challenging terrain such as that found in forests.
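As a rough sketch of the self-supervised idea described above, the snippet below fits a regressor from commanded behaviors to the observed command-versus-execution discrepancy, using the robot's own driving logs as labels, and then adds the predicted offset to future commands. The linear (ridge) model and the additive compensation rule are simplifying assumptions; the abstract does not specify the learner.

```python
# Illustrative sketch only: learning a behavior offset in a self-supervised way
# from the robot's own driving data. The linear model and the compensation rule
# (command + predicted offset) are assumptions, not the paper's method.
import numpy as np

class OffsetCompensator:
    def __init__(self, ridge=1e-3):
        self.ridge = ridge
        self.W = None

    def fit(self, commanded, actual):
        """Self-supervised labels: the offset is simply expected minus actual
        behavior, so no manual annotation or explicit setback model is needed."""
        X = np.asarray(commanded)
        Y = np.asarray(commanded) - np.asarray(actual)   # observed inconsistency
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ Y)

    def compensate(self, command):
        """Adjust a planned command so the executed behavior lands closer to it."""
        command = np.asarray(command)
        return command + command @ self.W

# Toy usage: commanded vs. actual (linear, angular) velocities logged on terrain.
cmds = [[0.5, 0.0], [0.5, 0.2], [0.8, -0.1], [0.3, 0.1]]
acts = [[0.42, 0.0], [0.44, 0.15], [0.70, -0.12], [0.26, 0.08]]
comp = OffsetCompensator()
comp.fit(cmds, acts)
print(comp.compensate([0.6, 0.1]))
```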
  3. Correspondence identification is essential for multi-robot collaborative perception; it aims to identify the same objects so that a group of robots/agents can refer to those objects consistently across their own fields of view. Although recent deep learning methods have shown encouraging performance on correspondence identification, they suffer from two shortcomings: the inability to address non-covisibility in collaborative perception, which is caused by occlusion and the agents' limited fields of view, and the inability to quantify and reduce uncertainty to improve correspondence identification. To address both issues, we propose a novel uncertainty-aware deep graph matching method for correspondence identification in collaborative perception. Our method formulates correspondence identification as a deep graph matching problem, which identifies correspondences based upon graph representations constructed from robot observations. We propose new deep graph matching networks in the Bayesian framework to explicitly quantify uncertainty in the identified correspondences. In addition, we design novel loss functions to explicitly reduce correspondence uncertainty and perceptual non-covisibility during learning. We evaluate our method in the robotics applications of collaborative assembly and multi-robot coordination using high-fidelity simulations and physical robots. Experiments have validated that, by addressing uncertainty and non-covisibility, our proposed approach achieves state-of-the-art performance on correspondence identification.
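The abstract does not describe the network architecture, so the sketch below only illustrates the general ingredients it mentions: a feature embedding, a differentiable matching step (Sinkhorn normalization here), and a Bayesian-style uncertainty estimate obtained with Monte Carlo dropout as a stand-in for the paper's Bayesian deep graph matching networks. All layer sizes, the dropout-based uncertainty, and the function names are assumptions for illustration.

```python
# Illustrative sketch only: a tiny matching head with Monte Carlo dropout as a
# stand-in for Bayesian deep graph matching. Architecture and uncertainty
# mechanism are assumptions, not the authors' networks.
import torch
import torch.nn as nn

def sinkhorn(scores, n_iters=10):
    """Normalize an affinity matrix toward a doubly-stochastic correspondence matrix."""
    log_p = scores
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # row normalization
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # column normalization
    return log_p.exp()

class MatchingHead(nn.Module):
    def __init__(self, feat_dim=8, hidden=32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                   nn.Dropout(p=0.2),       # kept active for MC sampling
                                   nn.Linear(hidden, hidden))

    def forward(self, feats_a, feats_b):
        za, zb = self.embed(feats_a), self.embed(feats_b)
        return sinkhorn(za @ zb.T)                           # soft correspondences

def mc_dropout_match(model, feats_a, feats_b, n_samples=20):
    """Sample correspondences with dropout on; the variance across samples
    serves as a per-pair uncertainty estimate."""
    model.train()                                            # keep dropout stochastic
    samples = torch.stack([model(feats_a, feats_b) for _ in range(n_samples)])
    return samples.mean(0), samples.var(0)

# Toy usage: two robots each observe 3 objects, described by 8-D features.
model = MatchingHead()
mean_corr, uncertainty = mc_dropout_match(model, torch.randn(3, 8), torch.randn(3, 8))
print(mean_corr.shape, uncertainty.shape)
```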
  4. Collaborative object localization aims to collaboratively estimate locations of objects observed from multiple views or perspectives, which is a critical ability for multi-agent systems such as connected vehicles. To enable collaborative localization, several model-based state estimation and learning-based localization methods have been developed. Despite their encouraging performance, model-based state estimation often lacks the ability to model the complex relationships among multiple objects, while learning-based methods are typically unable to fuse observations from an arbitrary number of views and cannot model uncertainty well. In this paper, we introduce a novel spatiotemporal graph filter approach that integrates graph learning and model-based estimation to perform multi-view sensor fusion for collaborative object localization. Our approach models complex object relationships using a new spatiotemporal graph representation and fuses multi-view observations in a Bayesian fashion to improve location estimation under uncertainty. We evaluate our approach in the applications of connected autonomous driving and multiple pedestrian localization. Experimental results show that our approach outperforms previous techniques and achieves state-of-the-art performance on collaborative localization.
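To make the Bayesian multi-view fusion idea concrete, here is a minimal sketch that fuses an arbitrary number of view-specific observations in information (inverse-covariance) form on top of a simple temporal prediction. The learned spatiotemporal graph over objects, which is the paper's main contribution, is omitted, so this should be read as a simplified stand-in rather than the proposed filter.

```python
# Illustrative sketch only: Kalman-style prediction plus information-form fusion
# of any number of views. The paper's learned spatiotemporal graph is not
# modeled here; this only shows the Bayesian fusion step.
import numpy as np

def predict(mean, cov, F, Q):
    """Temporal prediction of an object's state (e.g., constant-velocity motion)."""
    return F @ mean, F @ cov @ F.T + Q

def fuse_views(mean, cov, observations, obs_covs):
    """Fuse view-specific position observations in information (inverse-covariance)
    form, so noisier views naturally count for less."""
    info = np.linalg.inv(cov)
    info_mean = info @ mean
    for z, R in zip(observations, obs_covs):
        info += np.linalg.inv(R)
        info_mean += np.linalg.inv(R) @ z
    cov_new = np.linalg.inv(info)
    return cov_new @ info_mean, cov_new

# Toy usage: a 2-D object position tracked over one step and seen by three views.
F, Q = np.eye(2), 0.05 * np.eye(2)
mean, cov = np.zeros(2), np.eye(2)
mean, cov = predict(mean, cov, F, Q)
views = [np.array([1.1, 0.9]), np.array([0.9, 1.2]), np.array([1.0, 1.0])]
noise = [0.2 * np.eye(2), 0.5 * np.eye(2), 0.1 * np.eye(2)]
mean, cov = fuse_views(mean, cov, views, noise)
print(mean, np.diag(cov))
```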
  5. We consider the problem of multi-robot sensor coverage, which deals with deploying a multi-robot team in an environment and optimizing the sensing quality of the overall environment. Because real-world environments involve a variety of sensory information and individual robots carry only a limited number of sensors, successful multi-robot sensor coverage requires deploying the robots in such a way that each team member's sensing quality is maximized. Additionally, because individual robots have varying complements of sensors, and both robots and sensors can fail, robots must be able to adapt and adjust how they value each sensing capability in order to obtain the most complete view of the environment, even as team composition changes. We introduce a novel formulation for sensor coverage by multi-robot teams with heterogeneous sensing capabilities that maximizes each robot's sensing quality, balancing the varying sensing capabilities of individual robots based on the overall team composition. We propose a solution based on regularized optimization that uses sparsity-inducing terms to ensure a robot team focuses on all possible event types, and we prove that it converges to the optimal solution. Through extensive simulation, we show that our approach effectively deploys a multi-robot team to maximize the sensing quality of an environment, responding to failures in the multi-robot team more robustly than non-adaptive approaches.
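The abstract does not give the exact regularized objective, so the snippet below only demonstrates the mechanics of a sparsity-inducing term: a group (column-wise) soft-thresholding step, i.e. the proximal operator of an L2,1-style norm, applied to a robot-by-capability weight matrix. The implied quadratic data term, the choice of norm, and the variable names are assumptions for illustration and not the paper's formulation.

```python
# Illustrative sketch only: the proximal (group soft-thresholding) step of an
# L2,1-style sparsity-inducing regularizer acting on a robot-by-capability
# weight matrix. The paper's actual coverage objective also couples these
# weights to robot deployment and team composition; none of that is modeled.
import numpy as np

def group_soft_threshold(Q, lam):
    """Proximal step for lam * sum_j ||W[:, j]||_2 evaluated at W = Q:
    each capability column is shrunk as a group, and weak columns are zeroed
    out entirely; this column-wise coupling is what makes the term
    'sparsity-inducing'."""
    W = np.zeros_like(Q)
    for j in range(Q.shape[1]):
        norm = np.linalg.norm(Q[:, j])
        if norm > lam:
            W[:, j] = (1.0 - lam / norm) * Q[:, j]
    return W

# Toy team: 3 robots, 4 sensing capabilities; entries are per-robot sensing quality.
quality = np.array([[0.9, 0.1, 0.0, 0.3],
                    [0.2, 0.8, 0.1, 0.0],
                    [0.0, 0.2, 0.7, 0.1]])
weights = group_soft_threshold(quality, lam=0.3)
print(weights)
```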