Abstract In this paper, we address the problem of autonomous multi-robot mapping, exploration and navigation in unknown, GPS-denied indoor or urban environments using a team of robots equipped with directional sensors with limited sensing capabilities and limited computational resources. The robots have no a priori knowledge of the environment and need to rapidly explore and construct a map in a distributed manner using existing landmarks, the presence of which can be detected using onboard sensors, although little to no metric information (distance or bearing to the landmarks) is available. In order to correctly and effectively achieve this, the presence of a necessary density/distribution of landmarks is ensured by design of the urban/indoor environment. We thus address this problem in two phases: (1) During the design/construction of the urban/indoor environment we can ensure that sufficient landmarks are placed within the environment. To that end we develop a filtration-based approach for designing strategic placement of landmarks in an environment. (2) We develop a distributed algorithm which a team of robots, with no a priori knowledge of the environment, can use to explore such an environment, construct a topological map requiring no metric/distance information, and use that map to navigate within the environment. This is achieved using a topological representation of the environment (called a Landmark Complex), instead of constructing a complete metric/pixel map. The representation is built by the robots as well as used by them for navigation through a balanced strategy involving exploration and exploitation. We use tools from homology theory for identifying “holes” in the coverage/exploration of the unknown environment and hence guide the robots towards achieving a complete exploration and mapping of the environment.
Our simulation results demonstrate the effectiveness of the proposed metric-free topological (simplicial complex) representation in achieving exploration, localization and navigation within the environment.
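To make the Landmark Complex idea concrete, here is a minimal sketch (not the authors' implementation) of how such a metric-free representation can be assembled: each time a robot simultaneously detects a set of landmark IDs, that set contributes a simplex and all of its faces to the complex; no distances or bearings are recorded. The landmark IDs and observations below are illustrative.

```python
from itertools import combinations

def add_observation(complex_, landmark_ids):
    """Each simultaneous observation of k landmarks contributes a
    (k-1)-simplex and all of its faces to the landmark complex."""
    for r in range(1, len(landmark_ids) + 1):
        for face in combinations(sorted(landmark_ids), r):
            complex_.add(face)

# Three observations made by exploring robots (landmark IDs only).
K = set()
add_observation(K, {1, 2, 3})   # a robot sees landmarks 1, 2, 3 together
add_observation(K, {3, 4})
add_observation(K, {4, 5, 1})

vertices  = [s for s in K if len(s) == 1]
edges     = [s for s in K if len(s) == 2]
triangles = [s for s in K if len(s) == 3]
```

Detecting the “holes” that guide further exploration would additionally require a homology computation (e.g., rank of boundary matrices over Z/2), which is omitted here.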
Informative Path Planning Algorithm for the Exploration of Unknown 2D Environments
This paper addresses the Informative Path Planning (IPP) algorithm for autonomous robots to explore unknown 2D environments for mapping purposes. IPP can be beneficial to many applications such as search and rescue and cave exploration, where mapping an unknown environment is necessary. Autonomous robots' limited operation time due to their finite battery necessitates an efficient IPP algorithm; however, this is challenging because autonomous robots may not have any information about the environment. In this paper, we formulate a mathematical structure of the IPP problem along with the derivation of the optimal control input. Then, a discretized model for the IPP algorithm is presented as a solution for exploring an unknown environment. The proposed approach provides relatively fast computation time while being applicable to broad robot and sensor platforms. Various simulation results are provided to show the performance of the proposed IPP algorithm.
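As an illustration of the discretized formulation (a sketch under assumed conventions, not the paper's actual model), a greedy one-step IPP rule can score each grid cell by its map uncertainty (binary entropy of the occupancy probability) minus a travel-cost penalty, and steer toward the best cell. The `travel_weight` parameter is an assumption introduced for this example.

```python
import numpy as np

def entropy(p):
    """Binary entropy of per-cell occupancy probabilities (bits)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_waypoint(occ_prob, pose, travel_weight=0.1):
    """Greedy one-step IPP: pick the cell with the highest expected
    information gain (entropy) penalized by straight-line travel cost."""
    H = entropy(occ_prob)                         # per-cell uncertainty
    ys, xs = np.indices(occ_prob.shape)
    dist = np.hypot(ys - pose[0], xs - pose[1])   # travel cost from pose
    utility = H - travel_weight * dist
    return np.unravel_index(np.argmax(utility), occ_prob.shape)
```

A cell with probability 0.5 (completely unknown) has maximal entropy, so the robot is drawn toward unexplored regions unless they are too far away.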
- Award ID(s):
- 2145810
- PAR ID:
- 10508964
- Publisher / Repository:
- IEEE
- Date Published:
- ISBN:
- 979-8-3503-0413-8
- Format(s):
- Medium: X
- Location:
- New York, NY, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract The potential impact of autonomous robots on everyday life is evident in emerging applications such as precision agriculture, search and rescue, and infrastructure inspection. However, such applications necessitate operation in unknown and unstructured environments with a broad and sophisticated set of objectives, all under strict computation and power limitations. We therefore argue that the computational kernels enabling robotic autonomy must be scheduled and optimized to guarantee timely and correct behavior, while allowing for reconfiguration of scheduling parameters at runtime. In this paper, we consider a necessary first step towards this goal of computational awareness in autonomous robots: an empirical study of a base set of computational kernels from the resource management perspective. Specifically, we conduct a data-driven study of the timing, power, and memory performance of kernels for localization and mapping, path planning, task allocation, depth estimation, and optical flow, across three embedded computing platforms. We profile and analyze these kernels to provide insight into scheduling and dynamic resource management for computation-aware autonomous robots. Notably, our results show that there is a correlation of kernel performance with a robot’s operational environment, justifying the notion of computation-aware robots and why our work is a crucial step towards this goal.
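The kind of timing/memory profiling described above can be sketched with Python's standard library (a minimal harness for illustration; the study itself measured power as well, which requires platform-specific instrumentation not shown here).

```python
import time
import tracemalloc

def profile_kernel(fn, *args, repeats=10):
    """Measure mean wall-clock time and peak traced memory of a
    computational kernel over several repetitions."""
    tracemalloc.start()
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    elapsed = (time.perf_counter() - t0) / repeats
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak
```

Running such a harness per kernel and per platform yields the timing/memory profiles a scheduler could act on at runtime.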
-
Many modern simultaneous localization and mapping (SLAM) techniques rely on sparse landmark-based maps due to their real-time performance. However, these techniques frequently assert that these landmarks are fixed in position over time, known as the static-world assumption. This is rarely, if ever, the case in most real-world environments. Even worse, over long deployments, robots are bound to observe traditionally static landmarks change, for example when an autonomous vehicle encounters a construction zone. This work addresses this challenge, accounting for changes in complex three-dimensional environments with the creation of a probabilistic filter that operates on the features that give rise to landmarks. To accomplish this, landmarks are clustered into cliques and a filter is developed to estimate their persistence jointly among observations of the landmarks in a clique. This filter uses estimated spatial-temporal priors of geometric objects, allowing for dynamic and semi-static objects to be removed from a formerly static map. The proposed algorithm is validated in a 3D simulated environment.
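A heavily simplified, single-landmark version of such a persistence filter can be written as a recursive Bayes update (a sketch only; the paper's filter operates jointly over cliques of landmarks with spatial-temporal priors, and the `p_detect`, `p_false`, and `p_survive` values below are assumed for illustration).

```python
def update_persistence(belief, detected,
                       p_detect=0.9, p_false=0.05, p_survive=0.99):
    """Recursive estimate that a landmark still exists.
    Predict: the landmark may vanish between observations.
    Update:  fold in a detection or a miss via Bayes' rule."""
    belief *= p_survive  # prediction step: landmark may have been removed
    if detected:
        num = p_detect * belief
        den = num + p_false * (1.0 - belief)
    else:
        num = (1.0 - p_detect) * belief
        den = num + (1.0 - p_false) * (1.0 - belief)
    return num / den
```

Repeated misses drive the persistence belief down, at which point the landmark can be pruned from the nominally static map; a fresh detection restores confidence.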
-
Robotic search often involves teleoperating vehicles into unknown environments. In such scenarios, prior knowledge of target location or environmental map may be a viable resource to tap into and control other autonomous robots in the vicinity towards an improved search performance. In this paper, we test the hypothesis that despite having the same skill, prior knowledge of target or environment affects teleoperator actions, and such knowledge can therefore be inferred through robot movement. To investigate whether prior knowledge can improve human-robot team performance, we next evaluate an adaptive mutual-information blending strategy that admits a time-dependent weighting for steering autonomous robots. Human-subject experiments show that several features including distance travelled by the teleoperated robot, time spent staying still, speed, and turn rate, all depend on the level of prior knowledge, and that absence of prior knowledge increased workload. Building on these results, we identified distance travelled and time spent staying still as movement cues that can be used to robustly infer prior knowledge. Simulations where an autonomous robot accompanied a human teleoperated robot revealed that, whereas time to find the target was similar across all information-based search strategies, adaptive strategies that acted on movement cues found the target sooner than a single human teleoperator more often than non-adaptive strategies did. This gain is diluted with number of robots, likely due to the limited size of the search environment. Results from this work set the stage for developing knowledge-aware control algorithms for autonomous robots in collaborative human-robot teams.
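The time-dependent weighting described above can be illustrated with a simple blend of two search-utility maps (a sketch under assumed conventions; the exponential decay schedule and the `tau` constant are assumptions for this example, not the paper's actual weighting).

```python
import numpy as np

def blended_utility(mi_map, prior_map, t, tau=60.0):
    """Time-dependent blend of search utilities: early in the search,
    trust the prior knowledge inferred from teleoperator movement cues;
    decay toward pure mutual-information search as that prior goes stale."""
    w = np.exp(-t / tau)  # assumed exponential decay schedule
    return w * prior_map + (1.0 - w) * mi_map
```

At t = 0 the autonomous robot is steered entirely by the inferred prior; for t much larger than tau it reverts to the plain mutual-information strategy.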
-
In this paper, we propose a real-time deep-learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUV) from a single image. A team of autonomous robots localizing themselves, in a communication-constrained underwater environment, is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing 8 corners of the 3D model of the AUV, and then the 6D pose in the camera coordinates is determined using RANSAC-based PnP. Experimental results in underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness of the proposed technique, where the trained system decreased translation error by 75.5% and orientation error by 64.6% over the state-of-the-art methods.
