Perception of obstacles remains a critical safety concern for autonomous vehicles. Real-world incidents have shown that autonomy faults leading to fatal collisions often originate in obstacle existence detection. Open-source autonomous driving implementations rely on perception pipelines built from complex, interdependent deep neural networks. These networks are not fully verifiable, making them unsuitable for safety-critical tasks. In this work, we present a safety verification of an existing LiDAR-based classical obstacle detection algorithm. We establish strict bounds on the capabilities of this obstacle detection algorithm. Given safety standards, such bounds allow for determining LiDAR sensor properties that would reliably satisfy the standards. Such analysis has so far been unattainable for neural-network-based perception systems. We provide a rigorous analysis of the obstacle detection s…
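To illustrate how capability bounds of this kind can translate into concrete sensor requirements, here is a back-of-the-envelope sketch (the geometry, function, and numbers below are illustrative assumptions, not the paper's actual derivation): guaranteeing a minimum number of laser returns on an obstacle of a given width and range constrains the LiDAR's horizontal angular resolution.

```python
import math

def max_angular_resolution(width_m: float, range_m: float, min_returns: int) -> float:
    """Largest horizontal angular resolution (radians) that still guarantees
    `min_returns` beam hits on an obstacle of width `width_m` at `range_m`.

    In the worst-case beam alignment, an angular span alpha contains at
    least floor(alpha / delta_theta) beams, so we require
    delta_theta <= alpha / min_returns.
    """
    # Angle subtended by the obstacle as seen from the sensor.
    alpha = 2.0 * math.atan(width_m / (2.0 * range_m))
    return alpha / min_returns

# Hypothetical example: a 0.5 m wide obstacle at 60 m, requiring 3 returns.
res_rad = max_angular_resolution(0.5, 60.0, 3)
print(f"required resolution: {math.degrees(res_rad):.3f} deg")  # ~0.159 deg
```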
Minimal perception: enabling autonomy in resource-constrained robots
The rapidly increasing capabilities of autonomous mobile robots promise to make them ubiquitous in the coming decade. These robots will continue to enhance efficiency and safety in novel applications such as disaster management, environmental monitoring, bridge inspection, and agricultural inspection. To operate autonomously without constant human intervention, even in remote or hazardous areas, robots must sense, process, and interpret environmental data using only onboard sensing and computation. This capability is made possible by advancements in perception algorithms, allowing these robots to rely primarily on their perception capabilities for navigation tasks. However, tiny-robot autonomy is hindered mainly by limits on sensing, memory, and computation imposed by size, weight, area, and power constraints; the bottleneck in these robots lies in achieving real-time perception under such resource constraints. To enable autonomy in robots less than 100 mm in body length, we draw inspiration from tiny organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite their minimal sensing and neural systems. This work aims to provide insights into designing a compact and efficient minimal perception framework for tiny autonomous robots, from the higher cognitive levels down to the sensor level.
- Award ID(s): 2020624
- PAR ID: 10565502
- Publisher / Repository: Frontiers Media SA
- Date Published:
- Journal Name: Frontiers in Robotics and AI
- Volume: 11
- ISSN: 2296-9144
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The potential impact of autonomous robots on everyday life is evident in emerging applications such as precision agriculture, search and rescue, and infrastructure inspection. However, such applications necessitate operation in unknown and unstructured environments with a broad and sophisticated set of objectives, all under strict computation and power limitations. We therefore argue that the computational kernels enabling robotic autonomy must be scheduled and optimized to guarantee timely and correct behavior, while allowing for reconfiguration of scheduling parameters at runtime. In this paper, we consider a necessary first step towards this goal of computational awareness in autonomous robots: an empirical study of a base set of computational kernels from the resource management perspective. Specifically, we conduct a data-driven study of the timing, power, and memory performance of kernels for localization and mapping, path planning, task allocation, depth estimation, and optical flow, across three embedded computing platforms. We profile and analyze these kernels to provide insight into scheduling and dynamic resource management for computation-aware autonomous robots. Notably, our results show that kernel performance correlates with a robot's operational environment, justifying the notion of computation-aware robots and showing why our work is a crucial step towards this goal.
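A minimal sketch of the kind of per-kernel profiling such a study involves (illustrative only: the stand-in kernel and metrics below are assumptions, and the actual study additionally measures power across three embedded platforms):

```python
import statistics
import time
import tracemalloc

def profile_kernel(kernel, workload, runs=50):
    """Collect wall-clock timing and peak Python-heap statistics for one
    computational kernel over repeated runs. Timing variance across runs
    is what motivates runtime-reconfigurable scheduling."""
    timings, peak_mem = [], []
    for _ in range(runs):
        tracemalloc.start()
        t0 = time.perf_counter()
        kernel(workload)
        timings.append(time.perf_counter() - t0)
        _, peak = tracemalloc.get_traced_memory()  # (current, peak) bytes
        tracemalloc.stop()
        peak_mem.append(peak)
    return {
        "mean_s": statistics.mean(timings),
        "stdev_s": statistics.stdev(timings),
        "p99_s": sorted(timings)[int(0.99 * (runs - 1))],
        "peak_bytes": max(peak_mem),
    }

# Hypothetical usage with a stand-in kernel in place of, e.g., optical flow.
stats = profile_kernel(lambda img: sum(v * v for v in img), list(range(10_000)))
print(stats)
```

Note that `tracemalloc` only tracks Python-level allocations; profiling native kernels on embedded platforms would use platform-specific counters instead.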
In the ever-evolving landscape of autonomous vehicles, competition and research in high-speed autonomous racing have emerged as a captivating frontier, pushing the limits of perception, planning, and control. Autonomous racing presents a setting where the intersection of cutting-edge software and hardware development sparks unprecedented opportunities and confronts unique challenges. The motorsport axiom, "If everything seems under control, then you are not going fast enough," resonates in this special issue, underscoring the demand for algorithms and hardware that can navigate at the limits of control, traction, and agility. In pursuing autonomy at high speeds, the racing environment becomes a crucible, pushing autonomous vehicles to execute split-second decisions with high precision. Autonomous racing, we believe, offers a litmus test for the true capabilities of self-driving software. Just as racing has historically served as a proving ground for automotive technology, autonomous racing now presents itself as the crucible for testing self-driving algorithms. While routine driving situations dominate much of autonomous vehicle operation, focusing on extreme situations and environments is crucial to investigating safety benefits. The urgency of advancing high-speed autonomy is palpable in burgeoning autonomous racing competitions such as Formula Student Driverless, F1TENTH autonomous racing, Roborace, and the Indy Autonomous Challenge. These arenas provide a literal testbed for perception, planning, and control algorithms and symbolize the accelerating traction of autonomous racing as a proving ground for agile and safe autonomy. Our special issue focuses on cutting-edge research into software and hardware solutions for high-speed autonomous racing. We sought contributions from the robotics and autonomy communities that delve into the intricacies of head-to-head multi-agent racing: modeling vehicle dynamics at high speeds; developing advanced perception, planning, and control algorithms; and demonstrating algorithms in simulation and on real-world vehicles. While presenting recent developments in autonomous racing, we believe these special issue papers will also create an impact in the broader realm of autonomous vehicles.
Humans are adept at navigating public spaces shared with others, where current autonomous mobile robots still struggle: while safely and efficiently reaching their goals, humans communicate their intentions and conform to unwritten social norms on a daily basis; conversely, robots become clumsy in those daily social scenarios, getting stuck in dense crowds, surprising nearby pedestrians, or even causing collisions. While recent research on robot learning has shown promise in data-driven social robot navigation, good-quality training data is still difficult to acquire through either trial and error or expert demonstrations. In this work, we propose to utilize the rich, widely available body of social human navigation data in natural human-inhabited public spaces for robots to learn similar, human-like, socially compliant navigation behaviors. Specifically, we design an open-source egocentric data collection sensor suite wearable by walking humans to provide multimodal robot perception data; we collect a large-scale (~100 km, 20 hours, 300 trials, 13 humans) dataset in a variety of public spaces containing numerous natural social navigation interactions; and we analyze our dataset, demonstrate its usability, and point out future research directions and use cases. Website: https://cs.gmu.edu/~xiao/Research/MuSoHu/
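As a quick sanity check, the headline figures quoted above are internally consistent with data collected at human walking speed (the arithmetic below uses only the approximate totals from the abstract):

```python
# Approximate dataset totals quoted in the abstract.
total_km, total_h, trials = 100, 20, 300

avg_speed_kmh = total_km / total_h        # 5.0 km/h
avg_speed_ms = avg_speed_kmh / 3.6        # ~1.39 m/s, a typical walking pace
km_per_trial = total_km / trials          # ~0.33 km per trial
min_per_trial = total_h * 60 / trials     # 4 minutes per trial

print(avg_speed_ms, km_per_trial, min_per_trial)
```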
Autonomous navigation of steel bridge inspection robots is essential for proper maintenance. The majority of existing robotic solutions for bridge inspection require human intervention to assist in control and navigation. In this paper, a control system framework is proposed for the previously designed ARA robot [1], which facilitates autonomous real-time navigation and minimizes human involvement. The mechanical design and control framework of the ARA robot enable two different configurations: a mobile transformation and an inch-worm transformation. In addition, a switching control was developed, with 3D point clouds of steel surfaces as input, which allows the robot to switch between the mobile and inch-worm transformations. The surface availability algorithm of the switching control (which considers the plane, area, and height of a candidate surface) enables the robot to perform inch-worm jumps autonomously. The mobile transformation allows the robot to move on continuous steel surfaces and perform visual inspection of steel bridge structures. Practical experiments on actual steel bridge structures highlight the effective performance of the ARA robot with the proposed control framework for autonomous navigation during visual inspection of steel bridges.
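A hedged sketch of what a surface-availability test over plane, area, and height might look like (the plane-fit method, thresholds, and frame conventions below are illustrative assumptions, not the ARA robot's actual implementation):

```python
import numpy as np

def surface_available(points: np.ndarray,
                      min_area_m2: float = 0.05,
                      max_height_m: float = 0.30) -> bool:
    """Illustrative surface-availability check: fit a plane to candidate
    steel-surface points (N x 3), then test flatness, extent, and height
    before permitting an inch-worm jump. All thresholds are hypothetical."""
    centroid = points.mean(axis=0)
    # Plane fit via SVD; vt[-1] is the plane normal, s[-1] the residual.
    _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
    rms_dist = s[-1] / np.sqrt(len(points))  # RMS point-to-plane distance
    if rms_dist > 0.01:                      # reject non-planar patches (>1 cm RMS)
        return False
    # Project onto the plane; use the 2D bounding box as a cheap area proxy.
    in_plane = (points - centroid) @ vt[:2].T
    extent = in_plane.max(axis=0) - in_plane.min(axis=0)
    if extent[0] * extent[1] < min_area_m2:
        return False
    # Height of the patch centroid in the robot's base frame (assumed z-up).
    return abs(centroid[2]) <= max_height_m

# Example: a synthetic flat 0.4 m x 0.3 m patch 0.1 m above the base frame.
patch = np.random.rand(500, 3) * [0.4, 0.3, 0.0] + [0.0, 0.0, 0.1]
print(surface_available(patch))  # True
```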