Perception of obstacles remains a critical safety concern for autonomous vehicles. Analyses of real-world collisions have shown that the autonomy faults leading to fatal crashes originate in obstacle existence detection. Open-source autonomous driving implementations use perception pipelines built from complex, interdependent deep neural networks. These networks are not fully verifiable, making them unsuitable for safety-critical tasks. In this work, we present a safety verification of an existing LiDAR-based classical obstacle detection algorithm and establish strict bounds on its capabilities. Given safety standards, such bounds allow for determining LiDAR sensor properties that would reliably satisfy those standards; such analysis has so far been unattainable for neural-network-based perception systems. We provide a rigorous analysis of the obstacle detection …
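The sensor-property derivation this abstract describes can be illustrated with simple geometry. Below is a minimal sketch, not code from the paper, of how a worst-case bound on detection translates into a required LiDAR angular resolution; the obstacle width, range, and return counts are assumed parameters chosen only for illustration.

```python
import math

def min_guaranteed_returns(width_m: float, range_m: float,
                           angular_res_deg: float) -> int:
    """Worst-case number of returns a scanning LiDAR with fixed
    horizontal angular resolution places on a flat obstacle of the
    given width, viewed face-on at the given range (toy model)."""
    subtended_deg = 2.0 * math.degrees(math.atan(width_m / (2.0 * range_m)))
    # Worst case: the beam grid is maximally misaligned with the target,
    # so only floor(subtended / resolution) hits can be guaranteed.
    return max(0, math.floor(subtended_deg / angular_res_deg))

def required_resolution_deg(width_m: float, range_m: float,
                            min_hits: int) -> float:
    """Coarsest angular resolution that still guarantees `min_hits`
    returns in the worst case (with one extra beam of margin)."""
    subtended_deg = 2.0 * math.degrees(math.atan(width_m / (2.0 * range_m)))
    return subtended_deg / (min_hits + 1)

# Example: a 0.5 m wide obstacle at 100 m subtends ~0.29 degrees, so a
# scanner with 0.1-degree resolution can only guarantee 2 returns on it.
print(min_guaranteed_returns(0.5, 100.0, 0.1))
print(required_resolution_deg(0.5, 100.0, 3))
```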
Minimal perception: enabling autonomy in resource-constrained robots
The rapidly increasing capabilities of autonomous mobile robots promise to make them ubiquitous in the coming decade. These robots will continue to enhance efficiency and safety in novel applications such as disaster management, environmental monitoring, bridge inspection, and agricultural inspection. To operate autonomously without constant human intervention, even in remote or hazardous areas, robots must sense, process, and interpret environmental data using only onboard sensing and computation. This capability is made possible by advances in perception algorithms, allowing these robots to rely primarily on their perception capabilities for navigation tasks. However, tiny-robot autonomy is hindered mainly by limits on sensors, memory, and computing imposed by size, area, weight, and power constraints; the bottleneck lies in achieving real-time perception under these constraints. To enable autonomy in robots less than 100 mm in body length, we draw inspiration from tiny organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite their minimal sensing and neural systems. This work aims to provide insights into designing a compact and efficient minimal perception framework for tiny autonomous robots, from the higher cognitive level down to the sensor level.
- Award ID(s): 2020624
- PAR ID: 10565502
- Publisher / Repository: Frontiers Media SA
- Date Published:
- Journal Name: Frontiers in Robotics and AI
- Volume: 11
- ISSN: 2296-9144
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Real-time computer vision and remote visual sensing platforms are increasingly used in numerous underwater applications such as shipwreck mapping, subsea inspection, coastal water monitoring, surveillance, coral reef surveying, invasive fish tracking, and more. Recent advances in robot vision and powerful single-board computers have paved the way for an imminent revolution in the next generation of subsea technologies. In this chapter, we present these exciting emerging applications and discuss relevant open problems and practical considerations. First, we delineate the specific environmental and operational challenges of underwater vision and highlight some prominent scientific and engineering solutions to ensure robust visual perception. We specifically focus on the characteristics of underwater light propagation from the perspective of image formation and photometry. We also discuss recent developments and trends in the underwater imaging literature to facilitate the restoration, enhancement, and filtering of inherently noisy visual data. Subsequently, we demonstrate how these ideas are extended and deployed in the perception pipelines of Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs). In particular, we present several use cases for marine life monitoring and conservation, human-robot cooperative missions for inspecting submarine cables and archaeological sites, subsea structure or cave mapping, aquaculture, and marine ecology. We discuss in detail how deep visual learning and on-device AI breakthroughs are transforming the perception, planning, localization, and navigation capabilities of visually guided underwater robots. Along these lines, we also highlight prospective future research directions and open problems at the intersection of the computer vision and underwater robotics domains. (An illustrative sketch of the image-formation model mentioned here appears after this list.)
-
Autonomous agents are increasingly becoming construction workers' teammates, making them an integral part of tomorrow's construction industry. Although many expect that worker–autonomy teaming will enhance construction efficiency, the presence of auto-agents, or robots, necessitates an appropriate level of trust-building between workers and their autonomous counterparts, especially because these auto-agents cannot yet be guaranteed to perform flawlessly. Although researchers have widely explored human–autonomy trust in various domains, such as manufacturing and the military, discussion of this teaming dynamic within the construction sector is still nascent. To address this gap, this paper simulated a futuristic bricklaying task to (1) examine whether identifying autonomous agents' physical and informational failures, together with risk perception, affects workers' trust levels, and (2) investigate workers' neuropsychophysiological responses as a measure of trust toward robots, especially when autonomous agents are faulty. Results indicate that (1) identification of both types of failures and high risk perception significantly reduce workers' trust in autonomous agents, and nuanced differences in workers' responses to the two failure types were discerned; and (2) brain activation correlates with trust changes. The findings suggest that workers' unfamiliarity with autonomous technologies, coupled with fast-growing interest in adopting them, may leave workers at risk of improper trust transfer or overtrust in autonomous agents. This study contributes to an expanding exploration of worker–autonomy trust in construction and calls for further investigation into effective approaches for auto-agents to communicate their physical and informational failures and to help workers recover and repair trust.
-
The potential impact of autonomous robots on everyday life is evident in emerging applications such as precision agriculture, search and rescue, and infrastructure inspection. However, such applications necessitate operation in unknown and unstructured environments with a broad and sophisticated set of objectives, all under strict computation and power limitations. We therefore argue that the computational kernels enabling robotic autonomy must be scheduled and optimized to guarantee timely and correct behavior, while allowing for reconfiguration of scheduling parameters at runtime. In this paper, we consider a necessary first step towards this goal of computational awareness in autonomous robots: an empirical study of a base set of computational kernels from the resource management perspective. Specifically, we conduct a data-driven study of the timing, power, and memory performance of kernels for localization and mapping, path planning, task allocation, depth estimation, and optical flow, across three embedded computing platforms. We profile and analyze these kernels to provide insight into scheduling and dynamic resource management for computation-aware autonomous robots. Notably, our results show that kernel performance correlates with a robot's operational environment, justifying the notion of computation-aware robots and showing why our work is a crucial step towards this goal. (A minimal profiling sketch in this spirit appears after this list.)
-
In the ever-evolving landscape of autonomous vehicles, competition and research in high-speed autonomous racing have emerged as a captivating frontier, pushing the limits of perception, planning, and control. Autonomous racing presents a setting where the intersection of cutting-edge software and hardware development sparks unprecedented opportunities and confronts unique challenges. The motorsport axiom, "If everything seems under control, then you are not going fast enough," resonates in this special issue, underscoring the demand for algorithms and hardware that can navigate at the cutting edge of control, traction, and agility. In pursuing autonomy at high speeds, the racing environment becomes a crucible, pushing autonomous vehicles to execute split-second decisions with high precision. Autonomous racing, we believe, offers a litmus test for the true capabilities of self-driving software. Just as racing has historically served as a proving ground for automotive technology, autonomous racing now presents itself as the crucible for testing self-driving algorithms. While routine driving situations dominate much of autonomous vehicle operation, focusing on extreme situations and environments is crucial to support investigation into safety benefits. The urgency of advancing high-speed autonomy is palpable in burgeoning autonomous racing competitions such as Formula Student Driverless, F1TENTH autonomous racing, Roborace, and the Indy Autonomous Challenge. These arenas provide a literal testbed for perception, planning, and control algorithms and symbolize the accelerating traction of autonomous racing as a proving ground for agile and safe autonomy. Our special issue focuses on cutting-edge research into software and hardware solutions for high-speed autonomous racing. We sought contributions from the robotics and autonomy communities that delve into the intricacies of head-to-head multi-agent racing: modeling vehicle dynamics at high speeds, developing advanced perception, planning, and control algorithms, and demonstrating algorithms both in simulation and in real-world vehicles (an illustrative vehicle-dynamics sketch appears after this list). While presenting recent developments in autonomous racing, we believe these special-issue papers will also have an impact in the broader realm of autonomous vehicles.
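The underwater-vision chapter above centers on an image-formation model in which scene radiance is attenuated with range while veiling light is added by backscatter. The following is a minimal NumPy sketch of that simplified per-channel model and its naive inversion; the coefficients, range map, and function names are illustrative assumptions, not values or code from the chapter.

```python
import numpy as np

def underwater_forward(J: np.ndarray, depth_m: np.ndarray,
                       beta: np.ndarray, B_inf: np.ndarray) -> np.ndarray:
    """Simplified underwater image formation, per RGB channel:
    I = J * exp(-beta * d) + B_inf * (1 - exp(-beta * d)),
    where J is unattenuated scene radiance, d the range map,
    beta the attenuation coefficient, and B_inf the veiling light."""
    t = np.exp(-beta * depth_m[..., None])        # transmission map
    return J * t + B_inf * (1.0 - t)

def naive_restore(I: np.ndarray, depth_m: np.ndarray,
                  beta: np.ndarray, B_inf: np.ndarray) -> np.ndarray:
    """Invert the model above. Real restoration pipelines must first
    estimate beta, B_inf, and range, which is the hard part."""
    t = np.exp(-beta * depth_m[..., None])
    return np.clip((I - B_inf * (1.0 - t)) / np.maximum(t, 1e-3), 0.0, 1.0)

# Illustrative use: red attenuates fastest, so distant pixels look blue-green.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, (4, 4, 3))              # "true" scene, 4x4 RGB
d = np.full((4, 4), 5.0)                          # 5 m range everywhere
beta = np.array([0.6, 0.2, 0.1])                  # per-channel attenuation (assumed)
B_inf = np.array([0.05, 0.3, 0.4])                # veiling light (assumed)
I = underwater_forward(J, d, beta, B_inf)
assert np.allclose(naive_restore(I, d, beta, B_inf), J, atol=1e-6)
```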
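The empirical kernel study described above amounts to repeatedly timing each computational kernel and recording its memory footprint on a target platform. Below is a minimal, platform-agnostic sketch of that kind of profiling harness; the toy kernel and sample counts are placeholders, and the on-board power measurements such studies also collect typically require platform-specific counters not shown here.

```python
import time
import tracemalloc
import statistics

def profile_kernel(kernel, args, runs: int = 50) -> dict:
    """Time a computational kernel over repeated runs and record peak
    Python-heap allocation (a proxy; native buffers need other tools)."""
    latencies = []
    tracemalloc.start()
    for _ in range(runs):
        t0 = time.perf_counter()
        kernel(*args)
        latencies.append(time.perf_counter() - t0)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    ordered = sorted(latencies)
    return {
        "mean_ms": 1e3 * statistics.mean(latencies),
        "p99_ms": 1e3 * ordered[int(0.99 * (len(ordered) - 1))],
        "peak_kb": peak_bytes / 1024,
    }

# Placeholder standing in for a real kernel such as optical flow or planning.
def toy_kernel(n: int) -> int:
    return sum(i * i for i in range(n))

print(profile_kernel(toy_kernel, (100_000,)))
```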
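Finally, as a concrete instance of the "modeling vehicle dynamics" theme in the racing special issue, here is a minimal kinematic bicycle model step. It is an illustrative baseline only: at racing speeds, lateral tire slip dominates and dynamic models are used instead. All parameters are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float      # position [m]
    y: float      # position [m]
    yaw: float    # heading [rad]
    v: float      # speed [m/s]

def bicycle_step(s: State, accel: float, steer: float,
                 wheelbase: float = 2.5, dt: float = 0.01) -> State:
    """One Euler step of the kinematic bicycle model. Adequate at low
    lateral acceleration; racing at the limit needs a tire-slip model."""
    return State(
        x=s.x + s.v * math.cos(s.yaw) * dt,
        y=s.y + s.v * math.sin(s.yaw) * dt,
        yaw=s.yaw + (s.v / wheelbase) * math.tan(steer) * dt,
        v=s.v + accel * dt,
    )

# Simulate one second of a gentle left-hand arc at 20 m/s.
s = State(0.0, 0.0, 0.0, 20.0)
for _ in range(100):
    s = bicycle_step(s, accel=0.0, steer=0.05)
print(round(s.x, 2), round(s.y, 2), round(s.yaw, 3))
```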