

Title: Towards Sim2Real Transfer of Autonomy Algorithms using AutoDRIVE Ecosystem
The engineering community currently encounters significant challenges in developing intelligent transportation algorithms that can be transferred from simulation to reality with minimal effort. This can be achieved by robustifying the algorithms using domain adaptation methods and/or by adopting cutting-edge tools that support this objective seamlessly. This work presents AutoDRIVE, an openly accessible digital twin ecosystem designed to facilitate synergistic development, simulation, and deployment of cyber-physical solutions for autonomous driving technology, and focuses on bridging the autonomy-oriented simulation-to-reality (sim2real) gap using the proposed ecosystem. In this paper, we extensively explore the modeling and simulation aspects of the ecosystem and substantiate its efficacy by demonstrating the successful transition of two candidate autonomy algorithms from simulation to reality: (i) autonomous parking using a probabilistic robotics approach; (ii) behavioral cloning using deep imitation learning. The outcomes of these case studies further strengthen the credibility of AutoDRIVE as an invaluable tool for advancing the state-of-the-art in autonomous driving technology.
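The behavioral-cloning case study can be illustrated with a minimal sketch. The abstract does not specify the network architecture or data pipeline, so everything below is illustrative: a linear policy fit by least squares stands in for the paper's deep imitation-learning model, and the "expert demonstrations" are synthetic.

```python
import numpy as np

# Behavioral cloning in miniature: learn a policy that maps observations
# (here, flattened "camera" features) to recorded expert steering commands
# via supervised regression. The paper uses deep imitation learning; a
# least-squares linear policy illustrates the same clone-then-deploy loop.

rng = np.random.default_rng(0)

# Synthetic expert demonstrations: observation -> steering angle.
true_w = rng.normal(size=8)                      # hidden expert weights (toy)
X = rng.normal(size=(200, 8))                    # 200 recorded observations
y = X @ true_w + 0.01 * rng.normal(size=200)     # expert steering labels

# "Clone" the expert by fitting the policy to the demonstrations.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Deploy: the cloned policy predicts steering for a new observation.
obs = rng.normal(size=8)
steer = float(obs @ w_hat)
```

With enough demonstrations the cloned policy closely matches the expert on in-distribution observations; the sim2real challenge the paper targets is precisely that real-world observations drift away from the training distribution.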
Award ID(s):
1939058 1925500
NSF-PAR ID:
10491342
Author(s) / Creator(s):
; ;
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
IFAC-PapersOnLine
Volume:
56
Issue:
3
ISSN:
2405-8963
Page Range / eLocation ID:
277 to 282
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Promising new technology has recently emerged to increase the level of safety and autonomy in driving, including lane and distance keeping assist systems, automatic braking systems, and even highway auto-drive systems. Each of these technologies brings cars closer to the ultimate goal of fully autonomous operation. While it is still unclear if and when safe, driverless cars will be released on the mass market, a comparison with the development of aircraft autopilot systems can provide valuable insight. This review article contains several Additional Resources at the end, including key references to support its findings. The article investigates a path towards ensuring safety for "self-driving" or "autonomous" cars by leveraging prior work in aviation. It focuses on navigation, or localization, which is a key aspect of automated operation.
  2. Abstract

    Process-based agroecosystem models are powerful tools to assess performance of managed landscapes, but their ability to accurately represent reality is limited by the types of input data they can use. Ensuring these models can represent cropping field heterogeneity and environmental impact is important, especially given the growing interest in using agroecosystem models to quantify ecosystem services from best management practices and land use change. We posited that augmenting process-based agroecosystem models with additional field-specific information such as topography, hydrologic processes, or independent indicators of yield could help limit simulation artifacts that obscure mechanisms driving observed variations. To test this, we augmented the agroecosystem model Agricultural Production Systems Simulator (APSIM) with field-specific topography and satellite imagery in a simulation framework we call Foresite. We used Foresite to optimize APSIM yield predictions to match those created from a machine learning model built on remotely sensed indicators of hydrology and plant productivity. Using these improved subfield yield predictions to guide APSIM optimization, total NO3-N loss estimates increased by 39% in maize and 20% in soybeans when summed across all years. In addition, we found a disproportionate total amount of leaching in the lowest yielding field areas vs the highest yielding areas in maize (42% vs 15%) and a similar effect in soybeans (31% vs 20%). Overall, we found that augmenting process-based models with now-common subfield remotely sensed data significantly increased values of predicted nutrient loss from fields, indicating opportunities to improve field-scale agroecosystem simulations, particularly if used to calculate nutrient credits in ecosystem service markets.
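The calibration idea described above can be sketched in a few lines. The names and the yield model below are illustrative, not the actual APSIM or Foresite interface: a toy process model's subfield parameter is tuned by grid search so its yield predictions match independently derived (e.g., remotely sensed, ML-based) yield targets.

```python
import numpy as np

def toy_yield_model(water_capacity):
    # Stand-in for a process-model run: yield (t/ha) responds
    # nonlinearly to a plant-available-water parameter.
    return 5.0 + 3.0 * np.tanh(water_capacity)

target_yields = np.array([6.2, 7.1, 7.8])    # ML-derived subfield targets
candidates = np.linspace(0.0, 2.0, 201)      # candidate parameter values

# For each subfield zone, pick the parameter whose simulated yield
# best matches that zone's remotely sensed target (grid search).
calibrated = np.array([
    candidates[np.argmin((toy_yield_model(candidates) - t) ** 2)]
    for t in target_yields
])
```

Once each zone's parameter is calibrated this way, downstream model outputs such as nitrate leaching are re-estimated per zone, which is how subfield heterogeneity changes the field-total loss estimate.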

  3. 3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms, which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To the best of our knowledge, this is the first publication on a LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, with the falsifying examples used to make the neural network more robust through retraining. In addition, the scene images can be captured simultaneously to support sensor fusion tasks, with a method proposed to do automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses/blind spots of the neural network can be fixed.
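The augmentation step itself is conceptually simple; the sketch below is illustrative only (array shapes, class count, and proportions are assumptions, not the paper's setup). Point clouds are represented as (N, 4) arrays of x, y, z, intensity with per-point class labels, and the auto-labeled synthetic clouds are concatenated with the real training set before training the segmentation network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Real, manually annotated training data (expensive to label).
real_points = rng.normal(size=(1000, 4))        # x, y, z, intensity
real_labels = rng.integers(0, 5, size=1000)     # 5 classes, e.g. car/road/...

# Synthesized data from the game engine (cheap, auto-labeled per point).
sim_points = rng.normal(size=(4000, 4))
sim_labels = rng.integers(0, 5, size=4000)

# Augmented training set: real + synthetic, shuffled together.
points = np.concatenate([real_points, sim_points])
labels = np.concatenate([real_labels, sim_labels])
order = rng.permutation(len(points))
points, labels = points[order], labels[order]
```

The paper's reported +9% segmentation accuracy comes from training on this kind of combined set rather than the real data alone; the user-configured scenes additionally serve as targeted test cases for retraining.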
  4. Abstract

    Machine learning and computational processing have advanced such that automated driving systems (ADSs) are no longer a distant reality. Many automobile manufacturers have developed prototypes; however, there exist numerous decision support issues requiring resolution to ensure mass ADS adoption. In the coming decades, it is likely that production ADSs will only be partially autonomous. Such ADSs operate within predetermined conditions and require driver intervention when they are violated. Since forecasts of their 20‐year market penetration are relatively low, ADSs will likely operate in heterogeneous traffic characterized by vehicles of varying autonomy levels. Under these conditions, effective decision support must consider intangible, subjective, and emotional factors as well as influences of human cognition; otherwise, the ADS risks driver distrust and unsatisfactory performance based on an incomplete understanding of its environment. We survey the literature relevant to these issues, identify open problems, and propose research directions for their resolution.

  5. Underwater robots, including Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs), are currently used to support underwater missions that are either impossible or too risky to be performed by manned systems. In recent years, academia and the robotics industry have paved the way for tackling technical challenges in ROV/AUV operations. The level of intelligence of ROVs/AUVs has increased dramatically because of recent advances in low-power embedded computing devices and machine intelligence (e.g., AI). Nonetheless, minimizing human intervention in precise underwater operation remains extremely challenging due to the inherent challenges and uncertainties of underwater environments. Proximity operations, especially those requiring precise manipulation, are still carried out by ROV systems that are fully controlled by a human pilot. A workplace-ready and worker-friendly ROV interface that properly simplifies operator control and increases remote operation confidence is the central challenge for the wide adoption of ROVs.

    This paper examines recent advances in virtual telepresence technologies as a solution for lowering the barriers to human-in-the-loop ROV teleoperation. Virtual telepresence refers to Virtual Reality (VR) related technologies that help a user feel present in a hazardous situation without being at the actual location. We present a pilot system that uses a VR-based sensory simulator to convert ROV sensor data into human-perceivable sensations (e.g., haptics). Building on a cloud server for real-time rendering in VR, a less trained operator could possibly operate a remote ROV thousands of miles away without losing minimum situational awareness. The system is expected to enable intensive human engagement in ROV teleoperation, augmenting abilities for maneuvering and navigating ROVs in unknown and less explored subsea regions. This paper also discusses the opportunities and challenges of this technology for ad hoc training, workforce preparation, and safety in the future maritime industry. We expect that lessons learned from our work can help democratize human presence in future subsea engineering work by accommodating human needs and limitations to lower the barrier to entry.
