
Title: A Sensor Simulation Framework for Training and Testing Robots and Autonomous Vehicles
Abstract: Computer simulation can be a useful tool when designing robots expected to operate independently in unstructured environments. In this context, one needs to simulate the dynamics of the robot's mechanical system, the environment in which the robot operates, and the sensors that facilitate the robot's perception of the environment. Herein, we focus on the sensing simulation task by presenting a virtual sensing framework built alongside an open-source, multi-physics simulation platform called Chrono. This framework supports camera, lidar, GPS, and IMU simulation. We discuss the modeling of these sensors, as well as the noise and distortion models implemented to increase the realism of the synthetic sensor data. We close with two examples that show the sensing simulation framework at work: the first pertains to a reduced-scale autonomous vehicle; the second to a vehicle driven in a digital replica of a Madison neighborhood.
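As a rough illustration of the kind of noise modeling the abstract refers to (a hypothetical sketch, not Chrono's actual API or parameter values), the following applies a common inertial-sensor corruption model: white Gaussian noise plus a slowly drifting random-walk bias.

```python
import numpy as np

rng = np.random.default_rng(0)

class NoisyImu:
    """Toy IMU accelerometer noise: white Gaussian noise plus a random-walk bias.

    All parameter names and values are illustrative assumptions,
    not the models described in the paper.
    """
    def __init__(self, sigma_noise=0.02, sigma_bias=0.001, dt=0.01):
        self.sigma_noise = sigma_noise   # std. dev. of per-sample white noise [m/s^2]
        self.sigma_bias = sigma_bias     # intensity of the bias random walk
        self.dt = dt                     # sample period [s]
        self.bias = np.zeros(3)

    def measure(self, true_accel):
        # The bias drifts between samples as a discrete random walk.
        self.bias += self.sigma_bias * np.sqrt(self.dt) * rng.standard_normal(3)
        return true_accel + self.bias + self.sigma_noise * rng.standard_normal(3)

imu = NoisyImu()
clean = np.array([0.0, 0.0, 9.81])   # ideal reading for a sensor at rest
print(imu.measure(clean))            # corrupted reading
```

A GPS channel can be corrupted analogously by perturbing the true position with Gaussian noise; the paper itself describes the models the framework actually implements.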
Award ID(s): 1739869
PAR ID: 10276949
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Autonomous Vehicles and Systems
Volume: 1
Issue: 2
ISSN: 2690-702X
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Contemporary approaches to perception, planning, estimation, and control have allowed robots to operate robustly as our remote surrogates in uncertain, unstructured environments. This progress now creates an opportunity for robots to operate not only in isolation, but also with and alongside humans in our complex environments. Realizing this opportunity requires an efficient and flexible medium through which humans can communicate with collaborative robots. Natural language provides one such medium, and through significant progress in statistical methods for natural-language understanding, robots are now able to interpret a diverse array of free-form navigation, manipulation, and mobile-manipulation commands. However, most contemporary approaches require a detailed, prior spatial-semantic map of the robot’s environment that models the space of possible referents of an utterance. Consequently, these methods fail when robots are deployed in new, previously unknown, or partially-observed environments, particularly when mental models of the environment differ between the human operator and the robot. This paper provides a comprehensive description of a novel learning framework that allows field and service robots to interpret and correctly execute natural-language instructions in a priori unknown, unstructured environments. Integral to our approach is its use of language as a “sensor”—inferring spatial, topological, and semantic information implicit in natural-language utterances and then exploiting this information to learn a distribution over a latent environment model. We incorporate this distribution in a probabilistic, language grounding model and infer a distribution over a symbolic representation of the robot’s action space, consistent with the utterance. We use imitation learning to identify a belief-space policy that reasons over the environment and behavior distributions. We evaluate our framework through a variety of different navigation and mobile-manipulation experiments involving an unmanned ground vehicle, a robotic wheelchair, and a mobile manipulator, demonstrating that the algorithm can follow natural-language instructions without prior knowledge of the environment. 
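The "language as a sensor" idea in (1) can be illustrated with a toy Bayesian update: an utterance shifts a belief over candidate environment models. The hypotheses and probabilities below are invented for illustration and are far simpler than the paper's latent environment distribution.

```python
# Prior belief over two hypothetical environment models.
hypotheses = {
    "kitchen_has_box": 0.5,
    "hallway_has_box": 0.5,
}
# Assumed likelihood of hearing "bring the box from the kitchen"
# under each hypothesis (made-up numbers).
likelihood = {"kitchen_has_box": 0.9, "hallway_has_box": 0.1}

# Bayes update: the utterance acts as an observation of the environment.
posterior = {h: p * likelihood[h] for h, p in hypotheses.items()}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}
print(posterior)  # belief mass shifts toward the kitchen hypothesis
```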
  2. This paper presents a class of four-wheel-drive autonomous robots designed to collaboratively traverse terrains with a deformable upper layer, where soil properties result in limited traction and have the potential to cause immobilization. The robots are designed with front and rear axle yaw degrees of freedom, as well as front and rear axle roll degrees of freedom, providing ground compliance and maneuverability on friable terrain. These degrees of freedom, along with four individually driven wheels and an actuated translational degree of freedom inside a mid-frame joint, enable poses and modes of mobility that differ significantly from those of a rigid vehicle. A primary goal of this work is to assess the capacity to use this vehicular form as a testbed that leverages these vehicle dynamics to assess mobility. Using a custom ROS-Gazebo simulation environment, a heterogeneous driving surface is created and used to evaluate this capability. We show that the vehicle can sense imbalanced terrain resistances proprioceptively. Additionally, we show that the rigidity of the vehicle can be controlled through a simple feedback control loop governing the robot's unconstrained axles to maintain a proper heading angle, while still providing an avenue to monitor the dynamics related to full-vehicle immobilization.
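The feedback loop described in (2), holding heading while the correction effort itself reveals a terrain imbalance, can be sketched with a one-degree-of-freedom toy model. The dynamics, drag values, and gain below are invented, not the paper's vehicle model.

```python
def heading_hold_step(heading, target, drag_left, drag_right, kp=2.0, dt=0.05):
    """One step of a toy proportional heading-hold controller.

    Asymmetric terrain drag yaws the vehicle; the controller counters it by
    biasing left/right wheel speeds. The size of the steady-state correction
    is a proprioceptive cue that terrain resistance is imbalanced.
    """
    error = target - heading
    correction = kp * error                       # differential speed command
    yaw_rate = (drag_left - drag_right) + correction
    return heading + yaw_rate * dt, correction

heading, target = 0.0, 0.0
for _ in range(100):
    heading, u = heading_hold_step(heading, target, drag_left=0.3, drag_right=0.1)

# The controller settles where the correction cancels the 0.2 drag imbalance.
print(f"steady-state correction magnitude: {abs(u):.3f}")
```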
  3. Developing whole-body tactile skins for robots remains a challenging task, as existing solutions often prioritize modular, one-size-fits-all designs, which, while versatile, fail to account for the robot’s specific shape and the unique demands of its operational context. In this work, we introduce GenTact Toolbox, a computational pipeline for creating versatile whole-body tactile skins tailored to both robot shape and application domain. Our method includes procedural mesh generation for conforming to a robot’s topology, task-driven simulation to refine sensor distribution, and multi-material 3D printing for shape-agnostic fabrication. We validate our approach by creating and deploying six capacitive sensing skins on a Franka Research 3 robot arm in a human-robot interaction scenario. This work represents a shift from “one-size-fits-all” tactile sensors toward context-driven, highly adaptable designs that can be customized for a wide range of robotic systems and applications. The project website is available at https://hiro-group.ronc.one/gentacttoolbox
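As a toy stand-in for the task-driven simulation step in (3), refining where sensors go based on where contact actually occurs, the sketch below allocates a fixed taxel budget in proportion to simulated contact counts. Region names, counts, and the budget are invented.

```python
# Simulated contact counts per surface region (hypothetical values).
contact_counts = {"forearm": 120, "wrist": 60, "upper_arm": 15, "shoulder": 5}
total_taxels = 40  # assumed sensor budget

# Allocate taxels proportionally to observed contact frequency.
total = sum(contact_counts.values())
allocation = {region: round(total_taxels * n / total)
              for region, n in contact_counts.items()}
print(allocation)  # most taxels land on the forearm, few on the shoulder
```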
  4. This paper presents a theoretical analysis for a self-driving vehicle’s velocity as it navigates through a random environment. We study a stylized environment and vehicle mobility model capturing the essential features of a self-driving vehicle’s behavior, and leverage results from stochastic geometry to characterize the distribution of a typical vehicle’s safe driving velocity, as a function of key network parameters such as the density of objects in the environment and sensing accuracy. We then consider a setting wherein the sensing accuracy is subject to a sensing/communication rate constraint. We propose a procedure that focuses the vehicle’s sensing/communication resources and estimation efforts on the objects that affect its velocity and safety the most so as to optimize its ability to drive faster in uncertain environments. Simulation results show that the proposed methodology achieves considerable gains in the vehicle’s safe driving velocity as compared to uniform rate allocation policies. 
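The relationship in (4) among object density, sensing accuracy, and safe velocity can be illustrated with a crude Monte Carlo model. The exponential nearest-obstacle distance, the braking law v² = 2ad, and the safety margin below are textbook simplifications standing in for the paper's stochastic-geometry analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def safe_velocity_samples(density, sensing_sigma, n=100_000, a_max=5.0):
    """Monte Carlo sketch of a safe-velocity distribution.

    Obstacles ahead follow a 1D Poisson process with the given density
    (per meter); the vehicle estimates the nearest-obstacle distance with
    Gaussian error and drives at the largest speed it can brake from within
    a conservatively shrunk distance. All parameters are assumptions.
    """
    # Nearest-obstacle distance in a 1D Poisson process ~ Exponential(density).
    d_true = rng.exponential(1.0 / density, size=n)
    d_est = np.maximum(d_true + sensing_sigma * rng.standard_normal(n), 0.0)
    d_safe = np.maximum(d_est - 2.0 * sensing_sigma, 0.0)  # safety margin
    return np.sqrt(2.0 * a_max * d_safe)                   # v^2 = 2 a d

for sigma in (0.5, 2.0):
    v = safe_velocity_samples(density=0.05, sensing_sigma=sigma)
    print(f"sigma={sigma}: median safe speed {np.median(v):.1f} m/s")
```

Even in this toy model, worse sensing accuracy (larger sigma) forces a larger margin and hence a lower safe speed, which is the qualitative effect the paper's rate-allocation procedure exploits.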
  5. Training self-driving systems to be robust to the long tail of driving scenarios is a critical problem. Model-based approaches leverage simulation to emulate a wide range of scenarios without putting users at risk in the real world. One promising path to faithful simulation is to train a forward model of the world to predict the future states of both the environment and the ego-vehicle given past states and a sequence of actions. In this paper, we argue that it is beneficial to model the state of the ego-vehicle, which often has simple, predictable, and deterministic behavior, separately from the rest of the environment, which is much more complex and highly multimodal. We propose to model the ego-vehicle using a simple and differentiable kinematic model, while training a stochastic convolutional forward model on raster representations of the state to predict the behavior of the rest of the environment. We explore several configurations of such decoupled models, and evaluate their performance both with Model Predictive Control (MPC) and direct policy learning. We test our methods on the task of highway driving and demonstrate lower crash rates and better stability. The code is available at https://github.com/vladisai/pytorch-PPUU/tree/ICLR2022.
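The "simple and differentiable kinematic model" for the ego-vehicle in (5) is typically something like a kinematic bicycle model. The sketch below shows one such textbook model; the paper's exact formulation may differ, and the stochastic convolutional environment model is not sketched here.

```python
import numpy as np

def kinematic_step(state, action, dt=0.1, wheelbase=2.7):
    """One step of a kinematic bicycle model for the ego-vehicle.

    state = (x, y, heading, speed); action = (acceleration, steering angle).
    A standard textbook model used as a stand-in for the paper's
    differentiable ego dynamics; dt and wheelbase are assumed values.
    """
    x, y, theta, v = state
    accel, steer = action
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += v * np.tan(steer) / wheelbase * dt
    v += accel * dt
    return np.array([x, y, theta, v])

state = np.array([0.0, 0.0, 0.0, 20.0])            # cruising at 20 m/s
for _ in range(10):
    state = kinematic_step(state, action=(0.0, 0.02))  # gentle left steer
print(state)
```

Because every operation here is differentiable, gradients can flow from a planning cost back through the ego trajectory, which is what makes MPC and direct policy learning over such a decoupled model practical.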