Title: Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
To safely navigate unknown environments, robots must accurately perceive dynamic obstacles. Instead of directly measuring the scene depth with a LiDAR sensor, we explore the use of a much cheaper and higher-resolution sensor: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that a user selects. We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles. We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects. Importantly, we produce theoretical safety guarantees on the probability of detecting an obstacle using random curtains. We combine random curtains with a machine-learning-based model that forecasts and tracks the motion of the safety envelope efficiently. Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles. We evaluate our approach in a simulated urban driving environment and a real-world environment with moving pedestrians using a light curtain device, and show that we can estimate safety envelopes efficiently and effectively.
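As a rough illustration of the random-curtain idea (the paper derives its guarantees analytically from the device's constraints, not by simulation), the sketch below estimates the probability that randomly placed curtains detect an obstacle via Monte Carlo. The curtain model, depth grid, and detects() test are simplifying assumptions, not the authors' code.

import random

def sample_random_curtain(num_columns, depth_bins):
    # one depth per camera column, sampled i.i.d. (hypothetical curtain model)
    return [random.choice(depth_bins) for _ in range(num_columns)]

def detects(curtain, obstacle_depths, tolerance=0.1):
    # the curtain detects the obstacle if it crosses the obstacle's front
    # surface in at least one column (simplified intersection test)
    return any(abs(c - d) <= tolerance for c, d in zip(curtain, obstacle_depths))

def detection_probability(obstacle_depths, depth_bins, trials=10000):
    # Monte Carlo estimate of P(detect) for random curtains
    hits = sum(
        detects(sample_random_curtain(len(obstacle_depths), depth_bins),
                obstacle_depths)
        for _ in range(trials))
    return hits / trials

depth_bins = [0.5 * i for i in range(1, 41)]  # candidate depths: 0.5 m .. 20 m
obstacle = [7.5] * 64                         # a flat obstacle seen by 64 columns
print(detection_probability(obstacle, depth_bins))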
Award ID(s):
1900821 1849154
NSF-PAR ID:
10295463
Author(s) / Creator(s):
Date Published:
Journal Name:
Robotics: Science and Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. In this paper, we develop a novel and safe control design approach that takes demonstrations provided by a human teacher to enable a robot to accomplish complex manipulation scenarios in dynamic environments. First, an overall task is divided into multiple simpler subtasks that are more appropriate for learning and control objectives. Then, by collecting human demonstrations, the subtasks that require robot movement are modeled by probabilistic movement primitives (ProMPs). We also study two strategies for modifying the ProMPs to avoid collisions with environmental obstacles. Finally, we introduce a rule-based control technique by utilizing a finite-state machine along with a unique means of control design for ProMPs. For the ProMP controller, we propose control barrier and Lyapunov functions to guide the system along a trajectory within the distribution defined by a ProMP while guaranteeing that the system state never leaves more than a desired distance from the distribution mean. This allows for better performance on nonlinear systems and offers solid stability and known bounds on the system state. A series of simulations and experimental studies demonstrate the efficacy of our approach and show that it can run in real time.

     Note to Practitioners: This paper is motivated by the need to create a teach-by-demonstration framework that captures the strengths of movement primitives and verifiable, safe control. We provide a framework that learns safe control laws from a probability distribution of robot trajectories through the use of advanced nonlinear control that incorporates safety constraints. Typically, such distributions are stochastic, making it difficult to offer any guarantees on safe operation. Our approach ensures that the distribution of allowed robot trajectories is within an envelope of safety and allows for robust operation of a robot. Furthermore, using our framework, various probability distributions can be combined to represent complex scenarios in the environment. It will benefit practitioners by making it substantially easier to test and deploy accurate, efficient, and safe robots in complex real-world scenarios. The approach is currently limited to scenarios involving static obstacles, with dynamic obstacle avoidance an avenue of future effort.
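    As a hedged illustration of the barrier-function idea in this abstract, the sketch below keeps a single-integrator state within a tube of radius rho around a ProMP mean trajectory using a closed-form control-barrier-function safety filter. The dynamics, gains, and circular mean trajectory are assumptions for the example, not the paper's controller.

    import numpy as np

    def cbf_filter(x, u_nom, mu, mu_dot, rho=0.2, alpha=5.0):
        # Safety certificate h(x,t) = rho^2 - ||x - mu(t)||^2 >= 0.
        # For single-integrator dynamics x_dot = u, the CBF condition
        # h_dot >= -alpha * h reduces to the half-space constraint a^T u >= b,
        # so the minimally invasive safe control has a closed form.
        e = x - mu
        h = rho**2 - e @ e
        a = -2.0 * e
        b = -alpha * h - 2.0 * (e @ mu_dot)
        if a @ u_nom >= b or a @ a < 1e-12:   # nominal control already safe
            return u_nom
        return u_nom + ((b - a @ u_nom) / (a @ a)) * a  # project onto constraint

    # usage: follow a circular ProMP mean with a noisy nominal controller
    t, dt, x = 0.0, 0.01, np.array([1.0, 0.1])
    for _ in range(500):
        mu = np.array([np.cos(t), np.sin(t)])
        mu_dot = np.array([-np.sin(t), np.cos(t)])
        u = cbf_filter(x, mu_dot + 0.5 * np.random.randn(2), mu, mu_dot)
        x, t = x + u * dt, t + dt
    print(np.linalg.norm(x - mu))   # should remain within ~rho of the mean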
  2. Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment, while being decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using the prediction uncertainty of a deep-learning-based 3D point cloud detector to guide active perception. Given a neural network's uncertainty, we derive an optimization objective to place light curtains using the principle of maximizing information gain. Then, we develop a novel and efficient optimization algorithm to maximize this objective by encoding the physical constraints of the device into a constraint graph and optimizing with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy.
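    A minimal sketch of the dynamic-programming step described above, under assumed discretization: nodes are (camera ray, depth bin) pairs, edges connect depths on adjacent rays whose difference respects a hypothetical physical limit, and the objective sums per-node detector-uncertainty scores. The scores, depth grid, and max_step limit are stand-ins, not the paper's constraint graph.

    import numpy as np

    def place_curtain(uncertainty, depths, max_step=1.0):
        # uncertainty: (num_rays, num_bins) per-location detector scores.
        # Returns the feasible curtain (one depth per ray) maximizing the
        # total score, Viterbi-style: value[j] is the best score of any
        # feasible partial curtain ending at depth bin j.
        num_rays, num_bins = uncertainty.shape
        value = uncertainty[0].copy()
        parent = np.zeros((num_rays, num_bins), dtype=int)
        for r in range(1, num_rays):
            new_value = np.empty(num_bins)
            for j in range(num_bins):
                # predecessors reachable under the physical max-step limit
                ok = np.flatnonzero(np.abs(depths - depths[j]) <= max_step)
                k = ok[np.argmax(value[ok])]
                parent[r, j] = k
                new_value[j] = value[k] + uncertainty[r, j]
            value = new_value
        j = int(np.argmax(value))            # backtrack the optimal curtain
        curtain = [j]
        for r in range(num_rays - 1, 0, -1):
            j = parent[r, j]
            curtain.append(j)
        return depths[np.array(curtain[::-1])]

    depths = np.linspace(1.0, 10.0, 19)          # candidate depth bins (m)
    scores = np.random.rand(32, depths.size)     # stand-in for detector entropy
    print(place_curtain(scores, depths))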
  3. The effectiveness of obstacle-avoidance safety systems such as ADAS has demonstrated the need to optimally integrate and enhance these systems in vehicles to increase the road safety of vehicle occupants and pedestrians. Vehicle-pedestrian clearance can be achieved with a model safety envelope based on distance sensors designed to maintain a threshold distance between the ego vehicle and pedestrians or objects in the traffic environment. More accurate, reliable, and robust distance measurements are possible through multi-sensor fusion. This work presents the structure of a machine-learning-based sensor fusion algorithm that can accurately detect a vehicle safety envelope using an HC-SR04 ultrasonic sensor, an SF11/C microLiDAR sensor, and a 2D RPLiDAR A3M1 sensor. Sensors for the vehicle safety envelope and ADAS were calibrated for optimal performance and integration with versatile vehicle-sensor platforms. The result is a robust distance sensor fusion algorithm that correctly senses obstacles from 0.05 m to 0.5 m with an average accuracy of 94.33% when trained as individual networks per distance, and with an average accuracy of 96.95% when trained as a single network over all distances. Results were measured by the precision and accuracy of the sensors' outputs at the time the safety response activated once a potential collision was detected. These results suggest the platform has the potential to identify collision scenarios, warn the driver, and take corrective action based on the coordinate at which the risk was identified.
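    A minimal sketch of the fusion idea, on synthetic data: a small neural network maps the three raw readings (ultrasonic, microLiDAR, 2D LiDAR) to a fused distance, and a threshold on that distance defines the safety envelope. The noise model, network size, and threshold are assumptions, not the paper's trained system.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    true_dist = rng.uniform(0.05, 0.5, size=(2000, 1))        # 0.05 m .. 0.5 m
    readings = np.hstack([
        true_dist + rng.normal(0.0, 0.02, true_dist.shape),   # HC-SR04 ultrasonic
        true_dist + rng.normal(0.0, 0.005, true_dist.shape),  # SF11/C microLiDAR
        true_dist + rng.normal(0.0, 0.01, true_dist.shape),   # RPLiDAR A3M1 slice
    ])

    fusion = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000)
    fusion.fit(readings, true_dist.ravel())

    def envelope_breached(sample, threshold=0.3):
        # trigger the safety response if the fused distance is under threshold
        return fusion.predict(sample.reshape(1, -1))[0] < threshold

    print(envelope_breached(np.array([0.21, 0.20, 0.22])))   # True: inside 0.3 m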
  4. We provide methods which recover planar scene geometry by utilizing the transient histograms captured by a class of close-range time-of-flight (ToF) distance sensors. A transient histogram is a one-dimensional temporal waveform which encodes the arrival time of photons incident on the ToF sensor. Typically, a sensor processes the transient histogram using a proprietary algorithm to produce distance estimates, which are commonly used in several robotics applications. Our methods utilize the transient histogram directly to enable recovery of planar geometry more accurately than is possible using only proprietary distance estimates, and consistent recovery of the albedo of the planar surface, which is not possible with proprietary distance estimates alone. This is accomplished via a differentiable rendering pipeline, which simulates the transient imaging process, allowing direct optimization of scene geometry to match observations. To validate our methods, we capture 3,800 measurements of eight planar surfaces from a wide range of viewpoints, and show that our method outperforms the proprietary-distance-estimate baseline by an order of magnitude in most scenarios. We demonstrate a simple robotics application which uses our method to sense the distance to and slope of a planar surface from a sensor mounted on the end effector of a robot arm.
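    A minimal sketch of the differentiable-rendering idea: simulate a transient histogram from plane parameters (depth, albedo), compare it to an observed histogram, and follow the gradient. The single-bounce Gaussian pulse model, bin width, and inverse-square falloff are simplifying assumptions, not the paper's full image-formation model.

    import torch

    bins = torch.arange(64) * 0.01    # histogram bin centers in metres (assumed)

    def render_histogram(depth, albedo, pulse_sigma=0.03):
        # single-bounce model: a Gaussian return centred at the plane depth,
        # scaled by albedo and inverse-square falloff
        return albedo / depth**2 * torch.exp(
            -(bins - depth)**2 / (2 * pulse_sigma**2))

    # synthetic "observation" from a ground-truth plane at 0.35 m, albedo 0.8
    observed = render_histogram(torch.tensor(0.35), torch.tensor(0.8)).detach()

    depth = torch.tensor(0.30, requires_grad=True)    # initial guess
    albedo = torch.tensor(0.5, requires_grad=True)
    opt = torch.optim.Adam([depth, albedo], lr=0.01)
    for _ in range(500):
        opt.zero_grad()
        loss = torch.sum((render_histogram(depth, albedo) - observed)**2)
        loss.backward()
        opt.step()
    print(depth.item(), albedo.item())   # should move toward ~0.35 and ~0.8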
  5. Robots are active agents that operate in dynamic scenarios with noisy sensors. Predictions based on these noisy sensor measurements often lead to errors and can be unreliable. To this end, roboticists have used fusion methods over multiple observations. Lately, neural networks have dominated the accuracy charts for perception-driven predictions for robotic decision-making, yet they often lack uncertainty metrics associated with their predictions. Here, we present a mathematical formulation to obtain the heteroscedastic aleatoric uncertainty of any arbitrary distribution without prior knowledge about the data. The approach makes no prior assumptions about the prediction labels and is agnostic to network architecture. Furthermore, our class of networks, Ajna, adds minimal computation and requires only a small change to the loss function while training neural networks to obtain uncertainty of predictions, enabling real-time operation even on resource-constrained robots. In addition, we study the informational cues present in the uncertainties of predicted values and their utility in the unification of common robotics problems. In particular, we present an approach to dodge dynamic obstacles, navigate through a cluttered scene, fly through unknown gaps, and segment an object pile, without computing depth but rather using the uncertainties of optical flow obtained from a monocular camera with onboard sensing and computation. We successfully evaluate and demonstrate the proposed Ajna network on the four aforementioned common robotics and computer vision tasks, showing results comparable to methods that use depth directly. Our work demonstrates a generalized deep uncertainty method and its utilization in robotics applications.
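    A minimal sketch of a heteroscedastic aleatoric-uncertainty loss of the kind the abstract describes: the network predicts a value and a log-variance per output, and the loss is the Gaussian negative log-likelihood. This is the standard formulation, shown for illustration; Ajna's exact loss is given in the paper.

    import torch

    def heteroscedastic_nll(pred, log_var, target):
        # Gaussian NLL with s = log sigma^2 predicted per output:
        # 0.5 * exp(-s) * (y - mu)^2 + 0.5 * s.  The network can raise s
        # wherever its prediction is unreliable, paying only a log penalty.
        return (0.5 * torch.exp(-log_var) * (target - pred)**2
                + 0.5 * log_var).mean()

    # usage: a head with 2*D outputs split into prediction and log-variance
    head = torch.nn.Linear(128, 2 * 2)       # e.g. 2-D optical flow per sample
    features, target = torch.randn(8, 128), torch.randn(8, 2)
    out = head(features)
    pred, log_var = out[:, :2], out[:, 2:]
    loss = heteroscedastic_nll(pred, log_var, target)
    loss.backward()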