
Title: Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
To safely navigate unknown environments, robots must accurately perceive dynamic obstacles. Instead of directly measuring the scene depth with a LiDAR sensor, we explore the use of a much cheaper and higher-resolution sensor: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that the user selects. We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles. We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects. Importantly, we produce theoretical safety guarantees on the probability of detecting an obstacle using random curtains. We combine random curtains with a machine-learning-based model that efficiently forecasts and tracks the motion of the safety envelope. Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles. We evaluate our approach in a simulated urban driving environment and a real-world environment with moving pedestrians, using a light curtain device, and show that we can estimate safety envelopes efficiently and effectively.
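To illustrate the idea of random curtains, here is a minimal Monte Carlo sketch. It is not the paper's method of producing guarantees (the abstract describes theoretical guarantees, not empirical estimates); it merely samples random curtains under a hypothetical smoothness constraint and estimates the probability that one curtain detects a synthetic obstacle. All device parameters, the constraint model, and the obstacle are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RAYS = 64                # camera rays the curtain is defined over
DEPTH_RANGE = (1.0, 20.0)  # metres; hypothetical device limits
MAX_STEP = 0.5             # max depth change between neighbouring rays
                           # (a stand-in for the galvo's velocity constraint)

def sample_random_curtain():
    """Sample one random curtain: a depth per camera ray, with
    neighbouring depths constrained to differ by at most MAX_STEP."""
    depths = np.empty(N_RAYS)
    depths[0] = rng.uniform(*DEPTH_RANGE)
    for i in range(1, N_RAYS):
        lo = max(DEPTH_RANGE[0], depths[i - 1] - MAX_STEP)
        hi = min(DEPTH_RANGE[1], depths[i - 1] + MAX_STEP)
        depths[i] = rng.uniform(lo, hi)
    return depths

def curtain_detects(depths, obstacle_depths, thickness=0.2):
    """A ray 'detects' the obstacle if the curtain passes within
    `thickness` of the obstacle's surface on that ray."""
    return bool(np.any(np.abs(depths - obstacle_depths) < thickness))

# Hypothetical obstacle: a flat surface 8 m away spanning rays 20..40.
obstacle = np.full(N_RAYS, np.inf)
obstacle[20:40] = 8.0

# Monte Carlo estimate of the single-curtain detection probability.
trials = 10_000
hits = sum(curtain_detects(sample_random_curtain(), obstacle)
           for _ in range(trials))
p = hits / trials
print(f"P(detect in one curtain) ~= {p:.3f}")
# Assuming independent curtain draws, k curtains compound the guarantee:
print(f"P(detect within k=10 curtains) ~= {1 - (1 - p)**10:.3f}")
```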
Award ID(s):
1900821 1849154
Publication Date:
NSF-PAR ID:
10295463
Journal Name:
Robotics: Science and Systems
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we develop a novel and safe control design approach that takes demonstrations provided by a human teacher to enable a robot to accomplish complex manipulation scenarios in dynamic environments. First, an overall task is divided into multiple simpler subtasks that are more appropriate for learning and control objectives. Then, by collecting human demonstrations, the subtasks that require robot movement are modeled by probabilistic movement primitives (ProMPs). We also study two strategies for modifying the ProMPs to avoid collisions with environmental obstacles. Finally, we introduce a rule-based control technique by utilizing a finite-state machine along with a unique means of control design for ProMPs. For the ProMP controller, we propose control barrier and Lyapunov functions to guide the system along a trajectory within the distribution defined by a ProMP while guaranteeing that the system state never leaves more than a desired distance from the distribution mean (a sketch of such a barrier-function filter appears after this list). This allows for better performance on nonlinear systems and offers solid stability and known bounds on the system state. A series of simulations and experimental studies demonstrate the efficacy of our approach and show that it can run in real time. Note to Practitioners: This paper is motivated by the need to create a teach-by-demonstration framework that captures the strengths of movement primitives and verifiable, safe control. We provide a framework that learns safe control laws from a probability distribution of robot trajectories through the use of advanced nonlinear control that incorporates safety constraints. Typically, such distributions are stochastic, making it difficult to offer any guarantees on safe operation. Our approach ensures that the distribution of allowed robot trajectories is within an envelope of safety and allows for robust operation of a robot. Furthermore, using our framework, various probability distributions can be combined to represent complex scenarios in the environment. It will benefit practitioners by making it substantially easier to test and deploy accurate, efficient, and safe robots in complex real-world scenarios. The approach is currently limited to scenarios involving static obstacles, with dynamic obstacle avoidance an avenue of future effort.
  2. Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment while being decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using the prediction uncertainty of a deep-learning-based 3D point cloud detector to guide active perception. Given a neural network's uncertainty, we derive an optimization objective to place light curtains using the principle of maximizing information gain. Then, we develop a novel and efficient optimization algorithm to maximize this objective by encoding the physical constraints of the device into a constraint graph and optimizing with dynamic programming (see the sketch after this list). We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy.
  3. A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spot. Instead, it only needs to monitor whether any object comes within its near proximity, which is an easier task than full depth scanning. We introduce a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. Light curtains offer a light-weight, resource-efficient, and programmable approach to proximity awareness for obstacle avoidance and navigation. They also have additional benefits in terms of improving visibility in fog as well as flexibility in handling light fall-off. Our prototype for generating light curtains works by rapidly rotating a line sensor and a line laser in synchrony (the underlying triangulation geometry is sketched after this list). The device is capable of generating light curtains of various shapes with a range of 20–30 m in sunlight (40 m under cloudy skies and 50 m indoors) and adapts dynamically to the demands of the task. We analyze properties of light curtains and various approaches to optimize their thickness as well as power requirements. We showcase the potential of light curtains using a range of real-world scenarios.
  4. Learning a robot motor skill from scratch is impractically slow, so much so that in practice, learning must typically be bootstrapped using human demonstration. However, relying on human demonstration necessarily degrades the autonomy of robots that must learn a wide variety of skills over their operational lifetimes. We propose using kinematic motion planning as a completely autonomous, sample-efficient way to bootstrap motor skill learning for object manipulation (a minimal version of this bootstrapping step is sketched after this list). We demonstrate the use of motion planners to bootstrap motor skills in two complex object manipulation scenarios with different policy representations: opening a drawer with a dynamic movement primitive representation, and closing a microwave door with a deep neural network policy. We also show how our method can bootstrap a motor skill for the challenging dynamic task of learning to hit a ball off a tee, where a kinematic plan based on treating the scene as static is insufficient to solve the task, but sufficient to bootstrap a more dynamic policy. In all three cases, our method is competitive with human-demonstrated initialization and significantly outperforms starting with a random policy. This approach enables robots to efficiently and autonomously learn motor policies for dynamic tasks without human demonstration.
  5. This paper presents methods for improved teleoperation in dynamic environments in which the objects to be manipulated are moving, but vision may not meet size, biocompatibility, or maneuverability requirements. In such situations, the object could be tracked through non-geometric means, such as heat, radioactivity, or other markers. In order to safely explore a region, we use an optical time-of-flight pretouch sensor to detect (and range) target objects prior to contact. Information from these sensors is presented to the user via haptic virtual fixtures (a minimal fixture-force mapping is sketched after this list). This combination of techniques allows the teleoperator to “feel” the object without an actual contact event between the robot and the target object. It thus provides the perceptual benefits of touch interaction to the operator without incurring the negative consequences of the robot contacting unknown geometrical structures; premature contact can lead to damage or unwanted displacement of the target. The authors propose that as the geometry of the scene transitions from completely unknown to partially explored, haptic virtual fixtures can both prevent collisions and guide the user towards areas of interest, thus improving exploration speed. Experimental results show that for situations that are not amenable to vision, haptically presented pretouch sensor information allows operators to more effectively explore moving objects.
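For item 1, a minimal sketch of a control-barrier-function filter that keeps a single-integrator system within a distance `d_max` of a ProMP's mean trajectory. The closed-form single-constraint QP projection stands in for the paper's full controller; the dynamics, the gain `alpha`, and the example mean trajectory are assumptions.

```python
import numpy as np

def cbf_filter(x, t, u_nom, mu, mu_dot, d_max, alpha=5.0):
    """Minimally modify u_nom so the single-integrator state (x_dot = u)
    stays within distance d_max of the ProMP mean mu(t).

    Barrier: h(x, t) = d_max**2 - ||x - mu(t)||**2 >= 0.
    CBF condition: dh/dt + alpha*h >= 0, which for these dynamics reads
        -2 (x - mu)·(u - mu_dot) + alpha*h >= 0,
    a half-space constraint a·u + b >= 0 on the control u."""
    e = x - mu(t)
    h = d_max**2 - e @ e
    a = -2.0 * e
    b = 2.0 * (e @ mu_dot(t)) + alpha * h
    slack = a @ u_nom + b
    if slack >= 0 or not a.any():
        return u_nom  # nominal control already safe (or e == 0)
    # Closed-form solution of min ||u - u_nom||^2 s.t. a·u + b >= 0:
    # project u_nom onto the boundary of the half-space.
    return u_nom - (slack / (a @ a)) * a

# Hypothetical ProMP mean: a straight-line reach in 2D.
mu = lambda t: np.array([t, 0.5 * t])
mu_dot = lambda t: np.array([1.0, 0.5])
u_safe = cbf_filter(x=np.array([0.25, 0.15]), t=0.2,
                    u_nom=np.array([0.0, 10.0]),
                    mu=mu, mu_dot=mu_dot, d_max=0.3)
print(u_safe)  # the aggressive nominal command is bent back toward mu
```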
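For item 2, a sketch of the dynamic program over a constraint graph, under assumptions: nodes are (camera ray, candidate depth) pairs, feasibility between adjacent rays is modeled as a bounded index jump, and the per-node information gain is random here rather than derived from a detector's uncertainty as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RAYS, N_DEPTHS = 32, 50  # one graph node per (camera ray, depth) pair
MAX_JUMP = 3               # max depth-index change between adjacent rays,
                           # a stand-in for the device's physical limits

# Hypothetical per-node information gain (would come from the detector).
gain = rng.random((N_RAYS, N_DEPTHS))

# value[i, j] = best total gain of a feasible curtain over rays 0..i
# that places ray i at depth index j; parent[] records the argmax.
value = np.empty_like(gain)
parent = np.zeros((N_RAYS, N_DEPTHS), dtype=int)
value[0] = gain[0]
for i in range(1, N_RAYS):
    for j in range(N_DEPTHS):
        lo, hi = max(0, j - MAX_JUMP), min(N_DEPTHS, j + MAX_JUMP + 1)
        best = lo + int(np.argmax(value[i - 1, lo:hi]))
        parent[i, j] = best
        value[i, j] = gain[i, j] + value[i - 1, best]

# Backtrack the gain-maximizing curtain from the best final node.
j = int(np.argmax(value[-1]))
curtain = [j]
for i in range(N_RAYS - 1, 0, -1):
    j = parent[i, j]
    curtain.append(j)
curtain.reverse()
print("depth index per ray:", curtain)
```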
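For item 3, a small geometry sketch of the triangulation principle behind light curtains: a curtain point is imaged where the steered laser plane intersects the exposed camera ray, so each desired point determines one camera angle and one laser angle. The baseline value and the planar (2D) setup are simplifying assumptions.

```python
import numpy as np

BASELINE = 0.2  # metres between camera and laser (hypothetical)

def steering_angles(curtain_xz):
    """For each 2D curtain point (x, z) in the camera frame (camera at
    the origin, laser at (BASELINE, 0), z pointing forward), return the
    camera ray angle and laser steering angle that intersect there."""
    x, z = curtain_xz[:, 0], curtain_xz[:, 1]
    cam_angle = np.arctan2(x, z)               # which pixel column to expose
    laser_angle = np.arctan2(x - BASELINE, z)  # where to steer the laser
    return cam_angle, laser_angle

# A flat curtain 5 m ahead, spanning -2..2 m laterally.
pts = np.stack([np.linspace(-2, 2, 9), np.full(9, 5.0)], axis=1)
cam, laser = steering_angles(pts)
print(np.degrees(cam).round(1))
print(np.degrees(laser).round(1))
```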
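For item 4, a minimal sketch of the bootstrapping idea: fit a time-indexed policy to a kinematic planner's waypoints so that learning starts from the plan rather than from a random policy or a human demonstration. The radial-basis ridge regression here is a stand-in for the paper's DMP and deep-network policy representations, and the "plan" is synthetic.

```python
import numpy as np

# Hypothetical planner output: joint-space waypoints for one trajectory.
T, DOF = 50, 7
t = np.linspace(0, 1, T)
waypoints = np.sin(np.outer(t, np.arange(1, DOF + 1)))  # stand-in plan

# Fit a time-indexed policy q(t) with radial-basis features and ridge
# regression; this initialization replaces the human demonstration.
centers = np.linspace(0, 1, 20)
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))
W = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)),
                    Phi.T @ waypoints)

def policy(time):
    """Joint positions commanded at a given (normalized) time."""
    phi = np.exp(-((time - centers) ** 2) / (2 * 0.05**2))
    return phi @ W

# The fitted policy reproduces the plan closely; RL can refine it from here.
print(np.abs(policy(t[T // 2]) - waypoints[T // 2]).max())
```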
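For item 5, a minimal sketch of turning a pretouch range reading into a haptic virtual-fixture force: render a repulsive force once the probe comes within a safety distance of the sensed surface, so the operator "feels" the object before contact. The stiffness, distances, and linear force model are hypothetical.

```python
import numpy as np

def fixture_force(range_m, direction, d_safe=0.05, k=40.0):
    """Map a time-of-flight pretouch range reading (metres) to a haptic
    force. Inside d_safe, push the operator's hand away from the surface
    along `direction` (unit vector from surface toward the probe) with
    stiffness k (N/m); outside d_safe, render no force."""
    if range_m >= d_safe:
        return np.zeros(3)
    return k * (d_safe - range_m) * np.asarray(direction, dtype=float)

print(fixture_force(0.02, direction=[0.0, 0.0, 1.0]))  # ~1.2 N push-back
```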