In this paper, we develop new methods to assess safety risks of an integrated GNSS/LiDAR navigation system for highly automated vehicle (HAV) applications. LiDAR navigation requires feature extraction (FE) and data association (DA). In prior work, we established an FE and DA risk prediction algorithm assuming that the set of extracted features matched the set of mapped landmarks. This paper addresses these limiting assumptions by incorporating a Kalman filter innovation-based test to detect unwanted objects (UOs). UOs include unmapped, moving, and wrongly excluded landmarks. An integrity risk bound is derived to account for the risk of undetected UOs. Direct simulations and preliminary testing quantify the impact of UO monitoring on integrity and continuity in an example GNSS/LiDAR implementation.
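An innovation-based UO test of this kind can be sketched as a chi-square test on the Kalman filter innovation. The function name, the false-alarm allocation, and the example numbers below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.stats import chi2

def uo_innovation_test(gamma, S, p_fa=1e-3):
    """Chi-square test on the Kalman filter innovation vector (sketch).

    gamma : innovation, z - H @ x_predicted
    S     : innovation covariance, H @ P @ H.T + R
    p_fa  : false-alarm probability allocated from the continuity budget

    Returns (uo_detected, statistic, threshold). An unwanted object
    (unmapped, moving, or wrongly excluded landmark) biases gamma,
    inflating the statistic beyond the chi-square threshold.
    """
    q = float(gamma @ np.linalg.solve(S, gamma))
    threshold = float(chi2.ppf(1.0 - p_fa, df=len(gamma)))
    return q > threshold, q, threshold

# Nominal innovation consistent with S: no detection expected
detected, q, T = uo_innovation_test(np.array([0.1, -0.2]), np.diag([0.25, 0.25]))
```

Raising `p_fa` tightens detection but consumes more of the continuity budget, which is the integrity/continuity trade-off the abstract refers to.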
Quantifying Safety of Laser-Based Navigation
In this paper, a new safety risk evaluation method is developed, simulated, and tested for laser-based navigation algorithms using feature extraction (FE) and data association (DA). First, at FE, we establish a probabilistic measure of separation between features to quantify the sensor's ability to distinguish landmarks. Then, an innovation-based DA process is designed to evaluate the impact on integrity risk of incorrect associations, while considering all potential measurement permutations. The algorithm is analyzed and tested in a structured environment.
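One way to make the "all potential measurement permutations" idea concrete is a brute-force association that scores every feature-to-landmark permutation by its total normalized innovation squared. This minimal sketch assumes a common innovation covariance for all pairings and is not the paper's algorithm:

```python
import numpy as np
from itertools import permutations

def associate_min_nis(features, landmarks, S):
    """Innovation-based DA over every permutation (brute force).

    features  : (n, d) measured feature positions
    landmarks : (n, d) predicted landmark positions from the map
    S         : (d, d) innovation covariance, assumed shared here

    Returns the permutation p minimizing the total NIS (feature i is
    associated with landmark p[i]) and the winning cost.
    """
    S_inv = np.linalg.inv(S)
    best_perm, best_cost = None, np.inf
    for perm in permutations(range(len(landmarks))):
        innov = features - landmarks[list(perm)]
        # sum_i innov_i^T S_inv innov_i over all feature-landmark pairs
        cost = float(np.einsum('ij,jk,ik->', innov, S_inv, innov))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```

The factorial cost is what makes exhaustive hypothesis accounting tractable only for small landmark sets, which is why probabilistic separation measures between features matter.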
- Award ID(s):
- 1637899
- PAR ID:
- 10070282
- Date Published:
- Journal Name:
- IEEE Transactions on Aerospace and Electronic Systems
- ISSN:
- 0018-9251
- Page Range / eLocation ID:
- 1 to 1
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- This research establishes new methods to quantify lidar-based navigation safety in highly automated vehicle (HAV) applications. Lidar navigation requires feature extraction (FE) and data association (DA). In prior work, an FE and DA risk prediction process was developed assuming that the set of extracted features matched the set of mapped landmarks. This paper addresses these limiting assumptions by first providing the means to select a subset of feature measurements (to be used in the estimator) while accounting for all existing landmarks in the surroundings. This is achieved by employing a probabilistic lower-bound on the mean innovation vector’s norm. This measure of landmark separation is used in an analytical integrity risk bound that accounts for all possible association hypotheses. Then, a solution separation algorithm is employed to detect unmapped obstacles and wrong extractions. The integrity risk bound is modified to incorporate the risk of not detecting an unwanted obstacle (UO) when one might be present. Covariance analysis, direct simulation, and preliminary testing show that selecting fewer extracted features can significantly reduce integrity risk, but can also decrease landmark redundancy, thereby reducing UO detection capability.
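A solution-separation detector of the kind mentioned can be sketched for a one-dimensional state: each subset solution drops one landmark measurement, and a large gap between a subset estimate and the all-in-view estimate raises an alarm. The threshold multiplier and all numbers below are illustrative, not values from the paper:

```python
import numpy as np

def solution_separation_alarms(z, H, sigma, k=5.0):
    """Solution-separation test, scalar-state sketch.

    z     : (n,) landmark-derived measurements
    H     : (n, 1) observation matrix
    sigma : measurement standard deviation (shared)
    k     : hypothetical detection multiplier on the separation sigma

    Returns a list of booleans, one per excluded measurement.
    """
    W = np.eye(len(z)) / sigma**2
    P_full = np.linalg.inv(H.T @ W @ H)
    x_full = (P_full @ H.T @ W @ z).item()
    alarms = []
    for i in range(len(z)):
        keep = [j for j in range(len(z)) if j != i]
        Hs, zs, Ws = H[keep], z[keep], W[np.ix_(keep, keep)]
        P_sub = np.linalg.inv(Hs.T @ Ws @ Hs)
        x_sub = (P_sub @ Hs.T @ Ws @ zs).item()
        # separation std: sqrt(var_subset - var_full) for WLS subsets
        sep_sigma = max(P_sub.item() - P_full.item(), 0.0) ** 0.5
        alarms.append(abs(x_sub - x_full) > k * max(sep_sigma, 1e-12))
    return alarms
```

Each dropped measurement shrinks the redundancy of the remaining set, which illustrates the trade-off noted above: fewer features mean fewer subset solutions and weaker UO detection.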
- This paper describes the derivation, analysis and implementation of a new data association method that provides a tight bound on the risk of incorrect association for LiDAR feature-based localization. Data association (DA) is the process of assigning currently-sensed features with ones that were previously observed. Most DA methods use a nearest-neighbor criterion based on the normalized innovation squared (NIS). They require complex algorithms to evaluate the risk of incorrect association because sensor state prediction, prior observations, and current measurements are uncertain. In contrast, in this work, we derive a new DA criterion using projections of the extended Kalman filter's innovation vector. The paper shows that innovation projections (IP) are signed quantities that not only capture the impact of an incorrect association in terms of its magnitude, but also of its direction. The IP-based DA criterion also leverages the fact that incorrect associations are known and well-defined fault modes. Thus, as compared to NIS, IPs provide a much tighter bound on the predicted risk of incorrect association. We analyze and evaluate the new IP method using simulated and experimental data for autonomous inertial-aided LiDAR localization in a structured lab environment.
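The contrast between NIS and innovation projections can be sketched in a few lines. The whitened-projection form below is one illustrative reading of the idea, not the paper's exact statistic:

```python
import numpy as np

def nis(gamma, S):
    """Normalized innovation squared: an unsigned magnitude test."""
    return float(gamma @ np.linalg.solve(S, gamma))

def innovation_projection(gamma, fault_dir, S):
    """Signed projection of the innovation onto a candidate
    incorrect-association direction (e.g. the difference between the
    predicted measurements of two nearby landmarks). Unlike NIS, the
    sign says whether the innovation points toward or away from the
    wrong landmark, which is what tightens the risk bound."""
    u = fault_dir / np.linalg.norm(fault_dir)
    return float(u @ np.linalg.solve(S, gamma))

S = np.diag([0.2, 0.2])
d = np.array([1.0, 0.0])  # hypothetical wrong-landmark direction
g_toward, g_away = np.array([0.4, 0.1]), np.array([-0.4, -0.1])
```

Two innovations of equal magnitude but opposite direction produce identical NIS values, while their innovation projections differ in sign, so only the latter exploits the known geometry of the fault mode.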
- Introduction In Midwestern maize (Zea mays L.)-based systems, planting an over-wintering cover crop such as rye (Secale cereale L.) following fall harvests of summer crops maintains continuous soil cover, offering numerous environmental advantages. However, while adoption of cover crops has increased over the past decade, on a landscape scale it remains low. Identifying where agronomic research could be most impactful in increasing adoption is therefore a useful exercise. Decision analysis (DA) is a tool for clarifying decision trade-offs, quantifying risk, and identifying optimal decisions. Several fields regularly utilize DA frameworks, including the military, industrial engineering, business strategy, and economics, but it is not yet widely applied in agriculture. Methods Here we apply DA to a maize-soybean [Glycine max (L.) Merr.] rotation using publicly available weather, management, and economic data from central Iowa. Results In this region, planting a cover crop following maize (preceding soybean) poses less risk to the producer compared to planting following soybean, meaning it may be a more palatable entry point for producers. Furthermore, the risk of reduced maize yields when planting less than 14 days following rye termination substantially contributes to the overall risk cover crops pose to producers, but also has significant potential to be addressed through agronomic research. Discussion In addition to identifying research priorities, DA provided clarity to a complex problem, was performed using publicly available data, and by incorporating risk it better estimated true costs to the producer compared to using input costs alone. We believe DA is a valuable and underutilized tool in agronomy and could aid in increasing adoption of cover crops in the Midwest.
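The core of such a decision-analysis comparison is an expected-cost calculation that folds the probability of a yield penalty into the cost of the decision. All parameter names and numbers below are hypothetical placeholders, not values from the study:

```python
def expected_net_cost(p_penalty, penalty_cost, input_cost, benefit):
    """Risk-adjusted net cost ($/ha) of planting a rye cover crop (sketch).

    p_penalty    : probability of a maize yield drag (e.g. planting
                   less than 14 days after rye termination)
    penalty_cost : cost of that yield drag if it occurs
    input_cost   : seed, planting, and termination costs
    benefit      : expected value of agronomic/environmental benefits
    """
    return input_cost + p_penalty * penalty_cost - benefit

# Same input costs, different risk (hypothetical numbers): the
# risk-adjusted cost separates the two rotation entry points even
# though input costs alone would rate them identically.
before_soybean = expected_net_cost(0.1, 300.0, 90.0, 60.0)  # following maize
before_maize = expected_net_cost(0.4, 300.0, 90.0, 60.0)    # following soybean
```

This is the abstract's point about true costs: incorporating the penalty probability changes the ranking of options that look identical on input costs alone.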
- Data augmentation (DA) is commonly used during model training, as it significantly improves test error and model robustness. DA artificially expands the training set by applying random noise, rotations, crops, or even adversarial perturbations to the input data. Although DA is widely used, its capacity to provably improve robustness is not fully understood. In this work, we analyze the robustness that DA begets by quantifying the margin that DA enforces on empirical risk minimizers. We first focus on linear separators, and then a class of nonlinear models whose labeling is constant within small convex hulls of data points. We present lower bounds on the number of augmented data points required for non-zero margin, and show that commonly used DA techniques may only introduce significant margin after adding exponentially many points to the data set.
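The margin idea can be illustrated with a fixed linear separator: augmenting each point with random perturbations of radius r means any separator consistent with the augmented set must retain a margin of roughly the original margin minus r. The data, separator, and radius below are made up for illustration:

```python
import numpy as np

def margin(w, X, y):
    """Geometric margin of linear separator w on labeled points (X, y)."""
    return float(np.min(y * (X @ w)) / np.linalg.norm(w))

rng = np.random.default_rng(0)
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = np.array([1.0, 0.0])

# DA: 20 noisy copies of each point inside an L-infinity ball of radius r.
# Correctly classifying every augmented copy forces the separator to
# clear the original points by roughly r.
r = 0.5
X_aug = np.vstack([X + rng.uniform(-r, r, size=X.shape) for _ in range(20)])
y_aug = np.tile(y, 20)
m_orig, m_aug = margin(w, X, y), margin(w, X_aug, y_aug)
```

Note that only finitely many random copies are drawn, so the enforced margin approaches the original margin minus r only as the number of augmented points grows, which echoes the paper's lower bounds on how many points DA needs.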