- Award ID(s):
- 1837779
- PAR ID:
- 10296693
- Date Published:
- Journal Name:
- International Conference on Automated Planning and Scheduling
- ISSN:
- 2334-0835
- Page Range / eLocation ID:
- 415-425
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Constraint solvers play a significant role in the analysis, synthesis, and formal verification of complex cyber-physical systems. In this article, we study the problem of designing a scalable constraint solver for an important class of constraints named polynomial constraint inequalities (also known as nonlinear real arithmetic theory). We introduce a solver named PolyARBerNN that uses convex polynomials as abstractions for highly nonlinear polynomials. Such abstractions were previously shown to be powerful for pruning the search space and restricting the use of sound and complete solvers to small search spaces. Compared with previous efforts on using convex abstractions, PolyARBerNN provides three main contributions, namely (i) a neural-network-guided abstraction refinement procedure that helps select the right abstraction out of a set of pre-defined abstractions, (ii) a Bernstein polynomial-based search space pruning mechanism that computes tight estimates of the polynomial's maximum and minimum values, which can be used as an additional abstraction of the polynomials, and (iii) an optimizer that transforms polynomial objective functions into polynomial constraints (on the gradient of the objective function) whose solutions are guaranteed to be close to the global optima. Together, these enhancements allow PolyARBerNN to solve complex instances and scale more favorably than state-of-the-art nonlinear real arithmetic solvers while maintaining the soundness and completeness of the resulting solver. In particular, our test benches show that PolyARBerNN achieved a 100X speedup compared with Z3 8.9, Yices 2.6, and PVS (a solver that uses Bernstein expansion to solve multivariate polynomial constraints) on a variety of standard test benches. Finally, we implemented an optimizer called PolyAROpt that uses PolyARBerNN to solve constrained polynomial optimization problems. Numerical results show that PolyAROpt solves high-dimensional and high-order polynomial optimization problems faster than the built-in optimizer in the Z3 8.9 solver.
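The Bernstein-based pruning in contribution (ii) rests on the classical range-enclosure property of Bernstein coefficients. The sketch below is a univariate, unit-interval toy version in Python (the helper names and example polynomial are ours, not taken from the paper); it shows how converting a polynomial to Bernstein form immediately yields sound bounds on its range, which is the kind of cheap estimate a solver can use to prune regions.

```python
from math import comb

def bernstein_coeffs(a):
    """Convert power-basis coefficients a[0..n] of p(x) = sum a[i] x^i
    into Bernstein coefficients b[0..n] over the unit interval [0, 1]."""
    n = len(a) - 1
    return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
            for k in range(n + 1)]

def range_enclosure(a):
    """Range-enclosure property: for all x in [0, 1], min(b) <= p(x) <= max(b).
    These bounds are sound (though not always tight), so a region where the
    upper bound is negative can never satisfy p(x) >= 0 and may be pruned."""
    b = bernstein_coeffs(a)
    return min(b), max(b)

# Example: p(x) = 4x^3 - 6x^2 + 2x, which has roots at 0, 0.5, and 1.
lo, hi = range_enclosure([0.0, 2.0, -6.0, 4.0])
print(lo, hi)  # (-2/3, 2/3): a sound enclosure of p over [0, 1]
```

In practice the multivariate version of this enclosure (and subdivision to tighten it) is what makes the pruning useful, but the one-dimensional case already shows the mechanism.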
-
The semantics of temporal hierarchical planners are limited. In hierarchical paradigms, temporal reasoning has largely focused on durative constraints of primitive actions, which may be added directly or appear post-expansion. We propose extending temporal reasoning to composite actions, specifically within decompositional partial order causal linked planning. We outline how a general-purpose hierarchical planner can approach temporal reasoning expressed in a STRIPS-like formalism. We build upon existing temporal and hierarchical semantics, and sketch two novel approaches: time-frame planning and decompositional time-frame planning.
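As a rough illustration of the distinction the abstract draws, the sketch below (our own hypothetical Python data structures, not the paper's formalism) contrasts a durative primitive action with a composite action whose temporal extent comes from the time frame of its decomposition.

```python
from dataclasses import dataclass, field

@dataclass
class PrimitiveAction:
    """A STRIPS-like primitive action carrying a durative constraint."""
    name: str
    preconditions: frozenset
    effects: frozenset
    duration: float

@dataclass
class CompositeAction:
    """A composite action whose temporal extent is the time frame
    covering its decomposition into sub-actions."""
    name: str
    sub_actions: list = field(default_factory=list)

    def time_frame(self):
        # Hypothetical conservative estimate: treat the decomposition as
        # sequential, so the frame is the sum of sub-action durations.
        return sum(a.duration for a in self.sub_actions)

move = PrimitiveAction("move", frozenset({"at-a"}), frozenset({"at-b"}), 3.0)
load = PrimitiveAction("load", frozenset({"at-b"}), frozenset({"loaded"}), 1.0)
deliver = CompositeAction("deliver", [move, load])
print(deliver.time_frame())  # 4.0
```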
-
Formal certification of Neural Networks (NNs) is crucial for ensuring their safety, fairness, and robustness. Unfortunately, on the one hand, sound and complete certification algorithms of ReLU-based NNs do not scale to large-scale NNs. On the other hand, incomplete certification algorithms are easier to compute, but they result in loose bounds that deteriorate with the depth of NN, which diminishes their effectiveness. In this paper, we ask the following question; can we replace the ReLU activation function with one that opens the door to incomplete certification algorithms that are easy to compute but can produce tight bounds on the NN's outputs? We introduce DeepBern-Nets, a class of NNs with activation functions based on Bernstein polynomials instead of the commonly used ReLU activation. Bernstein polynomials are smooth and differentiable functions with desirable properties such as the so-called range enclosure and subdivision properties. We design a novel Interval Bound Propagation (IBP) algorithm, called Bern-IBP, to efficiently compute tight bounds on DeepBern-Nets outputs. Our approach leverages the properties of Bernstein polynomials to improve the tractability of neural network certification tasks while maintaining the accuracy of the trained networks. We conduct experiments in adversarial robustness and reachability analysis settings to assess the effectiveness of the approach. Our proposed framework achieves high certified accuracy for adversarially-trained NNs, which is often a challenging task for certifiers of ReLU-based NNs. This work establishes Bernstein polynomial activation as a promising alternative for improving NN certification tasks across various NNs applications.
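To make the interval-bound idea concrete, here is a minimal sketch in Python/NumPy of interval propagation through an affine layer followed by a very coarse bound on a Bernstein-polynomial activation via the range-enclosure property. This is not the paper's Bern-IBP algorithm (which tightens bounds further, e.g. by restricting the enclosure to the pre-activation interval); the layer sizes, coefficient shapes, and helper names are our own assumptions.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Standard interval bound propagation through y = Wx + b: split W into
    positive and negative parts so each output bound is sound."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def bern_activation_bounds(coeffs):
    """Coarse per-neuron bounds for a Bernstein-polynomial activation defined
    over its fixed domain: by range enclosure, the output lies between the
    smallest and largest Bernstein coefficients, regardless of the
    pre-activation interval (as long as it stays inside that domain)."""
    return coeffs.min(axis=-1), coeffs.max(axis=-1)

# Toy layer: 2 inputs -> 3 neurons, each with a degree-4 Bernstein activation.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
coeffs = rng.normal(size=(3, 5))                      # hypothetical learned coefficients
l, u = np.array([-0.1, 0.2]), np.array([0.1, 0.4])    # input box (e.g. an L-inf ball)

pre_l, pre_u = ibp_affine(l, u, W, b)
act_l, act_u = bern_activation_bounds(coeffs)
print(pre_l, pre_u, act_l, act_u)
```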
-
Tedesco, Marco; Lai, Ching-Yao; Brinkerhoff, Douglas; Stearns, Leigh (Ed.) Emulators of ice flow models have shown promise for speeding up simulations of glaciers and ice sheets. Existing ice flow emulators have relied primarily on convolutional neural networks (CNNs), which assume that model inputs and outputs are discretized on a uniform computational grid. However, many existing finite element-based ice sheet models, such as the Ice-sheet and Sea-level System Model (ISSM), benefit from their ability to use unstructured computational meshes. Unstructured meshes allow for greater flexibility and computational efficiency in many modeling scenarios. In this work, we present an emulator of a higher-order, finite element ice flow model based on a graph neural network (GNN) architecture. In this architecture, an unstructured finite element mesh is represented as a graph, with inputs and outputs of the ice flow model represented as variables on graph nodes and edges. An advantage of this approach is that the ice flow emulator can interface directly with a standard finite element-based ice sheet model by mapping between the finite element mesh and a graph suitable for the GNN emulator. We test the ability of the GNN to predict velocity fields on complex mountain glacier geometries and show how the emulated velocity can be used to solve for mass continuity using a standard finite element approach.
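The mesh-to-graph mapping described here is the key interface between the finite element model and the GNN. The sketch below (a generic conversion in Python/NumPy; the feature choices and tiny example mesh are our own, not from the paper) shows one common way to build it: mesh vertices become graph nodes and each triangle contributes its three sides as bidirectional edges, with relative positions as edge features.

```python
import numpy as np

def mesh_to_graph(triangles, node_features):
    """Turn an unstructured triangular finite-element mesh into a graph:
    mesh vertices -> graph nodes, triangle sides -> bidirectional edges."""
    edges = set()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((a, b))
            edges.add((b, a))
    edge_index = np.array(sorted(edges)).T                 # shape (2, num_edges)
    src, dst = edge_index
    # Relative node positions as edge features, a common message-passing choice.
    edge_attr = node_features[dst, :2] - node_features[src, :2]
    return edge_index, edge_attr

# Hypothetical tiny mesh: 4 vertices, 2 triangles; per-node inputs could be
# (x, y, ice thickness, basal friction) in an ice-flow emulation setting.
tri = np.array([[0, 1, 2], [1, 3, 2]])
feats = np.array([[0.0, 0.0, 800.0, 0.1],
                  [1.0, 0.0, 750.0, 0.2],
                  [0.0, 1.0, 900.0, 0.1],
                  [1.0, 1.0, 700.0, 0.3]])
edge_index, edge_attr = mesh_to_graph(tri, feats)
print(edge_index.shape, edge_attr.shape)  # (2, 10) and (10, 2)
```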
-
Using sampling to estimate the connectivity of high-dimensional configuration spaces has been the theoretical underpinning for effective sampling-based motion planners. Typical strategies either build a roadmap or a tree as the underlying search structure that connects sampled configurations, with a focus on guaranteeing completeness and optimality as the number of samples tends to infinity. Roadmap-based planners allow preprocessing the space and can solve multiple kinematic motion planning problems, but need a steering function to connect pairs of states. Such steering functions are difficult to define for kinodynamic systems, which limits the applicability of roadmaps to motion planning problems with dynamical systems. Recent advances in the analysis of single-query tree-based planners have shown that forward search trees based on random propagations are asymptotically optimal. The current work leverages these recent results and proposes a multi-query framework for kinodynamic planning. Bundles of kinodynamic edges can be sampled to cover the state space before the query arrives. Then, given a motion planning query, the connectivity of the state space reachable from the start can be recovered from a forward search tree reasoning about a local neighborhood of the edge bundle from each tree node. The work demonstrates theoretically that considering any constant radial neighborhood during this process is sufficient to guarantee asymptotic optimality. Experimental validation in five- and twelve-dimensional simulated systems also highlights the ability of the proposed edge bundles to express high-quality kinodynamic solutions. Our approach consistently finds higher quality solutions compared to SST and RRT, often with faster initial solution times. The strategy of sampling kinodynamic edges is demonstrated to be a promising new paradigm.
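The offline/online split described here (sample edge bundles first, then reuse edges near each tree node at query time) can be illustrated with a highly simplified sketch. The Python below is not the paper's algorithm: the double-integrator dynamics, radius, and helper names are our own stand-ins, and it only shows how precomputed edges are retrieved within a constant radial neighborhood of a tree node.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(state, control, dt=0.2):
    """Toy kinodynamic system (2D double integrator): state = (x, y, vx, vy),
    control = acceleration. Stands in for an arbitrary black-box propagator."""
    x, v = state[:2], state[2:]
    v_new = v + control * dt
    return np.concatenate([x + v_new * dt, v_new])

def sample_edge_bundle(n_edges, bounds=4.0):
    """Offline phase: sample kinodynamic edges (random propagations from
    random states) to cover the state space before any query arrives."""
    edges = []
    for _ in range(n_edges):
        start = rng.uniform(-bounds, bounds, size=4)
        control = rng.uniform(-1.0, 1.0, size=2)
        edges.append((start, control, propagate(start, control)))
    return edges

def nearby_edges(bundle, node, radius=0.5):
    """Query phase: from a forward-search tree node, consider only bundle
    edges whose start states lie within a constant radial neighborhood."""
    return [e for e in bundle if np.linalg.norm(e[0] - node) < radius]

bundle = sample_edge_bundle(5000)
start = np.zeros(4)
candidates = nearby_edges(bundle, start)
print(len(candidates), "precomputed edges reusable from the start state")
```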