3D surface reconstruction usually begins with a point cloud and aims to build a representation of the object that produced it. There are several algorithms for this problem, each relying on different priors about the point cloud, such as the type of object it represents or the method by which it was obtained. In this work, we focus on an algorithm called Non-Convex Hull (NCH), which reconstructs surfaces through a concept similar to the Medial Axis Transform. A new algorithm called Shrinking Planes is proposed to compute the NCH, based on the Shrinking Ball method with a few improvements. We prove that the new method can approximate surfaces to arbitrarily small error, and we evaluate its performance on the surface reconstruction task. The new method matches the reconstruction quality of the Naïve Non-Convex Hull method while achieving a large performance improvement.
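For context, the Shrinking Ball method that Shrinking Planes builds on fits, at each oriented sample point, the largest ball tangent to the surface there that contains no other samples; the converged ball approximates a medial ball. Below is a minimal sketch of that generic shrinking-ball step only, not the NCH or Shrinking Planes algorithm from the abstract; the point cloud, outward unit normals, initial radius, and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def tangent_ball_radius(p, n, q):
    """Radius of the ball tangent to p (unit outward normal n) whose boundary passes through q."""
    d = p - q
    denom = 2.0 * np.dot(n, d)
    return np.dot(d, d) / denom if abs(denom) > 1e-12 else np.inf

def shrinking_ball_step(points, normals, i, r_init=1.0, max_iter=30):
    """Shrink a ball tangent at points[i] until no other sample lies strictly inside it."""
    tree = cKDTree(points)
    p, n = points[i], normals[i]
    r = r_init
    for _ in range(max_iter):
        center = p - r * n                        # ball center along the inward normal
        dists, idxs = tree.query(center, k=2)     # nearest samples to the current center
        candidates = [(d, j) for d, j in zip(dists, idxs) if j != i]
        d_near, j = candidates[0]
        if d_near >= r - 1e-12:                   # no other sample inside the ball: converged
            break
        r_new = tangent_ball_radius(p, n, points[j])
        if not np.isfinite(r_new) or r_new <= 0 or abs(r_new - r) < 1e-9:
            break
        r = r_new
    return r, p - r * n                           # approximate medial-ball radius and center
```

Running this step over every sample yields approximate medial-ball radii and centers, which is the kind of primitive the abstract describes improving upon.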
Single Observation Adaptive Search for Continuous Simulation
Optimizing the performance of complex systems modeled by stochastic computer simulations is a challenging task, partly because of the lack of structural properties (e.g., convexity). This challenge is magnified by the presence of random error whereby an adaptive algorithm searching for better designs can at times mistakenly accept an inferior design. In contrast to performing multiple simulations at a design point to estimate the performance of the design, we propose a framework for adaptive search algorithms that executes a single simulation for each design point encountered. Here the estimation errors are reduced by averaging the performances from previously evaluated designs drawn from a shrinking ball around the current design point. We show under mild regularity conditions for continuous design spaces that the accumulated errors, although dependent, form a martingale process, and hence, by the strong law of large numbers for martingales, the average errors converge to zero as the algorithm proceeds. This class of algorithms is shown to converge to a global optimum with probability one. By employing a shrinking ball approach with single observations, an adaptive search algorithm can simultaneously improve the estimates of performance while exploring new and potentially better design points. Numerical experiments offer empirical support for this paradigm of single observation simulation optimization.
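To make the single-observation idea concrete, here is a minimal, hypothetical sketch of shrinking-ball averaging inside an accept-if-better adaptive search: every design encountered is simulated exactly once, and a design's performance estimate is the average of the single observations at previously visited designs within a ball whose radius shrinks with the iteration count. The objective, noise model, candidate sampler, acceptance rule, and radius schedule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, lo, hi = 2, 0.0, 1.0

def noisy_sim(x):
    """One stochastic simulation run at design x (illustrative objective plus Gaussian noise)."""
    return np.sum((x - 0.3) ** 2) + rng.normal(scale=0.1)

def ball_average(x, designs, observations, radius):
    """Average the single observations taken at visited designs within `radius` of x."""
    dists = np.linalg.norm(designs - x, axis=1)
    return observations[dists <= radius].mean()

# Accept-if-better adaptive search in which every design is simulated exactly once.
x_best = rng.uniform(lo, hi, dim)
designs = np.array([x_best])
observations = np.array([noisy_sim(x_best)])

for k in range(1, 2001):
    radius = 1.0 / np.sqrt(k)                     # illustrative shrinking-radius schedule
    cand = np.clip(x_best + rng.normal(scale=0.1, size=dim), lo, hi)
    designs = np.vstack([designs, cand])
    observations = np.append(observations, noisy_sim(cand))  # single simulation at the new design
    cand_est = ball_average(cand, designs, observations, radius)
    best_est = ball_average(x_best, designs, observations, radius)
    if cand_est < best_est:                       # compare ball-averaged estimates, not raw outputs
        x_best = cand

print("estimated optimizer:", x_best)
```

The property the paper exploits is that the accumulated errors of these ball averages form a martingale, so the averaged errors converge to zero even though each design is simulated only once.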
- Award ID(s):
- 1632793
- PAR ID:
- 10108680
- Date Published:
- Journal Name:
- Operations Research
- Volume:
- 66
- Issue:
- 6
- ISSN:
- 0030-364X
- Page Range / eLocation ID:
- 1713-1727
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
We focus on simulation optimization algorithms designed to accommodate noisy black-box functions on mixed integer/continuous domains. Several approaches are used to account for noise, including aggregating multiple function replications at each sample point and a newer method of aggregating single replications within a “shrinking ball.” We examine a range of algorithms, including simulated annealing, interacting-particle methods, covariance matrix adaptation evolution strategy, and particle swarm optimization, to compare their effectiveness in generating optimal solutions using averaged function replications versus a shrinking-ball approximation. Six test functions are examined in 10 and 20 dimensions, with integer restrictions enforced on 0%, 50%, and 100% of the dimensions, and with noise ranging from 10% to 20% of function output. This study demonstrates the relative effectiveness of the shrinking-ball approach, showing that its use typically enhances solver performance for the tested optimization methods.
-
Quantum circuit simulations are critical for evaluating quantum algorithms and machines. However, the number of state amplitudes required for full simulation increases exponentially with the number of qubits. In this study, we leverage data compression to reduce memory requirements, trading computation time and fidelity for memory space. Specifically, we develop a hybrid solution by combining lossless compression with our tailored lossy compression method, using adaptive error bounds at each timestep of the simulation. Our approach optimizes for compression speed and ensures that errors due to lossy compression are uncorrelated, an important property for comparing simulation output with physical machines. Experiments show that our approach reduces the memory requirement of simulating the 61-qubit Grover's search algorithm from 32 exabytes to 768 terabytes on Argonne's Theta supercomputer using 4,096 nodes. The results suggest that our techniques can increase the simulation size by 2 to 16 qubits for general quantum circuits.
-
This paper considers the formation flying of multiple quadrotors with a desired orientation and a leader. In the formation flying control, the desired formation is assumed to be time-varying, and both system uncertainty and information uncertainty are present. To deal with these different uncertainties, a backstepping-based approach is proposed for the controller design, in which the different types of uncertainty are handled in different steps. By integrating adaptive/robust control results with Laplacian algebraic theory, distributed robust adaptive control laws are proposed such that the formation errors converge exponentially to zero and the attitude of each quadrotor converges exponentially to its desired value. Simulation results show the effectiveness of the proposed algorithms.
-
Adaptive random search approaches have been shown to be effective for global optimization problems where, under certain conditions, the expected performance time increases only linearly with dimension. However, previous analyses assume that the objective function can be observed directly. We consider the case where the objective function must be estimated, often through noisy observations, as in simulation. We present a finite-time analysis of algorithm performance that combines estimation with a sampling distribution. We introduce a framework called Hesitant Adaptive Search with Estimation and derive an upper bound on the number of function evaluations that is cubic in dimension, under certain conditions. We then extend the framework to Quantile Adaptive Search with Estimation, which concentrates sampling on a series of nested quantile level sets. The analyses suggest that, as an adaptive search algorithm progresses, computational effort is better spent on sampling improving points than on refining estimates of objective function values.
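As a rough illustration of the nested-level-set idea behind the framework just described, here is a toy sketch that is only loosely in the spirit of Quantile Adaptive Search with Estimation, not the HAS-E/QAS-E framework analyzed in the paper: candidates are screened against a threshold that tracks a quantile of the estimated values of accepted points, so sampling concentrates on progressively smaller estimated level sets, and the threshold is occasionally held fixed to mimic hesitation. The objective, noise model, batch sampling, quantile rule, and hesitation probability are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, lo, hi = 2, -2.0, 2.0

def f(x):
    return float(np.sum(x ** 2))                  # illustrative smooth objective

def estimate(x, reps=3, sigma=0.05):
    """Estimate f(x) from a few noisy evaluations, as when the objective must be simulated."""
    return float(np.mean([f(x) + rng.normal(scale=sigma) for _ in range(reps)]))

def quantile_search_with_estimation(iters=100, batch=64, q=0.5, hesitation=0.2):
    """Keep points whose estimates fall below the current threshold, then lower the threshold
    to the q-quantile of those accepted values, producing nested (estimated) level sets."""
    threshold = np.inf
    best_x, best_val = None, np.inf
    for _ in range(iters):
        xs = rng.uniform(lo, hi, size=(batch, dim))
        vals = np.array([estimate(x) for x in xs])
        accepted = vals[vals <= threshold]
        if accepted.size:
            i = int(np.argmin(vals))
            if vals[i] < best_val:
                best_x, best_val = xs[i], vals[i]
            if rng.random() > hesitation:         # with probability `hesitation`, keep the old level set
                threshold = np.quantile(accepted, q)
    return best_x, best_val

print(quantile_search_with_estimation())
```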