Title: Scalable Adaptive Batch Sampling in Simulation-Based Design With Heteroscedastic Noise
Abstract: In this study, we propose a scalable batch sampling scheme for optimization of simulation models with spatially varying noise. The proposed scheme has two primary advantages: (i) reduced simulation cost by recommending batches of samples at carefully selected spatial locations and (ii) improved scalability by actively considering replication at previously observed sampling locations. Replication improves the scalability of the proposed sampling scheme, as the computational cost of adaptive sampling schemes grows cubically with the number of unique sampling locations. Our main consideration for the allocation of computational resources is the minimization of the uncertainty in the optimal design. We analytically derive the relationship between the “exploration versus replication decision” and the posterior variance of the spatial random process used to approximate the simulation model’s mean response. Leveraging this reformulation in a novel objective-driven adaptive sampling scheme, we show that we can identify batches of samples that minimize the prediction uncertainty only in the regions of the design space expected to contain the global optimum. Finally, the proposed sampling scheme adopts a modified preposterior analysis that uses a zeroth-order interpolation of the spatially varying simulation noise to identify sampling batches. Through the optimization of three numerical test functions and one engineering problem, we demonstrate (i) the efficacy of the proposed sampling scheme in dealing with a wide array of stochastic functions, (ii) the superior performance of the proposed method on all test functions compared to existing methods, (iii) the empirical validity of using a zeroth-order approximation for the allocation of sampling batches, and (iv) its applicability to molecular dynamics simulations by optimizing the performance of an organic photovoltaic cell as a function of its processing settings.
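The scalability argument above can be made concrete with a small sketch (not the paper's derivation; the kernel, length scale, and noise values below are invented for illustration). Averaging n_i replicates at a location reduces the effective noise there to r(x_i)/n_i, so the posterior variance of the mean-response Gaussian process shrinks while the kernel matrix stays sized by the number of unique locations:

```python
import numpy as np

def gp_posterior_var(X, n_reps, noise_var, x_star, ls=0.5, sig2=1.0):
    """Posterior variance of a GP mean-response model where replicates at a
    location are averaged: effective noise at location i is r(x_i)/n_i, so
    the kernel matrix size tracks *unique* sampling locations only."""
    K = sig2 * np.exp(-0.5 * ((X[:, None] - X[None, :]) / ls) ** 2)
    K += np.diag(noise_var / n_reps)          # averaged-replicate noise
    k_star = sig2 * np.exp(-0.5 * ((X - x_star) / ls) ** 2)
    return sig2 - k_star @ np.linalg.solve(K, k_star)

X = np.array([0.0, 0.5, 1.0])      # unique sampling locations
r = np.array([0.4, 0.4, 0.4])      # heteroscedastic noise variances r(x_i)
v_before = gp_posterior_var(X, np.array([1, 1, 1]), r, 0.5)
v_after  = gp_posterior_var(X, np.array([1, 5, 1]), r, 0.5)  # replicate at x=0.5
```

Note that the linear solve stays 3-by-3 in both cases: replication buys lower prediction uncertainty at the believed optimum without growing the cubic-cost kernel matrix.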
Award ID(s):
1753770 1662509 1662435
NSF-PAR ID:
10279167
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Mechanical Design
Volume:
143
Issue:
3
ISSN:
1050-0472
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract

    Objective-driven adaptive sampling is a widely used tool for the optimization of deterministic black-box functions. However, the optimization of stochastic simulation models as found in the engineering, biological, and social sciences is still an elusive task. In this work, we propose a scalable adaptive batch sampling scheme for the optimization of stochastic simulation models with input-dependent noise. The developed algorithm has two primary advantages: (i) by recommending sampling batches, the designer can benefit from parallel computing capabilities, and (ii) by replicating previously observed sampling locations, the method can be scaled to higher-dimensional and noisier functions. Replication improves numerical tractability, as the computational cost of Bayesian optimization methods is known to grow cubically with the number of unique sampling locations. Deciding when to replicate and when to explore depends on which alternative minimizes the posterior prediction uncertainty at and around the spatial locations expected to contain the global optimum. The algorithm explores a new sampling location to reduce the interpolation uncertainty and replicates to improve the accuracy of the mean prediction at a single sampling location. Through the application of the proposed sampling scheme to two numerical test functions and one real engineering problem, we show that we can reliably and efficiently find the global optimum of stochastic simulation models with input-dependent noise.
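The explore-versus-replicate decision described above can be sketched as a direct comparison of the two candidate actions' effect on posterior variance at the believed optimum (an illustrative toy, not the paper's acquisition function; locations, length scale, and noise levels are made up):

```python
import numpy as np

def post_var(X, reps, r, x_eval, ls=0.3, sig2=1.0):
    """GP posterior variance at x_eval with replicate-averaged noise r/reps."""
    K = sig2 * np.exp(-0.5 * ((X[:, None] - X[None, :]) / ls) ** 2)
    K += np.diag(r / reps)
    k = sig2 * np.exp(-0.5 * ((X - x_eval) / ls) ** 2)
    return sig2 - k @ np.linalg.solve(K, k)

X = np.array([0.0, 0.4, 1.0])
reps = np.array([1, 1, 1])
r = np.full(3, 0.3)
x_opt = 0.45                      # current belief about the optimum's location

# Candidate 1: replicate at the nearest existing point, x = 0.4
v_rep = post_var(X, np.array([1, 2, 1]), r, x_opt)
# Candidate 2: explore a new location at x_opt itself
v_exp = post_var(np.append(X, x_opt), np.append(reps, 1), np.append(r, 0.3), x_opt)
choice = "explore" if v_exp < v_rep else "replicate"
```

Whichever action yields the lower preposterior variance around the predicted optimum wins; batches repeat this comparison for each recommended sample.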

  2. Summary

    A key component in controlling the spread of an epidemic is deciding where, when and to whom to apply an intervention. We develop a framework for using data to inform these decisions in real time. We formalize a treatment allocation strategy as a sequence of functions, one per treatment period, that map up-to-date information on the spread of an infectious disease to a subset of locations where treatment should be allocated. An optimal allocation strategy optimizes some cumulative outcome, e.g. the number of uninfected locations, the geographic footprint of the disease or the cost of the epidemic. Estimation of an optimal allocation strategy for an emerging infectious disease is challenging because spatial proximity induces interference between locations, the number of possible allocations is exponential in the number of locations, and because disease dynamics and intervention effectiveness are unknown at outbreak. We derive a Bayesian on-line estimator of the optimal allocation strategy that combines simulation–optimization with Thompson sampling. The proposed estimator performs favourably in simulation experiments. This work is motivated by and illustrated using data on the spread of white-nose syndrome, which is a highly fatal infectious disease devastating bat populations in North America.
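The Thompson-sampling step above can be illustrated with a minimal sketch (not the authors' estimator, which couples it with simulation–optimization over a spatial disease model; the Beta posteriors and counts here are invented stand-ins for per-location intervention effectiveness):

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, budget = 10, 3
# Illustrative per-location success/failure counts from past treatment periods
succ = rng.integers(0, 5, n_loc).astype(float)
fail = rng.integers(0, 5, n_loc).astype(float)

def thompson_allocate(succ, fail, k, rng):
    """Draw one plausible effectiveness value per location from its Beta
    posterior, then treat the k locations that look best under that draw."""
    draws = rng.beta(succ + 1.0, fail + 1.0)
    return np.argsort(draws)[-k:]

treated = thompson_allocate(succ, fail, budget, rng)
```

Sampling from the posterior, rather than acting on its mean, keeps the strategy exploring under-treated locations while disease dynamics are still uncertain.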

  3. This paper proposes AdaTest, a novel adaptive test pattern generation framework for efficient and reliable Hardware Trojan (HT) detection. An HT is a backdoor attack that tampers with the design of victim integrated circuits (ICs). AdaTest improves on existing HT detection techniques in terms of scalability and accuracy in detecting smaller Trojans in the presence of noise and variations. To achieve high trigger coverage, AdaTest leverages Reinforcement Learning (RL) to produce a diverse set of test inputs. In particular, we progressively generate test vectors with high ‘reward’ values in an iterative manner. In each iteration, the test set is evaluated and adaptively expanded as needed. Furthermore, AdaTest integrates adaptive sampling to prioritize test samples that provide more information for HT detection, thus reducing the number of samples while improving their quality for faster exploration. We develop AdaTest with a Software/Hardware co-design principle and provide an optimized on-chip architecture solution. AdaTest’s architecture minimizes the hardware overhead in two ways: (i) deploying circuit emulation on programmable hardware to accelerate reward evaluation of the test input; (ii) pipelining each computation stage in AdaTest by automatically constructing auxiliary circuits for test input generation, reward evaluation, and adaptive sampling. We evaluate AdaTest’s performance on various HT benchmarks and compare it with two prior works that use logic testing for HT detection. Experimental results show that AdaTest achieves up to two orders of magnitude speedup in test generation and two orders of magnitude reduction in test set size compared to the prior works, while achieving the same or a higher Trojan detection rate.
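The iterate-evaluate-expand loop can be caricatured in a few lines (a heavily simplified stand-in for AdaTest, not its actual RL agent or reward circuit; "reward" here is just new-bit coverage of random binary test vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, pool, iters = 16, 8, 5

def reward(vec, covered):
    """Reward = how many not-yet-covered bit positions this vector sets high
    (an invented stand-in for trigger-coverage scoring)."""
    return int(np.sum(vec & ~covered))

covered = np.zeros(n_bits, dtype=bool)
test_set = []
for _ in range(iters):
    # Propose a pool of candidate test vectors, keep the highest-reward one
    cands = rng.integers(0, 2, (pool, n_bits)).astype(bool)
    best = cands[int(np.argmax([reward(c, covered) for c in cands]))]
    if reward(best, covered) > 0:          # expand only when it adds coverage
        test_set.append(best)
        covered |= best
coverage = covered.mean()
```

The real framework replaces the random proposal step with an RL policy and the coverage score with hardware-accelerated reward evaluation, but the adaptive-expansion skeleton is the same.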
  4. Facing stochastic variations of the loads due to an increasing penetration of renewable energy generation, online decision making under uncertainty in modern power systems has captured power researchers' attention in recent years. To address this issue while achieving a good balance between system security and economic objectives, we propose a surrogate-enhanced scheme under a joint chance-constrained (JCC) optimal power-flow (OPF) framework. Starting from a stochastic-sampling procedure, we first utilize copula theory to simulate the dependence among multivariate uncertain inputs. Then, to reduce the prohibitive computational time required by the traditional Monte-Carlo (MC) method, we propose to use a polynomial-chaos-based surrogate that allows us to efficiently evaluate the power-system model at non-Gaussian distributed sampled values with a negligible computing cost. Learning from the MC simulated samples, we further propose a hybrid adaptive approach to overcome the conservativeness of the JCC-OPF by utilizing the correlation of the system states, which is ignored in the traditional Boole's inequality. The simulations conducted on the modified Illinois test system demonstrate the excellent performance of the proposed method.
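The surrogate idea can be sketched with a one-dimensional toy (a plain least-squares polynomial fit standing in for a genuine polynomial-chaos expansion, and a cheap analytic function standing in for a power-flow solve; everything here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(u):
    """Invented stand-in for a costly power-flow evaluation."""
    return np.sin(u) + 0.1 * u**2

# Fit a degree-3 polynomial surrogate from a handful of model evaluations
u_train = rng.normal(0.0, 1.0, 20)
coeffs = np.polyfit(u_train, expensive_model(u_train), 3)

# Cheap Monte-Carlo over the surrogate instead of the full model
u_mc = rng.normal(0.0, 1.0, 100_000)
est_surr = np.polyval(coeffs, u_mc).mean()
est_true = expensive_model(u_mc).mean()
```

A true polynomial-chaos expansion uses orthogonal polynomials matched to the input distribution, but the economics are the same: a few expensive solves buy a surrogate cheap enough for large MC sample sizes.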
  5.
    There are 26 million refugees worldwide seeking safety from persecution, violence, conflict, and human rights violations. Camp-based refugees are those that seek shelter in refugee camps, whereas urban refugees inhabit nearby, surrounding populations. The systems that supply aid to refugee camps may suffer from ineffective distribution due to challenges in administration, demand uncertainty and volatility in funding. Aid allocation should be carried out in a manner that properly balances the need of ensuring sufficient aid for camp-based refugees, with the ability to share excess inventory, when available, with urban refugees that at times seek nearby camp-based aid. We develop an inventory management policy to govern a camp’s sharing of aid with urban refugee populations in the midst of uncertainties related to camp-based and urban demands, and replenishment cycles due to funding issues. We use the policy to construct costs associated with: i) referring urban populations elsewhere, ii) depriving camp-based refugee populations, and iii) holding excess inventory in the refugee camp system. We then seek to allocate aid in a manner that minimizes the expected overall cost to the system. We propose two approaches to solve the resulting optimization problem, and conduct computational experiments on a real-world case study as well as on synthetic data. Our results are complemented by an extensive simulation study that reveals broad support for our optimal thresholds and allocations to generalize across varied key parameters and distributions. We conclude by presenting related discussions that reveal key managerial insights into humanitarian aid allocation under uncertainty. 
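A threshold-style sharing policy of the kind described can be sketched by simulation (an illustrative toy, not the authors' policy or cost model; all unit costs, demand rates, and the stock level are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
c_refer, c_deprive, c_hold = 2.0, 10.0, 0.5   # illustrative unit costs

def expected_cost(threshold, stock=100, n_sim=20_000, rng=rng):
    """Share with urban arrivals only while inventory exceeds `threshold`;
    estimate expected referral + deprivation + holding cost by simulation."""
    camp = rng.poisson(60, n_sim)     # camp-based demand per cycle
    urban = rng.poisson(30, n_sim)    # urban demand per cycle
    deprived = np.maximum(camp - stock, 0)
    after_camp = stock - camp
    sharable = np.maximum(after_camp - threshold, 0)
    served_urban = np.minimum(urban, sharable)
    referred = urban - served_urban
    held = np.maximum(after_camp - served_urban, 0)
    return (c_refer * referred + c_deprive * deprived + c_hold * held).mean()

costs = {t: expected_cost(t) for t in (0, 10, 20, 30)}
best_t = min(costs, key=costs.get)
```

Raising the threshold protects camp-based refugees against demand spikes at the price of more referrals; the optimization chooses the threshold that balances the three cost terms.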