Title: Getting to "rate optimal" in ranking & selection
In their seminal 2004 paper, Glynn and Juneja formally and precisely established the rate-optimal, probability-of-incorrect-selection, replication-allocation scheme for selecting the best of k simulated systems. In the case of independent, normally distributed outputs this allocation has a simple form that depends in an intuitively appealing way on the true means and variances. Of course the means and (typically) variances are unknown, but the rate-optimal allocation provides a target for implementable, dynamic, data-driven policies to achieve. In this paper we compare the empirical behavior of four related replication-allocation policies: mCEI from Chen and Ryzhov and our new gCEI policy, both of which converge to the Glynn and Juneja allocation; AOMAP from Peng and Fu, which converges to the OCBA optimal allocation; and TTTS from Russo, which targets the rate of convergence of the posterior probability of incorrect selection. We find that these policies have distinctly different behavior in some settings.
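In the independent-normal case, the Glynn and Juneja target can be written as a max-min problem over the allocation simplex: choose fractions alpha to maximize the smallest pairwise large-deviations rate R_i(alpha) = (mu_b - mu_i)^2 / (2 (sigma_b^2/alpha_b + sigma_i^2/alpha_i)), where b indexes the true best system. The sketch below solves that problem numerically; it is an illustrative formulation only, not code from the paper, and the function name and solver choice are assumptions.

```python
# Minimal numerical sketch of the Glynn-Juneja rate-optimal allocation for
# independent normal systems: maximize the worst-case pairwise rate over the
# simplex of allocation fractions. Illustrative only; not the authors' code.
import numpy as np
from scipy.optimize import minimize

def glynn_juneja_allocation(mu, sig2):
    """Numerically approximate the rate-optimal allocation fractions."""
    mu, sig2 = np.asarray(mu, float), np.asarray(sig2, float)
    k = len(mu)
    b = int(np.argmax(mu))                  # index of the true best system
    others = [i for i in range(k) if i != b]

    def neg_min_rate(alpha):
        alpha = np.clip(alpha, 1e-12, None)
        rates = [(mu[b] - mu[i]) ** 2 /
                 (2.0 * (sig2[b] / alpha[b] + sig2[i] / alpha[i]))
                 for i in others]
        return -min(rates)                  # maximize the smallest pairwise rate

    x0 = np.full(k, 1.0 / k)                # start from equal allocation
    cons = [{"type": "eq", "fun": lambda a: a.sum() - 1.0}]
    bnds = [(1e-9, 1.0)] * k
    res = minimize(neg_min_rate, x0, method="SLSQP", bounds=bnds, constraints=cons)
    return res.x

# Example: system 0 is best; closer competitors receive more replications
# than distant ones, and the best system also gets a sizeable fraction.
alpha = glynn_juneja_allocation(mu=[1.0, 0.8, 0.5, 0.2], sig2=[1.0, 1.0, 1.0, 1.0])
print(alpha.round(3))
```

A dynamic policy such as mCEI or gCEI never solves this problem directly; the point of the comparison in the paper is how quickly each data-driven policy's empirical allocation approaches this fixed target.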
Authors:
Award ID(s):
1854562
Publication Date:
NSF-PAR ID:
10335103
Journal Name:
Proceedings of the 2021 Winter Simulation Conference
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper studies a remote sensing system where multiple wireless sensors generate possibly noisy information updates of various surveillance fields and deliver these updates to a control center over a wireless network. The control center needs a sufficient number of recently generated information updates to have an accurate estimate of the current system status, which is critical for the control center to make appropriate control decisions. The goal of this work is then to design the optimal policy for scheduling the transmissions of information updates. Through Brownian approximation, we demonstrate that the control center's ability to make accurate real-time estimates depends on the averages and temporal variances of the delivery processes. We then formulate a constrained optimization problem to find the optimal means and variances. We also develop a simple online scheduling policy that employs the optimal means and variances to achieve the optimal system-wide performance. Simulation results show that our scheduling policy converges quickly and outperforms other state-of-the-art policies.
  2. This paper focuses on optimizing resource allocation amongst a set of tenants (network slices) supporting dynamic customer loads over a set of distributed resources, e.g., base stations. The aim is to reap the benefits of statistical multiplexing resulting from flexible sharing of 'pooled' resources, while enabling tenants to differentiate and protect their performance from one another's load fluctuations. To that end we consider a setting where resources are grouped into Virtual Resource Pools (VRPs) wherein resource allocation is jointly and dynamically managed. Specifically, for each VRP we adopt a Share-Constrained Proportionally Fair (SCPF) allocation scheme where each tenant is allocated a fixed share (budget). This budget is distributed equally amongst its active customers, which in turn are granted fractions of their associated VRP resources in proportion to customer shares. For a VRP with a single resource, this translates to the well known Generalized Processor Sharing (GPS) policy. For VRPs with multiple resources, SCPF provides a flexible means to achieve load-elastic allocations across tenants sharing the pool. Given tenants' per-resource shares and expected loads, this paper formulates the problem of determining optimal VRP partitions which maximize the overall expected shared weighted utility while ensuring protection guarantees. For a high load/capacity setting we exhibit this network utility function explicitly, quantifying the benefits and penalties of any VRP partition in terms of network slices' ability to achieve performance differentiation, load balancing, and statistical multiplexing. Although the problem is shown to be NP-Hard, a simple greedy heuristic is shown to be effective. Analysis and simulations confirm that the selection of optimal VRP partitions provides a practical avenue towards improving network utility in network slicing scenarios with dynamic loads. (A minimal illustrative sketch of the SCPF allocation rule appears after this list.)
  3. Many decision problems are set in changing environments. For example, determining the optimal investment in cyber maintenance depends on whether there is evidence of an unusual vulnerability, such as "Heartbleed," that is causing an especially high rate of incidents. This gives rise to the need for timely information to update decision models so that optimal policies can be generated for each decision period. Social media provide a streaming source of relevant information, but that information needs to be efficiently transformed into numbers to enable the needed updates. This article explores the use of social media as an observation source for timely decision making. To efficiently generate the observations for Bayesian updates, we propose a novel computational method to fit an existing clustering model. The proposed method is called k-means latent Dirichlet allocation (KLDA). We illustrate the method using a cybersecurity problem. Many organizations ignore "medium" vulnerabilities identified during periodic scans. Decision makers must choose whether staff should be required to address these vulnerabilities during periods of elevated risk. Also, we study four text corpora with 100 replications and show that KLDA is associated with significantly reduced computational times and more consistent model accuracy.
  4. We develop a framework for designing simple and efficient policies for a family of online allocation and pricing problems that includes online packing, budget-constrained probing, dynamic pricing, and online contextual bandits with knapsacks. In each case, we evaluate the performance of our policies in terms of their regret (i.e., additive gap) relative to an offline controller that is endowed with more information than the online controller. Our framework is based on Bellman inequalities, which decompose the loss of an algorithm into two distinct sources of error: (1) error arising from computational tractability issues, and (2) error arising from estimation/prediction of random trajectories. Balancing these errors guides the choice of benchmarks, and leads to policies that are both tractable and have strong performance guarantees. In particular, in all our examples, we demonstrate constant-regret policies that only require resolving a linear program in each period, followed by a simple greedy action-selection rule; thus, our policies are practical as well as provably near optimal. (A toy sketch of the resolve-an-LP-then-act-greedily idea appears after this list.)
  5. An immunotherapy trial often uses the phase I/II design to identify the optimal biological dose, which monitors the efficacy and toxicity outcomes simultaneously in a single trial. The progression-free survival rate is often used as the efficacy outcome in phase I/II immunotherapy trials. As a result, patients developing disease progression in phase I/II immunotherapy trials are generally seriously ill and are often treated off the trial for ethical consideration. Consequently, the occurrence of disease progression terminates the toxicity event, but not vice versa, which gives rise to semi-competing risks. Moreover, this issue can become more intractable with late-onset outcomes, which arise when a relatively long follow-up time is required to ascertain progression-free survival. This paper proposes a novel Bayesian adaptive phase I/II design accounting for semi-competing risks outcomes for immunotherapy trials, referred to as the dose-finding design accounting for semi-competing risks outcomes for immunotherapy trials (SCI) design. To tackle the issue of semi-competing risks in the presence of late-onset outcomes, we re-construct the likelihood function based on each patient's actual follow-up time and develop a data augmentation method to efficiently draw posterior samples from a series of Beta-binomial distributions. We propose a concise curve-free dose-finding algorithm to adaptively identify the optimal biological dose using accumulated data without making any parametric dose-response assumptions. Numerical studies show that the proposed SCI design yields good operating characteristics in dose selection, patient allocation, and trial duration.
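For the SCPF rule described in item 2, the sketch below shows one plausible reading of the allocation step, assuming each active customer is attached to a single resource of the pool. The names scpf_allocation, tenant_share, and the toy data are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a Share-Constrained Proportionally Fair (SCPF) rule:
# each tenant's fixed share is split equally over its active customers, and each
# resource in the pool is then divided among the customers attached to it in
# proportion to those per-customer shares. Reduces to GPS for a single resource.
from collections import defaultdict

def scpf_allocation(tenant_share, customers):
    """
    tenant_share: {tenant: share (budget) of that tenant}
    customers:    list of (customer_id, tenant, resource) for active customers
    returns:      {customer_id: fraction of its resource's capacity}
    """
    # Per-customer weight: tenant share spread equally over its active customers.
    active_count = defaultdict(int)
    for _, tenant, _ in customers:
        active_count[tenant] += 1
    weight = {cid: tenant_share[t] / active_count[t] for cid, t, _ in customers}

    # Each resource is shared in proportion to the weights of customers on it.
    per_resource_total = defaultdict(float)
    for cid, _, res in customers:
        per_resource_total[res] += weight[cid]
    return {cid: weight[cid] / per_resource_total[res] for cid, _, res in customers}

# Toy example: two tenants with equal shares across two base stations.
alloc = scpf_allocation(
    tenant_share={"A": 1.0, "B": 1.0},
    customers=[("a1", "A", "bs1"), ("a2", "A", "bs1"),
               ("b1", "B", "bs1"), ("b2", "B", "bs2")],
)
print(alloc)  # b2, the only customer on bs2, receives that whole resource
```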
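Item 4's "resolve a linear program each period, then act greedily" recipe can be illustrated on a toy online-packing instance. The setup below (known arrival probabilities, a single budget, a 50%-of-LP acceptance threshold) is an assumption made purely for illustration and is not the paper's algorithm or benchmark.

```python
# Toy sketch of a resolve-LP-then-greedy policy for online packing: each period,
# re-solve a deterministic LP on the remaining budget and the expected future
# arrivals, and accept the current request if the LP wants most of its type.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, budget = 200, 50.0
reward = np.array([1.0, 2.0, 5.0])   # per-type rewards (illustrative)
size = np.array([1.0, 1.0, 2.0])     # per-type resource consumption
prob = np.array([0.5, 0.3, 0.2])     # per-type arrival probabilities

total_reward, remaining = 0.0, budget
for t in range(T):
    j = rng.choice(3, p=prob)                      # arriving request type
    horizon = T - t
    ub = prob * horizon                            # expected future arrivals per type
    # LP: max reward @ x  s.t.  size @ x <= remaining, 0 <= x <= ub
    res = linprog(-reward, A_ub=size[None, :], b_ub=[remaining],
                  bounds=list(zip(np.zeros(3), ub)), method="highs")
    x = res.x if res.success else np.zeros(3)
    accept = size[j] <= remaining and x[j] >= 0.5 * ub[j]   # simple greedy rule
    if accept:
        total_reward += reward[j]
        remaining -= size[j]

print(f"collected reward {total_reward:.1f}, leftover budget {remaining:.1f}")
```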