This paper studies a remote sensing system in which multiple wireless sensors generate possibly noisy information updates about various surveillance fields and deliver these updates to a control center over a wireless network. The control center needs a sufficient number of recently generated information updates to maintain an accurate estimate of the current system status, which is critical for making appropriate control decisions. The goal of this work is to design an optimal policy for scheduling the transmission of information updates. Through Brownian approximation, we demonstrate that the control center’s ability to make accurate real-time estimates …
Getting to "rate optimal" in ranking & selection
In their seminal 2004 paper, Glynn and Juneja formally and precisely established the replication-allocation scheme that optimizes the rate of decay of the probability of incorrect selection when choosing the best of k simulated systems. In the case of independent, normally distributed outputs, this allocation has a simple form that depends in an intuitively appealing way on the true means and variances. Of course, the means and (typically) variances are unknown, but the rate-optimal allocation provides a target for implementable, dynamic, data-driven policies to achieve. In this paper we compare the empirical behavior of four related replication-allocation policies: mCEI from Chen and Ryzhov, and our new gCEI policy, both of which converge to the Glynn and Juneja allocation; AOMAP from Peng and Fu, which converges to the OCBA optimal allocation; and TTTS from Russo, which targets the rate of convergence of the posterior probability of incorrect selection. We find that these policies have distinctly different behavior in some settings.
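The Glynn–Juneja target allocation is straightforward to compute when the means and variances are treated as known. Below is a minimal sketch (ours, not code from the paper): for independent normal systems it maximizes the smallest pairwise large-deviations rate G_i(α) = (μ_b − μ_i)² / (2(σ_b²/α_b + σ_i²/α_i)) over the probability simplex, solved in epigraph form with SciPy's SLSQP. The function name and the numbers in the demo are illustrative assumptions.

```python
# A hedged sketch of the Glynn-Juneja rate-optimal allocation for
# independent normal systems, assuming true means/variances are known.
import numpy as np
from scipy.optimize import minimize

def glynn_juneja_allocation(means, variances):
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    k = len(mu)
    b = int(np.argmax(mu))                      # index of the true best system
    others = [i for i in range(k) if i != b]

    def rate(alpha, i):
        # Large-deviations rate of falsely selecting system i over the best b.
        return (mu[b] - mu[i]) ** 2 / (2.0 * (var[b] / alpha[b] + var[i] / alpha[i]))

    # Decision vector z = (alpha_1, ..., alpha_k, t): maximize t subject to
    # rate_i(alpha) >= t for every i != b and sum(alpha) = 1.
    cons = [{"type": "eq", "fun": lambda z: z[:k].sum() - 1.0}]
    cons += [{"type": "ineq", "fun": lambda z, i=i: rate(z[:k], i) - z[k]}
             for i in others]
    z0 = np.append(np.full(k, 1.0 / k), 0.0)    # equal allocation as a start
    bounds = [(1e-6, 1.0)] * k + [(0.0, None)]
    res = minimize(lambda z: -z[k], z0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x[:k]

if __name__ == "__main__":
    # Illustrative (made-up) true means and variances for four systems.
    alpha = glynn_juneja_allocation([1.0, 0.8, 0.5, 0.2], [1.0, 1.0, 1.0, 1.0])
    print(np.round(alpha, 3))                   # fraction of replications per system
```

A dynamic policy such as mCEI or gCEI can be checked against this target by comparing its long-run sample fractions to the computed α.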
- Award ID(s): 1854562
- Publication Date:
- NSF-PAR ID: 10335103
- Journal Name: Proceedings of the 2021 Winter Simulation Conference
- Sponsoring Org: National Science Foundation
More Like this
- This paper focuses on optimizing resource allocation among a set of tenants (network slices) supporting dynamic customer loads over a set of distributed resources, e.g., base stations. The aim is to reap the benefits of statistical multiplexing resulting from flexible sharing of ‘pooled’ resources, while enabling tenants to differentiate and protect their performance from one another’s load fluctuations. To that end, we consider a setting where resources are grouped into Virtual Resource Pools (VRPs) wherein resource allocation is jointly and dynamically managed. Specifically, for each VRP we adopt a Share-Constrained Proportionally Fair (SCPF) allocation scheme where each tenant … (see the SCPF sketch after this list)
- Many decision problems are set in changing environments. For example, determining the optimal investment in cyber maintenance depends on whether there is evidence of an unusual vulnerability, such as “Heartbleed,” that is causing an especially high rate of incidents. This gives rise to the need for timely information to update decision models so that optimal policies can be generated for each decision period. Social media provide a streaming source of relevant information, but that information needs to be efficiently transformed into numbers to enable the needed updates. This article explores the use of social media as an observation source for …
- We develop a framework for designing simple and efficient policies for a family of online allocation and pricing problems that includes online packing, budget-constrained probing, dynamic pricing, and online contextual bandits with knapsacks. In each case, we evaluate the performance of our policies in terms of their regret (i.e., additive gap) relative to an offline controller that is endowed with more information than the online controller. Our framework is based on Bellman inequalities, which decompose the loss of an algorithm into two distinct sources of error: (1) error arising from computational tractability issues, and (2) error arising from estimation/prediction of random trajectories. …
- An immunotherapy trial often uses a phase I/II design to identify the optimal biological dose, monitoring the efficacy and toxicity outcomes simultaneously in a single trial. The progression-free survival rate is often used as the efficacy outcome in phase I/II immunotherapy trials. As a result, patients who develop disease progression in phase I/II immunotherapy trials are generally seriously ill and are often treated off the trial for ethical considerations. Consequently, the occurrence of disease progression terminates the toxicity event but not vice versa, so the issue of semi-competing risks arises. Moreover, this issue can become more intractable with …
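For the SCPF scheme named in the first related item above, here is a minimal sketch of one common formulation, assuming each tenant splits its fixed share equally over its network-wide active users and each base station divides its capacity in proportion to the per-user weights present; the tenant names and loads are hypothetical.

```python
# A hedged sketch of a Share-Constrained Proportionally Fair (SCPF)
# allocation under the assumptions stated above; not the paper's code.
from collections import Counter

def scpf_allocation(shares, user_locations):
    """shares: tenant -> fixed network share s_v (summing to 1).
    user_locations: list of (tenant, base_station) pairs, one per active user.
    Returns {(tenant, base_station): fraction of that station's capacity}."""
    n = Counter(t for t, _ in user_locations)   # tenant's users network-wide
    m = Counter(user_locations)                 # tenant's users per station
    # Each user of tenant t carries weight s_t / n_t.
    weight = {(t, b): shares[t] * m[t, b] / n[t] for (t, b) in m}
    total = Counter()
    for (t, b), w in weight.items():
        total[b] += w                           # total weight at each station
    return {(t, b): w / total[b] for (t, b), w in weight.items()}

if __name__ == "__main__":
    # Hypothetical tenants A and B with equal shares and uneven loads.
    alloc = scpf_allocation(
        {"A": 0.5, "B": 0.5},
        [("A", 1), ("A", 1), ("A", 2), ("B", 1)],
    )
    for key, frac in sorted(alloc.items()):
        print(key, round(frac, 3))
```

In this toy run, tenant B's single user at station 1 outweighs tenant A's two users there (0.6 vs. 0.4), illustrating how the share constraint protects a lightly loaded tenant from a heavily loaded one.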