 Award ID(s):
 1908665
 NSFPAR ID:
 10329540
 Date Published:
 Journal Name:
 Mathematics of Operations Research
 Volume:
 47
 Issue:
 1
 ISSN:
0364-765X
 Page Range / eLocation ID:
 616 to 642
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

Employing mobile actuators and sensors for control and estimation of spatially distributed processes offers a significant advantage over immobile actuators and sensors. In addition to the improvement in control performance, there are economic advantages, since fewer devices must be employed if they are allowed to be repositioned within the spatial domain. While simulation studies of mobile actuators report superb controller performance, they are far from reality, since the mobile platforms carrying actuators and sensors are subject to mechanical and motion constraints. Terrain platforms cannot behave as point masses without inertia; instead, they must satisfy constraints that are adequately represented as path-dependent reachability sets. When the control algorithm commands a mobile platform to reposition itself to a different location within the spatial domain, this does not occur instantaneously, and for the most part the motion is not omnidirectional. This constraint is combined with a computationally feasible, suboptimal control policy with mobile actuators to arrive at a numerically viable control and guidance scheme. The feasible control decision comes from a continuous-discrete control policy whereby the mobile platform carrying the actuator is repositioned at discrete times and dwells at a specific position for a certain time interval. Moving to a subsequent spatial location and computing its associated path over a physics-imposed time interval, a set of candidate positions and paths is derived using a path-dependent reachability set. Embedded into the path-dependent reachability sets that dictate the mobile actuator repositioning, a scheme is proposed to integrate collocated sensing measurements in order to minimize costly state estimation schemes. The proposed scheme is demonstrated on a 2D PDE with two sets of collocated actuator-sensor pairs onboard mobile platforms.
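The candidate-position step above can be sketched in code. The following is a deliberately simplified, hypothetical illustration (all names are invented): it approximates the reachable region as a disk of radius v_max·dt, whereas the paper's reachability sets are path-dependent and not omnidirectional.

```python
def reachable_candidates(pos, v_max, dt, grid):
    """Candidate next positions for a mobile actuator platform.

    Returns the grid points whose straight-line distance from `pos`
    is coverable at top speed v_max within the dwell interval dt.
    This disk is a crude stand-in for a path-dependent reachability set.
    """
    x0, y0 = pos
    r = v_max * dt
    return [(x, y) for (x, y) in grid
            if (x - x0) ** 2 + (y - y0) ** 2 <= r * r]
```

A controller would then score each candidate (e.g., by predicted control performance over the next dwell interval) and command the platform to the best one.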

Abstract: We consider a natural generalization of classical scheduling problems to a setting in which using a time unit for processing a job causes some time-dependent cost, the time-of-use tariff, which must be paid in addition to the standard scheduling cost. We focus on preemptive single-machine scheduling and two classical scheduling cost functions: the sum of (weighted) completion times and the maximum completion time, that is, the makespan. While these problems are easy to solve in the classical scheduling setting, they are considerably more complex when time-of-use tariffs must be considered. We contribute optimal polynomial-time algorithms and best possible approximation algorithms. For the problem of minimizing the total (weighted) completion time on a single machine, we present a polynomial-time algorithm that computes, for any given sequence of jobs, an optimal schedule, i.e., the optimal set of time slots to be used for preemptively scheduling the jobs according to the given sequence. This result is based on dynamic programming using a subtle analysis of the structure of optimal solutions and a potential function argument. With this algorithm, we solve the unweighted problem optimally in polynomial time. For the more general problem, in which jobs may have individual weights, we develop a polynomial-time approximation scheme (PTAS) based on a dual scheduling approach introduced for scheduling on a machine of varying speed. As the weighted problem is strongly NP-hard, our PTAS is the best possible approximation we can hope for. For preemptive scheduling to minimize the makespan, we show that there is a comparably simple optimal algorithm with polynomial running time. This is true even in a certain generalized model with unrelated machines.
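The trade-off between scheduling cost and tariff cost can be illustrated with a tiny brute-force sketch (this is not the paper's algorithm; the discretization into unit slots and all names are my assumptions): for the makespan objective with a single job mass, fix a candidate makespan T, pay the cheapest tariffs among the first T slots, and take the best T.

```python
def min_makespan_plus_tariff(total_work, tariffs):
    """Toy brute force for preemptive single-machine scheduling with
    time-of-use tariffs: time is split into unit slots with per-slot
    price tariffs[t]; objective = makespan + total tariff paid.

    For each candidate makespan T, the total_work units of processing
    are placed preemptively into the total_work cheapest slots among
    the first T, then we minimize over T.
    """
    best = float("inf")
    for T in range(total_work, len(tariffs) + 1):
        cost = T + sum(sorted(tariffs[:T])[:total_work])
        best = min(best, cost)
    return best
```

Even this toy shows why the problem is harder with tariffs: extending the makespan can pay off if it unlocks cheap slots.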

Schölkopf, Bernhard; Uhler, Caroline; Zhang, Kun (Eds.)
In order to test whether a treatment is perceptibly different from a placebo in a randomized experiment with covariates, classical nonparametric tests based on ranks of observations/residuals have been employed (e.g., by Rosenbaum), with finite-sample valid inference enabled via permutations. This paper proposes a different principle on which to base inference: if, with access to all covariates and outcomes but without access to any treatment assignments, one can form a ranking of the subjects that is sufficiently non-random (e.g., mostly treated followed by mostly control), then we can confidently conclude that there must be a treatment effect. Based on a more nuanced, quantifiable version of this principle, we design an interactive test called i-bet: the analyst forms a single permutation of the subjects one element at a time, and at each step the analyst bets toy money on whether that subject was actually treated or not, and learns the truth immediately after. The wealth process forms a real-valued measure of evidence against the global causal null, and we may reject the null at level α if the wealth ever crosses 1/α. Apart from providing a fresh "game-theoretic" principle on which to base the causal conclusion, the i-bet has other statistical and computational benefits, for example (A) allowing a human to adaptively design the test statistic based on increasing amounts of data being revealed (along with any working causal models and prior knowledge), and (B) not requiring permutation resampling, instead noting that under the null the wealth forms a nonnegative martingale, and the type-1 error control of the aforementioned decision rule follows from a tight inequality by Ville. Further, if the null is not rejected, new subjects can later be added and the test simply continued, without any corrections (unlike with permutation p-values).
Numerical experiments demonstrate good power under various heterogeneous treatment effects. We first describe the i-bet test for two-sample comparisons with unpaired data, and then adapt it to paired data, multi-sample comparisons, and sequential settings; these may be viewed as interactive martingale variants of the Wilcoxon, Kruskal–Wallis, and Friedman tests.
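The wealth-process mechanics can be sketched as follows. This is a toy, hypothetical implementation (names invented): the real i-bet lets the analyst choose each bet adaptively from covariates and outcomes, whereas here the bets are simply passed in as a list.

```python
def i_bet(treated_flags, bets, alpha=0.05):
    """Toy sketch of a betting test on treatment assignments.

    treated_flags: true assignments revealed one at a time (1 = treated,
    0 = control).  bets: wagers in [-1, 1], chosen before each reveal
    (positive = betting on 'treated').  Under the sharp null with fair
    50/50 assignment the wealth is a nonnegative martingale, so by
    Ville's inequality P(wealth ever >= 1/alpha) <= alpha.
    """
    wealth = 1.0
    for flag, b in zip(treated_flags, bets):
        outcome = 1.0 if flag else -1.0
        wealth *= 1.0 + b * outcome  # stays >= 0 because |b| <= 1
        if wealth >= 1.0 / alpha:
            return wealth, True      # reject the global causal null
    return wealth, False             # data exhausted; may continue later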

Recent clinical trials have shown that adaptive drug therapies can be more efficient than a standard cancer treatment based on a continuous use of maximum tolerated doses (MTD). The adaptive therapy paradigm is not based on a preset schedule; instead, the doses are administered based on the current state of the tumour. But the adaptive treatment policies examined so far have been largely ad hoc. We propose a method for systematically optimizing adaptive policies based on an evolutionary game theory model of cancer dynamics. Given a set of treatment objectives, we use the framework of dynamic programming to find the optimal treatment strategies. In particular, we optimize the total drug usage and time to recovery by solving a Hamilton–Jacobi–Bellman equation. We compare the MTD-based treatment strategy with optimal adaptive treatment policies and show that the latter can significantly decrease the total amount of drugs prescribed while also increasing the fraction of initial tumour states from which recovery is possible. We conclude that the use of optimal control theory to improve adaptive policies is a promising concept in cancer treatment and should be integrated into clinical trial design.
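The dynamic-programming idea can be illustrated with a toy discrete analogue of the continuous HJB formulation. Everything below (states, actions, costs) is invented for illustration and far simpler than the paper's evolutionary-game model: tumour burden is a discrete state, "treat" shrinks it at a drug cost, "wait" lets it grow, and a Bellman backup computes the minimal cost-to-recovery.

```python
def optimal_cost_to_recovery(n_states, drug_cost, iters=200):
    """Toy value iteration.  State s = discretized tumour burden,
    s = 0 means recovered (terminal, cost 0).

    Action 'treat': s -> s - 1, per-step cost 1 (time) + drug_cost.
    Action 'wait' : s -> min(s + 1, n_states - 1), per-step cost 1.
    Repeated Bellman backups converge to the minimal total cost V[s].
    """
    INF = float("inf")
    V = [0.0] + [INF] * (n_states - 1)
    for _ in range(iters):
        for s in range(1, n_states):
            treat = 1.0 + drug_cost + V[s - 1]
            wait = 1.0 + V[min(s + 1, n_states - 1)]
            V[s] = min(treat, wait)
    return V
```

In this degenerate toy, waiting never helps, so V[s] = s·(1 + drug_cost); the paper's richer dynamics are exactly what make "wait"/low-dose phases worthwhile in adaptive therapy.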

Abstract: Let $\textbf{p}$ be a configuration of $n$ points in $\mathbb{R}^d$ for some $n$ and some $d \ge 2$. Each pair of points defines an edge, which has a Euclidean length in the configuration. A path is an ordered sequence of the points, and a loop is a path that begins and ends at the same point. A path or loop, as a sequence of edges, also has a Euclidean length, which is simply the sum of its Euclidean edge lengths. We are interested in reconstructing $\textbf{p}$ given a set of edge, path and loop lengths. In particular, we consider the unlabeled setting, where the lengths are given simply as a set of real numbers and are not labeled with the combinatorial data describing which paths or loops gave rise to them. In this paper, we study the question of when $\textbf{p}$ will be uniquely determined (up to an unknowable Euclidean transform) from some given set of path or loop lengths through an exhaustive trilateration process. Such a process has already been used for the simpler problem of reconstruction using unlabeled edge lengths. This paper also provides a complete proof that this process must work in that edge setting when given a sufficiently rich set of edge measurements and assuming that $\textbf{p}$ is generic.
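A single trilateration step, the building block of such a process, can be sketched directly: recover a point in the plane from its distances to three known, non-collinear anchor points by subtracting the distance equations to get a linear system. This helper is hypothetical and works with labeled distances; the paper's setting is unlabeled and exhaustive.

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Recover (x, y) with |x - p_i| = d_i for three non-collinear
    anchors p_i in the plane.  Subtracting the squared-distance
    equations eliminates the quadratic term, leaving a 2x2 linear
    system solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2,
         d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]  # nonzero: anchors non-collinear
    x = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    y = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return x, y
```

In the unlabeled problem the hard part is precisely that one does not know which measured length pairs with which anchor, so the exhaustive process must try assignments and discard inconsistent ones.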