Title: Proactive Dynamic Distributed Constraint Optimization Problems
The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool for modeling multi-agent coordination problems. To solve DCOPs in a dynamic environment, Dynamic DCOPs (D-DCOPs) have been proposed to model the inherent dynamism present in many coordination problems. D-DCOPs solve a sequence of static problems by reacting to changes in the environment as the agents observe them. Such reactive approaches ignore knowledge about future changes of the problem. To overcome this limitation, we introduce Proactive Dynamic DCOPs (PD-DCOPs), a novel formalism to model D-DCOPs in the presence of exogenous uncertainty. In contrast to reactive approaches, PD-DCOPs are able to explicitly model possible changes of the problem and take such information into account when solving the dynamically changing problem in a proactive manner. The additional expressivity of this formalism allows it to model a wider variety of distributed optimization problems. Our work presents both theoretical and practical contributions that advance current dynamic DCOP models: (i) We introduce Proactive Dynamic DCOPs (PD-DCOPs), which explicitly model how the DCOP will change over time; (ii) We develop exact and heuristic algorithms to solve PD-DCOPs in a proactive manner; (iii) We provide theoretical results about the complexity of this new class of DCOPs; and (iv) We empirically evaluate both proactive and reactive algorithms to determine the trade-offs between the two classes. The final contribution is important because our results are the first to identify the characteristics of the problems in which each class of algorithms excels.
Award ID(s): 2143706
NSF-PAR ID: 10337587
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Artificial Intelligence Research
Volume: 74
ISSN: 1076-9757
Page Range / eLocation ID: 179 to 225
Format(s): Medium: X
Sponsoring Org: National Science Foundation
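To make the DCOP formalism in the abstract above concrete, here is a minimal sketch of a static DCOP: variables with finite domains coupled by binary cost functions, solved by brute force. The variables, domains, and costs are hypothetical and this is not the paper's PD-DCOP algorithm; a PD-DCOP would additionally index the cost functions by time step and by a model of how they may change.

```python
# A minimal, self-contained DCOP sketch: variables with finite domains coupled by
# binary cost functions, solved by exhaustive search. Names and costs are
# hypothetical; this is illustration only, not the paper's PD-DCOP algorithms.
from itertools import product

domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}

def cost_12(a, b):
    return 0 if a != b else 2        # prefer x1 and x2 to disagree

def cost_23(a, b):
    return abs(a - b)                # prefer x2 and x3 to agree

constraints = [(("x1", "x2"), cost_12), (("x2", "x3"), cost_23)]

def total_cost(assignment, constraints):
    return sum(f(assignment[u], assignment[v]) for (u, v), f in constraints)

def solve_dcop(domains, constraints):
    """Brute-force search over joint assignments (exponential; illustration only)."""
    names = list(domains)
    best, best_cost = None, float("inf")
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        c = total_cost(assignment, constraints)
        if c < best_cost:
            best, best_cost = assignment, c
    return best, best_cost

print(solve_dcop(domains, constraints))
```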
More Like this
  1. In this work, we study the optimal design of two-armed clinical trials to maximize the accuracy of parameter estimation in a statistical model, where the interaction between patient covariates and treatment is explicitly incorporated to enable precision medication decisions. Such a modeling extension leads to significant complexities for the produced optimization problems because they include optimization over design and covariates concurrently. We take a min-max optimization model and minimize (over design) the maximum (over population) variance of the estimated interaction effect between treatment and patient covariates. This results in a min-max bilevel mixed integer nonlinear programming problem, which is notably challenging to solve. To address this challenge, we introduce a surrogate optimization model by approximating the objective function, for which we propose two solution approaches. The first approach provides an exact solution based on reformulation and decomposition techniques. In the second approach, we provide a lower bound for the inner optimization problem and solve the outer optimization problem over the lower bound. We test our proposed algorithms with synthetic and real-world data sets and compare them with standard (re)randomization methods. Our numerical analysis suggests that the proposed approaches provide higher-quality solutions in terms of the variance of estimators and probability of correct selection. We also show the value of covariate information in precision medicine clinical trials by comparing our proposed approaches to an alternative optimal design approach that does not consider the interaction terms between covariates and treatment. Summary of Contribution: Precision medicine is the future of healthcare, where treatment is prescribed based on each patient's information. Designing precision medicine clinical trials, which are the cornerstone of precision medicine, is extremely challenging because sample size is limited and patient information may be multidimensional. This work proposes a novel approach to optimally estimate the treatment effect for each patient type in a two-armed clinical trial by minimizing the largest variance of the personalized treatment effect. We use several statistical and optimization techniques to produce efficient solution methodologies. Results have the potential to save countless lives by transforming the design and implementation of future clinical trials to ensure the right treatments for the right patients. Doing so will reduce patient risk and costs in the healthcare system.
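As a rough illustration of the min-max idea described above (not the paper's surrogate model or decomposition method), the sketch below enumerates treatment assignments for a small, hypothetical covariate pool and keeps the design that minimizes the worst-case variance of the estimated personalized treatment effect under a standard linear interaction model.

```python
# Illustrative sketch (not the paper's exact formulation): for a small two-armed
# trial with one binary covariate, enumerate treatment assignments and pick the
# design minimizing the worst-case (over covariate values) variance of the
# estimated personalized treatment effect b1 + b3*x under the linear model
#   y = b0 + b1*T + b2*x + b3*T*x + noise.
import itertools
import numpy as np

covariates = np.array([0, 0, 0, 1, 1, 1, 1, 0])  # hypothetical patient covariates

def worst_case_variance(treat, x):
    """Max over observed covariate values of Var(b1 + b3*x), up to sigma^2."""
    X = np.column_stack([np.ones_like(x), treat, x, treat * x]).astype(float)
    info = X.T @ X
    if np.linalg.matrix_rank(info) < 4:
        return np.inf                      # interaction not estimable for this design
    cov = np.linalg.inv(info)              # (X'X)^{-1}, proportional to Cov of estimates
    worst = 0.0
    for xv in np.unique(x):
        c = np.array([0.0, 1.0, 0.0, float(xv)])  # contrast for b1 + b3*xv
        worst = max(worst, float(c @ cov @ c))
    return worst

best_design, best_val = None, np.inf
for treat in itertools.product([0, 1], repeat=len(covariates)):
    t = np.array(treat)
    if t.sum() in (0, len(t)):             # need both arms represented
        continue
    val = worst_case_variance(t, covariates)
    if val < best_val:
        best_design, best_val = t, val

print("min-max design:", best_design, "worst-case variance factor:", round(best_val, 3))
```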
  2. Parallel and distributed computing (PDC) has become pervasive in all aspects of computing, and thus it is essential that students include parallelism and distribution in the computational thinking that they apply to problem solving, from the very beginning. Computer science education is still teaching to a 20th century model of algorithmic problem solving. Sequence, branch, and loop are taught in our early courses as the only organizing principles needed for algorithms, and we invest considerable time in showing how best to sequentially process large volumes of data. All computing devices that students use currently have multiple cores, as well as a GPU in many cases. Most of their favorite applications use multiple cores and large numbers of distributed processors. Often concurrency offers simpler solutions than sequential approaches. Industry is desperate for software engineers who think naturally in terms of exploiting these capabilities, rather than seeing them as an exotic upper-level topic that gets layered over a sequential solution. However, we are still teaching students to solve problems using sequential thinking. In this workshop we overview key PDC concepts and provide examples of how they may naturally be incorporated in early computing classes. We will introduce plugged and unplugged curriculum modules that have been successfully integrated in existing computing classes at multiple institutions. We will highlight the upcoming summer training workshop, for which we have funding to support attendance, as well as other CDER (Center for Parallel and Distributed Computing Curriculum Development and Educational Resources) activities.
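A hypothetical example of the kind of "plugged" early-course exercise such modules might include (not taken from the workshop materials): summing a list sequentially versus across worker processes, so students see data decomposition and result combination first-hand.

```python
# Hypothetical classroom exercise (not from the workshop materials): sum a large
# list across worker processes and check the result against the sequential sum.
from multiprocessing import Pool

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the data into one chunk per worker, sum chunks in parallel, combine.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert parallel_sum(data) == sum(data)
    print("parallel sum matches sequential sum")
```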
  3. This paper presents a novel distributed method for dynamic economic dispatch (DED) problems with cubic fuel cost functions based on the dual alternating direction method of multipliers (D-ADMM) and interior point method (IPM). This scheme enables us to combine the excellent parallelism of ADMM with the superior convergence properties of the IPM. The idea adopted by the proposed algorithm is that the outer loop uses ADMM decoupling, while the inner loop uses an IPM to solve the cubic polynomial optimization subproblems. When the unit set is divided into multiple partitions, D-ADMM also achieves favorable central processing unit (CPU) runtime as well as a low number of iterations. In addition, we use a Decoupling-Decomposition-Backtracking and parallel improved multiple centrality corrections decoupling interior point method (DDB-PIMCCD) to solve the aforementioned larger subproblems more efficiently. Finally, the numerical results, in both serial and parallel modes, on a set of 22 DED cases ranging from 8 to 1000 units show that the proposed method is very promising for large-scale distributed DED problems, because it keeps the unit information of each subset private, as required in a market environment, and obtains high-quality solutions in reasonable time. © 2020 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
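For intuition only, the sketch below applies exchange-form consensus ADMM to a single-period dispatch with cubic unit costs; each unit's subproblem is solved independently by a bounded scalar solver standing in for the interior point inner loop. The cost coefficients, penalty parameter, and iteration count are hypothetical, and this is not the paper's D-ADMM/DDB-PIMCCD implementation.

```python
# Illustrative sketch: exchange-form ADMM for a single-period economic dispatch,
#   minimize sum_i C_i(p_i)  subject to  sum_i p_i = D,  p_i in [p_min, p_max],
# with cubic costs C_i. Unit subproblems are decoupled and could run in parallel.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical cubic cost coefficients a + b*p + c*p^2 + d*p^3 for 3 units.
coeffs = [(0.0, 2.0, 0.03, 1e-4), (0.0, 1.8, 0.05, 2e-4), (0.0, 2.2, 0.02, 1e-4)]
p_min, p_max, demand = 10.0, 100.0, 180.0
rho, n = 1.0, len(coeffs)                 # rho and iteration count are illustrative

def cost(i, p):
    a, b, c, d = coeffs[i]
    return a + b * p + c * p**2 + d * p**3

p = np.full(n, demand / n)                # local generation levels
u = 0.0                                   # scaled dual for the power-balance constraint

for _ in range(200):
    p_bar = p.mean()
    # Unit subproblems (decoupled; in a distributed setting each runs locally).
    for i in range(n):
        shift = p[i] - p_bar + demand / n - u
        res = minimize_scalar(
            lambda x, i=i, shift=shift: cost(i, x) + rho / 2 * (x - shift) ** 2,
            bounds=(p_min, p_max), method="bounded")
        p[i] = res.x
    u += p.mean() - demand / n            # dual update toward sum(p) = demand

print("dispatch:", np.round(p, 2), "total:", round(float(p.sum()), 2))
```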
  4. Localization in underwater environments is a fundamental problem for autonomous vehicles, with important applications such as underwater ecology monitoring, infrastructure maintenance, and conservation of marine species. However, several traditional sensing modalities used for localization in outdoor robotics (e.g., GPS, compasses, LIDAR, and vision) are compromised in underwater scenarios. In addition, other problems such as aliasing, drifting, and dynamic changes in the environment also affect state estimation in aquatic environments. Motivated by these issues, we propose novel state estimation algorithms for underwater vehicles that can read noisy sensor observations in spatio-temporally varying fields in water (e.g., temperature, pH, chlorophyll-A, and dissolved oxygen) and have access to a model of the evolution of the fields as a set of partial differential equations. We frame underwater robot localization as an optimization problem and formulate, study, and solve the state-estimation problem. First, we find the most likely position given a sequence of observations, and we prove upper and lower bounds for the estimation error given information about the error and the fields. Our methodology can find the actual location within a 95% confidence interval around the median in over 90% of the cases across different conditions and extensions.
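As a toy illustration of the estimation idea (with a hypothetical field model, not the paper's PDE-based estimator), the sketch below recovers a stationary robot's position by least squares over a grid, matching a sequence of noisy readings against a known spatio-temporal field.

```python
# Illustrative sketch with a hypothetical field model: estimate a stationary
# robot's position from noisy field readings by least squares over a grid.
import numpy as np

def field(x, y, t):
    # Hypothetical temperature-like field that varies in space and time.
    return 20.0 + 2.0 * np.sin(0.1 * x + 0.05 * t) * np.cos(0.1 * y)

rng = np.random.default_rng(0)
true_pos = np.array([12.0, -7.0])
times = np.arange(10.0)
observations = field(true_pos[0], true_pos[1], times) + rng.normal(0, 0.05, times.size)

# Grid search: the position whose predicted readings best match the observations.
xs = np.linspace(-20, 20, 201)
ys = np.linspace(-20, 20, 201)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        err = np.sum((field(x, y, times) - observations) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print("estimated position:", best, "true position:", tuple(true_pos))
```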
  5. We consider the communication complexity of a number of distributed optimization problems. We start with the problem of solving a linear system. Suppose there is a coordinator together with s servers P_1, …, P_s, the i-th of which holds a subset A^(i) x = b^(i) of n_i constraints of a linear system in d variables, and the coordinator would like to output an x ∈ ℝ^d for which A^(i) x = b^(i) for i = 1, …, s. We assume each coefficient of each constraint is specified using L bits. We first resolve the randomized and deterministic communication complexity in the point-to-point model of communication, showing it is Θ̃(d^2 L + sd) and Θ̃(s d^2 L), respectively. We obtain similar results for the blackboard communication model. As a result of independent interest, we show the probability a random matrix with integer entries in {−2^L, …, 2^L} is invertible is 1 − 2^(−Θ(dL)), whereas previously only 1 − 2^(−Θ(d)) was known. When there is no solution to the linear system, a natural alternative is to find the solution minimizing the ℓ_p loss, which is the ℓ_p regression problem. While this problem has been studied, we give improved upper or lower bounds for every value of p ≥ 1. One takeaway message is that sampling and sketching techniques, which are commonly used in earlier work on distributed optimization, are neither optimal in the dependence on d nor in the dependence on the approximation ε, thus motivating new techniques from optimization to solve these problems. Towards this end, we consider the communication complexity of optimization tasks which generalize linear systems, such as linear, semi-definite, and convex programming. For linear programming, we first resolve the communication complexity when d is constant, showing it is Θ̃(sL) in the point-to-point model. For general d and in the point-to-point model, we show an Õ(s d^3 L) upper bound and an Ω̃(d^2 L + sd) lower bound. In fact, we show if one perturbs the coefficients randomly by numbers as small as 2^(−Θ(L)), then the upper bound is Õ(s d^2 L) + poly(dL), and so this bound holds for almost all linear programs. Our study motivates understanding the bit complexity of linear programming, which is related to the running time in the unit cost RAM model with words of O(log(nd)) bits, and we give the fastest known algorithms for linear programming in this model. Read More: https://epubs.siam.org/doi/10.1137/1.9781611975994.106
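The sketch below simulates the coordinator (point-to-point) model with the naive protocol in which every server ships its raw constraints to the coordinator, which is the baseline cost the bounds above improve on. The instance is synthetic and this is not one of the paper's communication-efficient protocols.

```python
# Toy simulation of the coordinator (point-to-point) model: each of s servers
# holds a block A^(i) x = b^(i) with L-bit integer coefficients; in the naive
# protocol every server sends its raw constraints, and the coordinator solves
# the stacked system. Baseline illustration only.
import numpy as np

rng = np.random.default_rng(1)
d, s, L = 4, 3, 8                      # dimension, number of servers, bits per coefficient
x_true = rng.integers(-2**L, 2**L, size=d)

# Each server holds a few integer constraints consistent with x_true.
servers = []
for _ in range(s):
    A_i = rng.integers(-2**L, 2**L, size=(d, d))
    servers.append((A_i, A_i @ x_true))

# Naive protocol: send every coefficient to the coordinator.
bits_sent = sum(A_i.size + b_i.size for A_i, b_i in servers) * L
A = np.vstack([A_i for A_i, _ in servers])
b = np.concatenate([b_i for _, b_i in servers])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print("bits communicated (naive):", bits_sent)
print("recovered x:", np.round(x_hat, 6), "true x:", x_true)
```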