Title: Explanations for combinatorial optimization problems
We introduce a representation for generating explanations of the outcomes of combinatorial optimization algorithms. The two key ideas are (A) to maintain fine-grained representations of the values manipulated by these algorithms and (B) to derive explanations from these representations through merge, filter, and aggregation operations. An explanation in our model is essentially a high-level comparison of the solution to a problem with a hypothesized alternative, illuminating why the solution is better than the alternative. Our value representation yields explanations that are smaller than other dynamic program representations, such as traces. Based on a measure of the conciseness of explanations, we demonstrate through a number of experiments that the explanations produced by our approach are small and scale well with problem size across a number of different applications.
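As a rough illustration of the two ideas (a minimal sketch only; every name below is hypothetical, not the paper's interface), fine-grained values can be kept as bags of labelled cost parts, and a contrastive explanation then falls out of filtering away the parts two candidates share and aggregating what remains:

```haskell
import Data.List (foldl', (\\))

type Part  = (String, Int)   -- a labelled cost component
type Value = [Part]          -- fine-grained value: a bag of parts

total :: Value -> Int
total = foldl' (\acc (_, c) -> acc + c) 0

-- merge: combine the fine-grained values of two sub-solutions
merge :: Value -> Value -> Value
merge = (++)

-- filter: drop the parts common to both candidates, keeping only
-- what actually distinguishes them
distinct :: Value -> Value -> (Value, Value)
distinct a b = (a \\ b, b \\ a)

-- aggregate: condense the remaining parts into one comparison
explain :: Value -> Value -> String
explain sol alt =
  let (s, a) = distinct sol alt
  in  "solution pays " ++ show (total s) ++ " via " ++ show (map fst s)
      ++ " where the alternative pays " ++ show (total a)
      ++ " via " ++ show (map fst a)

-- Two routes sharing most of their edges: the explanation mentions
-- only the edges on which they differ.
demo :: String
demo = explain [("e1",2),("e2",3),("e3",1)]   -- computed solution
               [("e1",2),("e2",3),("e4",4)]   -- hypothesized alternative
```

For two routes that share most of their edges, demo mentions only the edges on which they differ, which is what keeps such explanations small.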
Award ID(s): 2114642
NSF-PAR ID: 10527161
Author(s) / Creator(s): ;
Publisher / Repository: Elsevier
Date Published:
Journal Name: Journal of Computer Languages
Volume: 79
Issue: C
ISSN: 2590-1184
Page Range / eLocation ID: 101272
Subject(s) / Keyword(s): Contrastive explanation, Explanation simplification
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. MADMAX is a Haskell-embedded DSL for multi-attribute, multi-layered decision making. An important feature of this DSL is the ability to generate explanations of why a computed optimal solution is better than its alternatives. The functional approach and Haskell's type system support a high-level formulation of decision-making problems, which facilitates a number of innovations, including the gradual evolution and adaptation of problem representations, a more user-friendly form of sensitivity analysis based on problem domain data, and fine-grained control over explanations. 
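As a concrete toy (not MADMAX's API; every name here is invented), a multi-attribute decision with a contrastive flavor might look like this in plain Haskell:

```haskell
import Data.List (sortBy)
import Data.Ord  (comparing, Down (..))

type Attr        = String
type Alternative = (String, [(Attr, Double)])  -- name, attribute scores

-- Weighted total of an alternative's attribute scores.
score :: [(Attr, Double)] -> Alternative -> Double
score weights (_, attrs) =
  sum [ w * v | (a, w) <- weights, (a', v) <- attrs, a == a' ]

-- Pick the best alternative and report the attributes on which it
-- beats the runner-up: the germ of a contrastive explanation.
decide :: [(Attr, Double)] -> [Alternative] -> (String, [Attr])
decide ws alts =
  case sortBy (comparing (Down . score ws)) alts of
    (bn, ba) : (_, ra) : _ ->
      (bn, [ a | (a, v) <- ba, Just v' <- [lookup a ra], v > v' ])
    [(bn, _)] -> (bn, [])
    []        -> error "no alternatives"
```

A call such as decide [("cost", 0.6), ("safety", 0.4)] alternatives returns the winner together with the attributes on which it beats the runner-up, which is the raw material for explaining why the winner is better.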
  2. In this paper, we present a method for explaining the results produced by dynamic programming (DP) algorithms. Our approach is based on retaining a granular representation of the values that are aggregated during program execution. The explanations created from these granular representations can answer questions of why one result was obtained instead of another and can therefore increase confidence in the correctness of program results. Our focus on dynamic programming is motivated by the fact that it offers a systematic approach to implementing a large class of optimization algorithms that produce decisions based on aggregated value comparisons. It is those decisions that the granular representation can help explain. Moreover, the fact that dynamic programming can be formalized using semirings supports the creation of a Haskell library for dynamic programming that has two important features. First, it allows programmers to specify programs by recurrence relationships from which efficient implementations are derived automatically. Second, the dynamic programs can be formulated generically (as type classes), which supports a smooth transition from programs that only produce results to programs that run with granular representations and also produce explanations. Finally, we demonstrate how to anticipate user questions about program results and how to produce the corresponding explanations automatically in advance.
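The semiring formulation mentioned above is standard and easy to sketch; the library's actual interface may well differ, but the shape is roughly this: a recurrence is written once against a Semiring class and then instantiated at, say, a tropical (min, +) semiring to compute optimal costs.

```haskell
-- Semiring interface: <+> chooses between alternatives,
-- <.> combines values along a sequence of decisions.
class Semiring r where
  zero, one    :: r
  (<+>), (<.>) :: r -> r -> r

-- Tropical (min, +) instance: optimal cost.
newtype MinCost = MinCost Double deriving (Eq, Show)

instance Semiring MinCost where
  zero = MinCost (1 / 0)              -- +infinity: no solution
  one  = MinCost 0                    -- empty solution costs nothing
  MinCost a <+> MinCost b = MinCost (min a b)
  MinCost a <.> MinCost b = MinCost (a + b)

-- A recurrence written once, generically: cheapest way to climb n
-- steps taking strides of size 1 or 2, memoized via a lazy list.
climb :: Semiring r => (Int -> r) -> Int -> r
climb strideCost = go
  where
    memo = map compute [0 ..]
    go n = memo !! n
    compute 0 = one
    compute 1 = strideCost 1 <.> go 0
    compute n = (strideCost 1 <.> go (n - 1))
            <+> (strideCost 2 <.> go (n - 2))

-- e.g. climb (\k -> MinCost (fromIntegral k)) 10 == MinCost 10.0
```

The point of the generic formulation is that MinCost could be replaced by an instance carrying granular value representations, changing what the same recurrence computes without rewriting it.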
  3. Decentralized learning has emerged as an alternative to the popular parameter-server framework, which suffers from a high communication burden, a single point of failure, and scalability issues due to its need for a central server. However, most existing work focuses on a single model shared by all workers regardless of data heterogeneity, which leaves the resulting model performing poorly on individual workers. In this work, we propose a novel personalized decentralized learning algorithm named DePRL that works via shared representations. Our algorithm relies on ideas from representation learning theory to learn a low-dimensional global representation collaboratively among all workers in a fully decentralized manner, together with a user-specific low-dimensional local head that yields a personalized solution for each worker. We show that DePRL achieves, for the first time, a provable linear speedup for convergence with general non-linear representations (i.e., the convergence rate improves linearly with the number of workers). Experimental results support our theoretical findings, showing the superiority of our method in data-heterogeneous environments.
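The division of labor described above can be sketched schematically (this is only one reading of the description, with gradients abstracted away; none of it is DePRL's actual code): workers average the shared representation with their neighbors, while each head is updated purely locally.

```haskell
type Vec = [Double]

-- Each worker holds a shared representation and a private head.
data Worker = Worker { shared :: Vec, localHead :: Vec }

avg :: [Vec] -> Vec
avg vs = map (/ fromIntegral (length vs)) (foldr1 (zipWith (+)) vs)

-- One communication round on a fixed neighbor graph, given as an
-- adjacency list of worker indices (including self-loops): only the
-- shared representation is averaged; heads never leave the worker.
mix :: [[Int]] -> [Worker] -> [Worker]
mix nbrs ws =
  [ w { shared = avg [ shared (ws !! j) | j <- nbrs !! i ] }
  | (i, w) <- zip [0 ..] ws ]

-- A local training step (a gradient update on local data) would be
-- applied worker-by-worker between communication rounds:
localStep :: (Worker -> Worker) -> [Worker] -> [Worker]
localStep = map
```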
  4. Problem solvers vary their approaches to solving problems depending on the context of the problem, the requirements of the solution, and the ways in which the problem and the material needed to solve it are represented. Representations take many forms (e.g., tables, graphs, figures, images, formulas, and visualizations) and are used to communicate information to a problem solver. Engagement with particular representations varies between problem solvers and can influence design and solution quality. A problem solver's evaluation of representations and the reasons for using a representation can be considered factors in problem-solving heuristics. These factors describe problem-solving behaviors that can help us understand problem solvers, and they may reveal important relationships between a problem solver's decisions, their ability to solve a problem, and the overall quality of the solution. We therefore pose the following research question: How do factors of problem-solving heuristics describe the behaviors of engineering students as they solve multiple problems? To answer this question, we interviewed 16 undergraduate civil engineering students. The interviews consisted of a problem-solving portion followed immediately by a semi-structured retrospective interview, with probing questions created from real-time monitoring of the problem-solving session using eye-tracking techniques. The problem-solving portion consisted of three problems related to the concept of head loss in fluid flow through pipes. Each of the three problems included the same four representations, which the students used as approaches to solving the problem. These representations are common ways to present the concept of head loss in pipe flow and included two formulas, a set of tables, and a graph. This paper presents a set of common reasons for the decisions made during the problem-solving process, which help us understand more about the problem-solving behavior of engineering students.
  5. In human-aware planning systems, a planning agent may need to explain its plan to a human user when that plan appears non-feasible or sub-optimal. A popular approach, called model reconciliation, brings the model of the human user closer to the agent's model: the agent provides an explanation that can be used to update the human's model so that the agent's plan is feasible or optimal with respect to it. Existing approaches to this problem have been based on automated planning methods and have been limited to classical planning problems. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach applies not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation: given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and given that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions: (1) we formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) we introduce a number of cost functions that can be used to reflect preferences between explanations; (3) we present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) we empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of the explanations generated by our algorithms through a controlled human user study, in which we develop a proof-of-concept visualization system and use it as a medium for communicating explanations.
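The problem statement lends itself to a small executable reading. The following brute-force Haskell sketch, restricted to propositional Horn clauses, searches for a smallest explanation e, a subset of KBa, whose addition makes KBh entail q; the paper's algorithms and cost functions are far more general than this:

```haskell
import Data.List (sortOn, subsequences)

type Atom   = String
type Clause = ([Atom], Atom)          -- body => head (Horn clause)

-- Forward chaining: all atoms a Horn knowledge base entails.
closure :: [Clause] -> [Atom] -> [Atom]
closure kb facts
  | null new  = facts
  | otherwise = closure kb (facts ++ new)
  where
    new = [ h | (b, h) <- kb
              , all (`elem` facts) b
              , h `notElem` facts ]

entails :: [Clause] -> Atom -> Bool
entails kb q = q `elem` closure kb []

-- Smallest subset e of kbA such that kbH extended with e entails q.
explainFor :: [Clause] -> [Clause] -> Atom -> Maybe [Clause]
explainFor kbA kbH q =
  case [ e | e <- sortOn length (subsequences kbA)
           , entails (kbH ++ e) q ] of
    e : _ -> Just e
    []    -> Nothing
```

Here sortOn length encodes one particular preference (minimum cardinality); the cost functions of contribution (2) generalize exactly this choice.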