

Title: Polynomial-time Implicit Learnability in SMT
To deploy knowledge-based systems in the real world, the challenge of knowledge acquisition must be addressed. Knowledge engineering by hand is a daunting task, so machine learning has been widely proposed as an alternative. However, machine learning has difficulty acquiring rules that feature the kind of exceptions that are prevalent in real-world knowledge. Moreover, it is conjectured to be impossible to reliably learn representations featuring a desirable level of expressiveness. Works by Khardon and Roth and by Juba proposed solutions to such problems by learning to reason directly, bypassing the intractable step of producing an explicit representation of the learned knowledge. These works focused on Boolean, propositional logics. In this work, we consider such implicit learning to reason for arithmetic theories, including logics handled by satisfiability modulo theories (SMT) solvers. We show that for standard fragments of linear arithmetic, we can learn to reason efficiently. These results are consequences of a more general finding: there is an efficient reduction from the learning-to-reason problem for a logic to any sound and complete solver for that logic.
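The construction can be sketched concretely: rather than inducing an explicit theory, the learner answers an entailment query by consulting the solver on each partial example directly. Below is a minimal sketch of this implicit-reasoning idea over linear real arithmetic using the Z3 SMT solver's Python bindings; it illustrates the flavor of the reduction only, not the paper's algorithm, and the example data, the epsilon threshold, and the helper names are our assumptions.

```python
# Minimal sketch of implicit learning-to-reason over linear real arithmetic,
# with Z3 standing in as the sound and complete solver. Illustrative only:
# the data, threshold, and helper names below are assumptions.
from z3 import Real, Solver, Not, unsat

x, y = Real("x"), Real("y")
VARS = {"x": x, "y": y}

# Partial examples: observations drawn from the unknown knowledge base,
# each fixing only a subset of the variables.
examples = [
    {"x": 1, "y": 3},
    {"x": 2, "y": 5},
    {"x": 0},  # y is unobserved (masked) in this example
]

def query_forced_on(query, example):
    """True iff fixing the observed values forces the query to hold,
    i.e., the query's negation becomes unsatisfiable."""
    s = Solver()
    s.add(Not(query))
    for name, val in example.items():
        s.add(VARS[name] == val)
    return s.check() == unsat

def implicitly_entailed(query, examples, epsilon=0.1):
    """Accept a query witnessed on at least a 1 - epsilon fraction of the
    partial examples -- no explicit representation of the learned
    knowledge is ever constructed."""
    hits = sum(query_forced_on(query, ex) for ex in examples)
    return hits >= (1 - epsilon) * len(examples)

print(implicitly_entailed(x >= 0, examples))          # True: forced on all three examples
print(implicitly_entailed(y <= 3 * x + 3, examples))  # False: third example leaves y free
```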
Award ID(s):
1718380
NSF-PAR ID:
10187840
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 24th European Conference on Artificial Intelligence - ECAI 2020
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Improving the performance and explainability of ML algorithms is a priority for their adoption by humans in the real world. In critical domains such as healthcare, such technology has significant potential to reduce the burden on humans and to considerably reduce manual assessment by providing quality assistance at scale. In today's data-driven world, artificial intelligence (AI) systems still struggle with bias, explainability, and human-like reasoning and interpretability. Causal AI is a technique that can reason and make human-like choices, making it possible to go beyond narrow machine-learning-based techniques and to integrate with human decision-making. It also offers intrinsic explainability, adaptability to new domains, bias-free predictions, and the ability to work with datasets of all sizes. In this lecture-style tutorial, we detail how a richer representation of causality in AI systems using a knowledge graph (KG) based approach is needed for intervention and counterfactual reasoning (Figure 1), how we can achieve model-based and domain explainability, and how causal representations help in web and healthcare applications.
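As a toy illustration of the interventional reasoning such a tutorial concerns, causal edges stored as KG triples support a Pearl-style do-intervention via graph surgery, i.e., deleting all edges into the intervened variable. The triples and variable names below are our own hypothetical example, not material from the tutorial.

```python
# Toy causal KG: (cause, "causes", effect) triples. Illustrative only;
# the variables and structure are assumptions, not from the tutorial.
triples = [
    ("smoking", "causes", "tar_deposits"),
    ("tar_deposits", "causes", "cancer"),
    ("genotype", "causes", "smoking"),
    ("genotype", "causes", "cancer"),
]

def do_intervention(triples, variable):
    """Pearl-style graph surgery: do(variable) removes every causal
    edge *into* the intervened variable, cutting confounding paths."""
    return [(c, r, e) for (c, r, e) in triples if e != variable]

# Under do(smoking), the genotype -> smoking edge is removed, so any
# remaining smoking -> cancer influence is causal rather than confounded.
print(do_intervention(triples, "smoking"))
```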
  2. Recently, there has been growing interest in developing machine learning (ML) models that can promote fairness, i.e., eliminate biased predictions towards certain populations (e.g., individuals from a specific demographic group). Most existing works learn such models based on well-designed fairness constraints in optimization. Nevertheless, in many practical ML tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance. This is because existing fairness constraints are designed to restrict the prediction disparity among different sensitive groups, but with few samples it becomes difficult to measure the disparity accurately, rendering fairness optimization ineffective. In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem. To deal with this problem, we devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks. To compensate for insufficient training samples, we propose a strategy to select and leverage an auxiliary set for each meta-test task. These auxiliary sets contain several labeled training samples that can enhance the model's fairness performance on meta-test tasks, thereby allowing the transfer of learned, fairness-oriented knowledge to meta-test tasks. Furthermore, we conduct extensive experiments on three real-world datasets to validate the superiority of our framework over state-of-the-art baselines.
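The disparity that such fairness constraints restrict is typically estimated with a statistic like the demographic parity gap, and the few-shot difficulty is precisely that this estimate is noisy on small samples. A minimal sketch of the measurement (our illustration, not the paper's code):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two
    sensitive groups. With few samples, each group rate is a noisy
    estimate -- the difficulty the fair few-shot setting targets."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Tiny sample: each group's rate rests on only two predictions,
# so a single flipped label would swing the gap by 0.5.
print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))  # |0.5 - 1.0| = 0.5
```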
  3. Knowledge graph reasoning plays a pivotal role in many real-world applications, such as network alignment, computational fact-checking, recommendation, and many more. Among these applications, knowledge graph completion (KGC) and multi-hop question answering over knowledge graphs (multi-hop KGQA) are two representative reasoning tasks. In the vast majority of existing works, the two tasks are considered separately, with different models or algorithms. However, we envision that KGC and multi-hop KGQA are closely related, and the two tasks can therefore benefit from each other if approached jointly. In this work, we propose a neural model named BiNet to jointly handle KGC and multi-hop KGQA, and formulate it as a multi-task learning problem. Specifically, our proposed model leverages a shared embedding space and an answer scoring module, which allows the two tasks to automatically share latent features and learn the interactions between the natural language question decoder and the answer scoring module. Compared to existing methods, the proposed BiNet model addresses both multi-hop KGQA and KGC simultaneously with superior performance. Experimental results show that BiNet outperforms state-of-the-art methods on a wide range of KGQA and KGC benchmark datasets.
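A shared embedding space feeding task-specific scorers is a standard multi-task pattern; the sketch below shows only that general pattern and is a hypothetical simplification, not the actual BiNet architecture.

```python
import torch
import torch.nn as nn

class SharedKGModel(nn.Module):
    """Hypothetical simplification of the shared-embedding idea: one
    entity/relation embedding table feeds both a KGC triple scorer and
    a KGQA answer scorer, so the two tasks share latent features."""
    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # shared entity space
        self.rel = nn.Embedding(n_relations, dim)   # shared relation space
        self.question_encoder = nn.GRU(dim, dim, batch_first=True)

    def kgc_score(self, head, relation, tail):
        # DistMult-style triple score for knowledge graph completion.
        return (self.ent(head) * self.rel(relation) * self.ent(tail)).sum(-1)

    def kgqa_score(self, question_tokens, candidate):
        # Encode the question; score candidates in the *same* entity space.
        _, h = self.question_encoder(question_tokens)
        return (h.squeeze(0) * self.ent(candidate)).sum(-1)

model = SharedKGModel(n_entities=1000, n_relations=50)
h, r, t = torch.tensor([1]), torch.tensor([2]), torch.tensor([3])
q = torch.randn(1, 7, 128)                 # a 7-token question embedding
loss = -model.kgc_score(h, r, t).mean() - model.kgqa_score(q, t).mean()
loss.backward()                            # one joint multi-task gradient step
```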
  4. In contrast to the vast literature on modeling, perceiving, and understanding agent-object (e.g., human-object, hand-object, robot-object) interaction in computer vision and robotics, very few past works have studied object-object interaction, which also plays an important role in robotic manipulation and planning tasks. There is a rich space of object-object interaction scenarios in our daily life, such as placing an object on a messy tabletop, fitting an object inside a drawer, or pushing an object using a tool. In this paper, we propose a unified affordance learning framework to learn object-object interaction for various tasks. By constructing four object-object interaction task environments using physical simulation (SAPIEN) and thousands of ShapeNet models with rich geometric diversity, we are able to conduct large-scale object-object affordance learning without the need for human annotations or demonstrations. As our core technical contribution, we propose an object-kernel point convolution network to reason about detailed interaction between two objects. Experiments on large-scale synthetic and real-world data demonstrate the effectiveness of the proposed approach.
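Reasoning about the interaction between two objects can be loosely sketched as encoding both point clouds with a shared feature extractor and scoring per-point affordances on the target; the network below is our hypothetical stand-in, not the paper's object-kernel point convolution network.

```python
import torch
import torch.nn as nn

class PairAffordanceNet(nn.Module):
    """Loose sketch: encode the acting object into a global code, then
    score how well it 'fits' at each point of the target object.
    Hypothetical stand-in for object-kernel point convolution."""
    def __init__(self, dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.score_head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, target_pts, actor_pts):
        # Per-point features for the target; a max-pooled code for the actor.
        f_target = self.point_mlp(target_pts)                   # (N, dim)
        f_actor = self.point_mlp(actor_pts).max(dim=0).values   # (dim,)
        f_actor = f_actor.expand_as(f_target)                   # broadcast to (N, dim)
        # Per-point affordance score for placing/fitting/pushing the actor here.
        return self.score_head(torch.cat([f_target, f_actor], dim=-1)).squeeze(-1)

net = PairAffordanceNet()
table = torch.randn(2048, 3)   # target object point cloud (e.g., a tabletop)
mug = torch.randn(512, 3)      # acting object point cloud (e.g., a mug)
print(net(table, mug).shape)   # per-point affordance scores: torch.Size([2048])
```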
  5. Demand for image editing has been increasing as users' desire for expression grows. However, for most users, image editing tools are not easy to use, since they require a certain expertise in photo effects and have complex interfaces. Hence, users might need someone to help edit their images, but providing a dedicated human assistant for every user is impossible to scale. For that reason, an automated assistant system for image editing is desirable. Additionally, users want more image sources for diverse image editing work, and integrating an image search functionality into the editing tool is a potential remedy for this demand. Thus, we propose a dataset of an automated Conversational Agent for Image Search and Editing (CAISE). To our knowledge, this is the first dataset that provides conversational image search and editing annotations, where the agent holds a grounded conversation with users and helps them search and edit images according to their requests. To build such a system, we first collect image search and editing conversations between pairs of annotators. The assistant-annotators are equipped with a customized image search and editing tool to address the requests from the user-annotators. The functions that the assistant-annotators execute with the tool are recorded as executable commands, allowing the trained system to be useful for real-world application execution. We also introduce a generator-extractor baseline model for this task, which can adaptively select the source of the next token (i.e., from the vocabulary or from textual/visual contexts) for the executable command. This serves as a strong starting point while still leaving a large human-machine performance gap for useful future work.
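Adaptively selecting the source of the next token resembles a pointer-generator gate that mixes a vocabulary distribution with a copy distribution over context tokens; the sketch below shows that generic mechanism and is a hypothetical illustration, not the CAISE baseline itself.

```python
import torch
import torch.nn as nn

class GeneratorExtractorGate(nn.Module):
    """Minimal pointer-generator-style sketch: a learned gate mixes a
    vocabulary distribution with a copy distribution over context tokens.
    Hypothetical illustration, not the CAISE baseline architecture."""
    def __init__(self, hidden, vocab_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden, vocab_size)
        self.gate = nn.Linear(hidden, 1)

    def forward(self, decoder_state, context_feats, context_token_ids, vocab_size):
        # Generate: distribution over the fixed output vocabulary.
        p_vocab = torch.softmax(self.vocab_proj(decoder_state), dim=-1)
        # Extract: attention over context tokens acts as a copy distribution.
        attn = torch.softmax(context_feats @ decoder_state, dim=-1)
        p_copy = torch.zeros(vocab_size).scatter_add(0, context_token_ids, attn)
        # Gate decides, per step, whether to generate or to copy.
        g = torch.sigmoid(self.gate(decoder_state))
        return g * p_vocab + (1 - g) * p_copy

hidden, vocab = 32, 100
gate = GeneratorExtractorGate(hidden, vocab)
state = torch.randn(hidden)
ctx_feats = torch.randn(5, hidden)          # 5 textual/visual context tokens
ctx_ids = torch.randint(0, vocab, (5,))     # their ids in the output vocabulary
probs = gate(state, ctx_feats, ctx_ids, vocab)
print(probs.sum())                          # ~1.0: a valid mixture distribution
```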