Title: Extending Answer Set Programs with Neural Networks
The integration of low-level perception with high-level reasoning is one of the oldest problems in Artificial Intelligence. Recently, several proposals have been made to implement the reasoning process in complex neural network architectures. While these works aim to extend neural networks with the capability of reasoning, a natural question we consider is the converse: can we extend answer set programs with neural networks to allow complex and high-level reasoning on neural network outputs? As a preliminary result, we propose NeurASP, a simple extension of answer set programs that embraces neural networks by treating neural network outputs as probability distributions over atomic facts in answer set programs. We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help train a neural network better by imposing restrictions expressed as logic rules. However, training with NeurASP takes much more time than pure neural network training due to the internal use of a symbolic reasoning engine. For future work, we plan to investigate ways to address the scalability issue of NeurASP. One potential direction is to embed logic programs directly in neural networks; on this route, we plan to first design a SAT solver using neural networks and then extend such a solver to handle logic programs.
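To make the idea above concrete, the following is a minimal, self-contained sketch (not the NeurASP implementation) of reading neural network outputs as probability distributions over atomic facts and reasoning on top of them. The softmax vectors, the digit-addition rule, and the brute-force enumeration are illustrative assumptions; in NeurASP the distributions would come from a real network and the reasoning would be done by an ASP solver.

```python
# Minimal sketch of the NeurASP idea: neural network outputs are read as
# probability distributions over atomic facts, and a logic rule reasons on
# top of them.  The digit classifier is faked with fixed softmax vectors.
import itertools

# Hypothetical softmax outputs of a digit classifier for two images,
# i.e. P(digit(img1) = d) and P(digit(img2) = d) for d in 0..9.
p_img1 = [0.01, 0.02, 0.80, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02]
p_img2 = [0.02, 0.02, 0.02, 0.70, 0.10, 0.05, 0.03, 0.02, 0.02, 0.02]

def prob_sum_equals(target):
    """P(addition(img1, img2, target)) under the rule
       addition(A,B,N) :- digit(A,D1), digit(B,D2), N = D1 + D2."""
    total = 0.0
    for d1, d2 in itertools.product(range(10), range(10)):
        if d1 + d2 == target:
            # each choice of atomic facts digit(img1,d1), digit(img2,d2)
            # corresponds to one stable model, weighted by the product of
            # the neural probabilities of the chosen atoms
            total += p_img1[d1] * p_img2[d2]
    return total

print(prob_sum_equals(5))  # probability that the two digits sum to 5
```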
Award ID(s):
1815337 2006747
NSF-PAR ID:
10295441
Author(s) / Creator(s):
Date Published:
Journal Name:
Electronic proceedings in theoretical computer science
ISSN:
2075-2180
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as a probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception results by applying symbolic reasoning in answer set programming. NeurASP can also make use of ASP rules to train a neural network better, so that the network learns not only from the implicit correlations in the data but also from the explicit, complex semantic constraints expressed by the rules. A minimal sketch of this rule-guided training idea is given after this list.

     
  2. Ricca, Francesco, et al. (Eds.)
    The integration of low-level perception with high-level reasoning is one of the oldest problems in Artificial Intelligence. Today, the topic is revisited with the recent rise of deep neural networks. However, it is still not clear how complex, high-level reasoning, such as default reasoning, ontology reasoning, and causal reasoning, can be successfully computed by these approaches. Such high-level reasoning has been well studied in the area of knowledge representation (KR), but many KR formalisms, including answer set programming (ASP), are logic-oriented and do not incorporate the high-dimensional feature spaces used in deep learning, which limits the applicability of KR in many practical applications.
  3. Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated exceptional performance in various natural language processing tasks and have shown the ability to solve certain reasoning problems. However, their reasoning capabilities are limited and relatively shallow, despite the application of various prompting techniques. In contrast, formal logic is adept at handling complex reasoning, but translating natural language descriptions into formal logic is a challenging task that non-experts struggle with. This paper proposes a neuro-symbolic method that combines the strengths of large language models and answer set programming. Specifically, we employ an LLM to transform natural language descriptions of logic puzzles into answer set programs. We carefully design prompts for an LLM to convert natural language descriptions into answer set programs in a step-by-step manner. Surprisingly, with just a few in-context learning examples, LLMs can generate reasonably complex answer set programs. Most of the errors made are relatively simple and can be easily corrected by humans, thus enabling LLMs to effectively assist in the creation of answer set programs. A small sketch of this few-shot prompting setup is given after this list.
  4. We present a general approach to planning with incomplete information in Answer Set Programming (ASP). More precisely, we consider the problems of conformant and conditional planning with sensing actions and assumptions. We represent planning problems using a simple formalism where logic programs describe the transition function between states, the initial states, and the goal states. For solving planning problems, we use Quantified Answer Set Programming (QASP), an extension of ASP with existential and universal quantifiers over atoms that is analogous to Quantified Boolean Formulas (QBFs). We define the language of quantified logic programs and use it to represent the solutions to different variants of conformant and conditional planning. On the practical side, we present a translation-based QASP solver that converts quantified logic programs into QBFs and then executes a QBF solver, and we experimentally evaluate the approach on conformant and conditional planning benchmarks. A toy illustration of the exists-plan/forall-initial-state quantifier structure is given after this list.
  5. The globalization of the IC supply chain has raised many security threats, especially when untrusted parties are involved. This has created a demand for a dependable logic obfuscation solution to combat these threats. Among the wide range of threats and countermeasures on logic obfuscation in the 2010s, the Boolean satisfiability (SAT) attack, or one of its derivatives, could break almost all state-of-the-art logic obfuscation countermeasures. However, in some cases, particularly when the logic-locked circuits contain complex structures such as big multipliers, large routing networks, or big tree structures, the logic-locked circuit is hard for the SAT attack to solve. Using these structures for obfuscation may lead to a strong defense, as many SAT solvers fail to handle such complexity. In this paper, we propose a neural-network-guided SAT attack (NNgSAT), in which we examine the capability and effectiveness of a message-passing neural network (MPNN) for solving these complex structures (SAT-hard instances). In NNgSAT, after being trained as a classifier to predict SAT/UNSAT on a SAT problem (i.e., the NN serves as a SAT solver), the neural network is used to guide the actual SAT solver in finding the satisfying assignment(s). By training the NN on conjunctive normal forms (CNFs) corresponding to a dataset of logic-locked circuits, as well as fine-tuning the confidence rate of the NN's predictions, our experiments show that NNgSAT could solve 93.5% of the logic-locked circuits containing complex structures within a reasonable time, while the existing SAT attack cannot complete the attack flow on them. A small sketch of the neural-guided branching idea is given after this list.
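Regarding the rule-guided training mentioned in item 1, the following is a minimal sketch, not NeurASP's actual training procedure: the probability that a rule is satisfied is computed from the networks' output distributions, and its negative log is minimized. It requires PyTorch; the tiny logits stand in for real network outputs, and the constraint ("the two predicted labels must differ") is an invented toy example.

```python
# Sketch of using a logic rule as a training signal (not NeurASP's code).
import torch

torch.manual_seed(0)
# stand-in logits for two network outputs over classes {0, 1, 2};
# random initialization breaks the symmetry of the uniform starting point
logits1 = torch.randn(3, requires_grad=True)
logits2 = torch.randn(3, requires_grad=True)
opt = torch.optim.SGD([logits1, logits2], lr=0.5)

def rule_probability():
    """P(rule satisfied) for the toy constraint 'the two labels differ',
    summed over all joint label choices, weighted by the softmaxes."""
    p1 = torch.softmax(logits1, dim=0)
    p2 = torch.softmax(logits2, dim=0)
    joint = p1.unsqueeze(1) * p2.unsqueeze(0)   # P(label1 = i, label2 = j)
    satisfied = 1.0 - torch.eye(3)              # 1 where i != j
    return (joint * satisfied).sum()

for _ in range(100):
    opt.zero_grad()
    loss = -torch.log(rule_probability())       # semantic-style loss
    loss.backward()
    opt.step()

print(rule_probability().item())  # should approach 1 after training
```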
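For item 3, here is a small sketch of a few-shot prompt that asks an LLM to translate a puzzle description into an answer set program. The puzzle, the in-context example, and the wording of the prompt are illustrative assumptions, not the paper's actual prompts; sending the prompt to GPT-3/GPT-4 is left to whichever client library is used.

```python
# Build a few-shot prompt for natural-language-to-ASP translation.
IN_CONTEXT_EXAMPLE = """\
Puzzle: Three people (alice, bob, carol) each own exactly one pet
(cat, dog, fish). Alice does not own the dog.
ASP:
person(alice; bob; carol).
pet(cat; dog; fish).
1 { owns(P, A) : pet(A) } 1 :- person(P).
:- owns(P1, A), owns(P2, A), P1 != P2.
:- owns(alice, dog).
"""

NEW_PUZZLE = "Bob does not own the fish, and Carol owns the cat."

prompt = (
    "Translate the puzzle description into an answer set program, "
    "step by step, following the example.\n\n"
    + IN_CONTEXT_EXAMPLE
    + "\nPuzzle: " + NEW_PUZZLE + "\nASP:\n"
)
print(prompt)  # send this prompt to the LLM of your choice
```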
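For item 4, the following toy script illustrates only the exists-plan / forall-initial-state quantifier structure behind conformant planning, by brute force; it is not the paper's QASP-to-QBF translation. The two-switch domain and its actions are invented for illustration.

```python
# Toy conformant planning: find a plan that works for ALL initial states.
from itertools import product

ACTIONS = ["toggle(1)", "toggle(2)", "push_off(1)", "push_off(2)"]

def apply(state, action):
    """'toggle(i)' flips switch i; 'push_off(i)' turns it off regardless."""
    s = list(state)
    name, i = action.split("(")[0], int(action[-2]) - 1
    s[i] = (not s[i]) if name == "toggle" else False
    return tuple(s)

def conformant(plan):
    """True iff the plan reaches the goal (both off) from all initial states."""
    for init in product([False, True], repeat=2):   # forall initial states
        state = init
        for a in plan:
            state = apply(state, a)
        if any(state):                              # some switch still on
            return False
    return True

# exists a plan (bounded length) such that forall initial states the goal holds
for length in range(0, 3):
    for plan in product(ACTIONS, repeat=length):
        if conformant(plan):
            print("conformant plan:", list(plan))
            raise SystemExit
```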
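For item 5, here is a self-contained sketch of the guidance idea: per-variable scores that would come from a trained network are used to pick the branching variable in a tiny DPLL search. The solver, the scores, and the example formula are illustrative; this is not the paper's MPNN or its production SAT solver.

```python
# Tiny DPLL search whose branching order is guided by "neural" scores.

def simplify(clauses, assignment):
    """Remove satisfied clauses and falsified literals; None on conflict."""
    out = []
    for clause in clauses:
        new_clause, satisfied = [], False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:
            return None                 # conflict: clause falsified
        out.append(new_clause)
    return out

def dpll(clauses, assignment, scores):
    clauses = simplify(clauses, assignment)
    if clauses is None:                 # some clause falsified
        return None
    if not clauses:                     # all clauses satisfied
        return assignment
    unassigned = {abs(l) for c in clauses for l in c} - set(assignment)
    var = max(unassigned, key=lambda v: scores.get(v, 0.0))  # NN-guided pick
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value}, scores)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
neural_scores = {1: 0.9, 2: 0.2, 3: 0.5}   # stand-in for MPNN confidences
print(dpll(cnf, {}, neural_scores))
```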