Title: Negative Weights in Hinge-Loss Markov Random Fields.
Probabilistic soft logic (PSL) is a framework for instantiating probabilistic graphical models (PGMs) that represent complex relational data. Weighted first-order logical statements are used as templates for creating the potential functions that define the PGM density. Traditionally, PSL constrains weights to be non-negative to ensure that maximum a posteriori (MAP) inference is a tractable convex optimization problem. We propose three novel approaches for extending PSL's expressivity to allow negative weights. Notably, we propose the use of Gödel logic for defining potentials from negatively weighted rules. This method improves upon prior work on the topic by preserving both the convexity and the scale of the MAP inference problem. Moreover, we show where the five methods discussed in this paper overlap and where they differ most. All of the negative-weight methods are implemented in PSL, and we introduce a tunable synthetic dataset designed to empirically compare their predictive performance.
Award ID(s): 2023495
NSF-PAR ID: 10282805
Journal Name: Workshop on Tractable Probabilistic Modeling (TPM)
Format(s): Medium: X
Sponsoring Org: National Science Foundation
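To make the convexity claim in the abstract concrete, the sketch below (an illustration, not the paper's implementation) contrasts the two logics on a single rule A -> B, read as the clause !A | B, with observed evidence a. The Łukasiewicz distance to satisfaction, max(0, a - b), is convex and therefore needs a non-negative weight; under the Gödel t-conorm the clause's distance to satisfaction is min(a, 1 - b), which is concave (a minimum of affine functions), so multiplying it by a *negative* weight still yields a convex energy term. The evidence value and weights are made-up numbers.

```python
# Minimal sketch (not the paper's code): why a Godel-style potential
# keeps MAP inference convex under a negative weight.
import cvxpy as cp

a = 0.8                        # observed truth value of A (assumed evidence)
b = cp.Variable(nonneg=True)   # inferred truth value of B, kept in [0, 1]

# Lukasiewicz distance to satisfaction of A -> B: max(0, a - b).
# Convex, so it requires a non-negative weight in the energy.
luk_potential = cp.maximum(0, a - b)

# Godel distance to satisfaction of the clause !A | B: min(a, 1 - b).
# Concave (minimum of affine functions), so a negative weight times it
# is a convex term.
godel_potential = cp.minimum(a, 1 - b)

w_pos, w_neg = 2.0, -1.5       # illustrative weights
energy = w_pos * luk_potential + w_neg * godel_potential

prob = cp.Problem(cp.Minimize(energy), [b <= 1])
print("convex (DCP-compliant):", prob.is_dcp())  # True
prob.solve()
print("MAP value of b:", round(float(b.value), 3))
```

Because every term stays convex, the combined objective remains a single convex program of the same size as before, which is the sense in which the Gödel construction preserves both the convexity and the scale of MAP inference.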
More Like this
  1. Inference of unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task. Without proper handling, it can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion-inference model, Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions scalably and robustly in the presence of uncertain, adversarial evidence by enhancing Collective Subjective Logic (CSL), a framework developed by combining Subjective Logic (SL) and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust to uncertain, adversarial evidence, formulated as a min-max problem. We validate Adv-COI against baseline models and competitive counterparts under adversarial attacks on logic-rule-based structured data, as well as white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets in three real-world applications. The results show that Adv-COI achieves the lowest mean absolute error on the expected truth probability while also having the lowest running time among all models.
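The min-max idea above can be pictured with a toy robust-regression loop, a sketch under assumed settings rather than the Adv-COI model itself: an inner adversary perturbs the evidence features within an epsilon-ball to maximize the loss, while the outer step updates the parameters to minimize that worst-case loss. The data, step sizes, and attack budget are all invented for illustration.

```python
# Toy min-max training loop (not Adv-COI): robustness to perturbed evidence.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # features from "evidence"
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
w = np.zeros(3)                                    # model parameters
eps, lr_w, lr_d = 0.1, 0.01, 0.05                  # attack budget, step sizes

for _ in range(200):
    # Inner maximization: adversarial perturbation of the evidence,
    # projected back into the L-infinity epsilon-ball each step.
    delta = np.zeros_like(X)
    for _ in range(5):
        resid = (X + delta) @ w - y
        grad_d = resid[:, None] * w[None, :]       # d(loss)/d(delta)
        delta = np.clip(delta + lr_d * grad_d, -eps, eps)
    # Outer minimization: fit parameters on the worst-case evidence.
    resid = (X + delta) @ w - y
    w -= lr_w * (X + delta).T @ resid / len(y)

print("robust estimate:", np.round(w, 2))
```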
  2. Using unreliable information sources that generate conflicting evidence may lead to large uncertainty, which significantly hurts the decision-making process. Recently, many approaches have been proposed to integrate conflicting data from multiple sources and/or to fuse conflicting opinions from different entities. To deal with uncertainty explicitly, a belief model called Subjective Logic (SL), a variant of Dempster-Shafer theory, has been proposed to represent subjective opinions and to merge multiple opinions through a rich set of fusion operators, which have been used to solve many opinion-inference problems in trust networks. However, SL's operators are known to lack scalability when inferring unknown opinions from large network data, owing to the sequential procedure of merging multiple opinions. In addition, SL does not consider deriving opinions in the presence of conflicting evidence. In this work, we propose a hybrid inference method that combines SL and Probabilistic Soft Logic (PSL), namely Collective Subjective Logic Plus (CSL+), which is resilient to highly conflicting evidence or a lack of evidence. PSL can reason about beliefs collectively, handling large-scale network data with high scalability based on the relationships between opinions; however, PSL does not consider an uncertainty dimension in a subjective opinion. CSL+ takes the benefits of both SL and PSL, achieving high scalability and high prediction accuracy for unknown opinions whose uncertainty derives from a lack of evidence and/or conflicting evidence. Through extensive experiments on four semi-synthetic and two real-world datasets, we show that CSL+ outperforms the state-of-the-art belief model (SL), probabilistic inference models (PSL, CSL), and a deep learning model (GCN-VAE-opinion) in terms of prediction accuracy, computational complexity, and actual running time.
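For readers unfamiliar with SL, the minimal sketch below shows the subjective-opinion representation and Jøsang's standard cumulative fusion operator that SL-based models build on; note how the fused opinion's uncertainty shrinks. The example opinions are invented, and CSL+'s collective inference itself is not shown.

```python
# Minimal sketch of Subjective Logic opinions and cumulative fusion.
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty (b + d + u = 1)
    a: float  # base rate (prior probability)

    def expected(self) -> float:
        # Expected truth probability: belief plus base-rate share of uncertainty.
        return self.b + self.a * self.u

def cumulative_fuse(x: Opinion, y: Opinion) -> Opinion:
    """Cumulative fusion of two opinions from independent sources."""
    k = x.u + y.u - x.u * y.u
    return Opinion(
        b=(x.b * y.u + y.b * x.u) / k,
        d=(x.d * y.u + y.d * x.u) / k,
        u=(x.u * y.u) / k,
        a=x.a,  # assumes both sources share the same base rate
    )

strong = Opinion(b=0.7, d=0.1, u=0.2, a=0.5)
weak = Opinion(b=0.2, d=0.2, u=0.6, a=0.5)
fused = cumulative_fuse(strong, weak)
print(fused)                       # fused uncertainty drops below both inputs'
print("expected truth probability:", round(fused.expected(), 3))
```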
  3. Statistical relational learning models are powerful tools that combine ideas from first-order logic with probabilistic graphical models to represent complex dependencies. Despite their success in encoding large problems with a compact set of weighted rules, performing inference over these models is often challenging. In this paper, we show how to effectively combine two powerful ideas for scaling inference in large graphical models. The first idea, lifted inference, is a well-studied approach to speeding up inference in graphical models by exploiting symmetries in the underlying problem. The second idea is to frame maximum a posteriori (MAP) inference as a convex optimization problem and to use the alternating direction method of multipliers (ADMM) to solve it in parallel. A well-studied relaxation of the combinatorial optimization problem defined for logical Markov random fields gives rise to a hinge-loss Markov random field (HL-MRF), for which MAP inference is a convex optimization problem. We show how the formalism introduced for coloring weighted bipartite graphs with a color-refinement algorithm can be integrated with ADMM to take advantage of the sparse dependency structures of HL-MRFs. Our proposed approach, lifted hinge-loss Markov random fields (LHL-MRFs), preserves the structure of the original problem after lifting and solves lifted inference as distributed convex optimization with ADMM. In our empirical evaluation on real-world problems, we observe up to a three-fold speedup in inference over HL-MRFs.
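The color-refinement step that such lifted approaches exploit can be sketched in a few lines: nodes are iteratively recolored by the multiset of their neighbors' colors, and once the partition stabilizes, variables sharing a color can be collapsed into a single lifted variable. This is a generic 1-WL illustration on a toy path graph, not the LHL-MRF implementation.

```python
# Generic color refinement (1-WL) sketch for symmetry detection.
from collections import defaultdict

def color_refinement(nodes, edges):
    """Refine node colors by neighborhood color multisets until stable."""
    neighbors = defaultdict(list)
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    color = {n: 0 for n in nodes}            # start with a uniform coloring
    while True:
        signature = {
            n: (color[n], tuple(sorted(color[m] for m in neighbors[n])))
            for n in nodes
        }
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_color = {n: palette[signature[n]] for n in nodes}
        if new_color == color:               # stable partition reached
            return color
        color = new_color

# Path graph 1-2-3-4: endpoints share one color class, interior nodes another,
# so inference only needs one representative per class.
print(color_refinement([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))
```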
  4. Explaining the results of machine learning algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains including healthcare, defense, and autonomous driving. In this paper, we address this problem in the context of Markov logic networks (MLNs), highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. MLNs are generally considered interpretable models, i.e., they can be understood by humans more easily than models learned by approaches such as deep learning. At the same time, however, it is not straightforward to obtain human-understandable explanations specific to an observed inference result (e.g., a marginal probability estimate). This is because the MLN provides a lifted interpretation, one that generalizes over all possible worlds/instantiations and is not query- or evidence-specific. In this paper, we extract grounded explanations, i.e., explanations defined w.r.t. specific inference queries and observed evidence. We extract these explanations from importance weights defined over the MLN formulas, which encode each formula's contribution to the final inference result. We validate our approach on real-world problems related to analyzing Yelp reviews, and show through user studies that our explanations are richer than those of state-of-the-art non-relational explainers such as LIME.
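As a rough illustration of formula-level attribution (a simplified proxy, not the paper's importance-weight computation), one could score each MLN formula by its weight times the change in its satisfied-grounding count when the query atom flips; the formulas and counts below are invented.

```python
# Toy proxy for formula-level MLN explanations: weight * change in the
# number of satisfied groundings between the query-true and query-false
# worlds. Entries: formula -> (weight, n_satisfied(q=1), n_satisfied(q=0)).
formulas = {
    "Friends(x,y) & Smokes(x) -> Smokes(y)": (1.5, 12, 9),
    "Smokes(x) -> Cancer(x)":                (0.8, 5, 5),
    "!Smokes(x)":                            (0.3, 20, 21),
}

scores = {
    name: w * (n_true - n_false)
    for name, (w, n_true, n_false) in formulas.items()
}
# Rank formulas by the magnitude of their contribution to the query.
for name, s in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
    print(f"{s:+6.2f}  {name}")
```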
  5. Statistical relational learning (SRL) frameworks are effective at defining probabilistic models over complex relational data. They often use weighted first-order logical rules, where the weights govern probabilistic interactions and are usually learned from data. Existing weight-learning approaches typically attempt to learn a set of weights that maximizes some function of the data likelihood; however, this does not always translate to optimal performance on a desired domain metric, such as accuracy or F1 score. In this paper, we introduce a taxonomy of search-based weight-learning approaches for SRL frameworks that directly optimize weights on a chosen domain performance metric. To apply these search-based approaches effectively, we introduce a novel projection, referred to as scaled space (SS), that is an accurate representation of the true weight space. We show that SS removes redundancies in the weight space and captures the semantic distance between possible weight configurations. To improve the efficiency of the search, we also introduce an approximation of SS that simplifies sampling weight configurations. We demonstrate these approaches on two state-of-the-art SRL frameworks: Markov logic networks and probabilistic soft logic. We perform an empirical evaluation on five real-world datasets, each assessed on two different metrics, and compare against four other weight-learning approaches. Our experimental results show that our proposed search-based approaches outperform likelihood-based approaches, yielding up to a 10% improvement across a variety of performance metrics. Further, we perform an extensive evaluation measuring the robustness of our approach to different initializations and hyperparameters; the results indicate that our approach is both accurate and robust.
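One way to picture the search-based approach: since scaling all rule weights by a positive constant leaves MAP predictions unchanged, candidate configurations can be sampled from the probability simplex (one simple reading of the scaled-space idea) and scored directly on the domain metric. The run_inference_and_score stub below is hypothetical and stands in for actually running inference and computing, e.g., F1 on a validation set.

```python
# Sketch of search-based weight learning over normalized weight configurations.
import numpy as np

def run_inference_and_score(weights):
    # Hypothetical stub: run MAP inference with these rule weights and
    # return a domain metric on validation data. Faked here with a smooth
    # bump peaked at an arbitrary "true" configuration.
    target = np.array([0.6, 0.3, 0.1])
    return float(np.exp(-10 * np.sum((weights - target) ** 2)))

rng = np.random.default_rng(0)
best_w, best_score = None, -np.inf
for _ in range(500):
    w = rng.dirichlet(np.ones(3))     # a point on the probability simplex
    score = run_inference_and_score(w)
    if score > best_score:
        best_w, best_score = w, score

print("best weights:", np.round(best_w, 2), "metric:", round(best_score, 3))
```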