%AAl-Farabi, K.M.
%ASarkhel, S.
%ADey, S.
%AVenugopal, D.
%BMachine Learning and Knowledge Discovery in Databases
%D2019
%JMachine Learning and Knowledge Discovery in Databases
%V11907
%MOSTI ID: 10157043
%TFine-Grained Explanations Using Markov Logic
%XExplaining the results of machine learning algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains including healthcare, defense, and autonomous driving. In this paper, we address this problem in the context of Markov Logic Networks (MLNs), which are highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. MLNs are generally considered interpretable models, i.e., MLNs can be understood more easily by humans than models learned by approaches such as deep learning. At the same time, however, it is not straightforward to obtain human-understandable explanations specific to an observed inference result (e.g., a marginal probability estimate). This is because the MLN provides a lifted interpretation, one that generalizes to all possible worlds/instantiations and is not query/evidence specific. In this paper, we extract grounded explanations, i.e., explanations defined w.r.t. specific inference queries and observed evidence. We extract these explanations from importance weights defined over the MLN formulas that encode the contribution of each formula to the final inference results. We validate our approach on real-world problems related to analyzing reviews from Yelp, and show through user studies that our explanations are richer than those of state-of-the-art non-relational explainers such as LIME.
%0Journal Article