
Explaining the results of machine learning algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains including healthcare, defense, and autonomous driving. In this paper, we address this problem in the context of Markov Logic Networks (MLNs), highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. MLNs are generally considered interpretable models, i.e., MLNs can be understood more easily by humans than models learned by approaches such as deep learning. At the same time, however, it is not straightforward to obtain human-understandable explanations specific to an observed inference result (e.g., a marginal probability estimate). This is because the MLN provides a lifted interpretation, one that generalizes to all possible worlds/instantiations and is not query/evidence specific. In this paper, we extract grounded explanations, i.e., explanations defined w.r.t. specific inference queries and observed evidence. We extract these explanations from importance weights defined over the MLN formulas that encode the contribution of formulas towards the final inference results. We validate our approach on real-world problems related to analyzing reviews from Yelp, and show through user studies that our explanations are richer than those of state-of-the-art non-relational explainers such as LIME.
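As an illustrative sketch only (not the paper's actual algorithm), the idea of ranking MLN formulas by query-specific importance weights can be mimicked as follows; the formula strings and weight values below are hypothetical:

```python
# Sketch: given hypothetical importance weights assigned to MLN formulas
# for a specific query/evidence pair, normalize them into contributions
# and keep the top-k formulas as a grounded explanation.

def grounded_explanation(formula_weights, top_k=2):
    """Return the top-k formulas ranked by normalized importance weight."""
    total = sum(abs(w) for w in formula_weights.values())
    contributions = {f: abs(w) / total for f, w in formula_weights.items()}
    return sorted(contributions.items(), key=lambda kv: -kv[1])[:top_k]

# Hypothetical weights for three first-order formulas about Yelp reviews
weights = {
    "PositiveWord(r) => Positive(r)": 1.8,
    "Friends(u, v) ^ Likes(u, b) => Likes(v, b)": 0.6,
    "Spam(r) => !Trusted(r)": 0.2,
}
print(grounded_explanation(weights))
```

The normalization step is one simple choice for turning raw weights into relative contributions; the paper's importance weights are derived from the inference procedure itself.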

Free, publicly-accessible full text available May 1, 2024


Abstract: We present a measurement of the Cabibbo–Kobayashi–Maskawa unitarity triangle angle $\phi_3$ (also known as $\gamma$) using a model-independent Dalitz plot analysis of $B^+ \to D(K_S^0 h^+ h^-)\, h^+$, where $D$ is either a $D^0$ or $\bar{D}^0$ meson and $h$ is either a $\pi$ or $K$. This is the first measurement that simultaneously uses Belle and Belle II data, combining samples corresponding to integrated luminosities of 711 fb$^{-1}$ and 128 fb$^{-1}$, respectively. All data were accumulated from energy-asymmetric $e^+e^-$ collisions at a centre-of-mass energy corresponding to the mass of the $\Upsilon(4S)$ resonance. We measure $\phi_3 = (78.4 \pm 11.4 \pm 0.5 \pm 1.0)^\circ$, where the first uncertainty is statistical, the second is the experimental systematic uncertainty, and the third is from the uncertainties on external measurements of the $D$ decay strong-phase parameters.
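The abstract quotes the three uncertainty components separately. If one assumes they are independent (a common convention, not stated in the abstract itself), a single combined uncertainty can be obtained by adding them in quadrature:

```python
import math

# Quoted uncertainty components of phi_3, in degrees:
stat = 11.4   # statistical
syst = 0.5    # experimental systematic
ext = 1.0     # external D-decay strong-phase inputs

# Combine in quadrature, assuming the components are independent.
total = math.sqrt(stat**2 + syst**2 + ext**2)
print(f"phi_3 = 78.4 +/- {total:.1f} degrees")
```

As expected, the statistical component dominates, so the combined uncertainty is only slightly larger than 11.4°.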