

Title: Learning Predictions for Algorithms with Predictions
Award ID(s):
1910321 1919453 1901403
NSF-PAR ID:
10380389
Author(s) / Creator(s):
Date Published:
Journal Name:
Advances in Neural Information Processing Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. A nuclear physics example of statistical bootstrap is used on the MARATHON nucleon structure function ratio data in the quark momentum fraction regions x_B → 0 and x_B → 1. The extrapolated ratio as the quark momentum fraction x_B → 1 is F_2^n/F_2^p → 0.4 ± 0.05, and this value is compared to theoretical predictions. The extrapolated ratio when x_B → 0 favors the simple model of isospin symmetry with complete dominance of sea quarks at low momentum fraction. At high x_B, the proton quark distribution function ratio d/u is derived from the F_2 ratio and found to be d/u → 1/6. Our extrapolated values for both the F_2^n/F_2^p ratio and the d/u parton distribution function ratio are within uncertainties of perturbative QCD values from quark counting, helicity conservation arguments, and a Dyson-Schwinger equation with a contact interaction model. In addition, it is possible to match the statistical bootstrap value to theoretical predictions by allowing two compatible models to act simultaneously in the nucleon wave function. One such example is a nucleon wave function composed of a linear combination of a quark-diquark state and a three-valence-quark correlated state, with coefficients that combine to give the extrapolated F_2 ratio at x_B = 1. (A minimal bootstrap sketch follows this list.)
  2. Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction to medical diagnosis to fighting fake news. Although machine learning models can sometimes achieve impressive performance in these tasks, these tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human agency. In this paper, we use deception detection as a testbed and investigate how we can harness explanations and predictions of machine learning models to improve human performance while retaining human agency. We propose a spectrum between full human agency and full automation, and develop varying levels of machine assistance along the spectrum that gradually increase the influence of machine predictions. We find that without showing predicted labels, explanations alone slightly improve human performance on the end task. In comparison, human performance is greatly improved by showing predicted labels (>20% relative improvement) and can be further improved by explicitly suggesting strong machine performance. Interestingly, when predicted labels are shown, explanations of machine predictions induce a similar level of accuracy as an explicit statement of strong machine performance. Our results demonstrate a tradeoff between human performance and human agency and show that explanations of machine predictions can moderate this tradeoff. (A sketch of the assistance spectrum follows this list.)
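The first abstract above centers on statistical bootstrap extrapolation. The following is a minimal sketch of that idea in Python, using hypothetical (x_B, F_2^n/F_2^p) data points, an assumed linear fit, and made-up uncertainties; it is not the MARATHON data or the paper's fit model.

```python
# Bootstrap extrapolation of a ratio toward x_B = 1.
# All data values below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements: x_B, F2n/F2p ratio, and per-point uncertainty.
x = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
r = np.array([0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50])
dr = np.full_like(r, 0.02)

n_boot = 10_000
extrap = np.empty(n_boot)
for i in range(n_boot):
    # Resample points with replacement and smear each within its uncertainty.
    idx = rng.integers(0, len(x), len(x))
    r_smeared = r[idx] + rng.normal(0.0, dr[idx])
    # Fit a straight line in x_B (an assumed form, not the paper's model).
    slope, intercept = np.polyfit(x[idx], r_smeared, 1)
    extrap[i] = slope * 1.0 + intercept  # extrapolate to x_B = 1

print(f"F2n/F2p at x_B -> 1: {extrap.mean():.2f} ± {extrap.std():.2f}")
```

The spread of the bootstrap distribution, rather than a single fit's covariance, supplies the quoted uncertainty; that is the essential difference between this approach and one weighted fit.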
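The second abstract describes a spectrum of machine-assistance conditions and reports relative improvement in human accuracy. The snippet below is a hedged illustration of that metric only; the condition names and accuracy values are invented placeholders, not the paper's measurements.

```python
# Relative improvement over a human-only baseline across hypothetical
# assistance conditions, ordered from full human agency toward automation.
conditions = {
    "human only (full agency)": 0.55,   # hypothetical baseline accuracy
    "explanations only": 0.56,          # slight improvement
    "predicted labels shown": 0.67,     # illustrates a >20% relative gain
    "labels + stated machine strength": 0.69,
}

baseline = conditions["human only (full agency)"]
for name, acc in conditions.items():
    rel = (acc - baseline) / baseline
    print(f"{name:34s} accuracy={acc:.2f} relative improvement={rel:+.1%}")
```

For instance, 0.67 against a 0.55 baseline gives (0.67 − 0.55) / 0.55 ≈ +21.8%, which is what a ">20% relative improvement" means here.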