We present two models, based on machine learning theory, of how people form beliefs. We illustrate how these models give insight into observed human phenomena by showing how polarized beliefs can arise even when people are exposed to almost identical sources of information. In our first model, people form beliefs that are deterministic functions that best fit their past data (training sets). In that model, their inability to form probabilistic beliefs can lead people to hold opposing views even if their data are drawn from distributions that only slightly disagree. In the second model, people pay a cost that increases with the complexity of the function that represents their beliefs. In this second model, even with large training sets drawn from exactly the same distribution, agents can disagree substantially because they simplify the world along different dimensions. We discuss what these models of belief formation suggest for improving people's accuracy and agreement.
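The first model's polarization mechanism can be sketched in a few lines. This is an illustrative toy, not the paper's formal model: two agents must commit to a deterministic (0/1) belief that best fits their own sample, and their samples come from distributions that disagree only slightly (the sample sizes and the gap `eps` are arbitrary choices for the demonstration).

```python
import random

random.seed(0)

def best_deterministic_belief(sample):
    """Best-fitting deterministic prediction for binary data: the majority label."""
    return int(sum(sample) > len(sample) / 2)

n = 100_000
eps = 0.02  # the two data sources disagree only slightly
sample_a = [int(random.random() < 0.5 + eps) for _ in range(n)]
sample_b = [int(random.random() < 0.5 - eps) for _ in range(n)]

belief_a = best_deterministic_belief(sample_a)  # almost surely 1
belief_b = best_deterministic_belief(sample_b)  # almost surely 0
print(belief_a, belief_b)
```

Although both sources put the outcome near 50/50, agents forced into deterministic beliefs end up at opposite extremes; probabilistic agents would instead hold nearly identical beliefs of about 0.52 and 0.48.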
- Publication Date:
- NSF-PAR ID: 10225699
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 118
- Issue: 19
- Page Range or eLocation-ID: Article No. e2010144118
- ISSN: 0027-8424
- Publisher: Proceedings of the National Academy of Sciences
- Sponsoring Org: National Science Foundation
More Like this
- Obeid, I. (Ed.) The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled "MRI: High Performance Digital Pathology Using Big Data and Machine Learning" [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high quality annotations of breast tissue. It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4] and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not […]
- In the present article, we follow up on our recent work on the experimental assessment of two data-driven nonlinear system identification methodologies. The first methodology constructs a single nonlinear-mode model from periodic vibration data obtained under phase-controlled harmonic excitation. The second methodology constructs a state-space model with polynomial nonlinear terms from vibration data obtained under uncontrolled broadband random excitation. The conclusions drawn from our previous (experimental) work were limited by uncertainties inherent to the specimen, instrumentation, and signal processing. To avoid these uncertainties in the present work, we pursued a completely numerical approach based on synthetic measurement data obtained from simulated experiments. Three benchmarks are considered, which feature geometric, unilateral contact, and dry friction nonlinearity, respectively. As in our previous work, we assessed the prediction accuracy of the identified models with a focus on the regime near a particular resonance. This way, we confirmed our findings on the strengths and weaknesses of the two methodologies and derived several new findings: First, the state-space method struggles even for polynomial nonlinearities if the training data is chaotic. Second, the polynomial state-space models can reach high accuracy only in a rather limited range of vibration levels for systems with non-polynomial nonlinearities. Such cases demonstrate […]
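The core idea of identifying polynomial nonlinear terms from synthetic measurement data can be illustrated with a deliberately simplified, static example (the article itself identifies dynamic state-space models; the cubic-stiffness coefficients and noise level below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurement": restoring force of a cubic (Duffing-type)
# stiffness, the kind of polynomial nonlinearity such methods target.
k1, k3 = 2.0, 0.5                     # assumed true coefficients
x = rng.uniform(-2, 2, 500)           # simulated displacement samples
f = k1 * x + k3 * x**3 + rng.normal(0, 0.01, x.size)  # noisy force

# Least-squares fit of the polynomial model f ~ c1*x + c3*x^3
A = np.column_stack([x, x**3])
(c1, c3), *_ = np.linalg.lstsq(A, f, rcond=None)
print(round(c1, 2), round(c3, 2))
```

Because the data here are broadband and non-chaotic, the least-squares fit recovers the coefficients accurately; the abstract's point is precisely that this breaks down for chaotic training data or non-polynomial nonlinearities.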
- The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia. Introduction: This decade has seen an ever-growing number of scientific fields benefiting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data […]
- People naturally bring their prior beliefs to bear on how they interpret new information, yet few formal models account for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with the hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people's subjective distributions, and to different datasets. We find that people do not behave consistently with Bayesian predictions for large-sample-size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including visualizations of uncertainty.
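The normative benchmark in such an evaluation is ordinary conjugate Bayesian updating. A minimal sketch for a proportion (the Beta-Binomial case; the elicited prior and the counts shown in the visualization are hypothetical numbers, not from the study):

```python
# Normative Bayesian updating for a proportion (Beta-Binomial), the kind
# of benchmark people's post-visualization judgments are compared against.
def posterior(prior_a, prior_b, successes, trials):
    """Conjugate update of a Beta(prior_a, prior_b) prior after Binomial data."""
    return prior_a + successes, prior_b + (trials - successes)

def mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical elicited prior: the user believes the rate is ~30%.
a0, b0 = 3.0, 7.0
# The visualization then shows 45 successes in 100 cases.
a1, b1 = posterior(a0, b0, 45, 100)
print(round(mean(a0, b0), 3), round(mean(a1, b1), 3))  # prior vs posterior mean
```

Comparing a participant's elicited post-visualization distribution against this normative posterior quantifies how close their updating is to Bayesian, which is the evaluation idea the abstract describes.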
- Hurricanes are devastating natural disasters. In deciding how to respond to a hurricane, in particular whether and when to evacuate, a decision-maker must weigh often highly uncertain and contradictory information about the future path and intensity of the storm. To effectively plan to help people during a hurricane, it is crucial to be able to predict and understand this evacuation decision. To this end, we propose a computational model of human sequential decision-making in response to a hurricane based on a Partially Observable Markov Decision Process (POMDP) that models concerns, uncertain beliefs about the hurricane, and future information. We evaluate the model in two ways. First, hurricane data from 2018 were used to evaluate the model's predictive ability on real data. Second, a simulation study was conducted to qualitatively evaluate the sequential aspect of the model, illustrating the role that the acquisition of future, more accurate information can play in current decision-making. The evaluation with 2018 hurricane season data shows that our proposed features are significant predictors and that the model predicts the data well, within and across distinct hurricane datasets. The simulation results show that, across different setups, our model generates predictions on the […]
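The belief-update core of such a POMDP can be sketched with two hidden states (storm hits / misses) and noisy forecast observations. This is only the Bayesian filtering step, not the full POMDP with actions and rewards, and the observation probabilities and evacuation threshold below are invented for illustration:

```python
def update_belief(belief_hit, obs_is_threatening,
                  p_obs_given_hit=0.8, p_obs_given_miss=0.3):
    """Bayes update of P(storm hits) after one noisy forecast observation."""
    if obs_is_threatening:
        like_hit, like_miss = p_obs_given_hit, p_obs_given_miss
    else:
        like_hit, like_miss = 1 - p_obs_given_hit, 1 - p_obs_given_miss
    num = like_hit * belief_hit
    return num / (num + like_miss * (1 - belief_hit))

belief = 0.2          # prior: 20% chance the storm hits
EVACUATE_AT = 0.7     # illustrative policy threshold on the belief
for obs in [True, True, True]:   # three threatening forecasts in a row
    belief = update_belief(belief, obs)
    print(round(belief, 3), "evacuate" if belief > EVACUATE_AT else "wait")
```

Each successive forecast sharpens the belief (0.4, then 0.64, then about 0.83), and the decision flips to "evacuate" only once the belief crosses the threshold, which is the sequential role of future, more accurate information that the simulation study examines.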