

Title: Context dependency in risky decision making: Is there a description-experience gap?
When making decisions involving risk, people may learn about the risk from descriptions or from experience. The description-experience gap refers to the difference in decision patterns driven by this discrepancy in learning format. Across two experiments, we investigated whether learning from description versus experience differentially affects the direction and the magnitude of a context effect in risky decision making. In Studies 1 and 2, a computerized game called the Decisions about Risk Task (DART) was used to measure people’s risk-taking tendencies toward hazard stimuli that exploded probabilistically. The rate at which a context hazard caused harm was manipulated, while the rate at which a focal hazard caused harm was held constant. The format by which this information was learned was also manipulated; it was learned primarily by experience or by description. The results revealed that participants’ behavior toward the focal hazard varied depending on what they had learned about the context hazard. Specifically, there were contrast effects in which participants were more likely to choose a risky behavior toward the focal hazard when the harm rate posed by the context hazard was high rather than low. Critically, these contrast effects were of similar strength irrespective of whether the risk information was learned from experience or description. Participants’ verbal assessments of risk likelihood also showed contrast effects, irrespective of learning format. Although risk information about a context hazard in DART does nothing to affect the objective expected value of risky versus safe behaviors toward focal hazards, it did affect participants’ perceptions and behaviors—regardless of whether the information was learned from description or experience. Our findings suggest that context has a broad-based role in how people assess and make decisions about hazards.
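To make the expected-value point above concrete, here is a minimal sketch (not the published DART implementation) showing why the context hazard's harm rate leaves the objective expected value of acting on the focal hazard unchanged: only the focal hazard's constant harm rate enters that computation. All payoffs and rates below are hypothetical, chosen purely for illustration.

```python
# Minimal sketch (hypothetical payoffs and rates, not the actual DART parameters).
FOCAL_HARM_RATE = 0.25                                   # held constant across conditions
CONTEXT_HARM_RATES = {"low-context": 0.05, "high-context": 0.60}  # manipulated

REWARD_RISKY = 10   # hypothetical payoff for approaching a hazard that does not explode
LOSS_HARM = -50     # hypothetical cost if the hazard explodes
REWARD_SAFE = 0     # avoiding the hazard yields nothing


def expected_value(harm_rate: float, act_risky: bool) -> float:
    """Expected payoff of a single encounter with a hazard."""
    if not act_risky:
        return REWARD_SAFE
    return (1 - harm_rate) * REWARD_RISKY + harm_rate * LOSS_HARM


for condition, context_rate in CONTEXT_HARM_RATES.items():
    # The context hazard's rate (context_rate) never enters the focal-hazard
    # computation, so the objectively optimal response to the focal hazard is
    # identical in both conditions; only participants' behavior shifted.
    ev_risky_focal = expected_value(FOCAL_HARM_RATE, act_risky=True)
    ev_safe_focal = expected_value(FOCAL_HARM_RATE, act_risky=False)
    print(f"{condition}: EV(risky toward focal) = {ev_risky_focal}, EV(safe) = {ev_safe_focal}")
```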
Award ID(s): 1851738
NSF-PAR ID: 10216391
Author(s) / Creator(s):
Editor(s): Worthy, Darrell A.
Date Published:
Journal Name: PLOS ONE
Volume: 16
Issue: 2
ISSN: 1932-6203
Page Range / eLocation ID: e0245969
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract (100 words): Jurors are increasingly exposed to scientific information in the courtroom. To determine whether providing jurors with gist information would assist in their ability to make well-informed decisions, the present experiment utilized a Fuzzy Trace Theory-inspired intervention and tested it against traditional legal safeguards (i.e., judge instructions) by varying the scientific quality of the evidence. The results indicate that jurors who viewed high quality evidence rated the scientific evidence significantly higher than those who viewed low quality evidence, but were unable to moderate the credibility of the expert witness and apply damages appropriately, resulting in poor calibration.
    Summary (<1000 words): Jurors and juries are increasingly exposed to scientific information in the courtroom, and it remains unclear when they will base their decisions on a reasonable understanding of the relevant scientific information. Without such knowledge, the ability of jurors and juries to make well-informed decisions may be at risk, increasing chances of unjust outcomes (e.g., false convictions in criminal cases). Therefore, there is a critical need to understand conditions that affect jurors’ and juries’ sensitivity to the qualities of scientific information and to identify safeguards that can assist with scientific calibration in the courtroom. The current project addresses these issues with an ecologically valid experimental paradigm, making it possible to assess causal effects of evidence quality and safeguards as well as the role of a host of individual difference variables that may affect perceptions of testimony by scientific experts as well as liability in a civil case. Our main goal was to develop a simple, theoretically grounded tool to enable triers of fact (individual jurors) with a range of scientific reasoning abilities to appropriately weigh scientific evidence in court. We did so by testing a Fuzzy Trace Theory-inspired intervention in court and testing it against traditional legal safeguards. Appropriate use of scientific evidence reflects good calibration – which we define as being influenced more by strong scientific information than by weak scientific information. Inappropriate use reflects poor calibration – defined as relative insensitivity to the strength of scientific information. Fuzzy Trace Theory (Reyna & Brainerd, 1995) predicts that techniques for improving calibration can come from presentation of an easy-to-interpret, bottom-line “gist” of the information. Our central hypothesis was that laypeople’s appropriate use of scientific information would be moderated both by external situational conditions (e.g., the quality of the scientific information itself, a decision aid designed to convey clearly the “gist” of the information) and by individual differences among people (e.g., scientific reasoning skills, cognitive reflection tendencies, numeracy, need for cognition, attitudes toward and trust in science). Identifying factors that promote jurors’ appropriate understanding of and reliance on scientific information will contribute to general theories of reasoning based on scientific evidence, while also providing an evidence-based framework for improving the courts’ use of scientific information. All hypotheses were preregistered on the Open Science Framework.
Method
    Participants completed six questionnaires (counterbalanced): Need for Cognition Scale (NCS; 18 items), Cognitive Reflection Test (CRT; 7 items), Abbreviated Numeracy Scale (ABS; 6 items), Scientific Reasoning Scale (SRS; 11 items), Trust in Science (TIS; 29 items), and Attitudes towards Science (ATS; 7 items). Participants then viewed a video depicting a civil trial in which the defendant sought damages from the plaintiff for injuries caused by a fall. The defendant (bar patron) alleged that the plaintiff (bartender) pushed him, causing him to fall and hit his head on the hard floor. Participants were informed at the outset that the defendant was liable; therefore, their task was to determine if the plaintiff should be compensated. Participants were randomly assigned to 1 of 6 experimental conditions: 2 (quality of scientific evidence: high vs. low) x 3 (safeguard to improve calibration: gist information, no-gist information [control], judge instructions). An expert witness (neuroscientist) hired by the court testified regarding the scientific strength of fMRI data (high [90 to 10 signal-to-noise ratio] vs. low [50 to 50 signal-to-noise ratio]) and presented gist or no-gist information both verbally (i.e., fairly high/about average) and visually (i.e., a graph). After viewing the video, participants were asked if they would like to award damages. If they indicated yes, they were asked to enter a dollar amount. Participants then completed the Positive and Negative Affect Schedule-Modified Short Form (PANAS-MSF; 16 items), the expert Witness Credibility Scale (WCS; 20 items), Witness Credibility and Influence on damages ratings for each witness, manipulation check questions, and Understanding Scientific Testimony (UST; 10 items); 3 additional measures were collected but are beyond the scope of the current investigation. Finally, participants completed demographic questions, including questions about their scientific background and experience. The study was completed via Qualtrics, with participation from students (online vs. in-lab), MTurkers, and non-student community members. After removing those who failed attention check questions, 469 participants remained (243 men, 224 women, 2 did not specify gender) from a variety of racial and ethnic backgrounds (70.2% White, non-Hispanic).
    Results and Discussion
    There were three primary outcomes: quality of the scientific evidence, expert credibility (WCS), and damages. During initial analyses, each dependent variable was submitted to a separate 3 Gist Safeguard (gist information, no-gist information, judge instructions) x 2 Scientific Quality (high, low) Analysis of Variance (ANOVA). Consistent with hypotheses, there was a significant main effect of scientific quality on strength of evidence, F(1, 463) = 5.099, p = .024; participants who viewed the high quality evidence rated the scientific evidence significantly higher (M = 7.44) than those who viewed the low quality evidence (M = 7.06). There were no significant main effects or interactions for witness credibility, indicating that the expert who provided scientific testimony was seen as equally credible regardless of scientific quality or gist safeguard. Finally, for damages, consistent with hypotheses, there was a marginally significant interaction between Gist Safeguard and Scientific Quality, F(2, 273) = 2.916, p = .056.
However, post hoc t-tests revealed that significantly higher damages were awarded for low (M = 11.50) versus high (M = 10.51) scientific quality evidence, F(1, 273) = 3.955, p = .048, in the no-gist with judge instructions safeguard condition, which was contrary to hypotheses. The data suggest that the judge instructions alone reversed this pattern: although the difference was nonsignificant, those who received the no-gist safeguard without judge instructions awarded higher damages in the high (M = 11.34) versus low (M = 10.84) scientific quality evidence conditions, F(1, 273) = 1.059, p = .30. Together, these provide promising initial results indicating that participants were able to effectively differentiate between high and low scientific quality of evidence, though they utilized the scientific evidence inappropriately, as shown by their inability to discern expert credibility and apply damages, resulting in poor calibration. These results will provide the basis for more sophisticated analyses, including higher order interactions with individual differences (e.g., need for cognition) as well as tests of mediation using path analyses. [References omitted but available by request]
    Learning Objective: Participants will be able to determine whether providing jurors with gist information would assist in their ability to award damages in a civil trial.
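For readers who want to see the shape of the analysis described above, the following is a minimal, illustrative sketch (not the authors' analysis code) of a 3 (Gist Safeguard) x 2 (Scientific Quality) between-subjects ANOVA run on simulated evidence ratings. The cell means, sample sizes, and the use of pandas/statsmodels are assumptions made only for illustration.

```python
# Illustrative 3 x 2 between-subjects ANOVA on simulated data (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 78  # roughly 469 participants spread over 6 cells (assumption)

rows = []
for safeguard in ["gist", "control", "judge_instructions"]:
    for quality in ["high", "low"]:
        base = 7.4 if quality == "high" else 7.1      # hypothetical cell means
        ratings = rng.normal(loc=base, scale=1.5, size=n_per_cell)
        rows += [{"safeguard": safeguard, "quality": quality, "rating": r} for r in ratings]

df = pd.DataFrame(rows)

# Fit the factorial model and print the ANOVA table (Type II sums of squares).
model = ols("rating ~ C(safeguard) * C(quality)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```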
  2. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem
    Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†
    * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section.
    J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.
    Introduction
    This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally in IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.
    The challenge: enabling wide participation
    With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
    1. No participant should need to have in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist).
    2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently).
    4. No participant should need to provide their own computing resources.
    In addition to the above, our platform should further
    • guide participants through the entire process from sign-up to model submission,
    • facilitate collaboration, and
    • provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
    The platform
    The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, as well as scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data storage location to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
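As an illustration of the participant-facing structure just described, below is a rough sketch of the kind of hooks the starter kit notebook exposed. The function names, signatures, and window length are hypothetical placeholders introduced here for illustration; they are not the actual IBM Watson Studio, WML, or COS APIs, and in the real starter kit the data access, job submission, and storage were handled invisibly for participants.

```python
# Hypothetical sketch of participant-editable cells in a starter-kit-style notebook.
# Only numpy is assumed; the real kit also integrated COS/WML behind the scenes.
import numpy as np


def preprocess(raw_eeg: np.ndarray, fs: int = 250) -> np.ndarray:
    """Participant-defined pre-processing: slice a (channels, samples) recording
    into fixed-length windows and z-score each window per channel."""
    window = fs * 10                                        # 10-second windows (assumption)
    n = (raw_eeg.shape[1] // window) * window
    segments = raw_eeg[:, :n].reshape(raw_eeg.shape[0], -1, window)
    return (segments - segments.mean(-1, keepdims=True)) / (segments.std(-1, keepdims=True) + 1e-8)


def build_model():
    """Participant-defined model; the real starter kit supported TensorFlow and PyTorch."""
    raise NotImplementedError("teams plugged their seizure-detection model in here")


def postprocess(window_scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Participant-defined post-processing: turn per-window scores into seizure labels."""
    return (window_scores >= threshold).astype(int)
```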
Figure 1: High-level architecture of the challenge platform
    Measuring success
    The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best performing solutions reached seizure detection performances that would allow epileptologists to reduce a hundred-fold the time needed to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists.
    Figure 2: Primary occupation as reported by challenge participants
    Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.
    Conclusion
    Given the growing complexity of data science problems and increasing dataset sizes, it is imperative to enable collaboration between people with different expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
  3. Background/Context:

    Computer programming is rarely accessible to K–12 students, especially to those from culturally and linguistically diverse backgrounds. Middle school age is a transitional time when adolescents are more likely to make long-term decisions regarding their academic choices and interests. Having access to productive and positive knowledge and experiences in computer programming can grant them opportunities to realize their abilities and potential in this field.

    Purpose/Focus of Study:

    This study focuses on the exploration of the kind of relationship that bilingual Latinx students developed with themselves and computer programming and mathematics (CPM) practices through their participation in a CPM after-school program, first as students and then as cofacilitators teaching CPM practices to other middle school peers.

    Setting:

    An after-school program, Advancing Out-of-School Learning in Mathematics and Engineering (AOLME), was held at two middle schools located in rural and urban areas in the Southwest. It was designed to support an inclusive cultural environment that nurtured students’ opportunities to learn CPM practices through the inclusion of languages (Spanish and English), tasks, and participants congruent to students in the program. Students learned how to represent, design, and program digital images and videos using a sequence of 2D arrays of hexadecimal numbers with Python on a Raspberry Pi computer. The six bilingual cofacilitators attended Levels 1 and 2 as students and were offered the opportunity to participate as cofacilitators in the next implementation of Level 1.
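    For a sense of what the students' artifacts might have looked like, here is a minimal sketch in the spirit of the activity described above; it is hypothetical illustration, not AOLME's actual curriculum code. It represents a tiny image as a 2D array of hexadecimal color values and decodes each pixel into its red, green, and blue components.

```python
# Hypothetical example: a 3x3 image as a 2D array of hexadecimal colour values.
image = [
    ["0xFFFFFF", "0x000000", "0xFFFFFF"],
    ["0x000000", "0xFF0000", "0x000000"],   # 0xFF0000 = red centre pixel
    ["0xFFFFFF", "0x000000", "0xFFFFFF"],
]

for row in image:
    for pixel in row:
        value = int(pixel, 16)                       # hex string -> integer
        r = (value >> 16) & 0xFF                     # red channel
        g = (value >> 8) & 0xFF                      # green channel
        b = value & 0xFF                             # blue channel
        print(f"({r:3d},{g:3d},{b:3d})", end=" ")
    print()
```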

    Research Design:

    This longitudinal case study focused on analyzing the experiences and shifts (if any) of students who participated as cofacilitators in AOLME. Their narratives were analyzed collectively, and our analysis describes the experiences of the cofacilitators as a single case study (with embedded units) of what it means to be a bilingual cofacilitator in AOLME. Data included individual exit interviews of the six cofacilitators and their focus groups (30–45 minutes each), an adapted 20-item CPM attitude 5-point Likert scale, and self-report from each of them. Results from attitude scales revealed cofacilitators’ greater initial and posterior connections to CPM practices. The self-reports on CPM included two number lines (0–10) for before and after AOLME for students to self-assess their liking and knowledge of CPM. The numbers were used as interview prompts to converse with students about experiences. The interview data were analyzed qualitatively and coded through a contrast-comparative process regarding students’ description of themselves, their experiences in the program, and their perception of and relationship toward CPM practices.

    Findings:

    Findings indicated that students had continued/increased motivation and confidence in CPM as they engaged in a journey as cofacilitators, described through two thematic categories: (a) shifting views by personally connecting to CPM, and (b) affirming CPM practices through teaching. The shift in connecting to CPM practices evolved as students argued that they found a new way of learning mathematics, in that they used mathematics as a tool to create videos and images that they programmed by using Python while making sense of the process bilingually (Spanish and English). This mathematics was viewed by students as high level, which in turn helped students gain self-confidence in CPM practices. Additionally, students affirmed their knowledge and confidence in CPM practices by teaching them to others, a process in which they had to mediate beyond the understanding of CPM practices. They came up with new ways of explaining CPM practices bilingually to their peers. In this new role, cofacilitators considered the topic and language, and promoted communal support among the peers they worked with.

    Conclusions/Recommendations:

    Bilingual middle school students can not only program, but also teach bilingually and embrace new roles with nurturing support. Schools can promote new student roles, which can yield new goals and identities. There is a great need to redesign the school mathematics curriculum as a discipline that teenagers can use and connect with by creating and finding things they care about. In this way, school mathematics can support a closer “fit” with students’ identification with the world of mathematics. Cofacilitators learned more about CPM practices by teaching them, extending beyond what was given to them, and constructing new goals that were in line with more sophisticated knowledge and shifts in practice. Assigned responsibility in a new role can strengthen students’ self-image, agency, and ways of relating to mathematics.

     
  4. Navigating conflict is integral to decision-making, serving a central role both in the subjective experience of choice as well as contemporary theories of how we choose. However, the lack of a sensitive, accessible, and interpretable metric of conflict has led researchers to focus on choice itself rather than how individuals arrive at that choice. Using mouse-tracking—continuously sampling computer mouse location as participants decide—we demonstrate the theoretical and practical uses of dynamic assessments of choice from decision onset through conclusion. Specifically, we use mouse tracking to index conflict, quantified by the relative directness to the chosen option, in a domain for which conflict is integral: decisions involving risk. In deciding whether to accept risk, decision makers must integrate gains, losses, status quos, and outcome probabilities, a process that inevitably involves conflict. Across three preregistered studies, we tracked participants’ motor movements while they decided whether to accept or reject gambles. Our results show that 1) mouse-tracking metrics of conflict sensitively detect differences in the subjective value of risky versus certain options; 2) these metrics of conflict strongly predict participants’ risk preferences (loss aversion and decreasing marginal utility), even on a single-trial level; 3) these mouse-tracking metrics outperform participants’ reaction times in predicting risk preferences; and 4) manipulating risk preferences via a broad versus narrow bracketing manipulation influences conflict as indexed by mouse tracking. Together, these results highlight the importance of measuring conflict during risky choice and demonstrate the usefulness of mouse tracking as a tool to do so.
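To illustrate how "relative directness to the chosen option" can be quantified from a mouse trajectory, here is a minimal sketch (not the authors' code) of one widely used metric: the maximum perpendicular deviation from the straight line between the starting position and the clicked option. Larger deviations are typically read as greater choice conflict. The example trajectory below is invented for illustration.

```python
# Minimal sketch of a trajectory-directness (conflict) metric for mouse-tracking data.
import numpy as np


def max_deviation(xy: np.ndarray) -> float:
    """xy: (n_samples, 2) array of mouse coordinates from decision onset to click.
    Returns the maximum perpendicular distance from the straight start->end path."""
    start, end = xy[0], xy[-1]
    direct = end - start
    length = np.linalg.norm(direct)
    if length == 0:
        return 0.0
    rel = xy - start
    # Perpendicular distance of every sample from the start->end line (2D cross product).
    perp = np.abs(rel[:, 0] * direct[1] - rel[:, 1] * direct[0]) / length
    return float(perp.max())


# Example: a trajectory that bows toward the unchosen option before committing.
trajectory = np.array([[0.0, 0.0], [0.1, 0.4], [0.35, 0.8], [0.8, 0.95], [1.0, 1.0]])
print(max_deviation(trajectory))
```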

     
  5. Theory—understanding mental processes that drive decisions—is important to help patients and providers make decisions that reflect medical advances and personal values. Building on a 2008 review, we summarize current tenets of fuzzy-trace theory (FTT) in light of new evidence that provides insight regarding mental representations of options and how such representations connect to values and evoke emotions. We discuss implications for communicating risks, preventing risky behaviors, discouraging misinformation, and choosing appropriate treatments. Findings suggest that simple, fuzzy but meaningful gist representations of information often determine decisions. Within minutes of conversing with their doctor, reading a health-related web post, or processing other health information, patients rely on gist memories of that information rather than verbatim details. This fuzzy-processing preference explains puzzles and paradoxes in how patients (and sometimes providers) think about probabilities (e.g., “50-50” chance), outcomes of treatment (e.g., with antibiotics), experiences of pain, end-of-life decisions, memories for medication instructions, symptoms of concussion, and transmission of viruses (e.g., in AIDS and COVID-19). As examples, participation in clinical trials or seeking treatments with low probabilities of success (e.g., with antibiotics or at the end of life) may indicate a defensibly different categorical gist perspective on risk as opposed to simply misunderstanding probabilities or failing to make prescribed tradeoffs. Thus, FTT explains why people avoid precise tradeoffs despite computing them. Facilitating gist representations of information offers an alternative approach that goes beyond providing uninterpreted “neutral” facts versus persuading or shifting the balance between fast versus slow thinking (or emotion vs. cognition). In contrast to either taking mental shortcuts or deliberating about details, gist processing facilitates application of advanced knowledge and deeply held values to choices.
    Highlights
    • Fuzzy-trace theory (FTT) supports practical approaches to improving health and medicine.
    • FTT differs in important respects from other theories of decision making, which has implications for how to help patients, providers, and health communicators.
    • Gist mental representations emphasize categorical distinctions, reflect understanding in context, and help cue values relevant to health and patient care.
    • Understanding the science behind theory is crucial for evidence-based medicine.