Title: Do People Prescribe (Over)Optimism?
Past work has suggested that people prescribe optimism—believing it is better to be optimistic, instead of accurate or pessimistic, about uncertain future events. Here, we identified and addressed an important ambiguity about whether those findings reflect an endorsement of biased beliefs—i.e., whether people prescribe likelihood estimates that reflect overoptimism. In three studies, participants (total N = 663 U.S. university students) read scenarios about protagonists facing uncertain events with a desired outcome. Results replicated prescriptions of optimism when using the same solicitations as in past work. However, we found quite different prescriptions when using alternative solicitations that asked about potential bias in likelihood estimations and that did not involve vague terms like “optimistic.” Participants generally prescribed being optimistic, feeling optimistic, and even thinking optimistically about the events, but they did not prescribe overestimating the likelihood of those events.
Editors:
Brandt, M.; Bauer, P.
Award ID(s):
1851738
NSF-PAR ID:
10216393
Journal Name:
Psychological Science
ISSN:
0956-7976
Sponsoring Org:
National Science Foundation
More Like this
  2. Baron, J. (Ed.)
    People often use tools for tasks, and sometimes there is uncertainty about whether a given task can be completed with a given tool. This project explored whether, when, and how people’s optimism about successfully completing a task with a given tool is affected by the contextual salience of a better or worse tool. In six studies, participants were faced with novel tasks. For each task, they were assigned a tool but also exposed to a comparison tool that was better or worse in utility (or sometimes similar in utility). In some studies, the tool comparisons were essentially social comparisons, because the tool was assigned to another person. In other studies, the tool comparisons were merely counterfactual rather than social. The studies revealed contrast effects on optimism, and the effect worked in both directions. That is, worse comparison tools boosted optimism and better tools depressed optimism. The contrast effects were observed regardless of the general type of comparison (e.g., social, counterfactual). The comparisons also influenced discrete decisions about which task to attempt (for a prize), which is an important finding for ruling out superficial scaling explanations for the contrast effects. It appears that people fail to exclude irrelevant tool-comparison information from consideration when assessing their likelihood of success on a task, resulting in biased optimism and decisions.
  3. People who grow up speaking a language without lexical tones typically find it difficult to master tonal languages after childhood. Accumulating research suggests that much of the challenge for these second language (L2) speakers has to do not with identification of the tones themselves, but with the bindings between tones and lexical units. The question that remains open is how much of these lexical binding problems are problems of encoding (incomplete knowledge of the tone-to-word relations) vs. retrieval (failure to access those relations in online processing). While recent work using lexical decision tasks suggests that both may play a role, one issue is that failure on a lexical decision task may reflect a lack of learner confidence about what is not a word, rather than non-native representation or processing of known words. Here we provide complementary evidence using a picture-phonology matching paradigm in Mandarin in which participants decide whether or not a spoken target matches a specific image, with concurrent event-related potential (ERP) recording to provide potential insight into differences in L1 and L2 tone processing strategies. As in the lexical decision case, we find that advanced L2 learners show a clear disadvantage in accurately identifying tone-mismatched targets relative to vowel-mismatched targets. We explore the contribution of incomplete/uncertain lexical knowledge to this performance disadvantage by examining individual data from an explicit tone knowledge post-test. Results suggest that explicit tone word knowledge and confidence explain some but not all of the errors in picture-phonology matching. Analysis of ERPs from correct trials shows some differences in the strength of L1 and L2 responses, but does not provide clear evidence toward differences in processing that could explain the L2 disadvantage for tones.
In sum, these results converge with previous evidence from lexical decision tasks in showing that advanced L2 listeners continue to have difficulties with lexical tone recognition, and in suggesting that these difficulties reflect problems both in encoding lexical tone knowledge and in retrieving that knowledge in real time.
  4. Abstract: Jury notetaking can be controversial despite evidence suggesting benefits for recall and understanding. Research on note taking has historically focused on the deliberation process. Yet, little research explores the notes themselves. We developed a 10-item coding guide to explore what jurors take notes on (e.g., simple vs. complex evidence) and how they take notes (e.g., gist vs. specific representation). In general, jurors made gist representations of simple and complex information in their notes. This finding is consistent with Fuzzy Trace Theory (Reyna & Brainerd, 1995) and suggests notes may serve as a general memory aid, rather than verbatim representation.
Summary: The practice of jury notetaking in the courtroom is often contested. Some states allow it (e.g., Nebraska: State v. Kipf, 1990), while others forbid it (e.g., Louisiana: La. Code of Crim. Proc., Art. 793). Some argue notes may serve as a memory aid, increase juror confidence during deliberation, and help jurors engage in the trial (Hannaford & Munsterman, 2001; Heuer & Penrod, 1988, 1994). Others argue notetaking may distract jurors from listening to evidence, that juror notes may be given undue weight, and that those who took notes may dictate the deliberation process (Dann, Hans, & Kaye, 2005). While research has evaluated the efficacy of juror notes on evidence comprehension, little work has explored the specific content of juror notes. In a similar project on which we build, Dann, Hans, and Kaye (2005) found jurors took on average 270 words of notes each, with 85% including references to jury instructions in their notes. In the present study we use a content analysis approach to examine how jurors take notes about simple and complex evidence. We were particularly interested in how jurors captured gist and specific (verbatim) information in their notes, as they have different implications for information recall during deliberation. According to Fuzzy Trace Theory (Reyna & Brainerd, 1995), people extract “gist” or qualitative meaning from information, and also exact, verbatim representations. Although both are important for helping people make well-informed judgments, gist-based understandings are purported to be even more important than verbatim understanding (Reyna, 2008; Reyna & Brainerd, 2007). As such, it could be useful to examine how laypeople represent information in their notes during deliberation of evidence.
Methods: Prior to watching a 45-minute mock bank robbery trial, jurors were given a pen and notepad and instructed that they were permitted to take notes. The evidence included testimony from the defendant, witnesses, and expert witnesses from the prosecution and defense. Expert testimony described complex mitochondrial DNA (mtDNA) evidence. The present analysis consists of pilot data representing 2,733 lines of notes from 52 randomly selected jurors across 41 mock juries. Our final sample for presentation at AP-LS will consist of all 391 juror notes in our dataset. Based on previous research exploring jury note taking, as well as our specific interest in gist vs. specific encoding of information, we developed a coding guide to quantify juror note-taking behaviors. Four researchers independently coded a subset of notes. Coders achieved acceptable interrater reliability (Cronbach’s alpha = .80–.92) on all variables across 20% of cases. Prior to AP-LS, we will link juror notes with how jurors discuss scientific and non-scientific evidence during jury deliberation.
Coding: Note length. Before coding for content, coders counted lines of text. Each notepad line with at minimum one complete word was coded as a line of text. Gist vs. specific information. Any line referencing evidence was coded as gist or specific. We coded gist information as information that did not contain any specific details but summarized the meaning of the evidence (e.g., “bad, not many people excluded”). Specific information was coded as such if it contained a verbatim descriptive (e.g., “<1 of people could be excluded”). We further coded whether this information was related to non-scientific evidence or to the scientific DNA evidence. Mentions of DNA evidence vs. other evidence. We were specifically interested in whether jurors mentioned the DNA evidence and how they captured complex evidence. When DNA evidence was mentioned, we coded the content of the DNA reference: mentions of the characteristics of mtDNA vs. nDNA, the DNA match process or who could be excluded, heteroplasmy, references to database size, and other references. Reliability. When referencing DNA evidence, we were interested in whether jurors mentioned the evidence’s reliability. Any specific mention of the reliability of DNA evidence was noted (e.g., “MT DNA is not as powerful, more prone to error”). Expert qualification. Finally, we were interested in whether jurors noted an expert’s qualifications. All references were coded (e.g., “Forensic analyst”).
Results: On average, jurors took 53 lines of notes (range: 3–137 lines). Most (83%) mentioned jury instructions before moving on to case-specific information. The majority of references to evidence were gist references (54%), focusing on non-scientific evidence and scientific expert testimony equally (50%). When jurors encoded information using specific references (46%), they referenced non-scientific evidence and expert testimony equally as well (50%). Thirty-three percent of lines were devoted to expert testimony, with every juror including at least one line. References to the DNA evidence usually focused on who could be excluded from the FBI’s database (43%), followed by references to differences between mtDNA vs. nDNA (30%) and mentions of the size of the database (11%). Less frequently, references to DNA evidence focused on heteroplasmy (5%). Of those references that did not fit into a coding category (11%), most focused on the DNA extraction process, general information about DNA, and the uniqueness of DNA. We further coded references to DNA reliability (15%) as well as references to specific statistical information (14%). Finally, 40% of jurors made reference to an expert’s qualifications.
Conclusion: Jury note content analysis can reveal important information about how jurors capture trial information (e.g., gist vs. verbatim), what evidence they consider important, and what they consider relevant and irrelevant. In our case, jurors largely created gist representations of information that focused equally on non-scientific evidence and scientific expert testimony. This finding suggests note taking may serve not only to represent information verbatim but also, and perhaps mostly, as a general memory aid summarizing the meaning of evidence. Further, jurors’ references to evidence tended to be equally focused on the non-scientific evidence and the scientifically complex DNA evidence. This observation suggests jurors may attend just as much to non-scientific evidence as they do to complex scientific evidence in cases involving complicated evidence – an observation that might inform future work on understanding how jurors interpret evidence in cases with complex information. Learning objective: Participants will be able to describe emerging evidence about how jurors take notes during trial.
  5. Linkov, Igor (Ed.)
    Risk-cost-benefit analysis requires the enumeration of decision alternatives, their associated outcomes, and the quantification of uncertainty. Public and private decision-making surrounding the COVID-19 pandemic must contend with uncertainty about the probability of infection during activities involving groups of people, in order to decide whether that activity is worth undertaking. We propose a model of SARS-CoV-2 infection probability that can produce estimates of relative risk of infection for diverse activities, so long as those activities meet a list of assumptions, including that they do not last longer than one day (e.g., sporting events, flights, concerts), and that the probabilities of infection among possible routes of infection (i.e., droplet, aerosol, fomite, and direct contact) are independent. We show how the model can be used to inform decisions facing governments and industry, such as opening stadiums or flying on airplanes; in particular, it allows for estimating the ranking of the constituent components of activities (e.g., going through a turnstile, sitting in one’s seat) by their relative risk of infection, even when the probability of infection is unknown or uncertain. We prove that the model is a good approximation of a more refined model in which we assume infections come from a series of independent risks. A linearity assumption governing several potentially modifiable risk factors—such as duration of the activity, density of participants, and infectiousness of the attendees—makes interpreting and using the model straightforward, and we argue that it does so without significantly diminishing the reliability of the model.
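The independence assumption in the abstract above has a standard probabilistic consequence that a minimal sketch can make concrete. This is a hypothetical illustration, not the authors' published model: under route independence, the overall infection probability is one minus the product of the per-route "escape" probabilities, and when each per-route probability is small this is well approximated by a simple sum, which is loosely analogous to the linearity the abstract describes. The example probabilities are made up.

```python
def combined_probability(route_probs):
    """Exact combination under independence:
    P(infection) = 1 - prod_i (1 - p_i)."""
    p_escape = 1.0
    for p in route_probs:
        p_escape *= 1.0 - p
    return 1.0 - p_escape


def linear_approximation(route_probs):
    """First-order approximation, accurate when every p_i is small:
    P(infection) ~= sum_i p_i. Always an upper bound (union bound)."""
    return sum(route_probs)


# Illustrative (made-up) per-route probabilities for one activity:
# droplet, aerosol, fomite, direct contact.
routes = [0.01, 0.005, 0.001, 0.002]
exact = combined_probability(routes)
approx = linear_approximation(routes)
# For small probabilities the two agree closely, and the exact value
# never exceeds the linear sum.
```

Because the linear form preserves the ordering of activities by total risk, a ranking of activity components by relative risk (as the abstract describes) survives the approximation even when absolute probabilities are uncertain.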