Title: How Does Past Performance of Competitors Influence Designers’ Cognition, Behaviors, and Outcomes?
Abstract:
Existing literature on information sharing in contests has established that sharing contest-specific information influences contestant behaviors and, thereby, the outcomes of a contest. However, in the context of engineering design contests, there is a gap in knowledge about how contest-specific information such as competitors' historical performance influences designers' actions and the resulting design outcomes. To address this gap, the objective of this study is to quantify the influence of information about competitors' past performance on designers' beliefs about the outcomes of a contest, which influence their design decisions and the resulting design outcomes. We focus on a single-stage design competition where an objective figure of merit is available to the contestants for assessing the performance of their designs. Our approach includes (i) developing a behavioral model of sequential decision making that accounts for information about competitors' historical performance and (ii) using the model in conjunction with a human-subject experiment where participants make design decisions given controlled strong or weak performance records of past competitors. Our results indicate that participants expend greater effort when they know that the contest history reflects a strong performance record by past competitors than when it reflects a weak one. Moreover, we quantify the cognitive underpinnings of such informational influence via our model parameters. Based on the parametric inferences about participants' cognition, we suggest that contest designers are better off not providing historical performance records if past contest outcomes do not match the expectations set for a given design contest.
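The abstract does not specify the model's functional form, so the following Python sketch only illustrates the general mechanism it describes: a contestant's belief about the opponent, shaped by the shared performance history, changes how long sequential search remains worthwhile. All values (PRIZE, TRY_COST, opp_mean, opp_sd, the normal opponent-quality belief) are hypothetical assumptions for illustration, not the authors' model.

```python
# A minimal sketch, assuming details the abstract does not give: a contestant
# keeps searching while the expected gain of one more try exceeds its cost,
# and the belief about the opponent's quality is a normal distribution whose
# mean encodes the shared performance history (strong vs. weak record).
from scipy import stats

PRIZE, TRY_COST = 100.0, 5.0  # hypothetical illustration values

def win_prob(my_best, opp_mean, opp_sd=0.1):
    # Belief that the current best design beats the opponent's unseen quality.
    return stats.norm(opp_mean, opp_sd).cdf(my_best)

def keep_searching(my_best, opp_mean, expected_improvement=0.05):
    # One-step lookahead: value of the improved win probability vs. try cost.
    gain = PRIZE * (win_prob(my_best + expected_improvement, opp_mean)
                    - win_prob(my_best, opp_mean))
    return gain > TRY_COST

# Against a strong record (mean 0.8) search continues; against a weak one
# (mean 0.4) it stops, mirroring the reported effort difference.
print(keep_searching(0.7, opp_mean=0.8))  # True
print(keep_searching(0.7, opp_mean=0.4))  # False
```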
Award ID(s): 1662230
NSF-PAR ID: 10382270
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Journal of Mechanical Design
Volume: 144
Issue: 10
ISSN: 1050-0472
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Abstract

    In this study, we focus on crowdsourcing contests for engineering design problems where contestants search for design alternatives. Our stakeholder is the designer of such a contest, who requires support for decisions such as whether to share opponent-specific information with the contestants. There is a significant gap in our understanding of how sharing opponent-specific information influences a contestant's information acquisition decisions, such as whether to stop searching for design alternatives. Such decisions, in turn, affect the outcomes of a design contest. To address this gap, the objective of this study is to investigate how participants' decisions to stop searching for a design solution are influenced by knowledge of their opponent's past performance. The objective is achieved by conducting a protocol study in which participants are interviewed at the end of a behavioral experiment. In the experiment, participants compete against opponents with strong (or poor) performance records. We find that individuals decide to stop acquiring information based on various thresholds: a target design quality, the amount of resources they are willing to spend, and the improvement in the design objective they seek in sequential search, as sketched below. The threshold values for these stopping criteria are influenced by the contestant's perception of the competitiveness of their opponent. Such insights can enable contest designers to make decisions about sharing opponent-specific information with participants, such as the resources utilized by the opponent, toward purposefully improving the outcomes of an engineering design contest.
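    The following is a minimal sketch (not the authors' model) of the three stopping criteria reported above. Scaling the quality target by perceived opponent competitiveness is a hypothetical choice for illustration, as are all parameter values.

```python
def should_stop(qualities, cost_per_try, budget, opponent_strength,
                base_target=0.8, min_improvement=0.01):
    """True if any reported stopping criterion is met after the latest try."""
    # A stronger perceived opponent raises the quality bar (hypothetical form).
    target = min(1.0, base_target * (1 + 0.2 * opponent_strength))
    spent = len(qualities) * cost_per_try

    if qualities and max(qualities) >= target:   # target design quality reached
        return True
    if spent >= budget:                          # resource budget exhausted
        return True
    if len(qualities) >= 2 and (qualities[-1] - max(qualities[:-1])
                                < min_improvement):
        return True                              # sought improvement too small
    return False

# Same search history: a weak opponent (0.0) triggers stopping at quality
# 0.82, while a strong opponent (1.0) keeps the contestant searching.
history = [0.55, 0.70, 0.82]
print(should_stop(history, cost_per_try=1, budget=10, opponent_strength=0.0))  # True
print(should_stop(history, cost_per_try=1, budget=10, opponent_strength=1.0))  # False
```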

     
  2. Abstract (100 words): Jurors are increasingly exposed to scientific information in the courtroom. To determine whether providing jurors with gist information would assist their ability to make well-informed decisions, the present experiment utilized a Fuzzy Trace Theory-inspired intervention and tested it against traditional legal safeguards (i.e., judge instructions) while varying the scientific quality of the evidence. The results indicate that jurors who viewed high-quality evidence rated the scientific evidence significantly higher than those who viewed low-quality evidence, but were unable to moderate the credibility of the expert witness and apply damages appropriately, resulting in poor calibration.

    Summary (<1000 words): Jurors and juries are increasingly exposed to scientific information in the courtroom, and it remains unclear when they will base their decisions on a reasonable understanding of the relevant scientific information. Without such knowledge, the ability of jurors and juries to make well-informed decisions may be at risk, increasing chances of unjust outcomes (e.g., false convictions in criminal cases). Therefore, there is a critical need to understand conditions that affect jurors' and juries' sensitivity to the qualities of scientific information and to identify safeguards that can assist with scientific calibration in the courtroom. The current project addresses these issues with an ecologically valid experimental paradigm, making it possible to assess causal effects of evidence quality and safeguards as well as the role of a host of individual-difference variables that may affect perceptions of testimony by scientific experts as well as liability in a civil case. Our main goal was to develop a simple, theoretically grounded tool to enable triers of fact (individual jurors) with a range of scientific reasoning abilities to appropriately weigh scientific evidence in court. We did so by testing a Fuzzy Trace Theory-inspired intervention in court and testing it against traditional legal safeguards. Appropriate use of scientific evidence reflects good calibration, which we define as being influenced more by strong scientific information than by weak scientific information. Inappropriate use reflects poor calibration, defined as relative insensitivity to the strength of scientific information. Fuzzy Trace Theory (Reyna & Brainerd, 1995) predicts that techniques for improving calibration can come from presentation of an easy-to-interpret, bottom-line "gist" of the information. Our central hypothesis was that laypeople's appropriate use of scientific information would be moderated both by external situational conditions (e.g., the quality of the scientific information itself, a decision aid designed to convey clearly the "gist" of the information) and by individual differences among people (e.g., scientific reasoning skills, cognitive reflection tendencies, numeracy, need for cognition, attitudes toward and trust in science). Identifying factors that promote jurors' appropriate understanding of and reliance on scientific information will contribute to general theories of reasoning based on scientific evidence, while also providing an evidence-based framework for improving the courts' use of scientific information. All hypotheses were preregistered on the Open Science Framework.

    Method: Participants completed six questionnaires (counterbalanced): Need for Cognition Scale (NCS; 18 items), Cognitive Reflection Test (CRT; 7 items), Abbreviated Numeracy Scale (ABS; 6 items), Scientific Reasoning Scale (SRS; 11 items), Trust in Science (TIS; 29 items), and Attitudes towards Science (ATS; 7 items). Participants then viewed a video depicting a civil trial in which the defendant sought damages from the plaintiff for injuries caused by a fall. The defendant (bar patron) alleged that the plaintiff (bartender) pushed him, causing him to fall and hit his head on the hard floor. Participants were informed at the outset that the defendant was liable; therefore, their task was to determine if the plaintiff should be compensated. Participants were randomly assigned to 1 of 6 experimental conditions: 2 (quality of scientific evidence: high vs. low) x 3 (safeguard to improve calibration: gist information, no-gist information [control], jury instructions). An expert witness (neuroscientist) hired by the court testified regarding the scientific strength of fMRI data (high [90-to-10 signal-to-noise ratio] vs. low [50-to-50 signal-to-noise ratio]) and presented gist or no-gist information both verbally (i.e., fairly high/about average) and visually (i.e., a graph). After viewing the video, participants were asked if they would like to award damages; if they indicated yes, they were asked to enter a dollar amount. Participants then completed the Positive and Negative Affect Schedule-Modified Short Form (PANAS-MSF; 16 items), the expert Witness Credibility Scale (WCS; 20 items), Witness Credibility and Influence on damages for each witness, manipulation check questions, and Understanding Scientific Testimony (UST; 10 items); 3 additional measures were collected but are beyond the scope of the current investigation. Finally, participants completed demographic questions, including questions about their scientific background and experience. The study was completed via Qualtrics, with participation from students (online vs. in-lab), MTurkers, and non-student community members. After removing those who failed attention check questions, 469 participants remained (243 men, 224 women, 2 did not specify gender) from a variety of racial and ethnic backgrounds (70.2% White, non-Hispanic).

    Results and Discussion: There were three primary outcomes: quality of the scientific evidence, expert credibility (WCS), and damages. During initial analyses, each dependent variable was submitted to a separate 3 Gist Safeguard (safeguard, no safeguard, judge instructions) x 2 Scientific Quality (high, low) analysis of variance (ANOVA), as sketched below. Consistent with hypotheses, there was a significant main effect of scientific quality on strength of evidence, F(1, 463)=5.099, p=.024; participants who viewed the high-quality evidence rated the scientific evidence significantly higher (M=7.44) than those who viewed the low-quality evidence (M=7.06). There were no significant main effects or interactions for witness credibility, indicating that the expert who provided scientific testimony was seen as equally credible regardless of scientific quality or gist safeguard. Finally, for damages, consistent with hypotheses, there was a marginally significant interaction between Gist Safeguard and Scientific Quality, F(2, 273)=2.916, p=.056. However, post hoc t-tests revealed that significantly higher damages were awarded for low (M=11.50) versus high (M=10.51) scientific quality evidence, F(1, 273)=3.955, p=.048, in the no-gist-with-judge-instructions safeguard condition, which was contrary to hypotheses. The data suggest that the judge instructions alone reversed the pattern: although the difference was nonsignificant, those who received the no-gist safeguard without judge instructions awarded higher damages in the high (M=11.34) versus low (M=10.84) scientific quality conditions, F(1, 273)=1.059, p=.30. Together, these provide promising initial results indicating that participants were able to effectively differentiate between high and low scientific quality of evidence, though they inappropriately utilized the scientific evidence through their inability to discern expert credibility and apply damages, resulting in poor calibration. These results will provide the basis for more sophisticated analyses, including higher-order interactions with individual differences (e.g., need for cognition) as well as tests of mediation using path analyses. [References omitted but available by request]

    Learning Objective: Participants will be able to determine whether providing jurors with gist information would assist in their ability to award damages in a civil trial.
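    A minimal sketch, not the study's analysis code, of how the reported 3 (gist safeguard) x 2 (scientific quality) between-subjects ANOVA could be run with statsmodels. The data file and column names (jury_data.csv, evidence_strength, safeguard, quality) are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("jury_data.csv")  # one row per participant (hypothetical file)

# Two categorical factors and their interaction, as in the reported design.
model = ols("evidence_strength ~ C(safeguard) * C(quality)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for main effects + interaction
```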
  3.
    Abstract

    Designers make information acquisition decisions, such as where to search and when to stop searching. Such decisions are typically made sequentially, so that at every search step designers gain information by learning about the design space. However, when designers begin acquiring information, their decisions are primarily based on their prior knowledge. Prior knowledge influences the initial set of assumptions that designers use to learn about the design space. These assumptions are collectively termed inductive biases. Identifying such biases can help us better understand how designers use their prior knowledge to solve problems in light of uncertainty. Thus, in this study, we identify inductive biases in humans in sequential information acquisition tasks. To do so, we analyze experimental data from a set of behavioral experiments conducted in the past [1–5]. All of these experiments were designed to study various factors that influence sequential information acquisition behaviors. Across these studies, we identify similar decision-making behaviors in the participants in their very first decision to “choose x”. We find that their choices of “x” are not uniformly distributed in the design space. Since such experiments are abstractions of real design scenarios, this implies that further contextualization of such experiments would only increase the influence of these biases. Thus, we highlight the need to study the influence of such biases to better understand designer behaviors. We conclude that, in the context of Bayesian modeling of designers’ behaviors, utilizing the identified inductive biases would enable us to better model designers’ priors for design search contexts as compared to using non-informative priors.
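    A minimal sketch, not the authors' analysis: contrasting a non-informative (uniform) prior with an informative prior over the very first choice of x in a normalized design space [0, 1]. The Beta(2, 5) shape is a hypothetical stand-in for an inductive bias toward one region of the space.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

uniform_first_x = rng.uniform(0.0, 1.0, size=1000)                  # non-informative prior
biased_first_x = stats.beta(2, 5).rvs(size=1000, random_state=rng)  # inductive bias

# An informative prior concentrates the simulated first choices, which is
# the kind of non-uniformity the pooled experimental data [1-5] exhibit.
print(uniform_first_x.mean(), uniform_first_x.std())
print(biased_first_x.mean(), biased_first_x.std())
```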

     
  4. U.S. elections rely heavily on computers such as voter registration databases, electronic pollbooks, voting machines, scanners, tabulators, and results reporting websites. These introduce digital threats to election outcomes. Risk-limiting audits (RLAs) mitigate threats to some of these systems by manually inspecting random samples of ballot cards. RLAs have a large chance of correcting wrong outcomes (by conducting a full manual tabulation of a trustworthy record of the votes), but can save labor when reported outcomes are correct. This efficiency is eroded when sampling cannot be targeted to ballot cards that contain the contest(s) under audit. If the sample is drawn from all cast cards, then RLA sample sizes scale like the reciprocal of the fraction of ballot cards that contain the contest(s) under audit. That fraction shrinks as the number of cards per ballot grows (i.e., when elections contain more contests) and as the fraction of ballots that contain the contest decreases (i.e., when a smaller percentage of voters are eligible to vote in the contest). States that conduct RLAs of contests on multi-card ballots or RLAs of small contests can dramatically reduce sample sizes by using information about which ballot cards contain which contests—by keeping track of card-style data (CSD). For instance, CSD reduce the expected number of draws needed to audit a single countywide contest on a 4-card ballot by 75%. Similarly, CSD reduce the expected number of draws by 95% or more for an audit of two contests with the same margin on a 4-card ballot if one contest is on every ballot and the other is on 10% of ballots. In realistic examples, the savings can be several orders of magnitude. 
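    A worked version of the quoted 75% figure (my own arithmetic, not code from the paper). Only the 4-cards-per-ballot ratio comes from the text; base_draws is a hypothetical placeholder.

```python
base_draws = 100                 # draws needed if every sampled card had the contest
cards_per_ballot = 4
fraction_with_contest = 1 / cards_per_ballot  # countywide contest on a 4-card ballot

# Sampling from all cast cards scales draws by the reciprocal of the fraction.
draws_without_csd = base_draws / fraction_with_contest  # 400 draws
draws_with_csd = base_draws                             # 100 draws, targeted via CSD

savings = 1 - draws_with_csd / draws_without_csd
print(f"Expected savings from CSD: {savings:.0%}")      # 75%, matching the paper
```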
  5. The socio-technical perspective on engineering system design emphasizes the mutual dynamics between interdisciplinary interactions and system design outcomes. How different disciplines interact with each other depends on technical factors such as design interdependence and system performance. On the other hand, design outcomes are influenced by social factors such as the frequency of interactions and their distribution. Understanding this co-evolution can lead not only to better behavioral insights but also to efficient communication pathways. In this context, we investigate how to quantify the temporal influences of social and technical factors on interdisciplinary interactions and, in turn, their influence on system performance. We present a stochastic network-behavior dynamics model that quantifies design interdependence, discipline-specific interaction decisions, the evolution of system performance, and their mutual dynamics. We employ two datasets, one of student subjects designing an automotive engine and the other of NASA engineers designing a spacecraft. We then apply Bayesian inference to estimate model parameters, as sketched below, and compare insights across the two datasets. The results indicate that design interdependence and social network statistics both have strong positive effects on interdisciplinary interactions for the expert and student subjects alike. For the student subjects, an additional modulating effect of system performance on interactions is observed. Inversely, the total number of interactions, irrespective of their discipline-wise distribution, has a weak but statistically significant positive effect on system performance in both cases. However, excessive interactions that merely mirror design interdependence, combined with inflexible design space exploration, reduce system performance. These insights support the case for open organizational boundaries as a way of increasing interactions and improving system performance.
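    A minimal sketch, not the paper's model: one way such parameter estimation could look, linking design interdependence (technical factor) and past interactions (social factor) to interaction probability through a logistic model, with coefficients recovered by maximum likelihood as a stand-in for the Bayesian estimation described above. All names, distributions, and "true" values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
interdependence = rng.uniform(0, 1, n)   # technical factor
past_interactions = rng.poisson(2, n)    # social-network statistic

def interaction_prob(beta0, beta1, beta2):
    logit = beta0 + beta1 * interdependence + beta2 * past_interactions
    return np.clip(1 / (1 + np.exp(-logit)), 1e-9, 1 - 1e-9)

# Simulate interaction events under "true" parameters, then recover them.
y = rng.binomial(1, interaction_prob(-1.0, 1.5, 0.4))

def neg_log_lik(theta):
    p = interaction_prob(*theta)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.zeros(3))
print(fit.x)  # estimates should land near (-1.0, 1.5, 0.4)
```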