Title: How Does Past Performance of Competitors Influence Designers’ Cognition, Behaviors, and Outcomes?
Abstract: Existing literature on information sharing in contests has established that sharing contest-specific information influences contestant behaviors and, thereby, the outcomes of a contest. However, in the context of engineering design contests, there is a gap in knowledge about how contest-specific information, such as competitors' historical performance, influences designers' actions and the resulting design outcomes. To address this gap, the objective of this study is to quantify the influence of information about competitors' past performance on designers' beliefs about the outcomes of a contest, which shape their design decisions and the resulting design outcomes. We focus on a single-stage design competition where an objective figure of merit is available to the contestants for assessing the performance of their designs. Our approach includes (i) developing a behavioral model of sequential decision making that accounts for information about competitors' historical performance and (ii) using the model in conjunction with a human-subject experiment where participants make design decisions given controlled strong or weak performance records of past competitors. Our results indicate that participants expend greater effort when the contest history reflects a strong performance record of past competitors than when it reflects a weak record. Moreover, we quantify the cognitive underpinnings of such informational influence via our model parameters. Based on the parametric inferences about participants' cognition, we suggest that contest designers are better off not providing historical performance records if past contest outcomes do not match the expectations set for a given design contest.
Award ID(s): 1662230
PAR ID: 10382270
Journal Name: Journal of Mechanical Design
Volume: 144
Issue: 10
ISSN: 1050-0472
Sponsoring Org: National Science Foundation
More Like this
  1.
    Abstract: In this study, we focus on crowdsourcing contests for engineering design problems where contestants search for design alternatives. Our stakeholder is the designer of such a contest, who requires support in making decisions such as whether to share opponent-specific information with the contestants. There is a significant gap in our understanding of how sharing opponent-specific information influences a contestant's information acquisition decisions, such as whether to stop searching for design alternatives. Such decisions, in turn, affect the outcomes of a design contest. To address this gap, the objective of this study is to investigate how participants' decisions to stop searching for a design solution are influenced by knowledge of their opponent's past performance. The objective is achieved by conducting a protocol study in which participants are interviewed at the end of a behavioral experiment. In the experiment, participants compete against opponents with strong (or poor) performance records. We find that individuals decide to stop acquiring information based on various thresholds, such as a target design quality, the number of resources they are willing to spend, and the amount of design-objective improvement they seek in sequential search. The threshold values for these stopping criteria are influenced by the contestant's perception of the competitiveness of their opponent. Such insights can enable contest designers to make decisions about sharing opponent-specific information with participants, such as the resources utilized by the opponent, toward purposefully improving the outcomes of an engineering design contest.
  2. U.S. elections rely heavily on computers such as voter registration databases, electronic pollbooks, voting machines, scanners, tabulators, and results reporting websites. These introduce digital threats to election outcomes. Risk-limiting audits (RLAs) mitigate threats to some of these systems by manually inspecting random samples of ballot cards. RLAs have a large chance of correcting wrong outcomes (by conducting a full manual tabulation of a trustworthy record of the votes), but can save labor when reported outcomes are correct. This efficiency is eroded when sampling cannot be targeted to ballot cards that contain the contest(s) under audit. If the sample is drawn from all cast cards, then RLA sample sizes scale like the reciprocal of the fraction of ballot cards that contain the contest(s) under audit. That fraction shrinks as the number of cards per ballot grows (i.e., when elections contain more contests) and as the fraction of ballots that contain the contest decreases (i.e., when a smaller percentage of voters are eligible to vote in the contest). States that conduct RLAs of contests on multi-card ballots or RLAs of small contests can dramatically reduce sample sizes by using information about which ballot cards contain which contests—by keeping track of card-style data (CSD). For instance, CSD reduce the expected number of draws needed to audit a single countywide contest on a 4-card ballot by 75%. Similarly, CSD reduce the expected number of draws by 95% or more for an audit of two contests with the same margin on a 4-card ballot if one contest is on every ballot and the other is on 10% of ballots. In realistic examples, the savings can be several orders of magnitude. 
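The reciprocal scaling described in the audit abstract above can be made concrete with a small sketch. The function name and the nominal sample size of 100 relevant cards are illustrative assumptions, not figures from the study; only the scaling relationship and the resulting savings percentages come from the abstract.

```python
def expected_draws(sample_needed, cards_per_ballot, contest_ballot_fraction, use_csd):
    """Expected number of ballot cards drawn to obtain `sample_needed`
    cards that contain the audited contest.

    Without card-style data (CSD), cards are drawn uniformly from all
    cast cards, so the expected number of draws scales like the
    reciprocal of the fraction of cards containing the contest.
    With CSD, only cards known to contain the contest are drawn.
    """
    if use_csd:
        return float(sample_needed)
    fraction_of_cards = contest_ballot_fraction / cards_per_ballot
    return sample_needed / fraction_of_cards

# Countywide contest (on every ballot) with 4-card ballots:
without_csd = expected_draws(100, 4, 1.0, use_csd=False)  # 400 draws
with_csd = expected_draws(100, 4, 1.0, use_csd=True)      # 100 draws
savings = 1 - with_csd / without_csd                      # 0.75, i.e., the 75% reduction

# A contest appearing on only 10% of ballots: savings rise further,
# consistent with the "95% or more" figure for small contests.
small_contest = expected_draws(100, 4, 0.1, use_csd=False)  # 4000 draws
```

The sketch shows why the savings grow both with the number of cards per ballot and with the scarcity of the contest: both shrink the fraction of cards that are relevant, and untargeted sampling pays its reciprocal.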
  3. Rational decision-making is crucial in the later stages of engineering system design to allocate resources efficiently and minimize costs. However, human rationality is bounded by cognitive biases and limitations. Understanding how humans deviate from rationality is critical for guiding designers toward better design outcomes. In this paper, we quantify designer rationality in competitive scenarios based on utility theory. Using an experiment inspired by crowd-sourced contests, we show that designers employ varied search strategies. Some participants approximate a Bayesian agent that aims to maximize its expected utility. Those with higher rationality reduce uncertainty more effectively. Furthermore, rationality correlates with both the proximity to the optimal design and design iteration costs, with winning participants exhibiting greater rationality than losing participants.
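The expected-utility benchmark mentioned in the abstract above — a Bayesian agent maximizing expected utility during sequential search — can be sketched under simple assumptions: a risk-neutral agent with a Gaussian belief over design outcomes continues searching while the expected improvement over its best design so far exceeds the cost of another iteration. The function names, the Gaussian belief, and the stopping rule here are illustrative assumptions, not the paper's actual model.

```python
import math

def expected_improvement(best_so_far, belief_mean, belief_std):
    """E[max(X - best_so_far, 0)] for X ~ Normal(belief_mean, belief_std):
    the expected gain from one more design evaluation."""
    z = (belief_mean - best_so_far) / belief_std
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # standard normal density at z
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))          # standard normal CDF at z
    return belief_std * (pdf + z * cdf)

def should_continue(best_so_far, belief_mean, belief_std, cost_per_iteration):
    """A risk-neutral expected-utility agent keeps searching only while
    the expected gain exceeds the marginal cost of one more iteration."""
    return expected_improvement(best_so_far, belief_mean, belief_std) > cost_per_iteration
```

Under this sketch, "higher rationality" corresponds to decisions that track this threshold more closely, while deviations (stopping too early or too late relative to the cost) reflect the bounded rationality the paper quantifies.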
  4.
    Abstract: Designers make information acquisition decisions, such as where to search and when to stop the search. Such decisions are typically made sequentially, so that at every search step designers gain information by learning about the design space. However, when designers begin acquiring information, their decisions are primarily based on their prior knowledge. Prior knowledge influences the initial set of assumptions that designers use to learn about the design space. These assumptions are collectively termed inductive biases. Identifying such biases can help us better understand how designers use their prior knowledge to solve problems in light of uncertainty. Thus, in this study, we identify inductive biases in humans in sequential information acquisition tasks. To do so, we analyze experimental data from a set of behavioral experiments conducted in the past [1–5]. All of these experiments were designed to study various factors that influence sequential information acquisition behaviors. Across these studies, we identify similar decision-making behaviors in the participants' very first decision to "choose x". We find that their choices of "x" are not uniformly distributed in the design space. Since such experiments are abstractions of real design scenarios, this implies that further contextualization of such experiments would only increase the influence of these biases. Thus, we highlight the need to study the influence of such biases to better understand designer behaviors. We conclude that, in the context of Bayesian modeling of designers' behaviors, utilizing the identified inductive biases would enable us to better model designers' priors for design search contexts compared with using non-informative priors.
    Abstract: Engineering design involves information acquisition decisions, such as selecting designs in the design space for testing, selecting information sources, and deciding when to stop design exploration. Existing literature has established normative models for these decisions, but there is a lack of knowledge about how human designers make them and which strategies they use. This knowledge is important for accurately modeling design decisions, identifying sources of inefficiency, and improving the design process. Therefore, the primary objective of this study is to identify the models that best describe a designer's information acquisition decisions when multiple information sources are present and the total budget is limited. We conduct a controlled human-subject experiment with two independent variables: the amount of fixed budget and a monetary incentive proportional to the saved budget. Using the experimental observations, we perform Bayesian model comparison on various simple heuristic models and expected utility (EU)-based models. As expected, the subjects' decisions are better represented by the heuristic models than by the EU-based models. While the EU-based models result in better net payoff, the heuristic models used by the subjects generate better design performance. The net payoff under the heuristic models is closer to that of the EU-based models in experimental treatments where the budget is low and there is an incentive for saving the budget. This indicates the potential for nudging designers' decisions toward maximizing net payoff by setting the fixed budget at low values and providing monetary incentives proportional to the saved budget.
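The Bayesian model comparison mentioned in the last abstract can be roughly illustrated with the Bayesian Information Criterion (BIC), a common large-sample approximation to the marginal likelihood. The parameter counts, log-likelihoods, and observation count below are hypothetical placeholders, not the study's data; they only show how a simpler heuristic model can win the comparison despite a worse raw fit.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower is better. BIC trades off
    goodness of fit against model complexity and approximates -2 times
    the log marginal likelihood for large samples."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits on 200 observed stopping decisions:
# a 1-parameter threshold heuristic vs. a 4-parameter EU-based model.
bic_heuristic = bic(log_likelihood=-120.0, n_params=1, n_obs=200)
bic_eu = bic(log_likelihood=-115.0, n_params=4, n_obs=200)

# The heuristic is preferred despite its worse raw fit, because the
# EU model's extra parameters are penalized by the log(n) complexity term.
best_model = "heuristic" if bic_heuristic < bic_eu else "EU"
```

This mirrors the abstract's finding in spirit: model comparison that penalizes complexity can favor simple heuristic models of designer behavior over richer expected-utility formulations.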