Title: Less is more: information needs, information wants, and what makes causal models useful
Abstract: Each day people make decisions about complex topics such as health and personal finances. Causal models of these domains have been created to aid decisions, but the resulting models are often complex, and it is not known whether people can use them successfully. We investigate the trade-off between simplicity and complexity in decision making, testing diagrams tailored to target choices (Experiments 1 and 2) and diagrams with relevant causal paths highlighted (Experiment 3), finding that simplicity, or directing attention to simple causal paths, leads to better decisions. We test the boundaries of this effect (Experiment 4), finding that including even a small amount of information beyond that related to the target answer has a detrimental effect. Finally, we examine whether people know what information they need (Experiment 5). We find that simple, targeted information still leads to the best decisions, while participants who believed they did not need information, or who sought out the most complex information, performed worse.
Award ID(s): 1907951
PAR ID: 10450872
Author(s) / Creator(s): ;
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Cognitive Research: Principles and Implications
Volume: 8
Issue: 1
ISSN: 2365-7464
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K. (Eds.)
    We make frequent decisions about how to manage our health, yet do so with information that is highly complex or received piecemeal. Causal models can provide guidance about how components of a complex system interact, yet models that provide a complete causal story may be more complex than people can reason about. Prior work has provided mixed insights into our ability to make decisions with causal models, showing that people can use them in novel domains but that they may impede decisions in familiar ones. We examine how tailoring causal information to the question at hand may aid decision making, using simple diagrams with only the relevant causal paths (Experiment 1) or those paths highlighted within a complex causal model (Experiment 2). We find that diagrams tailored to a choice improve decision accuracy over complex diagrams or prior knowledge, providing new evidence for how causal models can aid decisions. 
  2. Abstract: Causal reasoning is a fundamental cognitive ability that enables individuals to learn about the complex interactions in the world around them. However, the mechanisms that underpin causal reasoning are not well understood. For example, it remains unresolved whether children's causal inferences are best explained by Bayesian inference or associative learning. The two experiments and computational models reported here were designed to examine whether 5- and 6-year-olds will retrospectively reevaluate objects—that is, adjust their beliefs about the causal status of some objects presented at an earlier point in time based on the observed causal status of other objects presented at a later point in time—when asked to reason about 3 and 4 objects and under varying degrees of information-processing demands. Additionally, the experiments and models were designed to determine whether children's retrospective reevaluations were best explained by associative learning, Bayesian inference, or some combination of both. The results indicated that participants retrospectively reevaluated causal inferences under minimal information-processing demands (Experiment 1) but failed to do so under greater information-processing demands (Experiment 2), and that their performance was better captured by an associative learning mechanism, with less support for descriptions that rely on Bayesian inference.
    Research Highlights:
    - Five- and 6-year-old children engage in retrospective reevaluation under minimal information-processing demands (Experiment 1).
    - Five- and 6-year-old children do not engage in retrospective reevaluation under more extensive information-processing demands (Experiment 2).
    - Across both experiments, children's retrospective reevaluations were better explained by a simple associative learning model, with only minimal support for a simple Bayesian model.
    - These data contribute to our understanding of the cognitive mechanisms by which children make causal judgements.
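The associative-versus-Bayesian contrast in the abstract above can be illustrated with toy models. The sketch below is hypothetical and is not the paper's fitted models: a Rescorla-Wagner associative learner and a Bayesian learner over deterministic causal hypotheses, both run on a standard backwards-blocking sequence (A and B together produce the effect, then A alone produces it). Retrospective reevaluation means lowering belief in B after the second trial; the Bayesian learner does this, while plain Rescorla-Wagner leaves B's weight untouched (which is why associative accounts of retrospective reevaluation typically add machinery such as within-compound associations).

```python
# Toy illustration only; the paper's actual models are not specified here.

def rescorla_wagner(trials, alpha=0.3):
    """Associative learner. trials: list of (present_objects, outcome in {0,1}).
    Returns a dict of associative weights per object."""
    w = {}
    for present, outcome in trials:
        prediction = sum(w.get(o, 0.0) for o in present)
        error = outcome - prediction
        for o in present:  # only objects present on a trial are updated
            w[o] = w.get(o, 0.0) + alpha * error
    return w

def bayes_blicket(trials, objects, prior=0.5):
    """Bayesian learner over deterministic hypotheses: the effect occurs
    iff at least one causal object is present. Returns P(object is causal)."""
    hyps = [{o: bool(bits >> i & 1) for i, o in enumerate(objects)}
            for bits in range(2 ** len(objects))]
    def prior_prob(h):
        p = 1.0
        for o in objects:
            p *= prior if h[o] else 1 - prior
        return p
    post = [prior_prob(h) for h in hyps]
    for present, outcome in trials:
        # deterministic likelihood: keep hypotheses consistent with the trial
        post = [p if any(h[o] for o in present) == bool(outcome) else 0.0
                for p, h in zip(post, hyps)]
    total = sum(post)
    post = [p / total for p in post]
    return {o: sum(p for p, h in zip(post, hyps) if h[o]) for o in objects}

# Backwards blocking: A+B -> effect, then A alone -> effect.
trials = [({"A", "B"}, 1), ({"A"}, 1)]
w = rescorla_wagner(trials)            # B's weight stays at its trial-1 value
p = bayes_blicket(trials, ["A", "B"])  # P(B causal) drops from 2/3 to 1/2
```

The Bayesian learner's belief in B falls once A alone explains the effect; the plain associative learner never revisits B because B is absent on the second trial.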
  3. A standard assumption in game theory is that decision-makers have preplanned strategies telling them what actions to take for every contingency. In contrast, nonstrategic decisions often involve an on-the-spot comparison process, with longer response times (RT) for choices between more similarly appealing options. If strategic decisions also exhibit these patterns, then RT might betray private information and alter game theory predictions. Here, we examined bargaining behavior to determine whether RT reveals private information in strategic settings. Using preexisting and experimental data from eBay, we show that both buyers and sellers take hours longer to accept bad offers and to reject good offers. We find nearly identical patterns in the two datasets, indicating a causal effect of offer size on RT. However, this relationship is half as strong for rejections as for acceptances, reducing the amount of useful private information revealed by the sellers. Counter to our predictions, buyers are discouraged by slow rejections—they are less likely to counteroffer to slow sellers. We also show that a drift-diffusion model (DDM), traditionally limited to decisions on the order of seconds, can account for decisions on the order of hours, sometimes days. The DDM reveals that more experienced sellers are less cautious and more inclined to accept offers. In summary, strategic decisions are inconsistent with preplanned strategies. This underscores the need for game theory to incorporate RT as a strategic variable and broadens the applicability of the DDM to slow decisions. 
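The drift-diffusion account in the abstract above can be sketched in a few lines. This is a minimal illustrative simulation, not the authors' fitted model: evidence for accepting an offer accumulates noisily toward an accept or reject bound, with drift proportional to the offer's distance from a hypothetical indifference point, so near-indifference offers take longer to resolve, matching the reported RT pattern. The time unit is arbitrary; one step could stand for minutes or hours.

```python
import random

def ddm_trial(offer, indifference=0.5, bound=1.0, noise=0.1, rng=random):
    """Simulate one accept/reject decision; returns (choice, response_time).
    All parameter values here are arbitrary illustration choices."""
    drift = offer - indifference  # evidence per step in favor of accepting
    x, t = 0.0, 0
    while abs(x) < bound:
        x += drift + rng.gauss(0.0, noise)
        t += 1
    return ("accept" if x > 0 else "reject", t)

def mean_rt(offer, n=2000, seed=0):
    """Average simulated response time across n trials for a given offer."""
    rng = random.Random(seed)
    return sum(ddm_trial(offer, rng=rng)[1] for _ in range(n)) / n
```

With these parameters, an offer near the indifference point (e.g. 0.55) yields markedly longer mean RTs than a clearly good offer (e.g. 0.9), and symmetrically for rejections of clearly bad versus borderline offers.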
  4. Past research has shown that people prefer different levels of visual complexity in websites: While some prefer simple websites with little text and few images, others prefer highly complex websites with many colors, images, and text. We investigated whether users’ visual preferences reflect which website complexity they can work with most efficiently. We conducted an online study with 165 participants in which we tested their search efficiency and information recall. We confirm that the visual complexity of a website has a significant negative effect on search efficiency and information recall. However, the search efficiency of those who preferred simple websites was more negatively affected by highly complex websites than those who preferred high visual complexity. Our results suggest that diverse visual preferences need to be accounted for when assessing search response time and information recall in HCI experiments, testing software, or A/B tests. 
  5. Keathley, H.; Enos, J.; Parrish, M. (Eds.)
    The role of human-machine teams in society is increasing, as big data and computing power explode. One popular approach to AI is deep learning, which is useful for classification, feature identification, and predictive modeling. However, deep learning models often suffer from inadequate transparency and poor explainability. One aspect of human systems integration is the design of interfaces that support human decision-making. AI models have multiple types of uncertainty embedded, which may be difficult for users to understand. Humans that use these tools need to understand how much they should trust the AI. This study evaluates one simple approach for communicating uncertainty, a visual confidence bar ranging from 0-100%. We perform a human-subject online experiment using an existing image recognition deep learning model to test the effect of (1) providing single vs. multiple recommendations from the AI and (2) including uncertainty information. For each image, participants described the subject in an open textbox and rated their confidence in their answers. Performance was evaluated at four levels of accuracy ranging from the same as the image label to the correct category of the image. The results suggest that AI recommendations increase accuracy, even if the human and AI have different definitions of accuracy. In addition, providing multiple ranked recommendations, with or without the confidence bar, increases operator confidence and reduces perceived task difficulty. More research is needed to determine how people approach uncertain information from an AI system and develop effective visualizations for communicating uncertainty. 
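The uncertainty display described above is easy to prototype. The sketch below is hypothetical (the study's actual visual design is not specified here): it renders a model confidence in the 0-100% range as a text bar, plus a ranked list for the multiple-recommendation condition.

```python
# Hypothetical sketch of a 0-100% confidence bar; glyphs and width are
# arbitrary choices, not the study's design.

def confidence_bar(confidence, width=20):
    """Render a confidence value in [0, 1] as a labeled text bar."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    filled = round(confidence * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {confidence:.0%}"

def show_recommendations(items):
    """Ranked recommendations with per-item confidence, one per line.
    items: list of (label, confidence) pairs, highest-ranked first."""
    return "\n".join(f"{label:<12}{confidence_bar(p)}" for label, p in items)
```

For example, `confidence_bar(0.75)` produces `[###############-----] 75%`.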