

Search for: All records

Award ID contains: 1816620

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Confirmation bias is a type of cognitive bias in which people seek out and prioritize information that conforms to a pre-existing view or hypothesis, which can negatively affect decision-making. We investigate the manifestation and mitigation of confirmation bias with an emphasis on the use of visualization. In a series of Amazon Mechanical Turk studies, participants selected evidence that supported or refuted a given hypothesis. We demonstrated the presence of confirmation bias and investigated the use of five simple visual representations, using color, positional, and length encodings, for mitigating this bias. We found that, at worst, visualization had no effect on the amount of confirmation bias present and, at best, it successfully mitigated the bias. We discuss these results in light of factors that can complicate visual debiasing in non-experts.
  2.
    Experimenter bias and expectancy effects have been well studied in the social sciences and even in human-computer interaction. They refer to non-ideal study-design choices made by experimenters that can unfairly influence the outcomes of their studies. While these biases need to be considered when designing any empirical study, they can be particularly significant in the context of replication studies, which may stray from the studies being replicated in only a few admissible ways. Although there are general guidelines for making valid, unbiased choices in each of the several steps of experimental design, making such choices when conducting replication studies has not been well explored. We reviewed 16 replication studies in information visualization published in four top venues from 2008 to the present to characterize how the study designs of the replication studies differed from those of the studies they replicated. We present our characterization categories, which include the prevalence of crowdsourcing and the commonly found replication types and study-design differences. From these categories, we derive guidelines to help researchers make meaningful and unbiased decisions when designing replication studies. Our paper presents the first steps toward a larger understanding of this topic and contributes to ongoing efforts to encourage researchers to conduct and publish more replication studies in information visualization.
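The first abstract describes participants selecting evidence that supports or refutes a hypothesis. As an illustration only, one simple way such selections could be quantified is as the share of chosen items that confirm the hypothesis; the metric, function name, and data layout below are assumptions for this sketch, not the authors' actual measure.

```python
# Hypothetical sketch of a confirmation-bias metric: the fraction of
# selected evidence items that confirm the given hypothesis. A value of
# 0.5 indicates a balanced selection; values above 0.5 suggest a bias
# toward confirming evidence. This is an illustrative assumption, not
# the measure used in the study described above.

def confirmation_bias_score(selections):
    """selections: list of (item, supports_hypothesis) pairs chosen by
    one participant, where supports_hypothesis is a bool."""
    if not selections:
        return 0.5  # no evidence chosen: treat as neutral
    confirming = sum(1 for _, supports in selections if supports)
    return confirming / len(selections)

# Example: a participant picks three confirming and one refuting item.
picks = [("a", True), ("b", True), ("c", True), ("d", False)]
print(confirmation_bias_score(picks))  # 0.75
```

A score computed this way could then be compared across the five visual representations (color, positional, and length encodings) to see whether any condition shifts selections closer to 0.5.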