This content will become publicly available on December 1, 2025

Title: Errors in visual search: Are they stochastic or deterministic?
Abstract: In any visual search task in the lab or in the world, observers will make errors. Those errors can be categorized as “deterministic”: if you miss this target in this display once, you will definitely miss it again. Alternatively, errors can be “stochastic”, occurring randomly with some probability from trial to trial. Researchers and practitioners have sought to reduce errors in visual search, but different types of errors might require different mitigation techniques. To empirically categorize errors in a simple search task, our observers searched for the letter “T” among “L” distractors, with each display presented twice. When the letters were clearly visible (white letters on a gray background), the errors were almost completely stochastic (Exp 1): an error made on the first appearance of a display did not predict that an error would be made on the second appearance. When the visibility of the letters was manipulated (letters of different gray levels on a noisy background), the errors became a mix of stochastic and deterministic; unsurprisingly, lower-contrast targets produced more deterministic errors (Exp 2). Using the stimuli of Exp 2, we tested whether errors could be reduced using cues that guided attention around the display but knew nothing about the content of that display (Exp 3a, b). This had no effect, but cueing all item locations did succeed in reducing deterministic errors (Exp 3c).
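The repeated-display logic that separates the two error types can be illustrated with a small simulation (a minimal sketch in Python; the error rate, trial count, and the two pure models are illustrative assumptions, not the paper's data): under a purely stochastic account, the probability of erring on a display's second appearance given an error on its first equals the overall error rate, whereas under a purely deterministic account it approaches 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_displays = 10_000
p_err = 0.10   # overall miss rate (illustrative, not from the paper)

# Purely stochastic model: errors on the two viewings are independent.
stoch_1 = rng.random(n_displays) < p_err
stoch_2 = rng.random(n_displays) < p_err

# Purely deterministic model: a fixed subset of displays is always missed.
hard = rng.random(n_displays) < p_err
det_1, det_2 = hard, hard

def p_repeat(err_1, err_2):
    """P(error on 2nd viewing | error on 1st viewing)."""
    return err_2[err_1].mean()

print(f"stochastic:    {p_repeat(stoch_1, stoch_2):.3f}")  # ~= p_err (0.10)
print(f"deterministic: {p_repeat(det_1, det_2):.3f}")      # = 1.0
```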
Award ID(s): 2146617
PAR ID: 10592367
Author(s) / Creator(s): ; ;
Publisher / Repository: SpringerNature
Date Published:
Journal Name: Cognitive Research: Principles and Implications
Volume: 9
Issue: 1
ISSN: 2365-7464
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Observers routinely make errors in almost any visual search task. In previous online experiments, we found that indiscriminately highlighting all item positions in a noisy search display reduces errors. In the present paper, we conducted two eye-tracking studies to investigate the mechanics of this error reduction: does cueing direct attention to previously overlooked regions, or does it enhance attention and processing at the cued locations? Displays were presented twice. In Experiment 1, the cue was presented only on the first copy for half of the displays (Cue-noCue) and only on the second copy for the other half (noCue-Cue). Cueing successfully reduced errors but did not significantly affect RTs. This contrasts with the online experiment, where the cue increased RTs while reducing errors. In Experiment 2, we replicated the design of the online experiment by splitting the displays into noCue-noCue and noCue-Cue pairs. We now found that the cue reduced errors but increased RTs on trials with high-contrast targets. The eye-tracking data show that participants fixated closer to items and that fixation durations were shorter in cued displays. For low-contrast targets, the smaller fixation-item distance reduced search errors (trials on which observers never fixated the target); the remaining low-contrast errors appeared to be recognition errors, in which observers looked at the target but quickly looked away. Taken together, these results suggest that errors were reduced because the cues directed attention to previously overlooked regions rather than enhancing processing at the cued locations.
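The search-error versus recognition-error split described above is usually operationalized with a fixation-to-target distance criterion. A minimal sketch of that classification (the coordinates and the 1.5-degree radius are assumptions for illustration, not the paper's parameters):

```python
import numpy as np

def classify_miss(fixations_deg, target_deg, radius_deg=1.5):
    """Classify a miss trial by whether any fixation landed near the target.

    fixations_deg : (n, 2) fixation x/y positions in degrees of visual angle
    target_deg    : (2,) target position in degrees
    radius_deg    : distance criterion for "fixated the target" (assumed)
    """
    dists = np.linalg.norm(np.asarray(fixations_deg, float) -
                           np.asarray(target_deg, float), axis=1)
    return "recognition error" if (dists <= radius_deg).any() else "search error"

# Hypothetical miss trial: gaze came within ~0.6 deg of the target, so the
# target was looked at but not recognized.
fix = [(2.0, 1.0), (5.2, 3.1), (7.9, 6.4)]
print(classify_miss(fix, target_deg=(5.0, 2.5)))  # -> recognition error
```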
  2. Abstract: Irrelevant salient distractors can trigger early quitting in visual search, causing observers to miss targets they might otherwise find. Here, we asked whether task-relevant salient cues can produce a similar early-quitting effect on the subset of trials where those cues fail to highlight the target. We presented participants with a difficult visual search task and used two cueing conditions. In the high-predictive condition, a salient cue in the form of a red circle highlighted the target most of the time a target was present. In the low-predictive condition, the cue was far less accurate and did not reliably predict the target (i.e., the cue was often a false positive). These were contrasted against a control condition in which no cues were presented. In the high-predictive condition, we found clear evidence of early quitting on trials where the cue was a false positive, as evidenced by both increased miss errors and shorter response times on target-absent trials. No such effects were observed with low-predictive cues. Together, these results suggest that salient cues that are false positives can trigger early quitting, though perhaps only when the cues have a high predictive value. These results have implications for real-world searches, such as medical image screening, where salient cues (referred to as computer-aided detection, or CAD) may be used to highlight potentially relevant areas of images but are sometimes inaccurate.
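The key comparison in this design is the miss rate on target-present trials where the cue was a false positive versus target-present trials where it was not. A minimal sketch of that conditional analysis over a hypothetical trial table (the records and rates are invented for illustration):

```python
import numpy as np

# Hypothetical per-trial records: (target_present, cue_false_positive, missed)
trials = np.array([
    (1, 1, 1), (1, 1, 1), (1, 1, 0), (1, 1, 1),   # false-positive-cue trials
    (1, 0, 0), (1, 0, 1), (1, 0, 0), (1, 0, 0),   # valid-cue / no-cue trials
], dtype=bool)

present, fp_cue, missed = trials.T
fp_mask = present & fp_cue      # target present, cue highlighted a non-target
ok_mask = present & ~fp_cue     # target present, cue valid or absent

print(f"miss rate, false-positive cue: {missed[fp_mask].mean():.2f}")  # high
print(f"miss rate, otherwise:          {missed[ok_mask].mean():.2f}")  # lower
```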
  3. Many augmented reality (AR) applications require observers to shift their gaze between AR and real-world content. To date, commercial optical see-through (OST) AR displays have presented content at either a single focal distance or at a small number of fixed focal distances. Meanwhile, real-world stimuli can occur at a variety of focal distances. Therefore, when shifting gaze between AR and real-world content, observers must often change their eye's accommodative state in order to view the new content in sharp focus. When performed repetitively, this can degrade task performance and increase eye fatigue. However, these effects may be underreported, because past research has not yet considered the potential additional effect of distracting real-world backgrounds. An experimental method that analyzes background effects is presented, using a text-based visual search task that requires integrating information presented in both AR and the real world. An experiment is reported, which examined the effect of a distracting background versus a blank background at focal switching distances of 0, 1.33, 2.0, and 3.33 meters. Qualitatively, a majority of the participants reported that the distracting background made the task more difficult and fatiguing. Quantitatively, increasing the focal switching distance reduced task performance and increased eye fatigue. However, changing the background between blank and distracting did not produce significant measured differences. Suggestions are given for further efforts to examine background effects.
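The cost of a focal switch is conventionally expressed in diopters, the reciprocal of viewing distance in meters, so shifting gaze between depths d1 and d2 demands |1/d1 - 1/d2| diopters of accommodation change. A minimal sketch of that conversion (the 0.5 m AR focal distance and the real-world depths below are illustrative assumptions, not the experiment's geometry):

```python
def switch_demand_diopters(d1_m: float, d2_m: float) -> float:
    """Accommodation change (diopters) for a gaze shift between two depths,
    using the standard relation D = 1/d for distances in meters."""
    return abs(1.0 / d1_m - 1.0 / d2_m)

# Illustrative values only: AR content fixed at 0.5 m, real-world targets
# at a few plausible depths.
for real_depth_m in (0.5, 0.75, 1.0, 2.0):
    demand = switch_demand_diopters(0.5, real_depth_m)
    print(f"0.5 m -> {real_depth_m} m: {demand:.2f} D")
```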
  4.
    A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here, we ask whether an individual’s strategy and performance in one search task are correlated with how they perform in the other two. We tested 64 observers and found that even though the test–retest reliability of the tasks was high, an observer’s performance and strategy in one task were not predictive of their behaviour in the other two. These results suggest search strategies are stable over time but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also for how individuals interact with the search task and context.
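The reported dissociation, high test–retest reliability within a task but little correlation across tasks, can be made concrete with synthetic scores (a minimal sketch; the number of tasks, noise level, and generative model are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_observers = 64

# Each task gets its own stable latent "strategy" trait per observer, so
# retest correlations are high while cross-task correlations are near zero.
traits = rng.normal(size=(n_observers, 3))

def measure(task):
    """One session's noisy measurement of the task-specific trait."""
    return traits[:, task] + 0.3 * rng.normal(size=n_observers)

s1 = {t: measure(t) for t in range(3)}   # session 1
s2 = {t: measure(t) for t in range(3)}   # session 2 (retest)

for t in range(3):
    r = np.corrcoef(s1[t], s2[t])[0, 1]
    print(f"task {t}: test-retest r = {r:.2f}")            # high (~0.9)
r_cross = np.corrcoef(s1[0], s1[1])[0, 1]
print(f"task 0 vs task 1 (same session): r = {r_cross:.2f}")  # near 0
```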
  5. We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars). The goal is to learn a generative model that captures an intermediate distribution, which borrows a subset of properties from each domain, enabling the generation of images that did not exist in any single domain. This challenging problem requires an accurate disentanglement of object shape, appearance, and background from each domain, so that the appearance and shape factors from the two domains can be interchanged. We augment an existing approach that can disentangle factors within a single domain but struggles to do so across domains. Our key technical contribution is to represent object appearance with a differentiable histogram of visual features, and to optimize the generator so that two images with the same latent appearance factor but different latent shape factors produce similar histograms. On multiple multi-domain datasets, we demonstrate that our method leads to accurate and consistent appearance and shape transfer across domains.
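A differentiable histogram is typically built by soft-binning: each feature value contributes to every bin with a smooth kernel weight, so the histogram varies smoothly with the inputs. The following is a generic sketch of that idea, not the authors' implementation (bin count, kernel, bandwidth, and the beta-distributed stand-in features are all assumptions); written here in NumPy, the same operations would be autograd-friendly in a framework such as PyTorch:

```python
import numpy as np

def soft_histogram(values, n_bins=16, lo=0.0, hi=1.0, bandwidth=0.05):
    """Soft-binned histogram: every value contributes to all bins with a
    Gaussian weight, giving a histogram that is smooth in the inputs."""
    values = np.asarray(values, float)
    centers = np.linspace(lo, hi, n_bins)
    # (n_values, n_bins) soft assignment weights
    w = np.exp(-0.5 * ((values[:, None] - centers[None, :]) / bandwidth) ** 2)
    hist = w.sum(axis=0)
    return hist / hist.sum()          # normalize to a distribution

def histogram_loss(feats_a, feats_b):
    """Appearance-consistency penalty: images sharing a latent appearance
    code should yield similar feature histograms regardless of shape code."""
    return np.abs(soft_histogram(feats_a) - soft_histogram(feats_b)).sum()

rng = np.random.default_rng(0)
f1 = rng.beta(2, 5, 1000)     # stand-ins for per-pixel appearance features
f2 = rng.beta(2, 5, 1000)     # same "appearance", different samples
f3 = rng.beta(5, 2, 1000)     # different appearance
print(histogram_loss(f1, f2), histogram_loss(f1, f3))  # small vs. large
```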