Title: Semantic novelty modulates neural responses to visual change across the human brain
Abstract Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. We investigate the neural responses to these sources of novelty during film viewing. We analyzed intracranial recordings in humans across 6328 electrodes from 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high- or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.
Award ID(s):
2201835
PAR ID:
10415066
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Nature Communications
Volume:
14
Issue:
1
ISSN:
2041-1723
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Decades of research have shown that global brain states such as arousal can be indexed by measuring properties of the eyes. The spiking responses of neurons throughout the brain have been associated with the pupil, small fixational saccades, and vigor in eye movements, but it has been difficult to isolate how internal states affect the eyes, and vice versa. While recording from populations of neurons in the visual and prefrontal cortex (PFC), we recently identified a latent dimension of neural activity called “slow drift,” which appears to reflect a shift in a global brain state. Here, we asked if slow drift is correlated with the action of the eyes in distinct behavioral tasks. We recorded from visual cortex (V4) while monkeys performed a change detection task, and from PFC while they performed a memory-guided saccade task. In both tasks, slow drift was associated with the size of the pupil and the microsaccade rate, two external indicators of the internal state of the animal. These results show that metrics related to the action of the eyes are associated with a dominant and task-independent mode of neural activity that can be accessed in the population activity of neurons across the cortex.
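The core analysis sketched in the abstract above, correlating a latent population dimension with pupil size, can be illustrated with a minimal NumPy example: take the leading principal component of population activity as a stand-in for the “slow drift” dimension and correlate it with a pupil trace. All data and variable names here are synthetic and illustrative; nothing is drawn from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic session: 500 time bins, 40 neurons.
# A shared slow signal drives both population activity and pupil size.
n_bins, n_neurons = 500, 40
slow_state = np.cumsum(rng.normal(size=n_bins))      # slow random walk
loadings = rng.normal(size=n_neurons)                # per-neuron coupling
spikes = np.outer(slow_state, loadings) + rng.normal(size=(n_bins, n_neurons))
pupil = slow_state + rng.normal(size=n_bins)         # pupil tracks the same state

# "Slow drift" stand-in: first principal component of centered population activity.
centered = spikes - spikes.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
drift = centered @ vt[0]                             # projection onto PC1

# Correlate the latent dimension with the pupil trace
# (the sign of a principal component is arbitrary, so take the magnitude).
r = np.corrcoef(drift, pupil)[0, 1]
print(f"|corr(slow drift, pupil)| = {abs(r):.2f}")
```

With a shared latent signal this strong, the correlation comes out close to 1; in real recordings the effect is far subtler, which is what makes isolating it difficult.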
  2. Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S (Ed.)
    The human visual system uses two parallel pathways for spatial processing and object recognition. In contrast, computer vision systems tend to use a single feedforward pathway, rendering them less robust, adaptive, or efficient than human vision. To bridge this gap, we developed a dual-stream vision model inspired by the human eyes and brain. At the input level, the model samples two complementary visual patterns to mimic how the human eyes use magnocellular and parvocellular retinal ganglion cells to separate retinal inputs to the brain. At the backend, the model processes the separate input patterns through two branches of convolutional neural networks (CNNs) to mimic how the human brain uses the dorsal and ventral cortical pathways for parallel visual processing. The first branch (WhereCNN) samples a global view to learn spatial attention and control eye movements. The second branch (WhatCNN) samples a local view to represent the object around the fixation. Over time, the two branches interact recurrently to build a scene representation from moving fixations. We compared this model with human brains processing the same movie and evaluated their functional alignment by linear transformation. The WhereCNN and WhatCNN branches were found to differentially match the dorsal and ventral pathways of the visual cortex, respectively, primarily due to their different learning objectives, rather than their distinctions in retinal sampling or sensitivity to attention-driven eye movements. These model-based results lead us to speculate that the distinct responses and representations of the ventral and dorsal streams are more influenced by their distinct goals in visual attention and object recognition than by their specific bias or selectivity in retinal inputs. This dual-stream model takes a further step in brain-inspired computer vision, enabling parallel neural networks to actively explore and understand the visual surroundings.
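The two complementary input samples described above (a coarse full-field view for the WhereCNN-style branch, a fine foveal crop for the WhatCNN-style branch) can be sketched with plain NumPy. This is a minimal illustration of the sampling idea only; the function names, downsampling factor, and crop radius are assumptions for the sketch, and the CNN backends are omitted entirely.

```python
import numpy as np

def global_view(image, factor=4):
    """Coarse, full-field sample (magnocellular-like): block-average downsampling."""
    h, w = image.shape
    trimmed = image[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def local_view(image, fixation, radius=8):
    """Fine, foveal sample (parvocellular-like): full-resolution crop at fixation."""
    y, x = fixation
    return image[max(0, y - radius):y + radius, max(0, x - radius):x + radius]

image = np.random.rand(64, 64)
where_input = global_view(image)           # would feed a "WhereCNN"-style branch
what_input = local_view(image, (32, 32))   # would feed a "WhatCNN"-style branch
print(where_input.shape, what_input.shape)  # (16, 16) (16, 16)
```

In the actual model the WhereCNN output would move the fixation point, so the two samples would be recomputed at each step of a recurrent loop.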
  3. Abstract Unconscious neural activity has been shown to precede both motor and cognitive acts. In the present study, we investigated the neural antecedents of overt attention during visual search, where subjects make voluntary saccadic eye movements to search a cluttered stimulus array for a target item. Building on studies of both overt self-generated motor actions (Lau et al., 2004, Soon et al., 2008) and self-generated cognitive actions (Bengson et al., 2014, Soon et al., 2013), we hypothesized that brain activity prior to the onset of a search array would predict the direction of the first saccade during unguided visual search. Because spatial attention and gaze are coordinated during visual search, cognition and motor action are coupled in this task. A well-established finding in fMRI studies of willed action is that neural antecedents of the intention to make a motor act (e.g., reaching) can be identified seconds before the action occurs. Studies of the volitional control of covert spatial attention in EEG have shown that predictive brain activity is limited to only a few hundred milliseconds before a voluntary shift of covert spatial attention. In the present study, the visual search task and stimuli were designed so that subjects could not predict the onset of the search array. Perceptual task difficulty was high, such that they could not locate the target using covert attention alone, thus requiring overt shifts of attention (saccades) to carry out the visual search. If the first saccade after array onset in unguided visual search shares mechanisms with willed shifts of covert attention, we expected predictive EEG alpha-band activity (8-12 Hz) immediately prior to the array onset (within 1 sec) (Bengson et al., 2014; Nadra et al., 2023).
Alternatively, if they follow the principles of willed motor actions, predictive neural signals should be reflected in broadband EEG activity (Libet et al., 1983) and would likely emerge earlier (Soon et al., 2008). Applying support vector machine decoding, we found that the direction of the first saccade in an unguided visual search could be predicted up to two seconds preceding the search array’s onset in the broadband but not alpha-band EEG. These findings suggest that self-directed eye movements in visual search emerge from early preparatory neural activity more akin to willed motor actions than to covert willed attention. This highlights a distinct role for unconscious neural dynamics in shaping visual search behavior. 
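The decoding step described above, predicting saccade direction from pre-array EEG, can be illustrated with a small self-contained sketch. To keep the example dependency-free, a nearest-centroid linear classifier stands in for the support vector machine named in the abstract; the trial counts, channel count, and signal strength are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "pre-array EEG": 200 trials x 32 channels, two saccade directions.
# A weak direction-dependent spatial pattern is embedded in broadband noise.
n_trials, n_channels = 200, 32
labels = rng.integers(0, 2, n_trials)          # 0 = leftward, 1 = rightward saccade
pattern = rng.normal(size=n_channels)
X = rng.normal(size=(n_trials, n_channels)) + 0.8 * np.outer(labels - 0.5, pattern)

# Hold out the last 50 trials, then fit a nearest-centroid linear decoder
# (a dependency-free stand-in for the SVM used in the study).
train = np.arange(n_trials) < 150
test = ~train
mu0 = X[train & (labels == 0)].mean(axis=0)
mu1 = X[train & (labels == 1)].mean(axis=0)
w = mu1 - mu0                                  # discriminant direction
b = -0.5 * w @ (mu1 + mu0)                     # midpoint bias
pred = (X[test] @ w + b > 0).astype(int)

accuracy = (pred == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

A time-resolved version of this analysis, fit separately at each pre-stimulus time point, is what yields the "predictive up to two seconds before array onset" style of result reported above.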
  4. Scientists have pondered the perceptual effects of ocular motion, and those of its counterpart, ocular stillness, for over 200 years. The unremitting ‘trembling of the eye’ that occurs even during gaze fixation was first noted by Jurin in 1738. In 1794, Erasmus Darwin documented that gaze fixation produces perceptual fading, a phenomenon rediscovered in 1804 by Ignaz Paul Vital Troxler. Studies in the twentieth century established that Jurin's ‘eye trembling’ consisted of three main types of ‘fixational’ eye movements, now called microsaccades (or fixational saccades), drifts and tremor. Yet, owing to the constant and minute nature of these motions, the study of their perceptual and physiological consequences has met significant technological challenges. Studies starting in the 1950s and continuing in the present have attempted to study vision during retinal stabilization—a technique that consists of shifting any and all visual stimuli presented to the eye in such a way as to nullify all concurrent eye movements—providing a tantalizing glimpse of vision in the absence of change. No research to date has achieved perfect retinal stabilization, however, and so other work has devised substitute ways to counteract eye motion, such as by studying the perception of afterimages or of the entoptic images formed by retinal vessels, which are completely stable with respect to the eye. Yet other research has taken the alternative tack of controlling eye motion by behavioural instruction to fix one's gaze or to keep one's gaze still, during concurrent physiological and/or psychophysical measurements. Here, we review the existing data—from historical and contemporary studies that have aimed to nullify or minimize eye motion—on the perceptual and physiological consequences of perfect versus imperfect fixation. We also discuss the accuracy, quality and stability of ocular fixation, and the bottom–up and top–down influences that affect fixation behaviour.
This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’. 
  5. Spatial ability is the ability to generate, store, retrieve, and transform visual information to mentally represent a space and make sense of it. This ability is a critical facet of human cognition that affects knowledge acquisition, productivity, and workplace safety. Although improved spatial ability is essential for safely navigating and perceiving a space on earth, it is even more critical in the altered environments of other planets and deep space, which may pose extreme and unfamiliar visuospatial conditions. Such conditions may range from microgravity settings with misalignment of body and visual axes to a lack of landmark objects that offer spatial cues for perceiving size, distance, and speed. These altered visuospatial conditions may pose challenges to human spatial cognitive processing, which assists humans in locating objects in space, perceiving them visually, and comprehending spatial relationships between the objects and surroundings. The main goal of this paper is to examine whether eye-tracking measures of gaze patterns can indicate whether such altered conditions demand more mental effort and attention. The key dimensions of spatial ability (i.e., spatial visualization, spatial relations, and spatial orientation) are examined under three simulated conditions: (1) aligned body and visual axes (control group); (2) statically misaligned body and visual axes (experiment group I); and (3) dynamically misaligned body and visual axes (experiment group II). The three conditions were simulated in Virtual Reality (VR) using the Unity 3D game engine. Participants were recruited from the Texas A&M University student population; they wore HTC VIVE Head-Mounted Displays (HMDs) equipped with eye-tracking technology while completing three spatial tests measuring spatial visualization, orientation, and relations.
The Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test were used to evaluate the spatial visualization, spatial relations, and spatial orientation of 78 participants, respectively. For each test, gaze data was collected through the Tobii eye-tracker integrated in the HTC Vive HMDs. Quick eye movements, known as saccades, were identified from the rate of change of gaze position over time in the raw eye-tracking data, and the number of saccades was used as a measure of mental effort. The results showed that the mean number of saccades in the MCT and PSVT: R tests was statistically larger in experiment group II than in the control group or experiment group I. However, the PTA test data did not meet the required assumptions for comparing the mean number of saccades across the three groups. The results suggest that spatial relations and visualization may require more mental effort under dynamically misaligned idiotropic and visual axes than under aligned or statically misaligned idiotropic and visual axes. However, the data could not reveal whether spatial orientation requires more or less mental effort under aligned, statically misaligned, and dynamically misaligned idiotropic and visual axes. The results of this study are important for understanding how altered visuospatial conditions impact spatial cognition and how simulation- or game-based training tools can be developed to train people to adapt to extreme or altered work environments and to work more productively and safely.
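The saccade-identification step described above, thresholding the rate of change of gaze position over time, can be sketched in a few lines of NumPy. The sampling rate and velocity threshold here are illustrative defaults, not values taken from the study, and the synthetic gaze trace is invented for the example.

```python
import numpy as np

def count_saccades(gaze, fs=120.0, vel_thresh=100.0):
    """Count saccades as onsets of samples whose gaze speed (deg/s) exceeds
    a fixed velocity threshold, i.e. the rate of change of gaze position."""
    velocity = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * fs   # deg/s
    fast = velocity > vel_thresh
    onsets = fast & ~np.concatenate(([False], fast[:-1]))           # rising edges only
    return int(onsets.sum())

# Synthetic trace: three fixations (tiny drift) separated by two rapid jumps.
rng = np.random.default_rng(2)
fixations = [np.array([0.0, 0.0]), np.array([8.0, 2.0]), np.array([3.0, 9.0])]
gaze = np.concatenate([f + 0.01 * rng.normal(size=(100, 2)) for f in fixations])

print(count_saccades(gaze))  # -> 2
```

Counting rising edges rather than fast samples keeps a multi-sample saccade from being counted more than once, which matters when the per-group saccade counts are then compared statistically, as in the abstract above.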