Search for: All records

Creators/Authors contains: "Ding, Mingzhou"


  1. Abstract It has been suggested that the visual system samples attended information rhythmically. Does rhythmic sampling also apply to distracting information? How do attended and distracting information compete temporally for neural representations? We recorded electroencephalography from participants who detected instances of coherent motion in a random dot kinematogram (RDK; the target stimulus) overlaid on different categories (pleasant, neutral, and unpleasant) of affective images from the International Affective Picture System (IAPS) (the distractor). The moving dots were flickered at 4.29 Hz, whereas the IAPS pictures were flickered at 6 Hz. The time course of spectral power at 4.29 Hz (dot response) was taken to index the temporal dynamics of target processing. The spatial pattern of power at 6 Hz was similarly extracted and subjected to an MVPA decoding analysis to index the temporal dynamics of processing pleasant, neutral, or unpleasant distractor pictures. We found that (1) both target processing and distractor processing exhibited rhythmicity at ∼1 Hz and (2) the phase difference between the two rhythmic time courses was related to task performance: a relative phase closer to π predicted a higher rate of coherent motion detection, whereas a relative phase closer to 0 predicted a lower rate. These results suggest that (1) in a target-distractor scenario, both attended and distracting information were sampled rhythmically and (2) the more target sampling and distractor sampling were separated in time within a sampling cycle, the smaller the distraction effects, at both the neural and the behavioral level.
    Free, publicly-accessible full text available April 24, 2026
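The key quantity in the abstract above is the relative phase between two ∼1 Hz rhythmic time courses. As a rough illustration of how such a relative phase might be estimated, the following sketch applies the Hilbert transform to two synthetic anti-phase signals; the sampling rate, duration, and signals are all assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 s of data

# Synthetic ~1 Hz power time courses, standing in for the 4.29 Hz
# target response and the 6 Hz distractor decoding time course.
target = np.sin(2 * np.pi * 1.0 * t)
distractor = np.sin(2 * np.pi * 1.0 * t + np.pi)  # anti-phase by construction

# Instantaneous phases via the analytic signal.
phase_diff = np.angle(hilbert(target)) - np.angle(hilbert(distractor))

# Wrap differences to (-pi, pi] and summarize with the circular mean.
wrapped = np.angle(np.exp(1j * phase_diff))
rel_phase = np.abs(np.angle(np.mean(np.exp(1j * wrapped))))
# rel_phase near pi: target and distractor sampling are well separated in time
```

In this construction the two signals are exactly anti-phase, so the estimated relative phase comes out near π, the regime the abstract associates with better coherent-motion detection.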
  2. In models of visual spatial attention control, it is commonly held that top–down control signals originate in the dorsal attention network, propagating to the visual cortex to modulate baseline neural activity and bias sensory processing. However, the precise distribution of these top–down influences across different levels of the visual hierarchy is debated. In addition, it is unclear whether these baseline neural activity changes translate into improved performance. We analyzed attention-related baseline activity during the anticipatory period of a voluntary spatial attention task, using two independent functional magnetic resonance imaging datasets and two analytic approaches. First, as in prior studies, univariate analysis showed that covert attention significantly enhanced baseline neural activity in higher-order visual areas contralateral to the attended visual hemifield, while effects in lower-order visual areas (e.g., V1) were weaker and more variable. Second, in contrast, multivariate pattern analysis (MVPA) revealed significant decoding of attention conditions across all visual cortical areas, with lower-order visual areas exhibiting higher decoding accuracies than higher-order areas. Third, decoding accuracy, rather than the magnitude of univariate activation, was a better predictor of a subject's stimulus discrimination performance. Finally, the MVPA results were replicated across two experimental conditions, where the direction of spatial attention was either externally instructed by a cue or based on the participants' free-choice decision about where to attend. Together, these findings offer new insights into the extent of attentional biases in the visual hierarchy under top–down control and how these biases influence both sensory processing and behavioral performance.
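The MVPA step described above, decoding the attended condition from voxel patterns, can be sketched with a linear classifier and cross-validation. The data here are synthetic (trial counts, voxel counts, and effect size are all assumptions); the sketch only illustrates the general technique, not the study's actual analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50

# Synthetic anticipatory-period voxel patterns: attend-left (0) vs attend-right (1).
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :5] += 0.8  # weak condition-specific pattern in a few voxels

# 5-fold cross-validated decoding accuracy, as is typical for MVPA.
acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
# acc above chance (0.5) indicates the attention condition is decodable
```

Per the abstract, it is this kind of pattern-level decodability, rather than the mean (univariate) activation difference, that best tracked behavioral performance.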
  3. The International Affective Picture System (IAPS) contains 1,182 well-characterized photographs depicting natural scenes varying in affective content. These pictures are used extensively in affective neuroscience to investigate the neural correlates of emotional processing. Recently, in an effort to augment this dataset, we have begun to generate synthetic emotional images by combining IAPS pictures and diffusion-based AI models. The goal of this study is to compare the neural responses to IAPS pictures and matching AI-generated images. The stimulus set consisted of 60 IAPS pictures (20 pleasant, 20 neutral, 20 unpleasant) and 60 matching AI-generated images (20 pleasant, 20 neutral, 20 unpleasant). In a recording session, a total of 30 IAPS pictures and 30 matching AI-generated images were presented in random order; each image was displayed for 3 seconds, with successive images separated by an interval of 2.8 to 3.5 seconds. Each experiment consisted of 10 recording sessions. The fMRI data were recorded on a 3T Siemens Prisma scanner. Pupil responses to image presentation were monitored using an MRI-compatible eye tracker. Our preliminary analysis of the fMRI data (N=3) showed that IAPS pictures and matching AI-generated images evoked similar neural responses in the visual cortex. In particular, MVPA (Multivariate Pattern Analysis) classifiers built to decode emotional categories from neural responses to IAPS pictures could be used to decode emotional categories from neural responses to AI-generated images, and vice versa. Efforts are underway to confirm these findings by recruiting additional participants. Analysis is also being expanded to include comparison of measures such as functional connectivity and pupillometry.
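The cross-decoding result above (train on responses to IAPS pictures, test on responses to AI-generated images) can be illustrated schematically. The simulated "neural" patterns below share category structure across the two stimulus sets by construction; all sizes and noise levels are assumptions, not parameters from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_images, n_features = 60, 40
labels = np.repeat([0, 1, 2], n_images // 3)  # pleasant / neutral / unpleasant

# Category-specific patterns shared by both stimulus sets, mimicking the
# finding that IAPS and AI-generated images evoke similar responses.
category_means = rng.normal(size=(3, n_features))

def simulate_responses(noise_sd):
    return category_means[labels] + rng.normal(scale=noise_sd,
                                               size=(n_images, n_features))

X_iaps = simulate_responses(1.0)  # responses to IAPS pictures
X_ai = simulate_responses(1.0)    # responses to AI-generated images

# Train on IAPS responses, test on AI-generated responses (cross-decoding).
clf = LogisticRegression(max_iter=1000).fit(X_iaps, labels)
cross_acc = clf.score(X_ai, labels)
```

High cross-decoding accuracy here follows directly from the shared category structure; in the study, the analogous result is what supports the claim that the two stimulus sets evoke similar emotion-specific responses.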
  4. Wei, Xue-Xin (Ed.)
    Recent neuroimaging studies have shown that the visual cortex plays an important role in representing the affective significance of visual input. The origin of these affect-specific visual representations is debated: are they intrinsic to the visual system, or do they arise through reentry from frontal emotion-processing structures such as the amygdala? We examined this problem by combining convolutional neural network (CNN) models of the human ventral visual cortex, pre-trained on ImageNet, with two datasets of affective images. Our results show that in all layers of the CNN models there were artificial neurons that responded consistently and selectively to neutral, pleasant, or unpleasant images; lesioning these neurons by setting their output to zero, or enhancing them by increasing their gain, led to decreased or increased emotion recognition performance, respectively. These results support the idea that the visual system may have the intrinsic ability to represent the affective significance of visual input and suggest that CNNs offer a fruitful platform for testing neuroscientific theories.
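The lesioning and gain-enhancement manipulations described above reduce to simple operations on unit activations. The toy model below uses a random linear readout in place of a real CNN; every quantity (unit count, readout weights, activations) is an assumption, and the sketch only shows the mechanics of zeroing a unit's output versus scaling its gain.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_classes = 20, 3  # classes: neutral / pleasant / unpleasant

# Toy readout: emotion-class scores are a linear function of layer activations.
W = rng.normal(size=(n_classes, n_units))
act = rng.normal(size=n_units) + 1.0   # one image's layer activations

predicted = int(np.argmax(W @ act))    # class read out from intact activations

# Pick the unit contributing most to the predicted class's score.
unit = int(np.argmax(W[predicted] * act))

lesioned = act.copy()
lesioned[unit] = 0.0      # "lesion": set the unit's output to zero
enhanced = act.copy()
enhanced[unit] *= 2.0     # "enhance": increase the unit's gain

# Lesioning a class-selective unit lowers that class's evidence; gain raises it.
score_drop = (W[predicted] @ act) - (W[predicted] @ lesioned)
score_gain = (W[predicted] @ enhanced) - (W[predicted] @ act)
```

In the study, the analogous effect at the level of recognition performance (lower after lesioning, higher after gain enhancement) is what ties the selective units to the networks' emotion representations.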