
Title: Acquisition of visual priors and induced hallucinations in chronic schizophrenia

Prominent theories suggest that symptoms of schizophrenia stem from learning deficiencies resulting in distorted internal models of the world. To test these theories further, we used a visual statistical learning task known to induce rapid implicit learning of the stimulus statistics. In this task, participants are presented with a field of coherently moving dots and are asked to report the presented direction of the dots (estimation task) and whether they saw any dots or not (detection task). Two of the directions were presented more frequently than the others. In controls, the implicit acquisition of the stimulus statistics influences perception in two ways: (i) motion directions are perceived as being more similar to the most frequently presented directions than they really are (estimation biases); and (ii) in the absence of stimuli, participants sometimes report perceiving the most frequently presented directions (a form of hallucination). Such behaviour is consistent with probabilistic inference, i.e. combining learnt perceptual priors with sensory evidence. We investigated whether patients with chronic, stable, treated schizophrenia (n = 20) differ from controls (n = 23) in the acquisition of the perceptual priors and/or their influence on perception. We found that although patients were slower than controls, they showed comparable acquisition of perceptual priors, approximating the stimulus statistics. This suggests that patients have no statistical learning deficits in our task, which may reflect their relative wellbeing on antipsychotic medication. Intriguingly, however, patients experienced significantly fewer (P = 0.016) hallucinations of the most frequently presented directions than controls when the stimulus was absent or very weak (prior-based lapse estimations). This suggests that prior expectations had less influence on patients’ perception than on controls’ when stimuli were absent or below perceptual threshold.
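The probabilistic-inference account described above can be sketched numerically: with a Gaussian prior over motion direction and Gaussian sensory noise, the posterior mean is a precision-weighted average, so estimates are pulled toward the frequently presented direction, and more so as sensory evidence weakens. A minimal illustrative sketch (the Gaussian approximation and all parameter values are assumptions, not the study's fitted model):

```python
def posterior_estimate(stimulus_deg, prior_mean_deg, sigma_sensory, sigma_prior):
    """Precision-weighted combination of a learnt Gaussian prior over
    motion direction with Gaussian sensory evidence (a local linear
    approximation of circular direction space; illustrative only)."""
    w_sensory = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sensory ** 2)
    return w_sensory * stimulus_deg + (1.0 - w_sensory) * prior_mean_deg

# A 50 deg stimulus near a frequently presented 64 deg direction is
# perceived as shifted toward the prior mode (an estimation bias), and
# the shift grows when the sensory evidence is noisier (weak stimuli).
est_strong = posterior_estimate(50.0, 64.0, sigma_sensory=8.0, sigma_prior=12.0)
est_weak = posterior_estimate(50.0, 64.0, sigma_sensory=20.0, sigma_prior=12.0)
```

In the limit of an absent or sub-threshold stimulus, sensory precision goes to zero and the posterior collapses onto the prior mode, which is the regime in which prior-based lapse estimations (hallucination-like reports) arise.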

Publisher / Repository: Oxford University Press
Page Range / eLocation ID: p. 2523-2537
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Binding across sensory modalities yields substantial perceptual benefits, including enhanced speech intelligibility. The coincidence of sensory inputs across time is a fundamental cue for this integration process. Recent work has suggested that individuals with diagnoses of schizophrenia (SZ) and autism spectrum disorder (ASD) will characterize auditory and visual events as synchronous over larger temporal disparities than their neurotypical counterparts. Namely, these clinical populations possess an enlarged temporal binding window (TBW). Although patients with SZ and ASD share aspects of their symptomatology, phenotypic similarities may result from distinct etiologies. To examine similarities and variances in audiovisual temporal function in these two populations, individuals diagnosed with ASD (n = 46; controls n = 40) and SZ (n = 16; controls n = 16) completed an audiovisual simultaneity judgment task. In addition to standard psychometric analyses, synchrony judgments were assessed using Bayesian causal inference modeling. This approach permits distinguishing between two distinct causes of an enlarged TBW: an a priori bias to bind sensory information, and poor fidelity in the sensory representation. Findings indicate that both ASD and SZ populations show deficits in multisensory temporal acuity. Importantly, results suggest that while the wider TBWs in ASD most prominently result from atypical priors, the wider TBWs in SZ result from a trend toward changes in the prior together with weaknesses in the sensory representations. Results are discussed in light of current ASD and SZ theories; they highlight that different perceptual training paradigms focused on improving multisensory integration may be most effective in these two clinical populations, and emphasize that similar phenotypes may emanate from distinct mechanistic causes.
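The two candidate causes of an enlarged TBW can be separated in a toy version of the causal inference model: the observer reports "synchronous" when the posterior probability of a common cause exceeds 0.5, and either a stronger prior to bind (the ASD-like route) or noisier sensory timing (the SZ-like route) widens the window. A minimal sketch with assumed, unfitted parameters:

```python
import numpy as np
from math import exp, pi, sqrt

def p_common(soa_ms, prior_common, sigma_ms, uniform_range_ms=1000.0):
    """Posterior probability that the audio and visual onsets share a
    common cause, given their asynchrony (simplified Bayesian causal
    inference: Gaussian likelihood under a common cause, uniform
    likelihood under independent causes)."""
    like_common = exp(-0.5 * (soa_ms / sigma_ms) ** 2) / (sigma_ms * sqrt(2.0 * pi))
    like_indep = 1.0 / uniform_range_ms
    num = prior_common * like_common
    return num / (num + (1.0 - prior_common) * like_indep)

def binding_window_ms(prior_common, sigma_ms):
    """Half-width of the TBW: the largest SOA still judged synchronous
    (posterior > 0.5), found by a 1 ms scan."""
    sync = [s for s in np.arange(0.0, 500.0, 1.0)
            if p_common(s, prior_common, sigma_ms) > 0.5]
    return max(sync) if sync else 0.0

w_base = binding_window_ms(prior_common=0.5, sigma_ms=50.0)
w_prior = binding_window_ms(prior_common=0.8, sigma_ms=50.0)  # stronger binding prior
w_noise = binding_window_ms(prior_common=0.5, sigma_ms=80.0)  # noisier sensory timing
```

Both manipulations enlarge the window, which is exactly why fitting the full model is needed to tell an atypical prior apart from a degraded sensory representation.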

  2. Abstract

    To accurately categorize items, humans learn to selectively attend to the stimulus dimensions that are most relevant to the task. Models of category learning describe how attention changes across trials as labeled stimuli are progressively observed. The Adaptive Attention Representation Model (AARM), for example, provides an account in which categorization decisions are based on the perceptual similarity of a new stimulus to stored exemplars, and dimension-wise attention is updated on every trial in the direction of a feedback-based error gradient. As such, attention modulation as described by AARM requires interactions among processes of orienting, visual perception, memory retrieval, prediction error, and goal maintenance to facilitate learning. The current study explored the neural bases of attention mechanisms using quantitative predictions from AARM to analyze behavioral and fMRI data collected while participants learned novel categories. Generalized linear model analyses revealed patterns of BOLD activation in the parietal cortex (orienting), visual cortex (perception), medial temporal lobe (memory retrieval), basal ganglia (prediction error), and pFC (goal maintenance) that covaried with the magnitude of model-predicted attentional tuning. Results are consistent with AARM's specification of attention modulation as a dynamic property of distributed cognitive systems.
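The trial-wise attention updating that AARM describes can be illustrated with a toy exemplar model: similarity is exponential in the attention-weighted city-block distance, and attention descends a numerical gradient of the categorization error. This is a simplified sketch in the spirit of such models, not the published AARM equations; the exemplars and parameters are invented for illustration:

```python
import numpy as np

def update_attention(attention, stimulus, exemplars, labels, true_label, lr=0.1):
    """One trial of error-gradient attention tuning over stimulus
    dimensions (toy exemplar model with a numerical gradient)."""
    def p_correct(att):
        dists = np.sum(att * np.abs(exemplars - stimulus), axis=1)
        sims = np.exp(-dists)
        return sims[labels == true_label].sum() / sims.sum()

    eps = 1e-5
    grad = np.zeros_like(attention)
    for i in range(attention.size):  # numerical gradient of 1 - p(correct)
        att_plus = attention.copy()
        att_plus[i] += eps
        grad[i] = -(p_correct(att_plus) - p_correct(attention)) / eps
    new_att = np.clip(attention - lr * grad, 0.0, None)
    return new_att * attention.size / new_att.sum()  # keep mean weight at 1

# Dimension 0 predicts the category; dimension 1 is irrelevant.
exemplars = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 0, 1, 1])
att = update_attention(np.array([1.0, 1.0]), np.array([0.0, 0.5]),
                       exemplars, labels, true_label=0)
```

A single feedback trial already shifts weight toward the diagnostic dimension, which is the qualitative behavior the fMRI analyses tracked trial by trial.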
  3. Motor learning in visuomotor adaptation tasks results from both explicit and implicit processes, each responding differently to an error signal. Although the motor output side of these processes has been extensively studied, the visual input side is relatively unknown. We investigated whether and how depth perception affects the computation of error information by explicit and implicit motor learning. Two groups of participants made reaching movements to bring a virtual cursor to a target in the frontoparallel plane. The Delayed group was allowed to reaim and their feedback was delayed to emphasize explicit learning, whereas the Clamped group received task-irrelevant clamped cursor feedback and continued to aim straight at the target to emphasize implicit adaptation. Both groups played this game in a highly detailed virtual environment (depth condition), leveraging a cover task of playing darts in a virtual tavern, and in an empty environment (no-depth condition). The Delayed group showed an increase in error sensitivity under depth relative to no-depth. In contrast, the Clamped group adapted to the same degree under both conditions. The movement kinematics of the Delayed participants also changed under the depth condition, consistent with the target appearing more distant, unlike the Clamped group. A comparison of the Delayed group's behavioral data with a perceptual task from the same individuals showed that the greater reaiming in the depth condition was consistent with an increase in the scaling of the error distance and size. These findings suggest that explicit and implicit learning processes may rely on different sources of perceptual information.

    NEW & NOTEWORTHY We leveraged a classic sensorimotor adaptation task to perform a first systematic assessment of the role of perceptual cues in the estimation of an error signal in 3-D space during motor learning. We crossed two conditions presenting different amounts of depth information with two manipulations emphasizing explicit and implicit learning processes. Explicit learning responded to the visual conditions, consistent with perceptual reports, whereas implicit learning appeared to be independent of them.
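The depth effect on explicit learning is consistent with a scaling of the perceived error in a standard single-rate state-space model of adaptation, where each update is proportional to error sensitivity times the (perceptually scaled) error. A sketch with assumed parameters, not values fitted to this study's data:

```python
def simulate_adaptation(error_scale, sensitivity=0.1, retention=0.98,
                        perturbation_deg=30.0, n_trials=100):
    """Single-rate state-space model of visuomotor adaptation:
    x[t+1] = retention * x[t] + sensitivity * perceived_error[t],
    where the perceived error is the visual error scaled by depth
    cues (error_scale). All parameters are illustrative."""
    x = 0.0
    for _ in range(n_trials):
        perceived_error = error_scale * (perturbation_deg - x)
        x = retention * x + sensitivity * perceived_error
    return x  # adaptation level after n_trials

adapt_depth = simulate_adaptation(error_scale=1.3)  # richer depth cues
adapt_flat = simulate_adaptation(error_scale=1.0)   # empty environment
```

In this formulation, scaling the perceived error is behaviorally indistinguishable from raising error sensitivity, which matches the finding that the Delayed group's increased sensitivity under depth tracked their perceptual rescaling of the error.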
  4. Serial dependence—an attractive perceptual bias whereby a current stimulus is perceived to be similar to previously seen ones—is thought to represent the process that facilitates the stability and continuity of visual perception. Recent results demonstrate a neural signature of serial dependence in numerosity perception, emerging very early in the time course during perceptual processing. However, whether such a perceptual signature is retained after the initial processing remains unknown. Here, we address this question by investigating the neural dynamics of serial dependence using a recently developed technique that allowed a reactivation of hidden memory states. Participants performed a numerosity discrimination task during EEG recording, with task-relevant dot array stimuli preceded by a task-irrelevant stimulus inducing serial dependence. Importantly, the neural network storing the representation of the numerosity stimulus was perturbed (or pinged) so that the hidden states of that representation can be explicitly quantified. The results first show that a neural signature of serial dependence emerges early in the brain signals, starting soon after stimulus onset. Critical to the central question, the pings at a later latency could successfully reactivate the biased representation of the initial stimulus carrying the signature of serial dependence. These results provide one of the first pieces of empirical evidence that the biased neural representation of a stimulus initially induced by serial dependence is preserved throughout a relatively long period. 
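Behaviorally, attractive serial dependence is commonly described with a derivative-of-Gaussian bias: the current estimate is pulled toward the preceding stimulus, most strongly at intermediate stimulus differences. A descriptive sketch with illustrative amplitude and width (not parameters from this study):

```python
from math import exp

def biased_estimate(current, previous, amplitude=4.0, width=20.0):
    """Derivative-of-Gaussian model of attractive serial dependence:
    the bias peaks at `amplitude` when the previous stimulus differs
    from the current one by `width` (stimulus-dimension units), and
    vanishes for very large differences."""
    diff = previous - current
    bias = amplitude * (diff / width) * exp(0.5 - 0.5 * (diff / width) ** 2)
    return current + bias

# An array of 30 dots seen after an array of 40 is judged slightly
# more numerous: the estimate is pulled toward the previous stimulus.
est = biased_estimate(30.0, previous=40.0)
```

The EEG question in the abstract is whether the neural representation carrying this bias, present early after stimulus onset, is still recoverable later, which the pinging technique makes it possible to test.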
  5. Cross-modal effects provide a model framework for investigating hierarchical inter-areal processing, particularly under conditions where unimodal cortical areas receive contextual feedback from other modalities. Here, using complementary behavioral and brain-imaging techniques, we investigated the functional networks participating in face and voice processing during gender perception, a high-level feature of voice and face perception. Within the framework of a signal detection decision model, maximum likelihood conjoint measurement (MLCM) was used to estimate the contributions of the face and voice to gender comparisons between pairs of audio-visual stimuli in which the face and voice were independently modulated. Top–down contributions were varied by instructing participants to make judgments based on the gender of either the face, the voice or both modalities (N = 12 for each task). Estimated face and voice contributions to the judgments of the stimulus pairs were not independent; both contributed to all tasks, but their respective weights varied over a 40-fold range due to top–down influences. Models that best described the modal contributions required the inclusion of two different top–down interactions: (i) an interaction that depended on gender congruence across modalities (i.e., the difference between face and voice modalities for each stimulus); and (ii) an interaction that depended on the gender magnitude within modalities. The significance of these interactions was task dependent: the gender congruence interaction was significant for the face and voice tasks, while the gender magnitude interaction was significant for the face and stimulus tasks. Subsequently, we used the same stimuli and related tasks in a functional magnetic resonance imaging (fMRI) paradigm (N = 12) to explore the neural correlates of these perceptual processes, analyzed with Dynamic Causal Modeling (DCM) and Bayesian Model Selection. Results revealed changes in effective connectivity between the unimodal Fusiform Face Area (FFA) and Temporal Voice Area (TVA) that paralleled the face and voice behavioral interactions observed in the psychophysical data. These findings highlight the role in perception of multiple parallel feedback pathways to unimodal areas.
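In its additive form, the MLCM decision model reduces to a signal-detection rule: the observer compares weighted sums of face and voice gender scale values across the two stimuli, corrupted by Gaussian decision noise. A toy sketch with invented scale values and weights (the 40-fold weight range reported above corresponds to these weights varying across tasks):

```python
from math import erf, sqrt

def p_choose_first(face_pair, voice_pair, w_face, w_voice, noise_sd=1.0):
    """Additive MLCM-style decision rule: probability of judging
    stimulus 1 more masculine than stimulus 2, from the weighted
    difference of their face and voice gender scale values
    (illustrative values, not fitted estimates)."""
    d = (w_face * (face_pair[0] - face_pair[1])
         + w_voice * (voice_pair[0] - voice_pair[1]))
    return 0.5 * (1.0 + erf(d / (noise_sd * sqrt(2.0))))

# The same stimulus pair judged under face-task vs voice-task weights:
p_face_task = p_choose_first((2.0, 1.0), (0.0, 1.0), w_face=2.0, w_voice=0.1)
p_voice_task = p_choose_first((2.0, 1.0), (0.0, 1.0), w_face=0.1, w_voice=2.0)
```

With the weights swapped by task instruction, the very same pair of stimuli yields opposite judgments, which is the top–down effect the congruence and magnitude interaction terms were added to capture.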