Title: Investigating Evoked EEG Responses to Targets Presented in Virtual Reality
Virtual reality (VR) offers the potential to study brain function in complex, ecologically realistic environments. However, the additional degrees of freedom make analysis more challenging, particularly with respect to evoked neural responses. In this paper, we designed a target detection task in VR in which we varied the visual angle of targets as subjects moved through a three-dimensional maze. We investigated how the latency and shape of the classic P300 evoked response varied as a function of locking the electroencephalogram (EEG) data to the target image onset, the target-saccade intersection, and the first fixation on the target. We found, as expected, a systematic shift in the timing of the evoked responses as a function of the type of response locking, as well as a difference in the shape of the waveforms. Interestingly, single-trial analysis showed that the peak discriminability of the evoked responses does not differ between image-locked and saccade-locked analyses, though it decreases significantly when fixation-locked. These results suggest that there is a spread in the perception of visual information in VR environments across time and visual space. Our results point to the importance of considering how information may be perceived in naturalistic environments, specifically those that have more complexity and higher degrees of freedom than traditional laboratory paradigms.
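A minimal sketch of the event-locking idea described above, using MNE-Python and scikit-learn (tooling the paper does not specify, so this is an assumption): EEG epochs are extracted relative to three hypothetical event codes, one per locking scheme, and single-trial discriminability of targets versus distractors is estimated with a sliding-window linear discriminant. File names, event codes, and window sizes are illustrative.

```python
# Sketch only: not the authors' pipeline. Event codes, file name, and windowing
# are assumptions made for illustration.
import numpy as np
import mne
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

raw = mne.io.read_raw_fif("vr_target_task_raw.fif", preload=True)  # hypothetical file
events = mne.find_events(raw)  # assumes a stim channel carrying event triggers

# Hypothetical event codes: target/distractor events for each locking scheme
lockings = {
    "image_locked":    {"target": 11, "distractor": 10},
    "saccade_locked":  {"target": 21, "distractor": 20},
    "fixation_locked": {"target": 31, "distractor": 30},
}

for name, event_id in lockings.items():
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)
    X = epochs.get_data()                               # (n_trials, n_channels, n_times)
    y = (epochs.events[:, 2] == event_id["target"]).astype(int)

    # Sliding-window discriminability: cross-validated AUC of an LDA classifier
    win = 5                                             # samples per window (illustrative)
    aucs = []
    for start in range(0, X.shape[2] - win, win):
        Xw = X[:, :, start:start + win].mean(axis=2)    # average within the window
        auc = cross_val_score(LinearDiscriminantAnalysis(), Xw, y,
                              cv=5, scoring="roc_auc").mean()
        aucs.append(auc)
    print(f"{name}: peak single-trial AUC = {max(aucs):.2f}")
```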
Award ID(s):
1816363
NSF-PAR ID:
10295753
Author(s) / Creator(s):
Date Published:
Journal Name:
41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Page Range / eLocation ID:
5536 to 5539
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objectively differentiating patient mental states based on electrical activity, as opposed to overt behavior, is a fundamental neuroscience problem with medical applications, such as identifying patients in locked-in state vs. coma. Electroencephalography (EEG), which detects millisecond-level changes in brain activity across a range of frequencies, allows for assessment of external stimulus processing by the brain in a non-invasive manner. We applied machine learning methods to 26-channel EEG data of 24 fluent Deaf signers watching videos of sign language sentences (comprehension condition), and the same videos reversed in time (non-comprehension condition), to objectively separate vision-based high-level cognition states. While spectrotemporal parameters of the stimuli were identical in comprehension vs. non-comprehension conditions, the neural responses of participants varied based on their ability to linguistically decode visual data. We aimed to determine which subset of parameters (specific scalp regions or frequency ranges) would be necessary and sufficient for high classification accuracy of comprehension state. Optical flow, characterizing the distribution of velocities of objects in an image, was calculated for each pixel of the stimulus videos using the MATLAB Vision toolbox. Coherence between optical flow in the stimulus and the EEG neural response (per video, per participant) was then computed using canonical component analysis with the NoiseTools toolbox. Peak correlations were extracted for each frequency for each electrode, participant, and video. A set of standard ML algorithms was applied to the entire dataset (26 channels, frequencies from 0.2 Hz to 12.4 Hz, binned in 1 Hz increments), with consistent 100% out-of-sample accuracy for frequencies in the 0.2–1 Hz range for all regions, and above 80% accuracy for frequencies < 4 Hz. Sparse Optimal Scoring (SOS) was then applied to the EEG data to reduce the dimensionality of the features and improve model interpretability. SOS with elastic-net penalty resulted in out-of-sample classification accuracy of 98.89%. The sparsity pattern in the model indicated that frequencies between 0.2 and 4 Hz were primarily used in the classification, suggesting that the underlying data may be group sparse. Further, SOS with group lasso penalty was applied to regional subsets of electrodes (anterior, posterior, left, right). All trials achieved greater than 97% out-of-sample classification accuracy. The sparsity patterns from the trials using 1 Hz bins over individual regions consistently indicated that frequencies between 0.2 and 1 Hz were primarily used in the classification, with anterior and left regions performing the best with 98.89% and 99.17% classification accuracy, respectively. While the sparsity pattern may not be the unique optimal model for a given trial, the high classification accuracy indicates that these models have accurately identified common neural responses to visual linguistic stimuli. Cortical tracking of spectrotemporal change in the visual signal of sign language appears to rely on lower frequencies corresponding to the N400/P600 time-domain evoked response potentials, indicating that visual language comprehension is grounded in predictive processing mechanisms. 
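The pipeline above is described at a high level only; the following sketch shows the general shape of the classification step under stated assumptions. Per-band stimulus/EEG coherence features (computed here with scikit-learn's CCA) feed an elastic-net penalized logistic regression, which stands in for Sparse Optimal Scoring since SOS is not available in scikit-learn; array shapes and the placeholder data are invented for illustration.

```python
# Illustrative sketch, not the authors' code. The elastic-net logistic regression
# is a stand-in for SOS; shapes, names, and the placeholder data are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def band_correlations(eeg_bands, optic_flow):
    """eeg_bands: list of band-filtered EEG arrays, each (n_times, n_channels);
    optic_flow: (n_times, n_flow_features). Returns one peak canonical correlation per band."""
    feats = []
    for eeg in eeg_bands:
        cca = CCA(n_components=1)
        eeg_c, flow_c = cca.fit_transform(eeg, optic_flow)
        feats.append(np.corrcoef(eeg_c[:, 0], flow_c[:, 0])[0, 1])
    return np.array(feats)

# X: (n_trials, n_bands) correlation features; y: 1 = comprehension (forward video),
# 0 = non-comprehension (time-reversed video). Placeholder data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 12))
y = np.repeat([0, 1], 24)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```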
  2. Neuroimaging studies of human memory have consistently found that univariate responses in parietal cortex track episodic experience with stimuli (whether stimuli are 'old' or 'new'). More recently, pattern-based fMRI studies have shown that parietal cortex also carries information about the semantic content of remembered experiences. However, it is not well understood how memory-based and content-based signals are integrated within parietal cortex. Here, in humans (males and females), we used voxel-wise encoding models and a recognition memory task to predict the fMRI activity patterns evoked by complex natural scene images based on (1) the episodic history and (2) the semantic content of each image. Models were generated and compared across distinct subregions of parietal cortex and for occipitotemporal cortex. We show that parietal and occipitotemporal regions each encode memory and content information, but they differ in how they combine this information. Among parietal subregions, angular gyrus was characterized by robust and overlapping effects of memory and content. Moreover, subject-specific semantic tuning functions revealed that successful recognition shifted the amplitude of tuning functions in angular gyrus but did not change the selectivity of tuning. In other words, effects of memory and content were additive in angular gyrus. This pattern of data contrasted with occipitotemporal cortex where memory and content effects were interactive: memory effects were preferentially expressed by voxels tuned to the content of a remembered image. Collectively, these findings provide unique insight into how parietal cortex combines information about episodic memory and semantic content.

    SIGNIFICANCE STATEMENT: Neuroimaging studies of human memory have identified multiple brain regions that not only carry information about “whether” a visual stimulus is successfully recognized but also “what” the content of that stimulus includes. However, a fundamental and open question concerns how the brain integrates these two types of information (memory and content). Here, using a powerful combination of fMRI analysis methods, we show that parietal cortex, particularly the angular gyrus, robustly combines memory- and content-related information, but these two forms of information are represented via additive, independent signals. In contrast, memory effects in high-level visual cortex critically depend on (and interact with) content representations. Together, these findings reveal multiple and distinct ways in which the brain combines memory- and content-related information.
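A minimal sketch, under assumptions about feature construction, of the voxel-wise encoding-model comparison described above: activity in each voxel is predicted from episodic history (old/new) and semantic content features, with and without memory-by-content interaction terms. Feature dimensions and the random data are placeholders, not the study's stimuli or scans.

```python
# Sketch of an encoding-model comparison (additive vs. interactive), with
# placeholder data; not the authors' pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_images, n_semantic, n_voxels = 200, 20, 500          # illustrative sizes
semantic = np.random.randn(n_images, n_semantic)       # e.g., scene-content features
memory = np.random.randint(0, 2, size=(n_images, 1))   # 1 = previously studied ('old')
voxels = np.random.randn(n_images, n_voxels)           # evoked activity per voxel

X_additive = np.hstack([memory, semantic])                     # memory + content
X_interact = np.hstack([memory, semantic, memory * semantic])  # adds interaction terms

for name, X in [("additive", X_additive), ("interaction", X_interact)]:
    # Fit each voxel separately and report mean cross-validated R^2 over a subset
    scores = [cross_val_score(Ridge(alpha=1.0), X, voxels[:, v],
                              cv=5, scoring="r2").mean()
              for v in range(50)]
    print(f"{name} model: mean R^2 = {np.mean(scores):.3f}")
```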

     
  3.
    Soft, tip-extending "vine" robots offer a unique mode of inspection and manipulation in highly constrained environments. For practicality, it is desirable that the distal end of the robot can be manipulated freely while the body remains stationary. However, in previous vine robots, either the shape of the body was fixed after growth with no ability to manipulate the distal end, or the whole body moved together with the tip. Here, we present a concept for shape-locking that enables a vine robot to move only its distal tip while the body is locked in place. This is achieved using two inextensible, pressurized, tip-extending chambers that "grow" along the sides of the robot body, preserving curvature in the section where they have been deployed. The length of the locked and free sections can be varied by controlling the extension and retraction of these chambers. We present models describing this shape-locking mechanism and the workspace of the robot in both free and constrained environments. We experimentally validate these models, showing an increased dexterous workspace compared to previous vine robots. Our shape-locking concept allows improved performance for vine robots, advancing the field of soft robotics for inspection and manipulation in highly constrained environments. 
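A simplified planar illustration of how a shape-locked proximal body and a steerable distal tip compose. This uses the common piecewise-constant-curvature assumption for continuum robots rather than the paper's own models, and all curvatures and lengths are made-up example values.

```python
# Planar piecewise-constant-curvature sketch; illustrative values, not the paper's model.
import numpy as np

def segment_transform(kappa, length):
    """Planar pose change produced by an arc of curvature kappa and given arc length."""
    if abs(kappa) < 1e-9:                        # straight segment
        return np.array([length, 0.0]), 0.0
    theta = kappa * length                       # total bending angle
    xy = np.array([np.sin(theta), 1 - np.cos(theta)]) / kappa
    return xy, theta

def tip_pose(segments):
    """Compose (curvature, length) segments; base at the origin, heading along +x."""
    pos, heading = np.zeros(2), 0.0
    for kappa, length in segments:
        d_xy, d_theta = segment_transform(kappa, length)
        rot = np.array([[np.cos(heading), -np.sin(heading)],
                        [np.sin(heading),  np.cos(heading)]])
        pos = pos + rot @ d_xy
        heading += d_theta
    return pos, heading

# The locked proximal section keeps its curvature; only the free distal tip is steered.
locked = [(0.8, 0.5), (-0.4, 0.3)]               # curvature (1/m), length (m): assumptions
for tip_curvature in (-2.0, 0.0, 2.0):
    pos, heading = tip_pose(locked + [(tip_curvature, 0.2)])
    print(f"tip curvature {tip_curvature:+.1f}: position {pos.round(3)}, heading {heading:.2f} rad")
```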
  4. Abstract

    Mimicry is a biological camouflage phenomenon whereby an organism can change its shape and color to resemble another object. Herein, the idea of biological mimicry and the rich degrees of freedom in metasurface designs are combined to realize holographic mimicry devices. A general mathematical method, called phase matrix transformation, is proposed to accomplish the holographic mimicry process. Based on this method, a dynamic metasurface hologram is designed, which shows an image of a “bird” in the air, and a distinct image of a “fish” when the environment is changed to oil. Furthermore, to make the mimicry behavior more generic, holographic mimicry operating at dual wavelengths is also designed and experimentally demonstrated. Moreover, the fully independent phase modulation realized by phase matrix transformation makes the working efficiency of the device higher than that of conventional multiwavelength holographic devices with off‐axis illumination or interleaved subarrays. The work potentially opens a new research paradigm interfacing bionics with nanophotonics, which may produce novel applications for optical information encryption, virtual/augmented reality (VR/AR), and military camouflage systems.
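For context only, the following is a standard Gerchberg-Saxton phase-retrieval loop that maps a target intensity pattern to a phase-only hologram profile. It is not the phase matrix transformation proposed in the paper; it merely illustrates the kind of phase-design problem that the mimicry method extends to multiple environments and wavelengths. The toy target pattern is an assumption.

```python
# Standard Gerchberg-Saxton phase retrieval; NOT the paper's phase matrix transformation.
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=100):
    """Return a phase-only hologram whose far field approximates target_amplitude."""
    phase = np.random.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        far_field = np.fft.fft2(np.exp(1j * phase))               # propagate to image plane
        constrained = target_amplitude * np.exp(1j * np.angle(far_field))
        hologram = np.fft.ifft2(constrained)                      # back-propagate
        phase = np.angle(hologram)                                # keep phase, drop amplitude
    return phase

target = np.zeros((128, 128))
target[40:88, 60:68] = 1.0                                        # toy "image" to reconstruct
phase_profile = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * phase_profile))) ** 2
print("energy concentrated in target region:",
      reconstruction[40:88, 60:68].mean() > reconstruction.mean())
```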

     
  5. Abstract

    Head movement relative to the stationary environment gives rise to congruent vestibular and visual optic-flow signals. The resulting perception of a stationary visual environment, referred to herein as stationarity perception, depends on mechanisms that compare visual and vestibular signals to evaluate their congruence. Here we investigate the functioning of these mechanisms and their dependence on fixation behavior as well as on the active versus passive nature of the head movement. Stationarity perception was measured by modifying the gain on visual motion relative to head movement on individual trials and asking subjects to report whether the gain was too low or too high. Fitting a psychometric function to the data yields two key parameters of performance. The mean is a measure of accuracy, and the standard deviation is a measure of precision. Experiments were conducted using a head-mounted display with fixation behavior monitored by an embedded eye tracker. During active conditions, subjects rotated their heads in yaw at ∼15 deg/s over ∼1 s. Each subject’s movements were recorded and played back via a rotating chair during the passive condition. During head-fixed and scene-fixed fixation, the fixation target moved with the head or scene, respectively. Both precision and accuracy were better during active than passive head movement, likely due to increased precision of the head movement estimate arising from motor prediction and neck proprioception. Performance was also better during scene-fixed than head-fixed fixation, perhaps due to decreased velocity of retinal image motion and increased precision of the retinal image motion estimate. These results reveal how the nature of head and eye movements mediates encoding, processing, and comparison of relevant sensory and motor signals.
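A minimal sketch of the psychometric analysis described above (not the authors' code): a cumulative-Gaussian psychometric function is fit to the proportion of "gain too high" responses across gain levels, and its mean and standard deviation are read off as accuracy and precision. The gain levels and response proportions are illustrative.

```python
# Cumulative-Gaussian psychometric fit; data values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(gain, mu, sigma):
    """Probability of responding 'gain too high' as a function of visual gain."""
    return norm.cdf(gain, loc=mu, scale=sigma)

gains = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4])            # visual/vestibular gain levels
p_high = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])    # proportion 'too high' responses

(mu, sigma), _ = curve_fit(psychometric, gains, p_high, p0=[1.0, 0.2])
print(f"accuracy (mean) = {mu:.3f}, precision (sd) = {sigma:.3f}")
```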

     