
Search for: All records where Creators/Authors contains "Koorathota, Sharath"


  1. Abstract

    Objective. Sensorimotor decisions require the brain to process external information and combine it with relevant prior knowledge before acting. In this study, we explore the neural predictors of motor actions in a novel, realistic driving task designed to study decisions while driving. Approach. Through a spatiospectral assessment of functional connectivity during the premotor period, we identified the organization of visual cortex regions of interest into a distinct scene-processing network. Additionally, we identified a motor action selection network characterized by coherence between the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DLPFC). Main results. We show that steering behavior can be predicted from oscillatory power in the visual cortex, DLPFC, and ACC. Power during the premotor periods (specifically in the theta and beta bands) correlates with pupil-linked arousal and saccade duration. Significance. We interpret our findings in the context of network-level correlations with saccade-related behavior and show that the DLPFC is a key node in arousal circuitry and in sensorimotor decisions.

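As an illustration of the kind of spectral measure the abstract above describes, the sketch below computes theta- and beta-band power from a single EEG segment via an FFT. It is a minimal stand-in, not the paper's analysis pipeline; the sampling rate, segment length, and simulated premotor-period signal are all assumptions.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of `signal` within a frequency band (Hz), estimated via FFT.

    A simplified stand-in for the spectral estimates used in premotor-period
    analyses; real pipelines would use tapered/averaged estimators.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Hypothetical premotor-period EEG segment: 1 s at 256 Hz, dominated by a
# 6 Hz (theta-band) rhythm plus noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
seg = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(fs)

theta = band_power(seg, fs, (4, 8))    # theta band
beta = band_power(seg, fs, (13, 30))   # beta band
print(theta > beta)  # True: the injected 6 Hz rhythm dominates
```

In the study's terms, such per-band power values (per region, per trial) are the quantities correlated with steering behavior and pupil-linked arousal.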
  2. Abstract

    Objective. When multitasking, we must dynamically reorient our attention between different tasks. Attention reorienting is thought to arise through interactions of physiological arousal and brain-wide network dynamics. In this study, we investigated the relationship between pupil-linked arousal and electroencephalography (EEG) brain dynamics in a multitask driving paradigm conducted in virtual reality. We hypothesized that there would be an interaction between arousal and EEG dynamics and that this interaction would correlate with multitasking performance. Approach. We collected EEG and eye-tracking data while subjects drove a motorcycle through a simulated city environment, with instructions to count the number of target images they observed while avoiding crashing into a lead vehicle. The paradigm required subjects to continuously reorient their attention between the two tasks. Subjects performed the paradigm under two conditions, one more difficult than the other. Main results. We found that task difficulty did not strongly correlate with pupil-linked arousal, and overall task performance increased as arousal level increased. A single-trial analysis revealed several interesting relationships between pupil-linked arousal and task-relevant EEG dynamics. Employing exact low-resolution electromagnetic tomography (eLORETA), we found that higher pupil-linked arousal led to greater EEG oscillatory activity, especially in regions associated with the dorsal attention network and the ventral attention network (VAN). Consistent with our hypothesis, we found a relationship between EEG functional connectivity and pupil-linked arousal as a function of multitasking performance. Specifically, we found decreased functional connectivity between regions in the salience network (SN) and the VAN as pupil-linked arousal increased, suggesting that improved multitasking performance at high arousal levels may be due to a down-regulation in coupling between the VAN and the SN. Significance. Our results suggest that when multitasking, the brain rebalances arousal-based reorienting so that individual task demands can be met without prematurely reorienting to competing tasks.

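The single-trial relationship described above, between pupil-linked arousal and SN-VAN functional connectivity, reduces to an across-trial correlation. The sketch below simulates data with the direction of the reported effect; the arousal index, connectivity values, noise level, and trial count are all hypothetical.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

# Hypothetical single-trial values: pupil-linked arousal (e.g. baseline pupil
# diameter per trial) vs. an SN-VAN connectivity estimate, constructed to show
# the decreasing relationship described in the abstract.
rng = np.random.default_rng(1)
arousal = rng.uniform(0, 1, 200)
connectivity = 0.8 - 0.5 * arousal + 0.05 * rng.standard_normal(200)

r = pearson_r(arousal, connectivity)
print(r < 0)  # True: connectivity decreases as arousal increases in this toy example
```

The study's analysis additionally conditions this relationship on multitasking performance; the toy example only illustrates the sign of the coupling change.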
  3. Understanding neural function often requires multiple modalities of data, including electrophysiological signals, imaging, and demographic surveys. In this paper, we introduce a novel neurophysiological model to tackle major challenges in modeling multimodal data. First, we avoid non-alignment issues between raw signals and extracted, frequency-domain features by addressing the issue of variable sampling rates. Second, we encode modalities through "cross-attention" with other modalities. Lastly, we utilize properties of our parent transformer architecture to model long-range dependencies between segments across modalities and assess intermediary weights to better understand how source signals affect prediction. We apply our Multimodal Neurophysiological Transformer (MNT) to predict valence and arousal in an existing open-source dataset. Experiments on non-aligned multimodal time series show that our model performs similarly to, and in some cases outperforms, existing methods in classification tasks. In addition, qualitative analysis suggests that MNT is able to model neural influences on autonomic activity in predicting arousal. Our architecture has the potential to be fine-tuned for a variety of downstream tasks, including brain-computer interface (BCI) systems.
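A minimal sketch of the "cross-attention" idea mentioned above: tokens from one modality form the queries, while tokens from another modality supply the keys and values, so each query token is re-expressed as a weighted mixture of the other modality. This is a single-head, projection-free NumPy simplification, not the MNT implementation; real transformer layers learn separate query/key/value projections, and the token counts and embedding size here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    """One modality (queries) attends over another (keys_values).

    Scaled dot-product attention with keys == values and no learned
    projections, purely for illustration.
    """
    scores = queries @ keys_values.T / np.sqrt(d_k)  # (Tq, Tkv) similarity
    weights = softmax(scores, axis=-1)               # attention over the other modality
    return weights @ keys_values, weights

rng = np.random.default_rng(2)
eeg_tokens = rng.standard_normal((10, 16))  # e.g. EEG segment embeddings
eye_tokens = rng.standard_normal((5, 16))   # e.g. eye-tracking segment embeddings

fused, weights = cross_attention(eeg_tokens, eye_tokens, d_k=16)
print(fused.shape)                            # (10, 16): EEG tokens enriched with eye info
print(np.allclose(weights.sum(axis=1), 1.0))  # True: rows are attention distributions
```

Inspecting `weights` corresponds, loosely, to the paper's idea of assessing intermediary weights to see how one source signal influences predictions from another.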
  4. Abstract

    Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier integrating EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify target vs. distractor objects to which the subject orients.

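HDCA, as used above, has a two-stage structure: learn spatial weights within each time window, then combine the resulting window scores over time. The sketch below captures that structure in simplified form, using class-mean-difference weights in place of the regularized linear classifiers typically trained at each stage; the trial count, channel count, and simulated target-evoked response are hypothetical.

```python
import numpy as np

def hdca_sketch(X, y, n_windows):
    """Two-stage linear discrimination in the spirit of HDCA.

    Stage 1: per-time-window spatial weights (here, class-mean differences).
    Stage 2: temporal weights combining the window scores.
    X: (trials, channels, time), y: binary labels per trial.
    """
    n_trials, n_chan, n_time = X.shape
    windows = np.array_split(np.arange(n_time), n_windows)
    scores = np.zeros((n_trials, n_windows))
    for i, w in enumerate(windows):
        Xi = X[:, :, w].mean(axis=2)                     # average within the window
        wvec = Xi[y == 1].mean(0) - Xi[y == 0].mean(0)   # stage-1 spatial weights
        scores[:, i] = Xi @ wvec
    v = scores[y == 1].mean(0) - scores[y == 0].mean(0)  # stage-2 temporal weights
    return scores @ v                                    # one interest score per trial

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 8, 64))
y = np.repeat([0, 1], 50)
X[y == 1, 0, 32:] += 1.0  # hypothetical target-evoked response on one channel

out = hdca_sketch(X, y, n_windows=4)
print(out[y == 1].mean() > out[y == 0].mean())  # True: targets score higher
```

In practice the two stages are trained with cross-validation and proper classifiers; this sketch only shows how spatial and temporal discrimination are factored apart.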
  5.
    There is increasing interest in how the pupil dynamics of the eye reflect underlying cognitive processes and brain states. The problem, however, is that pupil changes can be due to non-cognitive factors, such as luminance changes in the environment, accommodation, and movement. In this paper we consider how, by modeling the response of the pupil in real-world environments, we can capture these non-cognitive changes and remove them to extract a residual signal that is a better index of cognition and performance. Specifically, we utilize sequence measures such as fixation position, duration, saccades, and blink-related information as inputs to a deep recurrent neural network (RNN) model for predicting subsequent pupil diameter. We build and evaluate the model for a task in which subjects watch educational videos and are subsequently asked questions based on the content. Compared to commonly used models for this task, the RNN had the lowest error rates in predicting subsequent pupil dilation given sequence data. Most important was how the model output related to subjects' cognitive performance as assessed by a post-viewing test. Consistent with our hypothesis that the model captures non-cognitive pupil dynamics, we found that (1) the model's root-mean-square error was lower for lower-performing subjects than for those with better performance on the post-viewing test, (2) the residuals of the RNN (LSTM) model had the highest correlation with subjects' post-viewing test scores, and (3) the residuals had the highest discriminability (assessed via area under the ROC curve, AUC) for classifying high and low test performers, compared to the true pupil size or the RNN model predictions. This suggests that deep learning sequence models may be effective at separating components of pupil responses linked to luminance and accommodation from those linked to cognition and arousal.
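The residual logic in the last abstract, predicting pupil size from gaze measures and treating the unexplained remainder as a cognitive index, can be sketched as follows. A least-squares linear predictor stands in for the paper's LSTM, the AUC is computed with the rank-sum (Mann-Whitney) formulation, and all per-subject data and performance labels are simulated.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

rng = np.random.default_rng(4)
n = 40
labels = np.repeat([0, 1], n // 2)  # 0 = low, 1 = high test performer (hypothetical)
gaze = rng.standard_normal((n, 3))  # e.g. fixation-, saccade-, blink-derived features

# Simulate pupil sizes where high performers carry extra variance that the
# gaze-driven model cannot explain (the "cognitive" component):
pupil = gaze @ np.array([0.5, -0.3, 0.2]) + (0.05 + 0.4 * labels) * rng.standard_normal(n)

w, *_ = np.linalg.lstsq(gaze, pupil, rcond=None)  # stand-in for the LSTM predictor
residual = np.abs(pupil - gaze @ w)               # unexplained pupil variation

print(auc(residual, labels) > 0.5)  # True: residuals separate high vs. low performers
```

The point, as in the paper, is that the discriminative information lives in the residual rather than in the raw pupil size or the model's prediction itself.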