
Title: Separable Representations for Duration and Distance in Virtual Movements
Abstract: To navigate through the environment, humans must be able to measure both the distance traveled in space and the interval elapsed in time. Yet how the brain holds both of these metrics simultaneously is less well understood. One possibility is that participants measure how far and how long they have traveled relative to a known reference point. To test this, we had human participants (n = 24) perform a distance estimation task in a virtual environment in which they were cued to attend to either the spatial or the temporal interval traveled, while responses were measured with multiband fMRI. We observed that both dimensions evoked similar frontoparietal networks, yet with a striking rostrocaudal dissociation between temporal and spatial estimation. Multivariate classifiers trained on each dimension were further able to predict the temporal or spatial interval traveled, with centers of activation within the SMA and retrosplenial cortex for time and space, respectively. Furthermore, a cross-classification approach revealed the right supramarginal gyrus and occipital place area as regions capable of decoding the general magnitude of the traveled distance. Altogether, our findings suggest the brain uses separate systems for tracking spatial and temporal distances, which are then combined with dimension-nonspecific magnitude estimates.
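The cross-classification logic described in the abstract can be illustrated with a minimal sketch: train a decoder on trials labeled by one dimension and test it on trials from the other, so that above-chance transfer implies a shared, dimension-nonspecific magnitude code. This is not the authors' analysis pipeline; the variable names, binary labels, and data sizes below are assumptions for illustration only.

    # Illustrative cross-classification sketch (hypothetical names and toy data).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 120, 500                      # assumed sizes
    X_time  = rng.normal(size=(n_trials, n_voxels))    # voxel patterns, temporal task
    y_time  = rng.integers(0, 2, n_trials)             # e.g., short vs. long interval
    X_space = rng.normal(size=(n_trials, n_voxels))    # voxel patterns, spatial task
    y_space = rng.integers(0, 2, n_trials)             # e.g., near vs. far distance

    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_time, y_time)                            # train on the temporal dimension
    cross_acc = clf.score(X_space, y_space)            # test on the spatial dimension
    print(f"time -> space cross-classification accuracy: {cross_acc:.2f}")

With real data, above-chance accuracy in this transfer direction (and the reverse) would point to regions carrying a general magnitude signal, as reported for the right supramarginal gyrus and occipital place area.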
Award ID(s): 1922598
PAR ID: 10541573
Author(s) / Creator(s): ; ; ; ;
Publisher / Repository: MIT
Date Published:
Journal Name: Journal of Cognitive Neuroscience
Volume: 36
Issue: 3
ISSN: 0898-929X
Page Range / eLocation ID: 447 to 459
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Emerging technologies offer the potential to expand the domain of the future workforce to extreme environments, such as outer space and alien terrains. To understand how humans navigate in environments that lack familiar spatial cues, this study examined spatial perception in three types of environments simulated using virtual reality. We examined participants' ability to estimate the size and distance of stimuli under conditions of minimal, moderate, or maximum visual cues, corresponding to environments simulating outer space, an alien terrain, or a typical cityscape, respectively. The findings show underestimation of distance in both the maximum and the minimum visual-cue environments, but a tendency toward overestimation of distance in the moderate environment. We further observed that depth estimation was substantially better in the minimum environment than in the other two. However, estimation of height was more accurate in the environment with maximum cues (cityscape) than in the environment with minimum cues (outer space). More generally, our results suggest that familiar visual cues supported better estimation of size and distance than unfamiliar cues. In fact, the presence of unfamiliar, and perhaps misleading, visual cues (characterizing the alien terrain environment) was more disruptive to distance and size perception than the total absence of visual cues. The findings have implications for training workers to better adapt to extreme environments.
  2. We consider the problem of active learning for level set estimation (LSE), where the goal is to localize all regions where a function of interest lies above/below a given threshold as quickly as possible. We present a finite-horizon search procedure to perform LSE in one dimension while optimally balancing both the final estimation error and the distance traveled during active learning for a fixed number of samples. A tuning parameter is used to trade off between the estimation accuracy and distance traveled. We show that the resulting optimization problem can be solved in closed form and that the resulting policy generalizes existing approaches to this problem. We then show how this approach can be used to perform level set estimation in two dimensions, under some additional assumptions, under the popular Gaussian process model. Empirical results on synthetic data indicate that as the cost of travel increases, our method's ability to treat distance nonmyopically allows it to significantly improve on the state of the art. On real air quality data, our approach achieves roughly one fifth the estimation error at less than half the cost of competing algorithms. 
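The trade-off in the abstract above can be sketched with a simplified, one-step (greedy) variant: a Gaussian process posterior scores candidate locations by how ambiguous they are about the threshold crossing, minus a travel penalty from the current position. This is only an illustration under assumed names and a toy function; the paper itself derives a closed-form finite-horizon policy rather than this myopic rule.

    # Greedy 1-D level set estimation with a travel-cost penalty (illustrative only).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    f = lambda x: np.sin(3 * x)              # toy "unknown" function
    threshold, lam = 0.0, 0.5                # level set threshold, travel weight
    grid = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)

    X, y = [np.array([[0.1]])], [f(np.array([0.1]))]
    pos = 0.1                                # current sampling location
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)

    for _ in range(15):
        gp.fit(np.vstack(X), np.concatenate(y))
        mu, sd = gp.predict(grid, return_std=True)
        # Ambiguity about whether f crosses the threshold, minus distance to travel.
        score = sd - np.abs(mu - threshold) - lam * np.abs(grid.ravel() - pos)
        nxt = grid[np.argmax(score)]
        pos = float(nxt[0])
        X.append(nxt.reshape(1, 1))
        y.append(f(nxt))

    gp.fit(np.vstack(X), np.concatenate(y))
    mu, _ = gp.predict(grid, return_std=True)
    print("estimated super-level-set fraction:", np.mean(mu > threshold))

Raising lam makes the rule favor nearby samples, mimicking the effect of higher travel cost discussed in the abstract, at the expense of estimation accuracy.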
  3. The transformation and transmission of brain stimuli reflect dynamical brain activity in space and time. Compared with functional magnetic resonance imaging (fMRI), magneto- or electroencephalography (M/EEG) couples rapidly to neural activity through the fields that activity generates. However, the MEG signal is inhomogeneous across the brain, being affected by the signal-to-noise ratio and by sensor location and distance. Current non-invasive neuroimaging modalities such as fMRI and M/EEG achieve high resolution in space or in time, but not in both. To address these limitations of current brain activity recording techniques, we propose a novel recurrent memory optimization approach to predict internal behavioral states in space and time. The proposed method uses Optimal Polynomial Projections to capture long temporal history with robust online compression. The training process takes pairs of fMRI and MEG data as inputs and predicts recurrent brain states through a Siamese network. In the testing process, the framework uses only fMRI data to generate the corresponding neural response in space and time. Experimental results on the Human Connectome Project (HCP) show that the predicted signal reflects neural activity with spatial resolution comparable to fMRI and temporal resolution comparable to MEG. These results demonstrate for the first time that the proposed method can predict the brain response at both millisecond and millimeter scales using only the fMRI signal.
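A minimal sketch of the Siamese pairing idea mentioned above: two encoders map fMRI and MEG features into a shared embedding trained so that matched recordings lie close together. Input dimensions, layer widths, and the contrastive margin are assumptions, and the paper's Optimal Polynomial Projections memory and recurrent state prediction are not reproduced here.

    # Siamese-style fMRI/MEG encoder sketch (assumed dimensions, toy data).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        def __init__(self, in_dim, emb_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, emb_dim))
        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)

    fmri_enc, meg_enc = Encoder(in_dim=1000), Encoder(in_dim=248)   # assumed sizes
    opt = torch.optim.Adam(list(fmri_enc.parameters()) + list(meg_enc.parameters()),
                           lr=1e-3)

    fmri = torch.randn(32, 1000)    # toy batch of fMRI feature vectors
    meg = torch.randn(32, 248)      # toy batch of MEG feature vectors

    for step in range(100):
        z_f, z_m = fmri_enc(fmri), meg_enc(meg)
        # Contrastive objective: pull matched pairs together, push shifted pairs apart.
        pos = (z_f - z_m).pow(2).sum(dim=-1)
        neg = (z_f - z_m.roll(1, dims=0)).pow(2).sum(dim=-1)
        loss = (pos + F.relu(1.0 - neg)).mean()
        opt.zero_grad(); loss.backward(); opt.step()

At test time, only the fMRI branch would be needed to project new data into the shared space, consistent with the abstract's description of fMRI-only inference.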
  4. We consider the problem of active learning in the context of spatial sampling for boundary estimation, where the goal is to estimate an unknown boundary as accurately and quickly as possible. We present a finite-horizon search procedure to optimally minimize both the final estimation error and the distance traveled for a fixed number of samples, where a tuning parameter is used to trade off between the estimation accuracy and distance traveled. We show that the resulting optimization problem can be solved in closed form and that the resulting policy generalizes existing approaches to this problem. 
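For concreteness, the trade-off described above can be written as a single objective; the notation is assumed rather than taken from the paper, with lambda as the tuning parameter, x_1, ..., x_T the sampled locations, and err(.) the final boundary estimation error:

    J(x_1, \dots, x_T) = \mathbb{E}\left[\mathrm{err}\big(\hat{B}_T\big)\right] + \lambda \sum_{t=2}^{T} \lvert x_t - x_{t-1} \rvert

Larger lambda weights the distance-traveled term more heavily, trading estimation accuracy for shorter paths, which is the balance the closed-form policy optimizes.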
  5. We investigate the ability of individuals to visually validate statistical models in terms of their fit to the data. While visual model estimation has been studied extensively, visual model validation remains under-investigated. It is unknown how well people are able to visually validate models, and how their performance compares to visual and computational estimation. As a starting point, we conducted a study across two populations (crowdsourced and volunteer). Participants had to both visually estimate (i.e., draw) and visually validate (i.e., accept or reject) the frequently studied model of averages. Across both populations, the accuracy of the models that were considered valid was lower than the accuracy of the estimated models. We find that participants' validation and estimation were unbiased. Moreover, their natural critical point between accepting and rejecting a given mean value is close to the boundary of its 95% confidence interval, indicating that the visually perceived confidence interval corresponds to a common statistical standard. Our work contributes to the understanding of visual model validation and opens new research opportunities.
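The accept/reject rule described above can be illustrated with a small sketch: a candidate average is treated as "visually valid" roughly when it falls inside the 95% confidence interval of the sample mean. The data and variable names below are made up for illustration and are not from the study.

    # Toy accept/reject rule based on the 95% confidence interval of the mean.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.normal(loc=10.0, scale=2.0, size=50)   # hypothetical observations

    mean = data.mean()
    sem = stats.sem(data)                             # standard error of the mean
    lo, hi = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)

    def validate(candidate_mean):
        # Accept a drawn/proposed average if it lies within the 95% CI.
        return lo <= candidate_mean <= hi

    print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
    print("accept 10.1:", validate(10.1), "| accept 12.5:", validate(12.5))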