Abstract: To navigate through the environment, humans must be able to measure both the distance traveled in space and the interval elapsed in time. Yet how the brain holds both of these metrics simultaneously is less well understood. One possibility is that participants measure how far and how long they have traveled relative to a known reference point. To test this, we had human participants (n = 24) perform a distance estimation task in a virtual environment in which they were cued to attend to either the spatial or the temporal interval traveled, while brain responses were measured with multiband fMRI. We observed that both dimensions evoked similar frontoparietal networks, yet with a striking rostrocaudal dissociation between temporal and spatial estimation. Multivariate classifiers trained on each dimension were further able to predict the temporal or spatial interval traveled, with centers of activation within the SMA and retrosplenial cortex for time and space, respectively. Furthermore, a cross-classification approach revealed the right supramarginal gyrus and occipital place area as regions capable of decoding the general magnitude of the traveled distance. Altogether, our findings suggest the brain uses separate systems for tracking spatial and temporal distances, which are then combined along with dimension-nonspecific estimates.
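As an illustration of the cross-classification logic described in this abstract (train a decoder on trials cued to one dimension, test it on trials cued to the other), a minimal scikit-learn sketch might look as follows; the synthetic arrays, the three-bin label scheme, and the choice of a logistic-regression decoder are assumptions for illustration, not the authors' pipeline.

# Minimal sketch of cross-dimension decoding (illustrative only; not the authors' pipeline):
# train on spatially cued trials, test on temporally cued trials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical voxel patterns: trials x voxels, labeled by traveled interval (e.g., 3 bins).
X_space, y_space = rng.normal(size=(120, 500)), rng.integers(0, 3, size=120)  # spatial trials
X_time, y_time = rng.normal(size=(120, 500)), rng.integers(0, 3, size=120)    # temporal trials

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_space, y_space)                   # train on the spatial dimension
cross_acc = decoder.score(X_time, y_time)       # test on the temporal dimension
print(f"cross-classification accuracy: {cross_acc:.3f}")  # ~chance (1/3) for random data

Above-chance cross-classification accuracy on real data would indicate a representation of traveled distance that generalizes across the spatial and temporal dimensions.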
STADL Up! The Spatiotemporal Autoregressive Distributed Lag Model for TSCS Data Analysis
Time-series cross-section (TSCS) data are prevalent in political science, yet many distinct challenges presented by TSCS data remain underaddressed. We focus on how dependence in both space and time complicates estimating either spatial or temporal dependence, dynamics, and effects. Little is known about how modeling one of temporal or cross-sectional dependence well while neglecting the other affects results in TSCS analysis. We demonstrate analytically and through simulations how misspecification of either temporal or spatial dependence inflates estimates of the other dimension’s dependence and thereby induces biased estimates and tests of other covariate effects. Therefore, we recommend the spatiotemporal autoregressive distributed lag (STADL) model with distributed lags in both space and time as an effective general starting point for TSCS model specification. We illustrate with two example reanalyses and provide R code to facilitate researchers’ implementation—from automation of common spatial-weights matrices (W) through estimated spatiotemporal effects/response calculations—for their own TSCS analyses.
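For orientation, a generic first-order STADL specification (written here in standard spatial-econometric notation, not necessarily the authors' exact equation) stacks the N cross-sectional units at time t into y_t and combines temporal, spatial, and spatiotemporal lags of both the outcome and the covariates:

% Illustrative first-order STADL specification; W is the N x N spatial-weights matrix.
\begin{equation}
\mathbf{y}_t = \phi\,\mathbf{y}_{t-1} + \rho\,\mathbf{W}\mathbf{y}_t + \theta\,\mathbf{W}\mathbf{y}_{t-1}
  + \mathbf{X}_t\boldsymbol{\beta} + \mathbf{X}_{t-1}\boldsymbol{\beta}^{*}
  + \mathbf{W}\mathbf{X}_t\boldsymbol{\delta} + \mathbf{W}\mathbf{X}_{t-1}\boldsymbol{\delta}^{*}
  + \boldsymbol{\varepsilon}_t ,
\end{equation}

where φ captures temporal dependence, ρ contemporaneous spatial dependence, and θ the space-time lag; restricting subsets of these parameters to zero recovers familiar special cases such as the ADL, spatial-lag (SAR), or spatiotemporal autoregressive models.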
- Award ID(s): 2105847
- PAR ID: 10443292
- Date Published:
- Journal Name: American Political Science Review
- Volume: 117
- Issue: 1
- ISSN: 0003-0554
- Page Range / eLocation ID: 59 to 79
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Brain large-scale dynamics are constrained by the heterogeneity of the intrinsic anatomical substrate, yet little is known about how spatiotemporal dynamics adapt to heterogeneous structural connectivity (SC). Modern neuroimaging modalities make it possible to study intrinsic brain activity at the scale of seconds to minutes. Diffusion magnetic resonance imaging (dMRI) and functional MRI reveal the large-scale SC across different brain regions, while electrophysiological methods (i.e., MEG/EEG) provide direct measures of neural activity and capture complex neurobiological temporal dynamics that fMRI cannot resolve. However, most existing multimodal analytical methods collapse the brain measurements in either the space or the time domain and fail to capture spatio-temporal circuit dynamics. In this paper, we propose a novel spatio-temporal graph Transformer model to integrate structural and functional connectivity in both the spatial and temporal domains. The proposed method learns heterogeneous node and graph representations via contrastive learning and a multi-head-attention-based graph Transformer using multimodal brain data (i.e., fMRI, MRI, MEG, and behavioral performance). The proposed contrastive graph Transformer representation model incorporates a heterogeneity map constrained by the T1-weighted/T2-weighted (T1w/T2w) ratio to improve the model's fit to structure-function interactions. Experimental results with multimodal resting-state brain measurements demonstrate that the proposed method can highlight the local properties of large-scale brain spatio-temporal dynamics and capture the dependence strength between functional connectivity and behavior. In summary, the proposed method enables explanation of complex brain dynamics across different modal variants.
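As a rough, assumed illustration of the core mechanism (not the paper's actual model), multi-head attention over brain-region nodes can be restricted to structurally connected pairs by masking the attention matrix with the SC graph; the region count, feature dimension, and random SC matrix below are placeholders.

# Minimal sketch (assumed, simplified): multi-head self-attention over region nodes,
# with attention allowed only between structurally connected region pairs.
import torch
import torch.nn as nn

n_regions, d_model, n_heads = 360, 64, 4

node_feats = torch.randn(1, n_regions, d_model)          # stand-in fMRI/MEG node features
sc = (torch.rand(n_regions, n_regions) > 0.7).float()    # stand-in structural connectivity
sc.fill_diagonal_(1.0)                                   # keep self-connections
attn_mask = sc == 0                                      # True = attention not allowed

mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
out, attn_weights = mha(node_feats, node_feats, node_feats, attn_mask=attn_mask)
print(out.shape)  # (1, n_regions, d_model): SC-constrained node representations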
Abstract: Area-level models for small area estimation typically rely on areal random effects to shrink design-based direct estimates towards a model-based predictor. Incorporating the spatial dependence of the random effects into these models can further improve the estimates when there are not enough covariates to fully account for the spatial dependence of the areal means. A number of recent works have investigated models that include random effects for only a subset of areas, in order to improve the precision of estimates. However, such models do not readily handle spatial dependence. In this paper, we introduce a model that accounts for spatial dependence in both the random effects and the latent process that selects the effects. We show how this model can significantly improve predictive accuracy via an empirical simulation study based on data from the American Community Survey, and illustrate its properties via an application to estimate county-level median rent burden.
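For context, a standard area-level (Fay-Herriot-type) model with spatially dependent random effects, written here in generic notation rather than the paper's exact specification, is:

% Illustrative spatial area-level model (not the paper's exact specification):
% direct estimate = synthetic regression part + spatial random effect + sampling error.
\begin{equation}
\hat{\theta}_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i + e_i,
\qquad
\mathbf{v} \sim \mathrm{N}\!\left(\mathbf{0},\; \sigma_v^{2}
  \bigl[(\mathbf{I}-\rho\mathbf{W})^{\top}(\mathbf{I}-\rho\mathbf{W})\bigr]^{-1}\right),
\qquad
e_i \sim \mathrm{N}(0, D_i),
\end{equation}

where W encodes the neighborhood structure among areas, ρ governs the strength of spatial dependence, and D_i is the known sampling variance of the direct estimate; the model introduced in the paper additionally places spatial dependence on a latent selection process that turns the random effects on for only a subset of areas.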
Jonathan R. Whitlock (Ed.)

Introduction: Understanding the neural code has been one of the central aims of neuroscience research for decades. Spikes are commonly referred to as the units of information transfer, but multi-unit activity (MUA) recordings are routinely analyzed in aggregate forms such as binned spike counts, peri-stimulus time histograms, firing rates, or population codes. Various forms of averaging also occur in the brain, from the spatial averaging of spikes within dendritic trees to their temporal averaging through synaptic dynamics. However, how these forms of averaging are related to each other, or to the spatial and temporal units of information representation within the neural code, has remained poorly understood.

Materials and methods: In this work we developed NeuroPixelHD, a symbolic hyperdimensional model of MUA, and used it to decode the spatial location and identity of static images shown to n = 9 mice in the Allen Institute Visual Coding - NeuroPixels dataset from large-scale MUA recordings. We parametrically varied the spatial and temporal resolutions of the MUA data provided to the model and compared its resulting decoding accuracy.

Results: For almost all subjects, we found a 125 ms temporal resolution to maximize decoding accuracy, both for the spatial location of Gabor patches (81 classes for patches presented over a 9×9 grid) and for the identity of natural images (118 classes corresponding to 118 images) across the whole brain. This optimal temporal resolution nevertheless varied greatly between regions, followed a sensory-to-association hierarchy, and was significantly modulated by the central frequency of theta-band oscillations across regions. Spatially, the optimal resolution was at either of two mesoscale levels for almost all mice: the area level, where the spiking activity of all neurons within each brain area is combined, and the population level, where neuronal spikes within each area are combined separately across fast-spiking (putatively inhibitory) and regular-spiking (putatively excitatory) neurons. We also observed an expected interplay between optimal spatial and temporal resolutions, whereby increasing the amount of averaging across one dimension (space or time) decreases the amount of averaging that is optimal across the other dimension, and vice versa.

Discussion: Our findings corroborate existing empirical practices of spatiotemporal binning and averaging in MUA data analysis and provide a rigorous computational framework for optimizing the level of such aggregations. Our findings can also synthesize these empirical practices with existing knowledge of the various sources of biological averaging in the brain into a new theory of neural information processing in which the unit of information varies dynamically based on neuronal signal and noise correlations across space and time.
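To make the temporal-resolution parameter concrete, the short sketch below bins a hypothetical spike train into counts at a chosen resolution such as 125 ms; the function and array names are illustrative, not part of NeuroPixelHD.

# Minimal sketch: bin spike times (in seconds) into counts at a chosen temporal resolution;
# coarser bins correspond to more temporal averaging of the MUA signal.
import numpy as np

def bin_spikes(spike_times, t_start, t_stop, bin_ms):
    """Return spike counts per time bin of width bin_ms milliseconds."""
    bin_s = bin_ms / 1000.0
    edges = np.arange(t_start, t_stop + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

rng = np.random.default_rng(1)
spikes = np.sort(rng.uniform(0.0, 2.0, size=200))  # stand-in spike train over 2 s
print(bin_spikes(spikes, 0.0, 2.0, 125).shape)     # 16 bins of 125 ms each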
Dense time-series remote sensing data with detailed spatial information are highly desired for the monitoring of dynamic earth systems. Due to the sensor tradeoff, most remote sensing systems cannot provide images with both high spatial and temporal resolutions. Spatiotemporal image fusion models provide a feasible solution to generate this type of satellite imagery, yet existing fusion methods are limited in predicting rapid and/or transient phenological changes. Additionally, a systematic approach to assessing and understanding how varying levels of temporal phenological change affect fusion results is lacking in spatiotemporal fusion research. The objective of this study is to develop an innovative hybrid deep learning model that can effectively and robustly fuse satellite imagery of various spatial and temporal resolutions. The proposed model integrates two types of network models: the super-resolution convolutional neural network (SRCNN) and long short-term memory (LSTM). SRCNN can enhance the coarse images by restoring degraded spatial details, while LSTM can learn and extract the temporal changing patterns from the time-series images. To systematically assess the effects of varying levels of phenological change, we identify image phenological transition dates and design three temporal phenological change scenarios representing rapid, moderate, and minimal phenological changes. The hybrid deep learning model, alongside three benchmark fusion models, is assessed in these scenarios. Results indicate that the hybrid deep learning model performs significantly better when rapid or moderate phenological changes are present. It holds great potential for generating high-quality time-series datasets of both high spatial and temporal resolutions, which can further benefit terrestrial system dynamic studies. The innovative approach to understanding the effect of phenological changes will help us better comprehend the strengths and weaknesses of current and future fusion models.
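The two components can be wired together in several ways; the sketch below shows one minimal, assumed arrangement (not the paper's exact architecture) in which an SRCNN-style network enhances each coarse image and an LSTM runs over the resulting per-pixel time series.

# Minimal sketch (assumed architecture, not the paper's): SRCNN-style spatial enhancement
# per image, followed by an LSTM over the per-pixel time series.
import torch
import torch.nn as nn

class SRCNNLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # SRCNN: feature extraction -> nonlinear mapping -> reconstruction
        self.srcnn = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, time, 1, H, W) time series of coarse images
        b, t, c, h, w = x.shape
        sr = self.srcnn(x.reshape(b * t, c, h, w)).reshape(b, t, h, w)
        seq = sr.permute(0, 2, 3, 1).reshape(b * h * w, t, 1)  # per-pixel temporal sequences
        out, _ = self.lstm(seq)
        return self.head(out[:, -1]).reshape(b, h, w)          # predicted fine-detail image

x = torch.randn(2, 4, 1, 16, 16)  # 2 samples, 4 acquisition dates, 16x16 patches
print(SRCNNLSTM()(x).shape)       # torch.Size([2, 16, 16])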