

Title: Dynamic Neural Relational Inference for Forecasting Trajectories
Understanding interactions between entities, e.g., joints of the human body, team sports players, etc., is crucial for tasks like forecasting. However, interactions between entities are commonly not observed and often hard to quantify. To address this challenge, recently, ‘Neural Relational Inference’ was introduced. It predicts static relations between entities in a system and provides an interpretable representation of the underlying system dynamics that are used for better trajectory forecasting. However, generally, relations between entities change as time progresses. Hence, static relations improperly model the data. In response to this, we develop Dynamic Neural Relational Inference (dNRI), which incorporates insights from sequential latent variable models to predict separate relation graphs for every time-step. We demonstrate on several real-world datasets that modeling dynamic relations improves forecasting of complex trajectories.
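As a rough illustration of the core idea, and not the authors' learned encoder/decoder, the sketch below scores every directed pair of entities at every time-step with a toy linear scorer (the weights `W` are a hypothetical stand-in) and turns the scores into a distribution over edge types, so the inferred relation graph can differ from step to step, unlike static NRI which infers a single graph per sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_edge_posteriors(traj, W):
    """For each time-step t, score every directed pair (i, j) and
    return a distribution over edge types: one relation graph per
    step, rather than one static graph for the whole sequence."""
    T, N, D = traj.shape
    logits = np.zeros((T, N, N, W.shape[-1]))
    for t in range(T):
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue  # no self-relations
                pair = np.concatenate([traj[t, i], traj[t, j]])
                logits[t, i, j] = pair @ W  # toy linear "encoder"
    return softmax(logits, axis=-1)

T, N, D, E = 10, 3, 2, 2          # steps, entities, state dim, edge types
traj = rng.normal(size=(T, N, D))  # synthetic trajectories
W = rng.normal(size=(2 * D, E))    # hypothetical scorer weights
post = dynamic_edge_posteriors(traj, W)
print(post.shape)                  # (10, 3, 3, 2): one graph per time-step
```

In dNRI itself these per-step edge posteriors come from recurrent networks conditioned on the trajectory history, and a decoder uses the sampled graphs to forecast the next states.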
Award ID(s):
1725729
NSF-PAR ID:
10190064
Journal Name:
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Page Range / eLocation ID:
4383 to 4392
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    One fundamental problem in causal inference is to learn the individual treatment effects (ITE) -- assessing the causal effects of a certain treatment (e.g., prescription of medicine) on an important outcome (e.g., cure of a disease) for each data instance -- but the effectiveness of most existing methods is often limited by hidden confounders. Recent studies have shown that auxiliary relational information among data can be utilized to mitigate confounding bias. However, these works assume that the observational data and the relations among them are static, while in reality both continuously evolve over time; we refer to such data as time-evolving networked observational data. In this paper, we make an initial investigation of ITE estimation on such data. The problem remains difficult due to the following challenges: (1) modeling the evolution patterns of time-evolving networked observational data; (2) controlling the hidden confounders with current data and historical information; (3) alleviating the discrepancy between the control group and the treated group. To tackle these challenges, we propose a novel ITE estimation framework, the Dynamic Networked Observational Data Deconfounder (DNDC), which aims to learn representations of hidden confounders over time by leveraging both current networked observational data and historical information. Additionally, a novel adversarial-learning-based representation balancing method is incorporated toward unbiased ITE estimation. Extensive experiments validate the superiority of our framework against state-of-the-art baselines. The implementation can be accessed at https://github.com/jma712/DNDC.
  2. We consider the problem of analyzing timestamped relational events between a set of entities, such as messages between users of an on-line social network. Such data are often analyzed using static or discrete-time network models, which discard a significant amount of information by aggregating events over time to form network snapshots. In this paper, we introduce a block point process model (BPPM) for continuous-time event-based dynamic networks. The BPPM is inspired by the well-known stochastic block model (SBM) for static networks. We show that networks generated by the BPPM follow an SBM in the limit of a growing number of nodes. We use this property to develop principled and efficient local search and variational inference procedures initialized by regularized spectral clustering. We fit BPPMs with exponential Hawkes processes to analyze several real network data sets, including a Facebook wall post network with over 3,500 nodes and 130,000 events. 
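For intuition about the building block fitted above, a univariate exponential Hawkes process (here standing in for the event stream between one pair of blocks; the rate constants are arbitrary illustration values, not fitted parameters) can be simulated with Ogata's thinning algorithm:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    Requires alpha < beta so the process is subcritical."""
    rng = np.random.default_rng(seed)
    events = []

    def intensity(t):
        if not events:
            return mu
        return mu + alpha * np.sum(np.exp(-beta * (t - np.array(events))))

    t = 0.0
    while True:
        # The current intensity is a valid upper bound until the next
        # event, because it only decays while no event occurs.
        lam_bar = intensity(t)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        if rng.uniform() < intensity(t) / lam_bar:
            events.append(t)  # accepted: intensity jumps by alpha here
    return np.array(events)

# Toy event stream between one pair of blocks (illustrative parameters).
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, t_max=100.0)
print(len(events), "events")
```

Self-excitation (each event briefly raising the rate of further events) is what lets the model capture the bursty back-and-forth typical of messages or wall posts, which a static SBM averaged over time cannot.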
  3. Summary Relational arrays represent measures of association between pairs of actors, often in varied contexts or over time. Trade flows between countries, financial transactions between individuals, contact frequencies between school children in classrooms and dynamic protein-protein interactions are all examples of relational arrays. Elements of a relational array are often modelled as a linear function of observable covariates. Uncertainty estimates for regression coefficient estimators, and ideally the coefficient estimators themselves, must account for dependence between elements of the array, e.g., relations involving the same actor. Existing estimators of standard errors that recognize such relational dependence rely on estimating extremely complex, heterogeneous structure across actors. This paper develops a new class of parsimonious coefficient and standard error estimators for regressions of relational arrays. We leverage an exchangeability assumption to derive standard error estimators that pool information across actors, and are substantially more accurate than existing estimators in a variety of settings. This exchangeability assumption is pervasive in network and array models in the statistics literature, but not previously considered when adjusting for dependence in a regression setting with relational data. We demonstrate improvements in inference theoretically, via a simulation study, and by analysis of a dataset involving international trade. 
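A quick simulation shows the dependence this abstract targets. Under a simple additive sender/receiver model (a standard illustration, not the paper's estimator), two relations that share an actor are correlated while relations involving disjoint actors are not, which is exactly why i.i.d.-style standard errors misstate uncertainty for relational arrays:

```python
import numpy as np

rng = np.random.default_rng(1)
R = 5000  # Monte Carlo replications

# Additive relational model: y_ij = a_i + b_j + e_ij, all terms unit variance.
a = rng.normal(size=(R, 5))    # sender effects for 5 actors
b = rng.normal(size=(R, 5))    # receiver effects
e = rng.normal(size=(R, 5, 5)) # idiosyncratic noise
y = a[:, :, None] + b[:, None, :] + e

shared = np.cov(y[:, 0, 1], y[:, 0, 2])[0, 1]    # y_12 and y_13 share sender 1
disjoint = np.cov(y[:, 0, 1], y[:, 2, 3])[0, 1]  # y_12 and y_34 share no actor
print(round(shared, 2), round(disjoint, 2))      # roughly 1 vs roughly 0
```

Exchangeability of actors is what justifies pooling these shared-actor covariances across all actors instead of estimating a separate dependence structure for each one.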
  4. What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop ‘dynamic neural relational inference’ (dNRI). We study both synthetic and real-world neural spiking data and demonstrate that the developed method is able to uncover the dynamic relations between neurons more reliably than existing baselines. 
  5. Abstract

    A cognitive map is an internal representation of the external world that guides flexible behavior in a complex environment. Cognitive map theory assumes that relationships between entities can be organized using Euclidean-based coordinates. Previous studies revealed that cognitive map theory can also be generalized to inferences about abstract spaces, such as social spaces. However, it is still unclear whether humans can construct a cognitive map by combining relational knowledge between discrete entities along multiple abstract dimensions in nonsocial spaces. Here we asked subjects to learn to navigate a novel object space defined by two feature dimensions, price and abstraction. The subjects first learned the rank relationships between objects in each feature dimension and then completed a transitive inference task. We recorded brain activity using functional magnetic resonance imaging (fMRI) while they performed the transitive inference task. Analysis of the behavioral data showed that the Euclidean distance between objects had a significant effect on response time (RT): the longer the one-dimensional rank distance and two-dimensional (2D) Euclidean distance between objects, the shorter the RT. The task-fMRI data were analyzed using both univariate analysis and representational similarity analysis. We found that the hippocampus, entorhinal cortex, and medial orbitofrontal cortex represent the Euclidean distance between objects in 2D space. Our findings suggest that relational inferences between discrete objects can be made in a 2D nonsocial space and that the neural basis of this inference is related to cognitive maps.

     