

Title: Estimation of Dynamic Networks for High-Dimensional Nonstationary Time Series
This paper is concerned with the estimation of time-varying networks for high-dimensional nonstationary time series. Two types of dynamic behaviors are considered: structural breaks (i.e., abrupt change points) and smooth changes. To handle these two types of time-varying features simultaneously, a two-step approach is proposed: multiple change point locations are first identified by comparing the differences between localized averages of sample covariance matrices, and graph supports are then recovered by applying a kernelized time-varying constrained ℓ1-minimization for inverse matrix estimation (CLIME) estimator on each segment. We derive the rates of convergence for estimating the change points and precision matrices under mild moment and dependence conditions. In particular, we show that this two-step approach is consistent in estimating the change points and the piecewise smooth precision matrix function under a certain high-dimensional scaling limit. The method is applied to the analysis of the network structure of the S&P 500 index between 2003 and 2008.
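To make the first step concrete, a minimal sketch of the change point scan is given below: at each candidate time point, the sample covariance matrices averaged over a window to the left and a window to the right are compared, and locations where the elementwise maximum difference peaks above a threshold are flagged. This is only an illustration under assumed names and tuning choices (detect_change_points, window length b, threshold omega), not the authors' exact statistic; the second step would then apply a kernel-weighted CLIME estimator separately on each detected segment.

```python
# Minimal sketch of the change point scan described in the abstract (assumed names/tuning):
# slide over time, compare localized averages of sample covariance matrices on the left
# and right of each candidate point, and flag peaks of the max-norm difference.
import numpy as np

def detect_change_points(X, b, omega):
    """X: (n, p) array of observations; b: window width; omega: detection threshold."""
    n, p = X.shape
    stats = np.zeros(n)
    for t in range(b, n - b):
        left = X[t - b:t]
        right = X[t:t + b]
        # localized averages of the sample covariance matrices (uncentered for brevity)
        S_left = left.T @ left / b
        S_right = right.T @ right / b
        stats[t] = np.max(np.abs(S_left - S_right))  # elementwise max-norm difference
    # keep local maxima of the scan statistic that exceed the threshold
    cps = [t for t in range(b, n - b)
           if stats[t] > omega and stats[t] == stats[t - b:t + b].max()]
    return cps, stats
```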
Award ID(s):
1752614
PAR ID:
10142561
Author(s) / Creator(s):
Date Published:
Journal Name:
Entropy
Volume:
22
Issue:
1
ISSN:
1099-4300
Page Range / eLocation ID:
55
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary

    Modern statistical methods for multivariate time series rely on the eigendecomposition of matrix-valued functions such as time-varying covariance and spectral density matrices. The curse of indeterminacy, or misidentification, of smooth eigenvector functions has not received much attention. We resolve this important problem and recover smooth trajectories by examining the distance between the eigenvectors of the same matrix-valued function evaluated at two consecutive points. We change the sign of the next eigenvector if its distance from the current one is larger than the square root of 2. In the case of distinct eigenvalues, this simple method delivers smooth eigenvectors. For coalescing eigenvalues, we match the corresponding eigenvectors and apply an additional sign adjustment around the coalescing points. We establish consistency and rates of convergence for the proposed smooth eigenvector estimators. Simulation results and applications to real data confirm that our approach is needed to obtain smooth eigenvectors.

     
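A minimal sketch of the sign-alignment rule from the preceding summary is given below: for unit-norm eigenvectors, a distance larger than the square root of 2 between consecutive eigenvectors is equivalent to a negative inner product, so flipping the sign restores a smooth trajectory. Function and variable names are illustrative, and the sketch covers only the distinct-eigenvalue case (no matching around coalescing eigenvalues).

```python
# Sketch of the sign-alignment rule (distinct-eigenvalue case only): flip the sign of an
# eigenvector whenever its distance to the previous one exceeds sqrt(2).
import numpy as np

def smooth_eigenvectors(matrices):
    """matrices: sequence of symmetric (p, p) arrays evaluated on a time grid."""
    aligned = []
    prev = None
    for M in matrices:
        _, vecs = np.linalg.eigh(M)          # columns of vecs are unit-norm eigenvectors
        if prev is not None:
            for j in range(vecs.shape[1]):
                # distance > sqrt(2) between unit vectors means a negative inner product
                if np.linalg.norm(vecs[:, j] - prev[:, j]) > np.sqrt(2.0):
                    vecs[:, j] = -vecs[:, j]
        aligned.append(vecs)
        prev = vecs
    return aligned
```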
  2. We consider the problem of estimating differences in two Gaussian graphical models (GGMs) which are known to have similar structure. The GGM structure is encoded in its precision (inverse covariance) matrix. In many applications one is interested in estimating the difference in two precision matrices to characterize underlying changes in conditional dependencies of two sets of data. Existing methods for differential graph estimation are based on single-attribute models where one associates a scalar random variable with each node. In multi-attribute graphical models, each node represents a random vector. In this paper, we analyze a group lasso penalized D-trace loss function approach for differential graph learning from multi-attribute data. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. Theoretical analysis establishing consistency in support recovery and estimation in high-dimensional settings is provided. We illustrate our approach using a numerical example where the multi-attribute approach is shown to outperform a single-attribute approach. 
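As a rough, non-authoritative illustration of the kind of objective described above, the sketch below minimizes a D-trace-style loss for the precision-matrix difference plus a group-lasso penalty over m-by-m node-pair blocks of the multi-attribute model. It uses a plain proximal-gradient step with blockwise soft-thresholding rather than the ADMM updates analyzed in the paper, and all names, step sizes, and tuning constants are assumptions.

```python
# Rough sketch: group-lasso penalized D-trace-style objective for the difference of two
# precision matrices, fitted by proximal gradient (the paper uses ADMM instead).
import numpy as np

def dtrace_grad(Delta, Sx, Sy):
    # gradient of the symmetrized D-trace-style loss
    # 0.5 * tr(Delta Sx Delta Sy) - tr(Delta (Sx - Sy))
    return 0.5 * (Sx @ Delta @ Sy + Sy @ Delta @ Sx) - (Sx - Sy)

def group_soft_threshold(Delta, lam, m):
    # shrink each m-by-m node-pair block toward zero (group-lasso proximal map);
    # for simplicity every block is penalized, including the diagonal ones
    p = Delta.shape[0] // m
    out = Delta.copy()
    for i in range(p):
        for j in range(p):
            blk = out[i*m:(i+1)*m, j*m:(j+1)*m]
            nrm = np.linalg.norm(blk)
            blk *= 0.0 if nrm <= lam else 1.0 - lam / nrm
    return out

def fit_difference(Sx, Sy, m, lam=0.1, eta=0.01, iters=500):
    """Sx, Sy: sample covariance matrices (dimension divisible by m); m: attributes per node."""
    Delta = np.zeros_like(Sx)
    for _ in range(iters):
        Delta = group_soft_threshold(Delta - eta * dtrace_grad(Delta, Sx, Sy), eta * lam, m)
        Delta = 0.5 * (Delta + Delta.T)   # keep the estimate symmetric
    return Delta
```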
  3. We consider the problem of estimating differences in two Gaussian graphical models (GGMs) which are known to have similar structure. The GGM structure is encoded in its precision (inverse covariance) matrix. In many applications one is interested in estimating the difference in two precision matrices to characterize underlying changes in conditional dependencies of two sets of data. Most existing methods for differential graph estimation are based on a lasso penalized loss function. In this paper, we analyze a log-sum penalized D-trace loss function approach for differential graph learning. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. Theoretical analysis establishing consistency in estimation in high-dimensional settings is provided. We illustrate our approach using a numerical example where log-sum penalized D-trace loss significantly outperforms lasso-penalized D-trace loss as well as smoothly clipped absolute deviation (SCAD) penalized D-trace loss. 
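For orientation, a log-sum penalty is commonly written as lambda * sum(log(1 + |delta_ij| / eps)) for a small eps > 0; relative to the lasso it shrinks small entries more aggressively while penalizing large entries less. The snippet below contrasts the two penalties, with illustrative names and tuning constants that are not taken from the paper.

```python
# Illustrative comparison of log-sum and lasso penalties applied to the entries of Delta
# (lam and eps are assumed tuning constants).
import numpy as np

def log_sum_penalty(Delta, lam, eps=1e-4):
    return lam * np.sum(np.log1p(np.abs(Delta) / eps))

def lasso_penalty(Delta, lam):
    return lam * np.sum(np.abs(Delta))
```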
  4. We consider the problem of estimating differences in two Gaussian graphical models (GGMs) which are known to have similar structure. The GGM structure is encoded in its precision (inverse covariance) matrix. In many applications one is interested in estimating the difference in two precision matrices to characterize underlying changes in conditional dependencies of two sets of data. Existing methods for differential graph estimation are based on single-attribute (SA) models where one associates a scalar random variable with each node. In multi-attribute (MA) graphical models, each node represents a random vector. In this paper, we analyze a group lasso penalized D-trace loss function approach for differential graph learning from multi-attribute data. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. Theoretical analysis establishing consistency in support recovery and estimation in high-dimensional settings is provided. Numerical results based on synthetic as well as real data are presented. 
  5. We consider the problem of estimating differences in two time series Gaussian graphical models (TSGGMs) which are known to have similar structure. The TSGGM structure is encoded in its inverse power spectral density (IPSD), just as the vector GGM structure is encoded in its precision (inverse covariance) matrix. Motivated by many applications, existing works estimate the difference in two precision matrices to characterize underlying changes in conditional dependencies of two sets of data consisting of independent and identically distributed observations. In this paper we consider estimation of the difference in two IPSDs to characterize underlying changes in conditional dependencies of two sets of time-dependent data. We analyze a group lasso penalized D-trace loss function approach in the frequency domain for differential graph learning, using Wirtinger calculus. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. Theoretical analysis establishing consistency of the IPSD difference estimator in high-dimensional settings is presented. We illustrate our approach using a numerical example.