Title: A domain decomposition method for the time-dependent Navier-Stokes-Darcy model with Beavers-Joseph interface condition and defective boundary condition
Award ID(s):
1722647 1418624
PAR ID:
10176516
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Computational Physics
Volume:
411
Issue:
C
ISSN:
0021-9991
Page Range / eLocation ID:
109400
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications and is of current interest to the larger statistics community. In many applications, including the so-called 'large p, small n' setting, the estimate of the covariance matrix is required to be not only invertible but also well conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. We propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
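
    As a minimal sketch of the general idea, assuming NumPy, the snippet below enforces a condition-number bound by clipping the eigenvalues of the sample covariance. The function name `clip_condition_number` and the choice of clipping interval are illustrative assumptions; the paper's estimator selects the interval by maximum likelihood rather than this simple rule.

    ```python
    import numpy as np

    def clip_condition_number(S, kappa_max=50.0):
        """Return a covariance estimate whose condition number is at most
        kappa_max, by clipping eigenvalues of the sample covariance S.

        Illustrative only: the paper chooses the clipping interval by
        maximum likelihood; here we simply anchor it at the largest
        sample eigenvalue.
        """
        vals, vecs = np.linalg.eigh(S)             # symmetric eigendecomposition
        floor = vals.max() / kappa_max             # smallest eigenvalue we allow
        clipped = np.clip(vals, floor, vals.max())
        return (vecs * clipped) @ vecs.T           # V diag(clipped) V^T

    # 'large p, small n': 50 samples of a 200-dimensional vector.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 200))
    S = np.cov(X, rowvar=False)                    # rank-deficient: cond = inf
    Sigma_hat = clip_condition_number(S)
    print(np.linalg.cond(Sigma_hat))               # ~50 by construction
    ```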

     
  2. An important achievement in the field of causal inference was a complete characterization of when a causal effect, in a system modeled by a causal graph, can be determined uniquely from purely observational data. The identification algorithms resulting from this work produce exact symbolic expressions for causal effects, in terms of the observational probabilities. More recent work has looked at the numerical properties of these expressions, in particular using the classical notion of the condition number. In its classical interpretation, the condition number quantifies the sensitivity of the output values of the expressions to small numerical perturbations in the input observational probabilities. In the context of causal identification, the condition number has also been shown to be related to the effect of certain kinds of uncertainties in the structure of the causal graphical model. In this paper, we first give an upper bound on the condition number for the interesting case of causal graphical models with small “confounded components”. We then develop a tight characterization of the condition number of any given causal identification problem. Finally, we use our tight characterization to give a specific example where the condition number can be much lower than that obtained via generic bounds on the condition number, and to show that even “equivalent” expressions for causal identification can behave very differently with respect to their numerical stability properties. 
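
    To make the classical notion concrete, here is a hypothetical NumPy probe of the relative condition number of one identification formula, the back-door adjustment, estimated by randomly perturbing the input observational probabilities. This is a generic numerical check under made-up names and a toy distribution, not the tight characterization developed in the paper.

    ```python
    import numpy as np

    def backdoor_effect(p_xyz, x=1, y=1):
        """P(Y=y | do(X=x)) = sum_z P(y | x, z) P(z), computed from a joint
        table p_xyz indexed as [x, y, z] (binary variables here)."""
        p_z = p_xyz.sum(axis=(0, 1))                       # P(z)
        p_y_given_xz = p_xyz[x, y] / p_xyz[x].sum(axis=0)  # P(y | x, z)
        return float((p_y_given_xz * p_z).sum())

    def empirical_condition_number(p_xyz, eps=1e-6, trials=1000, seed=0):
        """Probe the relative condition number: worst observed ratio of
        relative output change to relative input perturbation size."""
        rng = np.random.default_rng(seed)
        base = backdoor_effect(p_xyz)
        worst = 0.0
        for _ in range(trials):
            q = p_xyz * (1.0 + eps * rng.uniform(-1, 1, size=p_xyz.shape))
            q /= q.sum()                                   # renormalize
            worst = max(worst,
                        abs(backdoor_effect(q) - base) / (abs(base) * eps))
        return worst

    # A toy joint distribution over binary X, Y, Z (made-up numbers).
    p = np.random.default_rng(1).dirichlet(np.ones(8)).reshape(2, 2, 2)
    print(empirical_condition_number(p))
    ```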
  3. Statistical learning theory has largely focused on learning and generalization given independent and identically distributed (i.i.d.) samples. Motivated by applications involving time-series data, there has been a growing literature on learning and generalization in settings where data are sampled from an ergodic process. This work has also developed complexity measures, which appropriately extend the notion of Rademacher complexity to bound the generalization error and learning rates of hypothesis classes in this setting. Rather than time-series data, our work is motivated by settings where data are sampled on a network or a spatial domain, and thus do not fit well within the framework of prior work. We provide learning and generalization bounds for data that are complexly dependent, yet whose distribution satisfies the standard Dobrushin condition. Indeed, we show that the standard complexity measures of Gaussian and Rademacher complexities and VC dimension are sufficient measures of complexity for the purposes of bounding the generalization error and learning rates of hypothesis classes in our setting. Moreover, our generalization bounds only degrade by constant factors compared to their i.i.d. analogs, and our learnability bounds degrade by log factors in the size of the training set.
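
    As an illustration of the complexity measure involved, the sketch below estimates the empirical Rademacher complexity of a finite hypothesis class on a weakly dependent sample. The data-generating process and all identifiers are hypothetical, and the paper's result that this quantity still bounds generalization error under Dobrushin's condition (up to constant factors) is assumed, not verified by the code.

    ```python
    import numpy as np

    def empirical_rademacher(losses, n_draws=2000, seed=0):
        """Monte Carlo estimate of the empirical Rademacher complexity
        E_sigma[ sup_h (1/n) sum_i sigma_i * loss_h(z_i) ] for a finite
        hypothesis class, given its per-sample losses on z_1..z_n."""
        rng = np.random.default_rng(seed)
        n = losses.shape[1]
        total = 0.0
        for _ in range(n_draws):
            sigma = rng.choice([-1.0, 1.0], size=n)   # random Rademacher signs
            total += np.max(losses @ sigma) / n       # sup over hypotheses
        return total / n_draws

    # Hypothetical setup: 20 threshold classifiers evaluated on n = 100
    # points from a weakly dependent AR(1) sequence (standing in for a
    # process satisfying a Dobrushin-type condition).
    rng = np.random.default_rng(1)
    z = np.empty(100)
    z[0] = rng.standard_normal()
    for t in range(1, 100):
        z[t] = 0.2 * z[t - 1] + rng.standard_normal()
    thresholds = np.linspace(-2, 2, 20)
    losses = (z[None, :] > thresholds[:, None]).astype(float)  # 0-1 losses
    print(empirical_rademacher(losses))
    ```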