
Search for: All records

Award ID contains: 1913364

  1. Abstract

    Gradient-type iterative methods for solving Hermitian eigenvalue problems can be accelerated by preconditioning and deflation techniques. A preconditioned steepest descent iteration with implicit deflation (PSD-id) is one such method. The convergence behavior of the PSD-id was recently investigated based on the pioneering work of Samokish on the preconditioned steepest descent method (PSD). The resulting non-asymptotic estimates indicate superlinear convergence of the PSD-id under strong assumptions on the initial guess. The present paper utilizes an alternative convergence analysis of the PSD by Neymeyr under much weaker assumptions. We embed Neymeyr's approach into the analysis of the PSD-id using a restricted formulation of the PSD-id. More importantly, we extend the new convergence analysis to a practically preferred block version of the PSD-id, or BPSD-id, and show the cluster robustness of the BPSD-id. Numerical examples are provided to validate the theoretical estimates.
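    The core of a PSD-type iteration can be sketched as follows: precondition the eigenvalue residual, then perform a Rayleigh-Ritz step on the two-dimensional subspace spanned by the current iterate and the preconditioned residual. This is a minimal sketch of the generic PSD template only; the toy matrix and Jacobi preconditioner are illustrative assumptions, not from the paper, and no deflation is shown.

    ```python
    import numpy as np

    def psd_step(A, T, x):
        """One generic PSD step for the smallest eigenpair of Hermitian A."""
        x = x / np.linalg.norm(x)
        rho = x @ A @ x                    # Rayleigh quotient
        r = A @ x - rho * x                # eigenvalue residual
        w = T @ r                          # preconditioned search direction
        S = np.linalg.qr(np.column_stack([x, w]))[0]  # basis of span{x, w}
        vals, vecs = np.linalg.eigh(S.T @ A @ S)      # 2x2 Rayleigh-Ritz problem
        return S @ vecs[:, 0]              # Ritz vector for smallest Ritz value

    # toy Hermitian example (illustrative) with a diagonal Jacobi preconditioner
    A = np.diag([1.0, 2.0, 5.0, 10.0]) + 0.1 * np.ones((4, 4))
    T = np.diag(1.0 / np.diag(A))
    x = np.ones(4)
    for _ in range(50):
        x = psd_step(A, T, x)
    lam = x @ A @ x / (x @ x)              # converged Rayleigh quotient
    ```

    A block version (BPSD-id) would iterate a block of vectors simultaneously and solve a larger Rayleigh-Ritz problem, which is what provides robustness for clustered eigenvalues.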

     
    Free, publicly-accessible full text available October 1, 2024
  2. Abstract

    We consider the problem of extracting a few desired eigenpairs of the buckling eigenvalue problem Kx = λK_G x, where K is symmetric positive semi-definite, K_G is symmetric indefinite, and the pencil K − λK_G is singular, namely, K and K_G share a nontrivial common nullspace. Moreover, in practical buckling analysis of structures, bases for the nullspace of K and the common nullspace of K and K_G are available. There are two open issues in developing an industrial-strength shift-invert Lanczos method: (1) the shift-invert operator does not exist or is extremely ill-conditioned, and (2) the use of the semi-inner product induced by K drives the Lanczos vectors rapidly toward the nullspace of K, which leads to rapid growth of the Lanczos vectors in norm and causes permanent loss of information and the failure of the method. In this paper, we address these two issues by proposing a generalized buckling spectral transformation of the singular pencil and a regularization of the inner product via a low-rank update restoring the positive definiteness of K. The efficacy of our approach is demonstrated by numerical examples, including one from industrial buckling analysis.

     
  3. Ruiz, Francisco; Dy, Jennifer; van de Meent, Jan-Willem (Ed.)
    There are synergies between research interests and industrial efforts in modeling fairness and correcting algorithmic bias in machine learning. In this paper, we present a scalable algorithm for spectral clustering (SC) with group fairness constraints. Group fairness, also known as statistical parity, requires that in each cluster, each protected group is represented in the same proportion as in the whole dataset. While the FairSC algorithm (Kleindessner et al., 2019) is able to find a fairer clustering, it suffers from high computational costs because its kernels explicitly compute nullspaces and square roots of dense matrices. We present a new formulation of the underlying spectral computation of FairSC by incorporating nullspace projection and Hotelling's deflation, such that the resulting algorithm, called s-FairSC, involves only sparse matrix-vector products and is able to fully exploit the sparsity of the fair SC model. Experimental results on the modified stochastic block model demonstrate that s-FairSC is comparable with FairSC in recovering fair clusterings while being 12× faster than FairSC for moderate model sizes. s-FairSC is further demonstrated to be scalable in the sense that its computational costs increase only marginally compared to SC without fairness constraints.
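    The nullspace-projection idea can be sketched in a few lines: a group-fairness constraint Fᵀh = 0 is enforced by restricting the Laplacian eigenproblem to null(Fᵀ), so no nullspaces or square roots of dense matrices are formed explicitly. The toy graph, group assignment, and dense SVD below are illustrative assumptions; this is a sketch of the projection idea, not the s-FairSC implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # toy graph: two planted clusters of 8 nodes, edges denser within clusters
    n, k = 16, 2
    A = rng.random((n, n)) < 0.1
    A[:8, :8] |= rng.random((8, 8)) < 0.6
    A[8:, 8:] |= rng.random((8, 8)) < 0.6
    A = np.triu(A, 1)
    A = (A + A.T).astype(float)

    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

    # fairness constraint matrix: F[i] = indicator(group(i) == 1) - group share,
    # so F^T h = 0 enforces proportional (statistical-parity) representation
    g = np.tile([0, 1], n // 2)              # alternating protected groups
    F = (np.where(g == 1, 1.0, 0.0) - 0.5).reshape(n, 1)

    # orthonormal basis Z of null(F^T): remaining right singular vectors of F^T
    Z = np.linalg.svd(F.T)[2][1:].T

    # solve the projected eigenproblem Z^T L Z instead of the constrained one
    vals, vecs = np.linalg.eigh(Z.T @ L @ Z)
    H = Z @ vecs[:, :k]                      # fair spectral embedding

    print(np.abs(F.T @ H).max())             # constraint F^T H = 0 holds
    ```

    In the sparse setting targeted by s-FairSC, the projection onto null(Fᵀ) is applied implicitly through matrix-vector products rather than through a dense SVD as in this toy example.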