In this paper, we propose an efficient numerical scheme for solving some large‐scale ill‐posed linear inverse problems arising from image restoration. In order to accelerate the computation, two different hidden structures are exploited. First, the coefficient matrix is approximated as the sum of a small number of Kronecker products. This procedure not only introduces one more level of parallelism into the computation but also enables the usage of computationally intensive matrix–matrix multiplications in the subsequent optimization procedure. We then derive the corresponding Tikhonov regularized minimization model and extend the fast iterative shrinkage‐thresholding algorithm (FISTA) to solve the resulting optimization problem. Because the matrices appearing in the Kronecker product approximation are all structured matrices (Toeplitz, Hankel, etc.), we can further exploit their fast matrix–vector multiplication algorithms at each iteration. The proposed algorithm is thus called
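The switch from matrix–vector to matrix–matrix multiplications rests on the standard Kronecker identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\top})$. A minimal NumPy sketch of that identity (the dimensions and variable names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # illustrative small Kronecker factors
B = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 3))   # "image" block; vec(X) stacks columns

# Direct product with the full 12x12 Kronecker matrix (what we want to avoid).
y_direct = np.kron(A, B) @ X.flatten(order="F")

# Equivalent matrix-matrix form: (A ⊗ B) vec(X) = vec(B X Aᵀ).
y_fast = (B @ X @ A.T).flatten(order="F")

assert np.allclose(y_direct, y_fast)
```

The right-hand side costs two small dense products instead of one product with an $mn \times mn$ matrix, which is the source of the speedup the abstract refers to.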
Award ID(s): 1819042
NSF-PAR ID: 10453564
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name: Numerical Linear Algebra with Applications
Volume: 27
Issue: 2
ISSN: 1070-5325
Sponsoring Org: National Science Foundation
More Like this

We consider the problem of matrix approximation and denoising induced by the Kronecker product decomposition. Specifically, we propose to approximate a given matrix by the sum of a few Kronecker products of matrices, which we refer to as the Kronecker product approximation (KoPA). Because the Kronecker product is an extension of the outer product from vectors to matrices, KoPA extends low-rank matrix approximation and includes it as a special case. Compared with the latter, KoPA also offers greater flexibility, since it allows the user to choose the configuration, that is, the dimensions of the two smaller matrices forming the Kronecker product. On the other hand, the configuration to be used is usually unknown and needs to be determined from the data in order to achieve the optimal balance between accuracy and parsimony. We propose to use extended information criteria to select the configuration. Under the paradigm of high-dimensional analysis, we show that the proposed procedure selects the true configuration with probability tending to one, under suitable conditions on the signal-to-noise ratio. We demonstrate the superiority of KoPA over low-rank approximations through numerical studies and several benchmark image examples.
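A sum of a few Kronecker products can be computed by the classical rearrangement trick of Van Loan and Pitsianis: after a blockwise reshuffling of the matrix, each Kronecker term corresponds to a rank-one term, so a truncated SVD of the rearranged matrix yields the approximation. A sketch under that standard approach (not necessarily the exact procedure of this paper):

```python
import numpy as np

def kopa(M, m1, n1, m2, n2, k):
    """Approximate M of shape (m1*m2, n1*n2) by a sum of k Kronecker
    products A_i (m1 x n1) ⊗ B_i (m2 x n2), via rearrangement plus SVD."""
    # Rearrange so that each Kronecker product becomes a rank-one matrix:
    # R[i1*n1 + j1, i2*n2 + j2] = M[i1*m2 + i2, j1*n2 + j2].
    R = M.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for i in range(k):
        A = (np.sqrt(s[i]) * U[:, i]).reshape(m1, n1)
        B = (np.sqrt(s[i]) * Vt[i]).reshape(m2, n2)
        terms.append((A, B))
    return terms

# If M is itself a single Kronecker product, one term recovers it exactly.
rng = np.random.default_rng(1)
M = np.kron(rng.standard_normal((3, 3)), rng.standard_normal((4, 4)))
(A1, B1), = kopa(M, 3, 3, 4, 4, k=1)
assert np.allclose(np.kron(A1, B1), M)
```

Choosing the configuration `(m1, n1, m2, n2)` is exactly the model-selection problem the abstract addresses with extended information criteria.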

Summary This paper presents an efficient method to perform structured matrix approximation by separation and hierarchy (SMASH) when the original dense matrix is associated with a kernel function. Given the points in a domain, a tree structure is first constructed based on an adaptive partition of the computational domain to facilitate subsequent approximation procedures. In contrast to existing schemes based on either analytic or purely algebraic approximations, SMASH takes advantage of both approaches and greatly improves efficiency. The algorithm follows a bottom‐up traversal of the tree and is able to perform the operations associated with each node on the same level in parallel. A strong rank‐revealing factorization is applied to the initial analytic approximation in the separation regime so that a special structure is incorporated into the final nested bases. As a consequence, the storage is significantly reduced and a hierarchy of the original grid is constructed. Owing to this hierarchy, nested bases at upper levels can be computed in the same way as the leaf‐level operations, but on coarser grids. The main advantages of SMASH include its simplicity of implementation, its flexibility in constructing various hierarchical rank structures, and its low storage cost. The efficiency and robustness of SMASH are demonstrated through various test problems arising from integral equations, structured matrices, etc.
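The premise underlying such hierarchical schemes is that kernel interactions between well-separated point clusters are numerically low-rank, so the corresponding off-diagonal blocks compress well. A toy illustration of that fact (not the SMASH algorithm itself; the kernel and point sets are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 200)   # source cluster in [0, 1]
y = rng.uniform(5.0, 6.0, 200)   # well-separated target cluster in [5, 6]

# Kernel block K_ij = 1 / |x_i - y_j| between the two clusters.
K = 1.0 / np.abs(x[:, None] - y[None, :])

# Numerical rank at relative tolerance 1e-10: far below the block size 200.
s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank)
```

In SMASH this compression is done with a strong rank-revealing factorization rather than a full SVD, and the bases are nested across tree levels so that coarse-level bases are built from the level below.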
Abstract The Loewner framework is one of the most successful data-driven model order reduction techniques. If $N$ is the cardinality of a given data set, the so-called Loewner and shifted Loewner matrices $\mathbb{L}\in\mathbb{C}^{N\times N}$ and $\mathbb{S}\in\mathbb{C}^{N\times N}$ can be defined by solely relying on information encoded in the considered data set, and they play a crucial role in the computation of the sought rational model approximation. In particular, the singular value decomposition of a linear combination of $\mathbb{L}$ and $\mathbb{S}$ provides the tools needed to construct accurate models which fulfill important approximation properties with respect to the original data set. However, for highly sampled data sets, the dense nature of $\mathbb{L}$ and $\mathbb{S}$ leads to numerical difficulties, namely the failure to allocate these matrices in certain memory-limited environments or excessive computational costs. Even though they do not possess any sparsity pattern, the Loewner and shifted Loewner matrices are extremely structured and, in this paper, we show how to fully exploit their Cauchy-like structure to reduce the cost of computing accurate rational models while avoiding the explicit allocation of $\mathbb{L}$ and $\mathbb{S}$. In particular, the use of the hierarchically semiseparable format allows us to remarkably lower both the computational cost and the memory requirements of the Loewner framework, obtaining a novel scheme whose costs scale with $N \log N$.
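For context, the Loewner and shifted Loewner matrices are assembled entrywise from transfer-function samples, and their Cauchy-like structure shows up as a low-rank displacement equation. A small sketch with an illustrative toy transfer function (the data and names are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
mu = 1j * rng.uniform(0.1, 10.0, N)    # left interpolation points
lam = 1j * rng.uniform(0.1, 10.0, N)   # right interpolation points
H = lambda s: 1.0 / (s + 1.0)          # toy transfer function
v, w = H(mu), H(lam)                   # left/right samples

# L_ij = (v_i - w_j) / (mu_i - lam_j),  S_ij = (mu_i v_i - lam_j w_j) / (mu_i - lam_j)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
S = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# Cauchy-like (displacement) structure: diag(mu) L - L diag(lam) has rank <= 2,
# since it equals v 1ᵀ - 1 wᵀ by construction.
disp = np.diag(mu) @ L - L @ np.diag(lam)
assert np.allclose(disp, np.outer(v, np.ones(N)) - np.outer(np.ones(N), w))
```

It is this low displacement rank that hierarchical (HSS) compression exploits to avoid ever forming the dense $N \times N$ matrices.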
Abstract Censored quantile regression models, which offer great flexibility in assessing covariate effects on event times, have attracted considerable research interest. In this study, we consider flexible estimation and inference procedures for competing risks quantile regression, which not only provides meaningful interpretations by using cumulative incidence quantiles but also extends the conventional accelerated failure time model by relaxing some of its stringent assumptions, such as global linearity and unconditional independence. Current methods for censored quantile regression often involve minimizing an $L_1$-type convex function or solving nonsmooth estimating equations. This approach can lead to multiple roots in practical settings, particularly with multiple covariates. Moreover, variance estimation involves an unknown error distribution, and most methods rely on computationally intensive resampling techniques such as bootstrapping. We extend the induced smoothing procedure for censored quantile regression to the competing risks setting. The proposed procedure permits fast and accurate computation of quantile regression parameter estimates and standard errors using conventional numerical methods such as the Newton–Raphson algorithm. Numerical studies show that the proposed estimators perform well and that the resulting inference is reliable in practical settings. The method is finally applied to data from a soft tissue sarcoma study.
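The core of induced smoothing is replacing the discontinuous indicator in the estimating equation with a normal CDF, which makes the score differentiable and hence amenable to Newton–Raphson. A minimal sketch of that idea for plain (uncensored) quantile regression, not the paper's competing-risks estimator; all names and the bandwidth are illustrative:

```python
import numpy as np
from scipy.stats import norm

def score_nonsmooth(beta, X, y, tau):
    """Nonsmooth quantile regression score: sum_i x_i (tau - I(y_i - x_i'beta < 0))."""
    r = y - X @ beta
    return X.T @ (tau - (r < 0).astype(float))

def score_smoothed(beta, X, y, tau, h=0.05):
    """Induced-smoothing surrogate: the indicator is replaced by Phi(-r/h)."""
    r = y - X @ beta
    return X.T @ (tau - norm.cdf(-r / h))

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(500)
beta = np.array([1.0, 2.0])

# The smoothed score is close to the nonsmooth one but differentiable,
# so standard root-finding and plug-in variance formulas apply.
print(score_nonsmooth(beta, X, y, 0.5), score_smoothed(beta, X, y, 0.5))
```

In the actual method the smoothing bandwidth is tied to the parameter covariance rather than fixed, which is what yields the accompanying variance estimates without resampling.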
Summary We propose and analyze an efficient ensemble algorithm with artificial compressibility (AC) for fast decoupled computation of multiple realizations of the stochastic Stokes‐Darcy model with random hydraulic conductivity (including in the interface conditions), source terms, and initial conditions. The solutions are found by solving three smaller decoupled subproblems with two common time‐independent coefficient matrices shared by all realizations, which significantly improves the efficiency of both assembling and solving the matrix systems. The fully coupled Stokes‐Darcy system is first decoupled into two smaller subphysics problems via partitioned time stepping, which reduces the size of the linear systems and allows parallel computing for each subphysics problem. The AC formulation then decouples the velocity and pressure, which further reduces storage requirements and improves computational efficiency. We prove the long‐time stability and convergence of this new ensemble method. Three numerical examples are presented to support the theoretical results and illustrate the features of the algorithm, including its convergence, stability, efficiency, and applicability.
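The computational payoff of sharing one time-independent coefficient matrix across all realizations is that the matrix is factored once and the factorization is reused for every realization's right-hand side. A generic sketch of that pattern (a dense toy system, not the Stokes-Darcy discretization itself):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(5)
n, J = 200, 30                        # unknowns per realization, ensemble size
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # common coefficient matrix
B = rng.standard_normal((n, J))       # one right-hand side per realization

lu, piv = lu_factor(A)                # single expensive factorization
X = lu_solve((lu, piv), B)            # J cheap triangular solves, one per realization

assert np.allclose(A @ X, B)
```

Without the ensemble trick, each realization would carry its own coefficient matrix and require its own factorization at every time step.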