Title: Combining Uneliminated Algebraic Formulations With Sparse Linear Solvers to Increase the Speed and Accuracy of Homotopy Path Tracking for Kinematic Synthesis
Abstract: The method of kinematic synthesis requires finding the solution set of a system of polynomials. Parameter homotopy continuation is used to solve these systems and requires repeatedly solving systems of linear equations. For kinematic synthesis, the associated linear systems become ill-conditioned, resulting in a marked decrease in the number of solutions found due to path tracking failures. This unavoidable ill-conditioning places a premium on accurate function and matrix evaluations. Traditionally, variables are eliminated to reduce the dimension of the problem. However, this greatly increases the computational cost of evaluating the resulting functions and matrices and introduces numerical instability. We propose avoiding the elimination of variables, which reduces the required computations; although this increases the dimension of the linear systems, the resulting matrices are quite sparse. We then solve these systems with sparse solvers to save memory and increase speed. We found that this combination resulted in a speedup of up to 250× over traditional methods while maintaining the same accuracy.
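To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the corrector stage of homotopy path tracking, where the Jacobian of the uneliminated formulation is kept as a sparse matrix and each Newton correction is computed with a sparse factorization via SciPy. The callables F and J are hypothetical placeholders for the polynomial system and its Jacobian.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def newton_correct(F, J, x, tol=1e-12, max_iter=5):
    """Corrector stage of homotopy path tracking: Newton's method in which
    J(x) is returned as a sparse matrix, so each linear system is solved
    with a sparse factorization instead of a dense one."""
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Sparse solve of J(x) dx = -F(x); spsolve also handles the complex
        # entries that arise in homotopy continuation.
        dx = spla.spsolve(sp.csc_matrix(J(x)), -r)
        x = x + dx
    return x
```

The savings described in the abstract come from J(x) being both cheap to evaluate and very sparse when no variables are eliminated, so the larger linear systems are still fast to factor.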
Award ID(s):
2144732
NSF-PAR ID:
10390975
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Journal of Computing and Information Science in Engineering
Volume:
22
Issue:
6
ISSN:
1530-9827
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Electrical Impedance Tomography (EIT) is a well-known imaging technique for recovering the electrical properties of an object in order to detect anomalies such as conductive or resistive targets. EIT has many applications in medical imaging for the detection and localization of tumors, since it is an affordable and non-invasive method: it aims to recover the internal conductivity of a body from voltage measurements obtained by applying low-frequency currents at electrodes placed on its surface. Mathematically, the reconstruction of the internal conductivity is a severely ill-posed inverse problem and yields poor-quality image reconstructions. To remedy this difficulty, at least in part, we regularize and solve the nonlinear minimization problem with the aid of a Krylov subspace-type method for the linear subproblem at each iteration. In EIT, a tumor or general anomaly can be modeled as a piecewise constant perturbation of a smooth background; hence, we solve the regularized problem on a subspace of relatively small dimension using the Flexible Golub-Kahan process, which provides solutions with sparse representations. For comparison, we use a well-known modified Gauss–Newton algorithm as a benchmark. Using simulations, we demonstrate the effectiveness of the proposed method. The obtained reconstructions indicate that the Krylov subspace method is better adapted to solving the ill-posed EIT problem and yields higher-resolution images and faster convergence than the modified Gauss–Newton algorithm.
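As a rough illustration of the approach in item 1 (the paper's Flexible Golub-Kahan solver is not reproduced here), the sketch below runs a regularized Gauss-Newton loop in which each linearized subproblem is solved iteratively in a Krylov subspace, with SciPy's LSQR and Tikhonov damping standing in for the flexible solver; forward and jacobian are hypothetical callables for the EIT forward model and its sensitivity matrix.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def gauss_newton_krylov(forward, jacobian, v_measured, sigma0,
                        damp=1e-2, outer_iters=10):
    """Regularized Gauss-Newton loop for an EIT-style problem; each
    linearized subproblem is solved approximately in a Krylov subspace."""
    sigma = sigma0.copy()
    for _ in range(outer_iters):
        r = v_measured - forward(sigma)    # data misfit
        J = jacobian(sigma)                # sensitivity matrix
        # min_d ||J d - r||^2 + damp^2 ||d||^2, solved iteratively by LSQR
        d = lsqr(J, r, damp=damp)[0]
        sigma = sigma + d
    return sigma
```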
  2. Coded spectral X-ray computed tomography (CT) based on K-edge filtered illumination is a cost-effective approach to acquire both 3-dimensional structure of objects and their material composition. This approach allows sets of incomplete rays from sparse views or sparse rays with both spatial and spectral encoding to effectively reduce the inspection duration or radiation dose, which is of significance in biological imaging and medical diagnostics. However, reconstruction of spectral CT images from compressed measurements is a nonlinear and ill-posed problem. This paper proposes a material-decomposition-based approach to directly solve the reconstruction problem, without estimating the energy-binned sinograms. This approach assumes that the linear attenuation coefficient map of objects can be decomposed into a few basis materials that are separable in the spectral and space domains. The nonlinear problem is then converted to the reconstruction of the mass density maps of the basis materials. The dimensionality of the optimization variables is thus effectively reduced to overcome the ill-posedness. An alternating minimization scheme is used to solve the reconstruction with regularizations of weighted nuclear norm and total variation. Compared to the state-of-the-art reconstruction method for coded spectral CT, the proposed method can significantly improve the reconstruction quality. It is also capable of reconstructing the spectral CT images at two additional energy bins from the same set of measurements, thus providing more spectral information of the object.

     
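The material-decomposition step in item 2 can be illustrated in a few lines of NumPy: the energy-dependent attenuation map is written as a combination of a small number of basis materials, so the unknowns shrink from one image per energy bin to one mass-density map per material. All values below are synthetic placeholders, not real attenuation data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_energy, n_pixels, n_materials = 64, 128 * 128, 2   # e.g. water and bone

# Mass attenuation curves of the basis materials (synthetic here; in
# practice taken from tabulated data).
M = rng.random((n_energy, n_materials))

# Unknowns after decomposition: one mass-density map per basis material.
rho = rng.random((n_materials, n_pixels))

# Energy-dependent linear attenuation map mu(E, x) = sum_k M[E, k] * rho[k, x]
mu = M @ rho

# The decomposition reduces the unknowns from n_energy * n_pixels values
# to n_materials * n_pixels values.
print(mu.shape, n_energy * n_pixels, n_materials * n_pixels)
```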
  3. Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. K-matrices can also simplify hand-engineered pipelines: we replace filter bank feature computation in speech data preprocessing with a learnable kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition, K-matrices can capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.
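A minimal sketch of the structure behind item 3: a butterfly matrix is a product of log2(n) block-diagonal factors whose blocks are 2 x 2 arrangements of diagonal sub-blocks, giving O(n log n) parameters, and K-matrices are composed from products of such butterfly matrices and their transposes. The construction below uses random parameters and n a power of two; it only illustrates the parameterization, not the paper's learned layers.

```python
import numpy as np

def random_butterfly_factor(n, k, rng):
    """Block-diagonal matrix with n // k blocks of size k; each block is a
    2 x 2 arrangement of diagonal (k/2 x k/2) sub-blocks, so the whole
    factor carries only 2n nonzero parameters."""
    B = np.zeros((n, n))
    half = k // 2
    for b in range(0, n, k):
        d = [np.diag(rng.standard_normal(half)) for _ in range(4)]
        B[b:b + k, b:b + k] = np.block([[d[0], d[1]], [d[2], d[3]]])
    return B

def random_butterfly(n, rng):
    """Product of log2(n) butterfly factors (block sizes n, n/2, ..., 2)."""
    B = np.eye(n)
    k = n
    while k >= 2:
        B = B @ random_butterfly_factor(n, k, rng)
        k //= 2
    return B

rng = np.random.default_rng(0)
K = random_butterfly(16, rng)   # 16 x 16 matrix with O(n log n) parameters
print(K.shape)
```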
  4. Many inverse problems involve two or more sets of variables that represent different physical quantities but are tightly coupled with each other. For example, image super-resolution requires joint estimation of the image and motion parameters from noisy measurements. Exploiting this structure is key to efficiently solving these large-scale optimization problems, which are often ill-conditioned. In this paper, we present a new method called Linearize And Project (LAP) that offers a flexible framework for solving inverse problems with coupled variables. LAP is most promising for cases when the subproblem corresponding to one of the variables is considerably easier to solve than the other. LAP is based on a Gauss–Newton method, and thus, after linearizing the residual, it eliminates one block of variables through projection. Due to the linearization, this block can be chosen freely. Further, LAP supports direct, iterative, and hybrid regularization as well as constraints. Therefore, LAP is attractive, e.g., for ill-posed imaging problems. These traits differentiate LAP from common alternatives for this type of problem, such as variable projection (VarPro) and block coordinate descent (BCD). Our numerical experiments compare the performance of LAP to BCD and VarPro using three coupled problems whose forward operators are linear with respect to one block and nonlinear with respect to the other.
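The core step of item 4 can be sketched as follows: after linearizing the coupled residual, one block of variables is eliminated by projecting onto the orthogonal complement of the range of its Jacobian block, the reduced problem is solved for the other block, and the eliminated block is recovered by least squares. This toy version uses dense pseudo-inverses and omits the regularization and constraints that LAP supports.

```python
import numpy as np

def lap_step(r, Jx, Jy):
    """One Linearize-And-Project style step for a coupled residual r(x, y)
    with Jacobian blocks Jx and Jy evaluated at the current iterate."""
    # Projector onto the orthogonal complement of range(Jy)
    P = np.eye(len(r)) - Jy @ np.linalg.pinv(Jy)
    # Reduced Gauss-Newton step for the x-block
    dx = np.linalg.lstsq(P @ Jx, -P @ r, rcond=None)[0]
    # Back-substitute for the eliminated y-block
    dy = np.linalg.lstsq(Jy, -(r + Jx @ dx), rcond=None)[0]
    return dx, dy
```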
  5. In this paper, we propose an efficient numerical scheme for solving some large-scale ill-posed linear inverse problems arising from image restoration. In order to accelerate the computation, two different hidden structures are exploited. First, the coefficient matrix is approximated as the sum of a small number of Kronecker products. This procedure not only introduces one more level of parallelism into the computation but also enables the usage of computationally intensive matrix–matrix multiplications in the subsequent optimization procedure. We then derive the corresponding Tikhonov regularized minimization model and extend the fast iterative shrinkage-thresholding algorithm (FISTA) to solve the resulting optimization problem. Because the matrices appearing in the Kronecker product approximation are all structured matrices (Toeplitz, Hankel, etc.), we can further exploit their fast matrix–vector multiplication algorithms at each iteration. The proposed algorithm is thus called structured FISTA (sFISTA). In particular, we show that the approximation error introduced by sFISTA is well under control and sFISTA can reach the same image restoration accuracy level as FISTA. Finally, both the theoretical complexity analysis and some numerical results are provided to demonstrate the efficiency of sFISTA.

     
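The speedup exploited in item 5 rests on the standard Kronecker-product identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which lets a large matrix-vector product be replaced by two small matrix-matrix products; the check below uses random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 3, 4, 5, 6
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
X = rng.standard_normal((q, n))        # column-stacked vec(X) has length q * n

# Dense Kronecker mat-vec: (A kron B) vec(X), cost O(mp * nq)
dense = np.kron(A, B) @ X.flatten(order="F")

# Equivalent structured evaluation: vec(B X A^T), cost O(pqn + pnm)
fast = (B @ X @ A.T).flatten(order="F")

print(np.allclose(dense, fast))        # True
```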