

This content will become publicly available on December 1, 2024

Title: Fixed-point iterative linear inverse solver with extended precision
Abstract: Solving linear systems, often accomplished by iterative algorithms, is a ubiquitous task in science and engineering. To accommodate the dynamic range and precision requirements, these iterative solvers are carried out on floating-point processing units, which are not efficient at handling large-scale matrix multiplications and inversions. Low-precision, fixed-point digital or analog processors consume only a fraction of the energy per operation compared with their floating-point counterparts, yet their current uses exclude iterative solvers due to the cumulative computational errors arising from fixed-point arithmetic. In this work, we show that for a simple iterative algorithm, such as Richardson iteration, a fixed-point processor can provide the same convergence rate and, when combined with residual iteration, achieve solutions beyond its native precision. These results indicate that power-efficient computing platforms consisting of analog computing devices can be used to solve a broad range of problems without compromising speed or precision.
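The scheme the abstract describes, a low-precision Richardson loop wrapped in a high-precision residual (iterative-refinement) loop, can be sketched in software. Everything below is an illustrative simulation, not the paper's implementation: the fixed-point processor is modeled by rounding every intermediate result to a 12-bit fractional grid, and the residual is rescaled before each inner solve so it spans the fixed-point dynamic range.

```python
import numpy as np

def quantize(x, frac_bits=12):
    """Round to a fixed-point grid with frac_bits fractional bits
    (simulates arithmetic on a low-precision processor)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def richardson_fixed_point(A, b, omega, iters=50, frac_bits=12):
    """Richardson iteration x <- x + omega*(b - A x), quantizing
    every intermediate result to the fixed-point grid."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = quantize(b - quantize(A @ x, frac_bits), frac_bits)
        x = quantize(x + omega * r, frac_bits)
    return x

def residual_refinement(A, b, omega, outer=5, inner=50, frac_bits=12):
    """Residual iteration: the residual and solution update stay in
    double precision; only the correction solve runs on the simulated
    fixed-point solver. Rescaling the residual is an illustrative
    choice to keep it within the fixed-point dynamic range."""
    x = np.zeros_like(b)
    for _ in range(outer):
        r = b - A @ x                      # high-precision residual
        s = np.max(np.abs(r))
        if s == 0.0:
            break
        d = s * richardson_fixed_point(A, r / s, omega, inner, frac_bits)
        x = x + d                          # high-precision update
    return x

# Toy well-conditioned system; convergence requires the spectral
# radius of (I - omega*A) to be below 1.
rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x_true = np.linalg.solve(A, b)
x_fix = richardson_fixed_point(A, b, omega=0.9)
x_ref = residual_refinement(A, b, omega=0.9)
err_fix = np.linalg.norm(x_fix - x_true)   # stalls near the 2^-12 grid
err_ref = np.linalg.norm(x_ref - x_true)   # far below native precision
```

The plain fixed-point solve stalls at an error set by the quantization grid, while the refined solution reaches roughly double-precision accuracy, mirroring the claim that residual iteration recovers precision beyond the native format.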
Award ID(s): 1932858
NSF-PAR ID: 10451774
Journal Name: Scientific Reports
Volume: 13
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Computational imaging systems with embedded processing have potential advantages in power consumption, computing speed, and cost. However, common processors in embedded vision systems have limited computing capacity and a low level of parallelism. The widely used iterative algorithms for image reconstruction rely on floating-point processors to ensure calculation precision, which require more computing resources than fixed-point processors. Here we present a regularized Landweber fixed-point iterative solver for image reconstruction, implemented on a field-programmable gate array (FPGA). Compared with floating-point embedded uniprocessors, iterative solvers implemented on the fixed-point FPGA gain one to two orders of magnitude of acceleration while achieving the same reconstruction accuracy in a comparable number of effective iterations. Specifically, we have demonstrated the proposed fixed-point iterative solver on fiber borescope image reconstruction, successfully correcting the artifacts introduced by the lenses and fiber bundle.
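A minimal software sketch of a regularized Landweber iteration with fixed-point rounding follows; the rounding is a stand-in for the FPGA datapath, and the Tikhonov term, step size, and 10-bit format are illustrative assumptions, not the hardware design from this work.

```python
import numpy as np

def to_fixed(x, frac_bits=10):
    """Simulate fixed-point storage: round to a 2^-frac_bits grid."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def landweber(A, b, lam=0.1, iters=200, frac_bits=10):
    """Tikhonov-regularized Landweber iteration
    x <- x + tau * (A^T (b - A x) - lam * x), with fixed-point
    rounding after each update. lam, tau, and the 10-bit format
    are illustrative choices."""
    tau = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # step size for convergence
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (b - A @ x) - lam * x
        x = to_fixed(x + tau * grad, frac_bits)
    return x

# Toy well-conditioned forward operator and measurement.
rng = np.random.default_rng(0)
n = 6
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x_hat = landweber(A, b, lam=0.1)
# Reference: the regularized normal-equations solution.
x_star = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ b)
```

With 10 fractional bits, the iterate converges to within a few quantization steps of the regularized solution, which is the "same reconstruction accuracy in a comparable number of iterations" behavior the abstract reports for the FPGA.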
  2. Posit is a recently proposed representation for approximating real numbers using a finite number of bits. In contrast to the floating-point (FP) representation, posit provides variable precision with a fixed number of total bits (i.e., tapered accuracy). Posit can represent a set of numbers with higher precision than FP and has garnered significant interest in various domains. The posit ecosystem currently does not have a native general-purpose math library. This paper presents our results in developing a math library for posits using the CORDIC method. CORDIC is an iterative algorithm that approximates trigonometric functions by rotating a vector by a different angle in each iteration. This paper proposes two extensions to the CORDIC algorithm that account for tapered accuracy with posits and improve precision: (1) fast-forwarding of iterations to start the CORDIC algorithm at a later iteration and (2) the use of a wide accumulator (i.e., the quire data type) to minimize precision loss during accumulation. Our results show that a 32-bit posit implementation of trigonometric functions with our extensions is more accurate than a 32-bit FP implementation.
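A rotation-mode CORDIC sketch in ordinary double precision illustrates the underlying algorithm; the posit/quire extensions described above would change the arithmetic type and are not attempted here, and the iteration count and test angle are arbitrary choices.

```python
import math

def cordic_sincos(theta, iters=40):
    """Rotation-mode CORDIC: rotate the vector (1, 0) toward angle
    theta using only shift-style factors 2^-i, then undo the
    accumulated magnitude growth with the constant K.
    Plain double precision; a posit version would swap the type."""
    angles = [math.atan(2.0 ** -i) for i in range(iters)]
    K = 1.0
    for i in range(iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0      # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y * K, x * K                    # (sin, cos)

s, c = cordic_sincos(0.5)
```

The basic algorithm converges for |theta| up to the sum of the rotation angles (about 1.74 rad); larger arguments require range reduction first.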
  3. Photonic computing has potential advantages in speed and energy consumption yet is subject to inaccuracy due to the limited equivalent bitwidth of the analog signal. In this Letter, we demonstrate a configurable, fixed-point coherent photonic iterative solver for numerical eigenvalue problems using shifted inverse iteration. The photonic primitive can accommodate arbitrarily sized sparse matrix–vector multiplication and is deployed to solve eigenmodes in a photonic waveguide structure. The photonic iterative eigensolver does not accumulate errors from each iteration, providing a path toward implementing scientific computing applications on photonic primitives.
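Shifted inverse iteration itself is a standard algorithm; a numpy sketch follows, with `np.linalg.solve` standing in for the photonic matrix-vector primitive and an arbitrary symmetric test matrix (not from the Letter).

```python
import numpy as np

def shifted_inverse_iteration(A, sigma, iters=50, seed=0):
    """Find the eigenpair of A whose eigenvalue lies closest to the
    shift sigma: repeatedly solve (A - sigma*I) y = x and normalize.
    np.linalg.solve is a stand-in for the analog primitive."""
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.linalg.solve(M, x)
        x = y / np.linalg.norm(y)
    lam = x @ A @ x                        # Rayleigh quotient
    return lam, x

# Symmetric test matrix with eigenvalues 3 - sqrt(3), 3, and
# 3 + sqrt(3); the shift 4.5 selects the largest of the three.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = shifted_inverse_iteration(A, sigma=4.5)
```

Because each step renormalizes the vector, per-iteration errors do not accumulate, which is the self-correcting property that makes the method a good fit for a noisy analog primitive.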
  4. With ever-increasing volumes of scientific floating-point data being produced by high-performance computing applications, significantly reducing scientific floating-point data size is critical, and error-controlled lossy compressors have been developed for years. None of the existing scientific floating-point lossy data compressors, however, supports effective fixed-ratio lossy compression, which would not only compress to the requested ratio but also respect a user-specified error bound. In this paper, we present FRaZ: a generic fixed-ratio lossy compression framework respecting user-specified error constraints. The contribution is twofold. (1) We develop an efficient iterative approach to accurately determine the appropriate error settings for different lossy compressors based on target compression ratios. (2) We perform a thorough performance and accuracy evaluation of our proposed fixed-ratio compression framework with multiple state-of-the-art error-controlled lossy compressors, using several real-world scientific floating-point datasets from different domains. Experiments show that FRaZ effectively identifies the optimum error setting in the entire error-setting space of any given lossy compressor. While fixed-ratio lossy compression is slower than fixed-error compression, it provides an important new lossy compression technique for users of very large scientific floating-point datasets.
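The iterative error-setting search in contribution (1) can be illustrated with a simple bisection over the error bound. The `mock_compressed_size` function below is a hypothetical entropy-based stand-in, not SZ, ZFP, or any compressor FRaZ actually targets, and FRaZ's real search is more elaborate; this only shows the idea of iterating on the bound until the achieved ratio matches the target.

```python
import math
import numpy as np

def mock_compressed_size(data, err_bound):
    """Hypothetical error-bounded lossy compressor: quantize to the
    bound and charge the empirical entropy of the quantized symbols.
    A real compressor would be called here instead."""
    q = np.round(data / (2.0 * err_bound)).astype(np.int64)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    entropy_bits = -(p * np.log2(p)).sum() * data.size
    return max(entropy_bits / 8.0, 1.0)          # bytes

def find_error_bound(data, target_ratio, lo=1e-8, hi=1.0, iters=40):
    """Bisection (on a log scale) over the error bound until the
    achieved compression ratio matches the target; assumes the
    ratio grows monotonically with the bound."""
    raw_bytes = data.size * 8.0                  # float64 input
    for _ in range(iters):
        mid = math.sqrt(lo * hi)                 # geometric midpoint
        ratio = raw_bytes / mock_compressed_size(data, mid)
        if ratio < target_ratio:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

rng = np.random.default_rng(1)
data = rng.standard_normal(10_000)
eb = find_error_bound(data, target_ratio=16.0)
achieved = data.size * 8.0 / mock_compressed_size(data, eb)
```

Searching on a log scale reflects the fact that compression ratio typically varies smoothly with the logarithm of the error bound, so the geometric midpoint converges in far fewer probes than a linear search.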