


Title: Iterative eigensolver using fixed-point photonic primitive

Photonic computing has potential advantages in speed and energy consumption yet is subject to inaccuracy due to the limited equivalent bitwidth of the analog signal. In this Letter, we demonstrate a configurable, fixed-point coherent photonic iterative solver for numerical eigenvalue problems using shifted inverse iteration. The photonic primitive can accommodate arbitrarily sized sparse matrix–vector multiplication and is deployed to solve eigenmodes in a photonic waveguide structure. The photonic iterative eigensolver does not accumulate errors from each iteration, providing a path toward implementing scientific computing applications on photonic primitives.
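The shifted inverse iteration that the Letter implements photonically can be summarized, in conventional software terms, with a short sketch. The code below is a generic textbook implementation using SciPy sparse solves, not the fixed-point photonic primitive described above; the function name `shifted_inverse_iteration`, the shift value, and the small test matrix are illustrative assumptions.

```python
# Minimal sketch of shifted inverse iteration (textbook form), not the
# photonic implementation from the Letter. Uses SciPy sparse routines;
# all function and variable names here are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shifted_inverse_iteration(A, sigma, tol=1e-10, max_iter=100):
    """Find the eigenpair of sparse A whose eigenvalue is closest to sigma."""
    n = A.shape[0]
    # Factor (A - sigma*I) once; each iteration is then a sparse solve.
    lu = spla.splu((A - sigma * sp.identity(n, format="csc")).tocsc())
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = lu.solve(v)            # apply (A - sigma*I)^{-1}
        v_new = w / np.linalg.norm(w)
        lam = v_new @ (A @ v_new)  # Rayleigh quotient estimate
        if np.linalg.norm(A @ v_new - lam * v_new) < tol:
            return lam, v_new
        v = v_new
    return lam, v

# Example: eigenvalue of a small sparse matrix nearest to sigma = 2.7
A = sp.diags([1.0, 2.0, 3.0, 4.0])
lam, v = shifted_inverse_iteration(A.tocsc(), sigma=2.7)
```

Each iteration reduces to one sparse solve (equivalently, repeated sparse matrix-vector products in an iterative inner solver), which is the operation the photonic primitive accelerates.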

 
NSF-PAR ID: 10483601
Author(s) / Creator(s): ; ; ; ;
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optics Letters
Volume: 49
Issue: 2
ISSN: 0146-9592; OPLEDP
Format(s): Medium: X; Size: Article No. 194
Sponsoring Org: National Science Foundation
More Like this
  1. Solving linear systems, often accomplished by iterative algorithms, is a ubiquitous task in science and engineering. To accommodate the dynamic range and precision requirements, these iterative solvers are carried out on floating-point processing units, which are not efficient at handling large-scale matrix multiplications and inversions. Low-precision, fixed-point digital or analog processors consume only a fraction of the energy per operation compared with their floating-point counterparts, yet their current uses exclude iterative solvers because of the computational errors that accumulate under fixed-point arithmetic. In this work, we show that for a simple iterative algorithm, such as Richardson iteration, a fixed-point processor can provide the same convergence rate and achieve solutions beyond its native precision when combined with residual iteration. These results indicate that power-efficient computing platforms consisting of analog computing devices can be used to solve a broad range of problems without compromising speed or precision.
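As a rough illustration of the idea in this abstract, the sketch below simulates a low-precision fixed-point processor by quantizing the matrix-vector products inside Richardson iteration, and wraps that inner solver in residual (iterative-refinement) steps accumulated in double precision. The quantization scheme, step size, and function names are assumptions for illustration, not the paper's hardware or algorithmic details.

```python
# Minimal sketch (not the authors' hardware): Richardson iteration whose
# matrix-vector products are quantized to simulate a fixed-point processor,
# wrapped in residual (iterative-refinement) steps accumulated in float64.
import numpy as np

def quantize(x, bits=8, scale=1.0):
    """Round to a signed fixed-point grid with `bits` bits over roughly [-scale, scale]."""
    step = scale / (2 ** (bits - 1))
    return np.clip(np.round(x / step), -2 ** (bits - 1), 2 ** (bits - 1) - 1) * step

def richardson_fixed_point(A, b, omega, bits=8, inner_iters=50):
    """Inner solver: Richardson iteration with quantized products."""
    x = np.zeros_like(b)
    for _ in range(inner_iters):
        r = quantize(b - quantize(A @ x, bits), bits)
        x = x + omega * r
    return x

def residual_refinement(A, b, omega, outer_iters=10, bits=8):
    """Outer loop: accumulate corrections in double precision."""
    x = np.zeros_like(b)
    for _ in range(outer_iters):
        r = b - A @ x                      # residual in float64
        d = richardson_fixed_point(A, r / np.linalg.norm(r), omega, bits)
        x = x + np.linalg.norm(r) * d      # rescale and accumulate the correction
    return x

# Example on a small well-conditioned SPD system
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M.T @ M / 20 + np.eye(20)
b = rng.standard_normal(20)
omega = 1.0 / np.linalg.norm(A, 2)        # step size below 2 / lambda_max
x = residual_refinement(A, b, omega)
print(np.linalg.norm(A @ x - b))
```

The key point mirrored here is that the low-precision inner solver only needs to produce a coarse correction; the outer residual loop, carried out at higher precision, drives the accumulated solution beyond the native precision of the inner arithmetic.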
  2. Conventional computing architectures have no known efficient algorithms for combinatorial optimization tasks such as the Ising problem, which requires finding the ground state spin configuration of an arbitrary Ising graph. Physical Ising machines have recently been developed as an alternative to conventional exact and heuristic solvers; however, these machines typically suffer from decreased ground state convergence probability or universality for high edge-density graphs or arbitrary graph weights, respectively. We experimentally demonstrate a proof-of-principle integrated nanophotonic recurrent Ising sampler (INPRIS), using a hybrid scheme combining electronics and silicon-on-insulator photonics, that is capable of converging to the ground state of various four-spin graphs with high probability. The INPRIS results indicate that noise may be used as a resource to speed up the ground state search and to explore larger regions of the phase space, thus allowing one to probe noise-dependent physical observables. Since the recurrent photonic transformation that our machine imparts is a fixed function of the graph problem and therefore compatible with optoelectronic architectures that support GHz clock rates (such as passive or non-volatile photonic circuits that do not require reprogramming at each iteration), this work suggests the potential for future systems that could achieve orders-of-magnitude speedups in exploring the solution space of combinatorially hard problems.
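For context, the task the INPRIS samples can be stated compactly: minimize the Ising energy E(s) = -Σ_{i<j} J_ij s_i s_j over spin configurations s ∈ {-1, +1}^n. The toy code below brute-forces the ground state of a four-spin graph purely to make that objective concrete; it has nothing to do with the photonic hardware, and the coupling matrix shown is an arbitrary example.

```python
# Toy illustration of the Ising problem (not the INPRIS machine itself):
# exhaustively find the ground state of a 4-spin graph.
import itertools
import numpy as np

def ising_energy(spins, J):
    """E(s) = -sum_{i<j} J_ij * s_i * s_j for spins s_i in {-1, +1}."""
    s = np.asarray(spins, dtype=float)
    return -0.5 * s @ J @ s        # J symmetric with zero diagonal

def ground_state(J):
    """Brute-force search over all 2^n spin configurations."""
    n = J.shape[0]
    best = min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, J))
    return np.array(best), ising_energy(best, J)

# Fully connected 4-spin antiferromagnet (illustrative weights)
J = np.array([[0, -1, -1, -1],
              [-1, 0, -1, -1],
              [-1, -1, 0, -1],
              [-1, -1, -1, 0]], dtype=float)
spins, energy = ground_state(J)
print(spins, energy)
```

Exhaustive search scales as 2^n, which is exactly why heuristic samplers and physical Ising machines such as the INPRIS are of interest for larger graphs.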

  3. One of the key advantages of visual analytics is its capability to leverage both human visual perception and the power of computing. A major obstacle to integrating machine learning with visual analytics is the high computational cost of machine learning. To tackle this problem, this paper presents PIVE (Per-Iteration Visualization Environment), which supports real-time interactive visualization with machine learning. By immediately visualizing the intermediate results from algorithm iterations, PIVE enables users to quickly grasp insights and interact with the intermediate output, which then affects subsequent algorithm iterations. In addition, we propose a widely applicable interaction methodology that allows efficient incorporation of user feedback into virtually any iterative computational method without introducing additional computational cost. We demonstrate the application of PIVE to dimension reduction algorithms such as multidimensional scaling and t-SNE, as well as clustering and topic modeling algorithms such as k-means and latent Dirichlet allocation.
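A minimal sketch of the per-iteration hook idea, not PIVE's actual API: a k-means loop that hands its intermediate centers and assignments to a callback after each iteration, where a visualization layer or user feedback could inspect or adjust them. All function names and the hook signature are illustrative assumptions.

```python
# Generic sketch of a per-iteration hook (not PIVE's API): k-means exposes
# intermediate results to a callback after every iteration, and the callback
# may return adjusted centers (e.g., a user dragging a centroid in a plot).
import numpy as np

def kmeans_with_hook(X, k, on_iteration, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for t in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Expose intermediate state; the hook may return adjusted centers.
        adjusted = on_iteration(t, centers, labels)
        if adjusted is not None:
            centers = adjusted
        # Standard centroid update (keep old center if a cluster is empty).
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

# Example hook that just reports progress instead of drawing a plot.
X = np.random.default_rng(1).standard_normal((200, 2))
centers, labels = kmeans_with_hook(
    X, k=3, on_iteration=lambda t, c, l: print(f"iter {t}: cluster sizes",
                                               np.bincount(l, minlength=3)))
```

Because the hook only observes (and optionally overrides) state the algorithm already computes, it adds no extra computational cost per iteration, which is the property the abstract emphasizes.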
  4. In this paper, we consider iterative methods based on sampling for computing solutions to separable nonlinear inverse problems where the entire dataset cannot be accessed or is not available all at once. In such scenarios (e.g., when massive amounts of data exceed memory capabilities or when data is being streamed), solving inverse problems, especially nonlinear ones, can be very challenging. We focus on separable nonlinear problems, where the objective function is nonlinear in one (typically small) set of parameters and linear in another (larger) set of parameters. For the linear problem, we describe a limited-memory sampled Tikhonov method, and for the nonlinear problem, we describe an approach to integrate the limited-memory sampled Tikhonov method within a nonlinear optimization framework. The proposed method is computationally efficient in that it only uses available data at any given iteration to update both sets of parameters. Numerical experiments on massive super-resolution image reconstruction problems show the power of these methods.
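To make the separable structure concrete, the sketch below alternates a Tikhonov-regularized solve for the linear parameters on a sampled data block with a gradient step on the nonlinear parameters, for a toy exponential-fitting model b ≈ A(y)x. This illustrates the general sample-based, separable setup only; it is not the paper's limited-memory sampled Tikhonov method, and the model, step sizes, and regularization parameter are assumptions.

```python
# Generic sketch of sample-based alternating updates for a separable
# nonlinear least-squares model b ~ A(y) x (exponential fitting here);
# this shows the structure only, not the paper's limited-memory sampled
# Tikhonov algorithm. All names, step sizes, and parameters are illustrative.
import numpy as np

def A_of_y(t, y):
    """Design matrix with columns exp(-y_j * t): nonlinear in y, linear in x."""
    return np.exp(-np.outer(t, y))

def sampled_separable_solver(t, b, y0, n_epochs=200, batch=32,
                             lam=1e-3, step_y=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y = y0.copy()
    x = np.zeros_like(y)
    for _ in range(n_epochs):
        idx = rng.choice(len(t), size=batch, replace=False)   # one data sample
        ts, bs = t[idx], b[idx]
        As = A_of_y(ts, y)
        # Linear step: Tikhonov-regularized solve using the sampled block only.
        x = np.linalg.solve(As.T @ As + lam * np.eye(len(y)), As.T @ bs)
        # Nonlinear step: gradient of the sampled misfit with respect to y.
        r = As @ x - bs
        grad_y = np.array([r @ (-ts * As[:, j]) * x[j]
                           for j in range(len(y))]) / len(idx)
        y = y - step_y * grad_y
    return x, y

# Synthetic test: two decaying exponentials observed at 500 time points.
rng = np.random.default_rng(2)
t = np.linspace(0, 5, 500)
b = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t) \
    + 0.01 * rng.standard_normal(len(t))
x, y = sampled_separable_solver(t, b, y0=np.array([0.3, 1.5]))
```

The point mirrored here is that each iteration touches only the currently available data block, yet both the linear and the nonlinear parameter sets are updated.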
  5. In second-order optimization, a potential bottleneck can be computing the Hessian matrix of the optimized function at every iteration. Randomized sketching has emerged as a powerful technique for constructing estimates of the Hessian that can be used to perform approximate Newton steps. This involves multiplication by a random sketching matrix, which introduces a trade-off between the computational cost of sketching and the convergence rate of the optimization algorithm. A theoretically desirable but practically much too expensive choice is a dense Gaussian sketching matrix, which produces unbiased estimates of the exact Newton step and offers strong problem-independent convergence guarantees. We show that the Gaussian sketching matrix can be drastically sparsified, significantly reducing the computational cost of sketching, without substantially affecting its convergence properties. This approach, called Newton-LESS, is based on a recently introduced sketching technique: LEverage Score Sparsified (LESS) embeddings. We prove that Newton-LESS enjoys nearly the same problem-independent local convergence rate as Gaussian embeddings, not just up to constant factors but even down to lower-order terms, for a large class of optimization tasks. In particular, this leads to a new state-of-the-art convergence result for an iterative least-squares solver. Finally, we extend LESS embeddings to include uniformly sparsified random sign matrices, which can be implemented efficiently and perform well in numerical experiments.
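As a rough sketch of the sketch-and-solve recipe behind this line of work, the code below builds approximate Newton steps for a least-squares objective from a uniformly sparsified random sign sketch of the data matrix (whose Gram matrix estimates the Hessian AᵀA). This is a generic illustration under our own scaling and sparsity choices, not the LESS embedding construction or the Newton-LESS analysis from the paper.

```python
# Hedged sketch of approximate Newton steps for least squares using a
# uniformly sparsified random sign sketching matrix; a generic
# sketch-and-solve illustration, not the paper's LESS embeddings.
import numpy as np

def sparse_sign_sketch(m, n, nnz_per_row, rng):
    """m x n sketch: each row has nnz_per_row random +/- entries, scaled so E[S^T S] = I."""
    S = np.zeros((m, n))
    scale = np.sqrt(n / (m * nnz_per_row))
    for k in range(m):
        cols = rng.choice(n, size=nnz_per_row, replace=False)
        S[k, cols] = rng.choice([-1.0, 1.0], size=nnz_per_row) * scale
    return S

def sketched_newton_lstsq(A, b, sketch_rows=200, nnz_per_row=4, iters=10, seed=0):
    """Minimize 0.5*||Ax - b||^2 with Newton steps built from a sketched Hessian."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = sparse_sign_sketch(sketch_rows, n, nnz_per_row, rng)  # fresh sketch
        SA = S @ A                                # sketched data matrix
        H = SA.T @ SA                             # sketched estimate of A^T A
        g = A.T @ (A @ x - b)                     # exact gradient
        x = x - np.linalg.solve(H, g)             # approximate Newton step
    return x

# Example: tall least-squares problem
rng = np.random.default_rng(3)
A = rng.standard_normal((5000, 50))
b = rng.standard_normal(5000)
x = sketched_newton_lstsq(A, b)
print(np.linalg.norm(A.T @ (A @ x - b)))          # gradient norm should shrink
```

The trade-off named in the abstract is visible here: fewer nonzeros per sketch row make S @ A cheaper to apply, while a denser (e.g., Gaussian) sketch makes H a more faithful Hessian estimate and speeds convergence.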