Photonic computing has potential advantages in speed and energy consumption yet is subject to inaccuracy due to the limited equivalent bitwidth of the analog signal. In this Letter, we demonstrate a configurable, fixed-point coherent photonic iterative solver for numerical eigenvalue problems using shifted inverse iteration. The photonic primitive can accommodate arbitrarily sized sparse matrix–vector multiplication and is deployed to solve eigenmodes in a photonic waveguide structure. The photonic iterative eigensolver does not accumulate errors from each iteration, providing a path toward implementing scientific computing applications on photonic primitives.
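The shifted inverse iteration underlying the solver can be sketched in a few lines of plain Python. The matrix, shift, and 2x2 Cramer's-rule solver below are illustrative choices for a minimal runnable sketch, not details taken from the Letter:

```python
import math

def solve2(M, b):
    # Cramer's rule for a 2x2 linear system M x = b
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def shifted_inverse_iteration(A, sigma, iters=50):
    # Repeatedly solve (A - sigma*I) w = v and renormalize; v converges to
    # the eigenvector whose eigenvalue lies closest to the shift sigma.
    M = [[A[0][0] - sigma, A[0][1]],
         [A[1][0], A[1][1] - sigma]]
    v = [1.0, 0.0]
    for _ in range(iters):
        w = solve2(M, v)
        n = math.hypot(w[0], w[1])
        v = [w[0] / n, w[1] / n]
    # The Rayleigh quotient v^T A v recovers the eigenvalue itself
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    return v[0] * Av[0] + v[1] * Av[1]

A = [[2.0, 1.0], [1.0, 3.0]]
lam = shifted_inverse_iteration(A, sigma=1.0)  # eigenvalue nearest the shift 1.0
```

Each pass amplifies the eigencomponent nearest the shift by the largest eigenvalue of (A - sigma*I)^-1, which is what makes the iteration tolerant of per-step inaccuracy: the normalization discards accumulated error in the other components.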
NSF PAR ID: 10483601
Publisher / Repository: Optical Society of America
Journal Name: Optics Letters
Volume: 49
Issue: 2
ISSN: 0146-9592; CODEN: OPLEDP
Format(s): Medium: X; Size: Article No. 194
Sponsoring Org: National Science Foundation
More Like this

Abstract: Solving linear systems, often accomplished by iterative algorithms, is a ubiquitous task in science and engineering. To accommodate the dynamic range and precision requirements, these iterative solvers are carried out on floating-point processing units, which are not efficient at handling large-scale matrix multiplications and inversions. Low-precision, fixed-point digital or analog processors consume only a fraction of the energy per operation compared with their floating-point counterparts, yet their current uses exclude iterative solvers because of the cumulative computational errors arising from fixed-point arithmetic. In this work, we show that for a simple iterative algorithm, such as Richardson iteration, a fixed-point processor can provide the same convergence rate and achieve solutions beyond its native precision when combined with residual iteration. These results indicate that power-efficient computing platforms consisting of analog computing devices can be used to solve a broad range of problems without compromising speed or precision.
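The pairing of a low-precision inner solver with high-precision residual refinement can be illustrated with a simulated fixed-point Richardson iteration. Everything below (the quantizer, the 8-bit fractional width, the relaxation parameter, and the residual rescaling) is a hypothetical sketch of the general technique, not the authors' hardware or parameter choices:

```python
def quantize(x, bits=8):
    # Round to `bits` fractional bits, emulating a fixed-point signal
    scale = 1 << bits
    return round(x * scale) / scale

def matvec_fixed(A, v, bits):
    # Matrix-vector product whose outputs pass through the quantizer,
    # standing in for a low-precision analog/digital primitive
    return [quantize(sum(a * x for a, x in zip(row, v)), bits) for row in A]

def richardson_refined(A, b, omega=0.2, bits=8, outer=20, inner=30):
    # Outer loop: residual iteration at full precision.
    # Inner loop: Richardson iteration run entirely at fixed-point precision.
    n = len(b)
    x = [0.0] * n                          # high-precision accumulator
    for _ in range(outer):
        Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        r = [bi - axi for bi, axi in zip(b, Ax)]
        s = max(abs(ri) for ri in r) or 1.0
        rs = [ri / s for ri in r]          # rescale residual to the full range
        d = [0.0] * n
        for _ in range(inner):
            Ad = matvec_fixed(A, d, bits)
            d = [quantize(di + omega * (ri - adi), bits)
                 for di, ri, adi in zip(d, rs, Ad)]
        x = [xi + s * di for xi, di in zip(x, d)]   # undo the rescaling
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x = richardson_refined(A, b)  # exact solution of A x = b is [0.8, 1.4]
```

The rescaling of the residual is the key step: each outer pass only asks the fixed-point primitive for a unit-scale correction, so the accumulated solution in x can become far more accurate than the quantizer's native step size.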

Conventional computing architectures have no known efficient algorithms for combinatorial optimization tasks such as the Ising problem, which requires finding the ground-state spin configuration of an arbitrary Ising graph. Physical Ising machines have recently been developed as an alternative to conventional exact and heuristic solvers; however, these machines typically suffer from decreased ground-state convergence probability on high-edge-density graphs, or from a lack of universality for arbitrary graph weights. We experimentally demonstrate a proof-of-principle integrated nanophotonic recurrent Ising sampler (INPRIS), using a hybrid scheme combining electronics and silicon-on-insulator photonics, that is capable of converging to the ground state of various four-spin graphs with high probability. The INPRIS results indicate that noise may be used as a resource to speed up the ground-state search and to explore larger regions of the phase space, thus allowing one to probe noise-dependent physical observables. Since the recurrent photonic transformation that our machine imparts is a fixed function of the graph problem, it is compatible with optoelectronic architectures that support GHz clock rates, such as passive or non-volatile photonic circuits that do not require reprogramming at each iteration. This work therefore suggests the potential for future systems that could achieve orders-of-magnitude speedups in exploring the solution space of combinatorially hard problems.
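For the four-spin graphs at the scale demonstrated, the Ising ground state the machine must find can be verified by exhaustive enumeration. The antiferromagnetic ring below is a hypothetical example graph chosen for illustration, not one of the graphs from the experiment:

```python
from itertools import product

def ising_energy(J, spins):
    # E(s) = -sum_{i<j} J[i][j] * s_i * s_j for an arbitrary weighted graph
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def ground_state(J):
    # Exhaustive search over all 2^n spin configurations; tractable at n = 4,
    # the graph size used in the proof-of-principle demonstration
    n = len(J)
    return min(product([-1, 1], repeat=n), key=lambda s: ising_energy(J, s))

# Hypothetical 4-spin antiferromagnetic ring: J = -1 on each ring edge
J = [[ 0, -1,  0, -1],
     [-1,  0, -1,  0],
     [ 0, -1,  0, -1],
     [-1,  0, -1,  0]]
s = ground_state(J)  # an alternating (antiferromagnetic) configuration
```

The exponential growth of this search space with spin count is precisely why heuristic physical samplers such as the INPRIS are of interest for larger graphs.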

One of the key advantages of visual analytics is its capability to leverage both humans' visual perception and the power of computing. A major obstacle to integrating machine learning with visual analytics is the high computational cost of machine learning. To tackle this problem, this paper presents PIVE (Per-Iteration Visualization Environment), which supports real-time interactive visualization with machine learning. By immediately visualizing the intermediate results from algorithm iterations, PIVE enables users to quickly grasp insights and interact with the intermediate output, which then affects subsequent algorithm iterations. In addition, we propose a widely applicable interaction methodology that allows efficient incorporation of user feedback into virtually any iterative computational method without introducing additional computational cost. We demonstrate the application of PIVE to various dimension reduction algorithms, such as multidimensional scaling and t-SNE, and clustering and topic modeling algorithms, such as k-means and latent Dirichlet allocation.
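The per-iteration interaction loop described here can be sketched as an iterative algorithm that surfaces its intermediate state through a callback, which may also feed user adjustments back into later iterations. The minimal 1-D k-means below is a stand-in for illustration, not the PIVE implementation:

```python
def kmeans_with_callback(points, k, iters, on_iteration):
    # A minimal 1-D k-means whose intermediate state is exposed to a
    # visualization/feedback callback after every iteration; the callback
    # may return adjusted centroids to steer subsequent iterations.
    centroids = points[:k]
    for it in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[j].append(p)
        # update step: recompute centroids from cluster members
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
        feedback = on_iteration(it, centroids, clusters)
        if feedback is not None:   # user override from the visual front end
            centroids = feedback
    return centroids

history = []
pts = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
final = kmeans_with_callback(
    pts, k=2, iters=5,
    on_iteration=lambda it, c, cl: history.append(list(c)))
```

The callback adds no work beyond copying the current state, which mirrors the paper's claim that per-iteration feedback can be incorporated without extra computational cost.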

In this paper, we consider iterative methods based on sampling for computing solutions to separable nonlinear inverse problems where the entire dataset cannot be accessed or is not available all at once. In such scenarios (e.g., when massive amounts of data exceed memory capabilities or when data is being streamed), solving inverse problems, especially nonlinear ones, can be very challenging. We focus on separable nonlinear problems, where the objective function is nonlinear in one (typically small) set of parameters and linear in another (larger) set of parameters. For the linear problem, we describe a limited-memory sampled Tikhonov method, and for the nonlinear problem, we describe an approach to integrate the limited-memory sampled Tikhonov method within a nonlinear optimization framework. The proposed method is computationally efficient in that it only uses available data at any given iteration to update both sets of parameters. Numerical experiments applied to massive super-resolution image reconstruction problems show the power of these methods.
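One way to picture a sampled solver that never touches the whole dataset at once is a stochastic-gradient pass over the Tikhonov objective, where each update sees only a mini-batch of rows. This is a simplified stand-in for the limited-memory sampled Tikhonov method, with illustrative step size, batch size, and regularization values, not the authors' algorithm:

```python
import random

def sampled_tikhonov(rows, rhs, lam=0.1, step=0.05, epochs=200, batch=2, seed=0):
    # Stochastic-gradient sketch of Tikhonov regularization,
    #   min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2,
    # where each update touches only a sampled mini-batch of data rows, so
    # the full dataset never has to be resident in memory at once.
    rng = random.Random(seed)
    m, n = len(rows), len(rows[0])
    x = [0.0] * n
    for _ in range(epochs):
        idx = rng.sample(range(m), batch)    # the rows available this pass
        grad = [lam * xi for xi in x]        # gradient of the penalty term
        for i in idx:
            r = sum(a * xi for a, xi in zip(rows[i], x)) - rhs[i]
            for j in range(n):
                grad[j] += (m / batch) * rows[i][j] * r  # unbiased data term
        x = [xi - step * g for xi, g in zip(x, grad)]
    return x

rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
rhs = [1.0, 1.0, 1.0, 1.0]
x = sampled_tikhonov(rows, rhs)  # Tikhonov solution here is 2/2.1 per coordinate
```

The (m/batch) rescaling makes the sampled data term an unbiased estimate of the full gradient, which is what lets a per-batch update track the full-data Tikhonov solution.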

In second-order optimization, a potential bottleneck is computing the Hessian matrix of the objective function at every iteration. Randomized sketching has emerged as a powerful technique for constructing estimates of the Hessian that can be used to perform approximate Newton steps. This involves multiplication by a random sketching matrix, which introduces a trade-off between the computational cost of sketching and the convergence rate of the optimization algorithm. A theoretically desirable but practically far too expensive choice is a dense Gaussian sketching matrix, which produces unbiased estimates of the exact Newton step and offers strong problem-independent convergence guarantees. We show that the Gaussian sketching matrix can be drastically sparsified, significantly reducing the computational cost of sketching, without substantially affecting its convergence properties. This approach, called Newton-LESS, is based on a recently introduced sketching technique: LEverage Score Sparsified (LESS) embeddings. We prove that Newton-LESS enjoys nearly the same problem-independent local convergence rate as Gaussian embeddings, not just up to constant factors but even down to lower-order terms, for a large class of optimization tasks. In particular, this leads to a new state-of-the-art convergence result for an iterative least-squares solver. Finally, we extend LESS embeddings to include uniformly sparsified random sign matrices, which can be implemented efficiently and perform well in numerical experiments.
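The uniformly sparsified random-sign construction mentioned in the final sentence can be sketched directly: each sketch row carries a fixed number of scaled ±1 entries, so applying the sketch costs only the nonzeros per row, while the scaling keeps E[S^T S] = I and hence the Gram estimate unbiased in expectation. The dimensions and test matrix below are illustrative choices, not from the paper:

```python
import random

def sparse_sign_sketch(m, n, s, seed=0):
    # An m x n sketching matrix with exactly s random-sign nonzeros per row,
    # scaled so that E[S^T S] = I: a uniformly sparsified random-sign
    # embedding of the kind described at the end of the abstract.
    rng = random.Random(seed)
    scale = (n / (m * s)) ** 0.5
    S = []
    for _ in range(m):
        row = [0.0] * n
        for j in rng.sample(range(n), s):     # s nonzero positions per row
            row[j] = rng.choice([-scale, scale])
        S.append(row)
    return S

def sketch_gram(S, A):
    # Approximate the Gram/Hessian matrix A^T A by (S A)^T (S A); forming
    # S A costs only the s nonzeros per sketch row, versus a fully dense
    # Gaussian sketch.
    SA = [[sum(S[i][k] * A[k][j] for k in range(len(A)))
           for j in range(len(A[0]))] for i in range(len(S))]
    n = len(A[0])
    return [[sum(SA[i][p] * SA[i][q] for i in range(len(SA)))
             for q in range(n)] for p in range(n)]

S = sparse_sign_sketch(m=6, n=4, s=2)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
H = sketch_gram(S, A)   # sketched estimate of the 2x2 matrix A^T A
```

In a Newton-sketch setting, H would stand in for the Hessian of a least-squares objective when computing an approximate Newton step; LESS embeddings refine this picture by choosing the nonzero positions via leverage scores rather than uniformly.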