Title: A High-Speed Photonic Tensor Accelerator
We propose a coherent multi-dimensional (wavelength, spatial mode, polarization, etc.) photonic tensor accelerator capable of matrix-vector, matrix-matrix, and batch matrix multiplications in a single clock cycle. A proof-of-concept 2×2 matrix-matrix multiplication at 25 GBd with 4.67-bit precision was experimentally demonstrated.
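For intuition, here is a minimal software sketch of the parallelism being claimed: each entry of a 2×2 matrix-matrix product is an independent inner product, so an accelerator that assigns one multiplexed channel (e.g., a wavelength/spatial-mode/polarization combination) to each entry can form all of them simultaneously. The channel layout below is an illustrative assumption, not the authors' architecture.

```python
# Sketch only: mimics, in software, how each output entry of a 2x2
# matrix-matrix product maps to its own multiplexed channel, so all
# entries are formed in parallel rather than sequentially.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # weight matrix, encoded in the modulators
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])   # input matrix, encoded on optical carriers

# One "channel" per output entry: channel (i, j) carries A[i, :] . B[:, j].
channels = {(i, j): np.dot(A[i, :], B[:, j])
            for i in range(2) for j in range(2)}

C = np.array([[channels[(i, j)] for j in range(2)] for i in range(2)])
assert np.allclose(C, A @ B)   # matches the electronic reference result
print(C)
```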
Award ID(s):
1932858
PAR ID:
10451777
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Photonics Conference
Page Range / eLocation ID:
1 to 2
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Can one recover a matrix efficiently from only matrix-vector products? If so, how many are needed? This article describes algorithms to recover matrices with known structures, such as tridiagonal, Toeplitz, Toeplitz-like, and hierarchical low-rank, from matrix-vector products. In particular, we derive a randomized algorithm for recovering an unknown n × n hierarchical low-rank matrix from only O((k + p) log(n)) matrix-vector products with high probability, where k is the rank of the off-diagonal blocks and p is a small oversampling parameter. We do this by carefully constructing randomized input vectors for our matrix-vector products that exploit the hierarchical structure of the matrix. While existing algorithms for hierarchical matrix recovery use a recursive "peeling" procedure based on elimination, our approach uses a recursive projection procedure.
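The simplest structure listed above already shows the flavor of the recovery problem: a tridiagonal matrix can be recovered exactly from three matrix-vector products, because columns whose indices differ by three touch disjoint sets of rows. A minimal sketch of this classic probing idea (not the paper's hierarchical algorithm):

```python
# Recover a hidden tridiagonal matrix from 3 matrix-vector products.
import numpy as np

n = 8
rng = np.random.default_rng(0)
# Hidden tridiagonal matrix, accessible only through v -> A @ v.
A = (np.diag(rng.standard_normal(n))
     + np.diag(rng.standard_normal(n - 1), 1)
     + np.diag(rng.standard_normal(n - 1), -1))
matvec = lambda v: A @ v

recovered = np.zeros((n, n))
for c in range(3):
    v = np.zeros(n)
    v[c::3] = 1.0               # "comb" probe: ones at c, c+3, c+6, ...
    y = matvec(v)
    for j in range(c, n, 3):
        # Column j only touches rows j-1, j, j+1, and no other probed
        # column touches those rows, so y there reads off A[:, j].
        for i in range(max(j - 1, 0), min(j + 2, n)):
            recovered[i, j] = y[i]

assert np.allclose(recovered, A)
```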
  2. Depending on the node ordering, an adjacency matrix can highlight distinct characteristics of a graph. Deriving a "proper" node ordering is thus a critical step in visualizing a graph as an adjacency matrix. Users often try multiple matrix reorderings using different methods until they find one that meets the analysis goal. However, this trial-and-error approach is laborious and disorganized, which is especially challenging for novices. This paper presents a technique that enables users to effortlessly find a matrix reordering they want. Specifically, we design a generative model that learns a latent space of diverse matrix reorderings of the given graph. We also construct an intuitive user interface from the learned latent space by creating a map of various matrix reorderings. We demonstrate our approach through quantitative and qualitative evaluations of the generated reorderings and learned latent spaces. The results show that our model is capable of learning a latent space of diverse matrix reorderings. Most existing research in this area has focused on developing algorithms that can compute "better" matrix reorderings for particular circumstances. This paper introduces a fundamentally new approach to matrix visualization of a graph, where a machine learning model learns to generate diverse matrix reorderings of a graph.
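The basic operation the model searches over is easy to state: a node ordering is a permutation p, and the reordered adjacency matrix is A[p, :][:, p]. A minimal sketch with an invented two-community graph:

```python
# A good ordering can expose block structure the original ordering hides.
import numpy as np

# Adjacency matrix of a graph with communities {0, 2, 4} and {1, 3, 5},
# entered in an ordering that interleaves them.
A = np.array([[0, 0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0]])

p = np.array([0, 2, 4, 1, 3, 5])    # reorder nodes community by community
A_reordered = A[np.ix_(p, p)]       # simultaneous row/column permutation
print(A_reordered)                  # two diagonal blocks become visible
```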
  3. The Flow matrix is a novel method to describe and extrapolate transitions among categories. The Flow matrix extrapolates a constant transition size per unit of time on a time continuum with a maximum of one incident per observation during the extrapolation. The Flow matrix extrapolates linearly until the persistence of a category shrinks to zero. The Flow matrix has concepts and mathematics that are more straightforward than the Markov matrix. However, many scientists apply the Markov matrix by default because popular software packages offer no alternative to the Markov matrix, despite the conceptual and mathematical challenges that the Markov matrix poses. The Markov matrix extrapolates a constant transition proportion per time interval during whole-number multiples of the duration of the calibration time interval. The Markov extrapolation allows at most one incident per observation during each time interval but allows repeated incidents per observation through sequential time intervals. Many Markov extrapolations approach a steady state asymptotically through time as each category size approaches a constant. We use case studies concerning land change to illustrate the characteristics of the Flow and Markov matrices. The Flow and Markov extrapolations both deviate from the reference data during a validation time interval, implying there is no reason to prefer one matrix to the other in terms of correspondence with the processes that we analyzed. The two matrices differ substantially in terms of their underlying concepts and mathematical behaviors. Scientists should consider the ease of use and interpretation for each matrix when extrapolating transitions among categories. 
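A minimal numerical sketch of the contrast described above, with invented two-category land-change numbers (see the paper for the exact definition of the Flow matrix):

```python
# Markov: constant transition *proportion* per interval (matrix powers).
# Flow-style: constant transition *size* per unit time (linear), until a
# category's persistence shrinks to zero. Numbers are invented.
import numpy as np

sizes0 = np.array([80.0, 20.0])     # initial sizes of categories A and B

# Column-stochastic Markov matrix: 10% of A converts to B each interval.
M = np.array([[0.90, 0.00],
              [0.10, 1.00]])

# Flow-style net change: 8 units of A become B per time step.
flow = np.array([-8.0, 8.0])

for t in range(1, 6):
    markov = np.linalg.matrix_power(M, t) @ sizes0
    flow_sizes = np.clip(sizes0 + t * flow, 0.0, None)
    flow_sizes[1] = sizes0.sum() - flow_sizes[0]   # conserve total area
    print(t, markov.round(2), flow_sizes.round(2))
# Markov decays geometrically toward a steady state (A -> 0 only
# asymptotically); the Flow extrapolation decreases A by a constant 8 per
# step and would reach zero exactly at t = 10.
```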
  4. Abstract We present a general class of machine learning algorithms called parametric matrix models. In contrast with most existing machine learning models that imitate the biology of neurons, parametric matrix models use matrix equations that emulate physical systems. Similar to how physics problems are usually solved, parametric matrix models learn the governing equations that lead to the desired outputs. Parametric matrix models can be efficiently trained from empirical data, and the equations may use algebraic, differential, or integral relations. While originally designed for scientific computing, we prove that parametric matrix models are universal function approximators that can be applied to general machine learning problems. After introducing the underlying theory, we apply parametric matrix models to a series of different challenges that show their performance for a wide range of problems. For all the challenges tested here, parametric matrix models produce accurate results within an efficient and interpretable computational framework that allows for input feature extrapolation. 
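A minimal sketch of the idea, under an assumed parametrization that is illustrative rather than the paper's construction: model a scalar function f(x) as the smallest eigenvalue of a learned matrix pencil M(x) = A + xB, and train A and B by gradient descent using the Hellmann-Feynman identity dλ/dM = vvᵀ:

```python
# Fit a toy function as the smallest eigenvalue of a learned matrix pencil.
# Parametrization is an illustrative assumption, not the paper's exact one.
import numpy as np

rng = np.random.default_rng(1)
d = 3                                   # size of the learned matrices

def sym(P):                             # use only the symmetric part
    return (P + P.T) / 2

A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
xs = np.linspace(-1.0, 1.0, 20)
ys = 1.0 - xs**2                        # concave toy target (the smallest
                                        # eigenvalue of A + x*B is concave in x)

lr = 0.1
for _ in range(2000):
    gA = np.zeros((d, d))
    gB = np.zeros((d, d))
    for x, y in zip(xs, ys):
        w, V = np.linalg.eigh(sym(A) + x * sym(B))
        v = V[:, 0]                     # eigenvector of smallest eigenvalue
        r = w[0] - y                    # residual of the model prediction
        gA += r * np.outer(v, v)        # Hellmann-Feynman: dlambda/dM = v v^T
        gB += r * x * np.outer(v, v)
    A -= lr * gA / len(xs)
    B -= lr * gB / len(xs)

pred = np.array([np.linalg.eigh(sym(A) + x * sym(B))[0][0] for x in xs])
print(np.max(np.abs(pred - ys)))        # max fit error on the toy inputs
```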
  5. We describe a randomized algorithm for producing a near-optimal hierarchical off-diagonal low-rank (HODLR) approximation to an n × n matrix A, accessible only through matrix-vector products with A and A^T. We prove that, for the rank-k HODLR approximation problem, our method achieves a (1 + β)^(log(n))-optimal approximation in expected Frobenius norm using O(k log(n)/β^3) matrix-vector products. In particular, the algorithm obtains a (1 + ε)-optimal approximation with O(k log^4(n)/ε^3) matrix-vector products, and for any constant c, an n^c-optimal approximation with O(k log(n)) matrix-vector products. Apart from matrix-vector products, the additional computational cost of our method is just O(n poly(log(n), k, β)). We complement the upper bound with a lower bound, which shows that any matrix-vector query algorithm requires at least Ω(k log(n) + k/ε) queries to obtain a (1 + ε)-optimal approximation. Our algorithm can be viewed as a robust version of widely used "peeling" methods for recovering HODLR matrices and is, to the best of our knowledge, the first matrix-vector query algorithm to enjoy theoretical worst-case guarantees for approximation by any hierarchical matrix class. To control the propagation of error between levels of hierarchical approximation, we introduce a new perturbation bound for low-rank approximation, which shows that the widely used Generalized Nyström method enjoys inherent stability when implemented with noisy matrix-vector products. We also introduce a novel randomly perforated matrix sketching method to further control the error in the peeling algorithm.
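The Generalized Nyström method named above is simple to state in isolation; a minimal sketch of the standard formulation (not the paper's full peeling algorithm), using only products with A and A^T, which is exactly the access model of the paper:

```python
# Generalized Nystrom: A_hat = (A Om) (Psi^T A Om)^+ (Psi^T A),
# built entirely from products with A and A^T.
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 200, 10, 5                        # size, target rank, oversampling

# Test matrix with rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.5 ** np.arange(n)) @ V.T

Om = rng.standard_normal((n, k + p))        # right sketch
Psi = rng.standard_normal((n, k + 2 * p))   # slightly larger left sketch

Y = A @ Om                                  # k+p products with A
Z = (A.T @ Psi).T                           # k+2p products with A^T = Psi^T A
A_hat = Y @ np.linalg.pinv(Psi.T @ Y) @ Z

err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(err)   # relative Frobenius error of the low-rank approximation
```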