Title: A High-Speed Photonic Tensor Accelerator
We propose a coherent multi-dimensional (wavelength, spatial mode, polarization, etc.) photonic tensor accelerator capable of matrix-vector, matrix-matrix, and batch matrix multiplications in a single clock cycle. A proof-of-concept 2x2 matrix-matrix multiplication at 25 GBd with 4.67-bit precision was experimentally demonstrated.
Award ID(s):
1932858
NSF-PAR ID:
10451777
Journal Name:
IEEE Photonics Conference
Page Range / eLocation ID:
1 to 2
Sponsoring Org:
National Science Foundation
More Like this
  1. Depending on the node ordering, an adjacency matrix can highlight distinct characteristics of a graph. Deriving a "proper" node ordering is thus a critical step in visualizing a graph as an adjacency matrix. Users often try multiple matrix reorderings using different methods until they find one that meets the analysis goal. However, this trial-and-error approach is laborious and disorganized, which is especially challenging for novices. This paper presents a technique that enables users to effortlessly find a matrix reordering they want. Specifically, we design a generative model that learns a latent space of diverse matrix reorderings of the given graph. We also construct an intuitive user interface from the learned latent space by creating a map of various matrix reorderings. We demonstrate our approach through quantitative and qualitative evaluations of the generated reorderings and learned latent spaces. The results show that our model is capable of learning a latent space of diverse matrix reorderings. Most existing research in this area has focused on developing algorithms that compute "better" matrix reorderings for particular circumstances. This paper introduces a fundamentally new approach to matrix visualization of a graph, where a machine learning model learns to generate diverse matrix reorderings of a graph.
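As a minimal illustration of the operation such a latent space ranges over (the generative model itself is beyond a short sketch), the following Python snippet applies one candidate node ordering to a toy adjacency matrix; the graph and ordering are invented for illustration.

```python
import numpy as np

# Toy graph (a 4-cycle: edges 0-1, 0-2, 1-3, 2-3) whose default
# node ordering hides its band structure.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])

# One candidate reordering among the many a user might try.
order = [0, 2, 3, 1]

# Permute rows and columns together: same graph, different picture.
A_reordered = A[np.ix_(order, order)]
print(A_reordered)  # now banded, making the cycle visible
```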
  2. Can one recover a matrix efficiently from only matrix-vector products? If so, how many are needed? This article describes algorithms to recover matrices with known structures, such as tridiagonal, Toeplitz, Toeplitz-like, and hierarchical low-rank, from matrix-vector products. In particular, we derive a randomized algorithm for recovering an unknown N x N hierarchical low-rank matrix from only O((k + p) log N) matrix-vector products with high probability, where k is the rank of the off-diagonal blocks and p is a small oversampling parameter. We do this by carefully constructing randomized input vectors for our matrix-vector products that exploit the hierarchical structure of the matrix. While existing algorithms for hierarchical matrix recovery use a recursive "peeling" procedure based on elimination, our approach uses a recursive projection procedure.
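To make the question concrete for the simplest structure named above, here is a hedged Python sketch recovering a tridiagonal matrix from just three matrix-vector products, using probe vectors whose ones are spaced three apart so that each row's nonzero window is hit exactly once per probe. The sizes and probing scheme are illustrative; the article's hierarchical algorithm is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = np.diag(rng.standard_normal(N))           # a random tridiagonal matrix
A += np.diag(rng.standard_normal(N - 1), 1)
A += np.diag(rng.standard_normal(N - 1), -1)

# Probe vectors: v_c[i] = 1 iff i % 3 == c. Row i's nonzeros sit in
# columns {i-1, i, i+1}, and each residue class mod 3 appears there once.
probes = [(np.arange(N) % 3 == c).astype(float) for c in range(3)]
products = [A @ v for v in probes]            # the only access to A

# Read each entry off the product whose probe touched its column.
A_rec = np.zeros((N, N))
for i in range(N):
    for k in range(max(0, i - 1), min(N, i + 2)):
        A_rec[i, k] = products[k % 3][i]

assert np.allclose(A, A_rec)                  # exact recovery from 3 products
```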
  3. The Flow matrix is a novel method to describe and extrapolate transitions among categories. The Flow matrix extrapolates a constant transition size per unit of time on a time continuum with a maximum of one incident per observation during the extrapolation. The Flow matrix extrapolates linearly until the persistence of a category shrinks to zero. The Flow matrix has concepts and mathematics that are more straightforward than the Markov matrix. However, many scientists apply the Markov matrix by default because popular software packages offer no alternative to the Markov matrix, despite the conceptual and mathematical challenges that the Markov matrix poses. The Markov matrix extrapolates a constant transition proportion per time interval during whole-number multiples of the duration of the calibration time interval. The Markov extrapolation allows at most one incident per observation during each time interval but allows repeated incidents per observation through sequential time intervals. Many Markov extrapolations approach a steady state asymptotically through time as each category size approaches a constant. We use case studies concerning land change to illustrate the characteristics of the Flow and Markov matrices. The Flow and Markov extrapolations both deviate from the reference data during a validation time interval, implying there is no reason to prefer one matrix to the other in terms of correspondence with the processes that we analyzed. The two matrices differ substantially in terms of their underlying concepts and mathematical behaviors. Scientists should consider the ease of use and interpretation for each matrix when extrapolating transitions among categories.

     
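The contrast is easy to see numerically. Below is a hedged Python sketch with invented calibration data; the real Flow matrix method handles the moment a persistence shrinks to zero more carefully than the simple clipping used here.

```python
import numpy as np

sizes0 = np.array([60.0, 30.0, 10.0])   # category sizes at time 0 (toy data)
trans = np.array([[50.0, 8.0, 2.0],     # trans[i, j]: size moving from
                  [3.0, 25.0, 2.0],     # category i to j during one
                  [1.0, 1.0, 8.0]])     # calibration time interval

# Markov: a constant transition *proportion* per time interval.
P = trans / trans.sum(axis=1, keepdims=True)      # row-stochastic matrix
def markov(t):
    return sizes0 @ np.linalg.matrix_power(P, t)  # whole-number t only

# Flow: a constant transition *size* per unit of time, so category
# sizes change linearly (clipped here when a size would go negative).
net = trans.sum(axis=0) - trans.sum(axis=1)       # gains minus losses
def flow(t):
    return np.maximum(sizes0 + t * net, 0.0)      # any t on the continuum

for t in (1, 2, 5):
    print(t, markov(t).round(1), flow(t).round(1))
```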
  4. A separable covariance model can describe the among-row and among-column correlations of a random matrix and permits likelihood-based inference with a very small sample size. However, if the assumption of separability is not met, data analysis with a separable model may misrepresent important dependence patterns in the data. As a compromise between separable and unstructured covariance estimation, we decompose a covariance matrix into a separable component and a complementary 'core' covariance matrix. This decomposition makes use of the parsimony and interpretability of a separable covariance model, yet fully describes covariance matrices that are non-separable. It motivates a new type of shrinkage estimator, obtained by appropriately shrinking the core of the sample covariance matrix, that adapts to the degree of separability of the population covariance matrix.
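A hedged Python sketch of the decomposition, using an invented separable fit and an arbitrary shrinkage weight (the paper derives both from the data; every quantity below is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
p, q = 3, 2
A = np.eye(p) + 0.3 * np.ones((p, p))    # among-row covariance (assumed fit)
B = np.array([[1.0, 0.4],
              [0.4, 1.0]])               # among-column covariance (assumed fit)
S = np.kron(A, B)                        # the separable component

# Simulate vectorized matrix data that is only approximately separable.
X = rng.standard_normal((500, p * q)) @ np.real(sqrtm(S))
X += 0.1 * rng.standard_normal((500, p * q))
S_hat = np.cov(X, rowvar=False)          # sample covariance

# Core: what remains after factoring out the separable part. It equals
# the identity exactly when S_hat is exactly separable.
S_half = np.real(sqrtm(S))
S_half_inv = np.linalg.inv(S_half)
core = S_half_inv @ S_hat @ S_half_inv

# Shrinkage estimator sketch: pull the core toward the identity.
w = 0.5                                  # shrinkage weight (illustrative)
S_shrunk = S_half @ ((1 - w) * core + w * np.eye(p * q)) @ S_half
```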
  5. Techniques of matrix completion aim to impute a large portion of missing entries in a data matrix through a small portion of observed ones. In practice, prior information and special structures are usually employed in order to improve the accuracy of matrix completion. In this paper, we propose a unified nonconvex optimization framework for matrix completion with linearly parameterized factors. In particular, by introducing a condition referred to as Correlated Parametric Factorization, we conduct a unified geometric analysis for the nonconvex objective by establishing uniform upper bounds for low-rank estimation resulting from any local minimizer. Perhaps surprisingly, the condition of Correlated Parametric Factorization holds for important examples including subspace-constrained matrix completion and skew-symmetric matrix completion. The effectiveness of our unified nonconvex optimization method is also empirically illustrated by extensive numerical simulations. 
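As a hedged illustration of the kind of nonconvex objective analyzed above, here is plain low-rank matrix completion by gradient descent on a factored objective in Python; it uses a generic factorization rather than the paper's linearly parameterized factors, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 30, 20, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # ground truth
mask = rng.random((n, m)) < 0.5          # pattern of observed entries

# Minimize ||mask * (U V^T - M)||_F^2 over the factors U and V.
U = 0.1 * rng.standard_normal((n, r))    # small random initialization
V = 0.1 * rng.standard_normal((m, r))
lr = 0.01
for _ in range(5000):
    R = mask * (U @ V.T - M)             # residual on observed entries only
    U, V = U - lr * R @ V, V - lr * R.T @ U

print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))  # small relative error
```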