Title: Sub-Riemannian Geodesics in $SU(n)/S(U(n-1) \times U(1))$ and Optimal Control of Three Level Quantum Systems
Award ID(s):
1710558
PAR ID:
10158447
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE Transactions on Automatic Control
Volume:
65
Issue:
3
ISSN:
0018-9286
Page Range / eLocation ID:
1176 to 1191
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: In this paper we develop a Young diagram approach to constructing higher-dimensional operators formed from massless superfields and their superderivatives in $\mathcal{N} = 1$ supersymmetry. These operators are in one-to-one correspondence with non-factorizable terms in on-shell superamplitudes, which can be studied with massless spinor-helicity techniques. By relating all spinor-helicity variables to certain representations under a hidden U(N) symmetry behind the theory, we show that each non-factorizable superamplitude can be identified with a specific Young tableau. The desired tableau is picked out of a more general set of U(N) tensor products by enforcing the supersymmetric Ward identities. We then relate these Young tableaux to higher-dimensional superfield operators and list the rules for reading operators directly from a Young tableau. Using this method, we present several illustrative examples.
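As a quick orientation to the kind of U(N) tensor products the construction sorts through, the display below shows a generic textbook decomposition of two fundamentals into symmetric and antisymmetric pieces (this is only an illustration of the group-theory bookkeeping, not one of the tableaux from the paper; in the paper, the supersymmetric Ward identities are what select the pieces corresponding to genuine non-factorizable superamplitudes):

\[
  \mathbf{N} \otimes \mathbf{N} \;=\; \mathrm{Sym}^{2}\mathbf{N} \;\oplus\; \Lambda^{2}\mathbf{N},
  \qquad
  N^{2} \;=\; \tfrac{1}{2}N(N+1) \;+\; \tfrac{1}{2}N(N-1).
\]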
  2. Sparse deep neural networks (DNNs) have the potential to deliver compelling performance and energy efficiency without significant accuracy loss. However, their benefits can quickly diminish if their training is oblivious to the target hardware. For example, the few critical connections that remain after pruning can incur significant overhead if they translate into long-distance communication on the target hardware. Therefore, hardware-aware sparse training is needed to leverage the full potential of sparse DNNs. To this end, we propose a novel and comprehensive communication-aware sparse DNN optimization framework for tile-based in-memory computing (IMC) architectures. The proposed technique, CANNON, first maps the DNN layers onto the tiles of the target architecture. Then, it replaces the fully connected and convolutional layers with communication-aware sparse connections. After that, CANNON optimizes the communication cost with minimal impact on the DNN accuracy. Extensive experimental evaluations with a wide range of DNNs and datasets show up to 3.0× lower communication energy, 3.1× lower communication latency, and 6.8× lower energy-delay product compared to state-of-the-art pruning approaches, with a negligible impact on classification accuracy on IMC-based machine learning accelerators.
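To make the idea of communication-aware pruning concrete, here is a minimal Python sketch for a single fully connected layer mapped onto a 2-D grid of IMC tiles. The 4x4 tile grid, the round-robin neuron-to-tile mapping, and the |weight| / (1 + hop distance) score are assumptions invented for this illustration; they are not the actual CANNON mapping or cost model from the abstract.

# Toy communication-aware pruning sketch (illustrative assumptions only).
import numpy as np

GRID = (4, 4)          # assumed 4x4 tile array
NEURONS_PER_TILE = 64  # assumed tile capacity

def tile_of(neuron_idx):
    """Round-robin assignment of a neuron index to a (row, col) tile."""
    t = (neuron_idx // NEURONS_PER_TILE) % (GRID[0] * GRID[1])
    return divmod(t, GRID[1])

def hop_distance(src_tile, dst_tile):
    """Manhattan distance between tiles, a proxy for on-chip communication cost."""
    return abs(src_tile[0] - dst_tile[0]) + abs(src_tile[1] - dst_tile[1])

def comm_aware_prune(weights, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the best
    importance-per-hop score; zero out the rest."""
    out_dim, in_dim = weights.shape
    score = np.empty_like(weights)
    for o in range(out_dim):
        for i in range(in_dim):
            hops = hop_distance(tile_of(i), tile_of(o))
            score[o, i] = abs(weights[o, i]) / (1.0 + hops)
    k = int(weights.size * (1.0 - sparsity))
    threshold = np.partition(score.ravel(), -k)[-k]
    return np.where(score >= threshold, weights, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256))
    w_sparse = comm_aware_prune(w, sparsity=0.9)
    print("nonzeros kept:", np.count_nonzero(w_sparse))

The point of the sketch is only the ordering of concerns described in the abstract: first fix a layer-to-tile mapping, then choose which connections to keep using a score that penalizes long-distance communication rather than weight magnitude alone.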
  3. In this work, we first propose a deep depthwise Convolutional Neural Network (CNN) structure, called Add-Net, which uses binarized depthwise separable convolution to replace conventional spatial convolution. In Add-Net, the computationally expensive convolution operations (i.e., multiplication and accumulation) are converted into hardware-friendly addition operations. We meticulously investigate and analyze Add-Net's performance (i.e., accuracy, parameter size, and computational cost) in object recognition applications, compared to a traditional baseline CNN, using the popular large-scale ImageNet dataset. Accordingly, we propose a Depthwise CNN In-Memory Accelerator (DIMA) based on SOT-MRAM computational sub-arrays to efficiently accelerate Add-Net within non-volatile MRAM. Our device-to-architecture co-simulation results show that, with almost the same inference accuracy as the baseline CNN on different datasets, DIMA can obtain ~1.4× better energy efficiency and 15.7× speedup compared to ASICs, and ~1.6× better energy efficiency and 5.6× speedup over the best processing-in-DRAM accelerators.
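The following Python (PyTorch) sketch illustrates the general idea of a depthwise separable block whose depthwise kernel is binarized to {-1, +1}, so each spatial multiply-accumulate reduces to an addition or subtraction of an activation. The layer sizes, the sign() binarization, and the full-precision pointwise stage are assumptions made for this example; this is not the exact Add-Net architecture or the DIMA hardware mapping described in the abstract.

# Toy binarized depthwise-separable block (illustrative assumptions only).
import torch
import torch.nn as nn

class BinarizedDepthwiseSeparable(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise convolution: one spatial filter per input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        # Pointwise (1x1) convolution mixes channels at full precision.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        # Binarize the depthwise weights to +/-1 at inference time, so the
        # spatial convolution needs only additions and subtractions.
        w = self.depthwise.weight.detach()
        w_bin = torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))
        out = nn.functional.conv2d(x, w_bin,
                                   padding=self.depthwise.padding,
                                   groups=self.depthwise.groups)
        return self.pointwise(out)

if __name__ == "__main__":
    block = BinarizedDepthwiseSeparable(in_ch=16, out_ch=32)
    y = block(torch.randn(1, 16, 28, 28))
    print(y.shape)  # torch.Size([1, 32, 28, 28])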