Title: Quantized Neural Network via Synaptic Segregation Based on Ternary Charge‐Trap Transistors
Abstract: Artificial neural networks (ANNs) are widely used in numerous artificial intelligence‐based applications. However, the significant amount of data transferred between computing units and storage has limited the widespread deployment of ANNs in the artificial intelligence of things (AIoT) and in power‐constrained devices. Among the various ANN algorithms, quantized neural networks (QNNs) have therefore garnered considerable attention because they require fewer computational resources and consume minimal energy. Herein, an oxide‐based ternary charge‐trap transistor (CTT) that provides three discrete states and non‐volatile memory characteristics is introduced, both of which are desirable for QNN computing. By employing a differential pair of ternary CTTs, artificial synaptic segregation with multilevel quantized values for QNNs is demonstrated. This approach establishes a platform that combines the advantages of multiple states and robustness to noise for in‐memory computing, achieving reliable QNN performance in hardware and thereby facilitating the development of energy‐efficient AIoT.
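The differential-pair weight encoding described in the abstract can be previewed with a short sketch. The Python/NumPy example below is a minimal illustration under stated assumptions, not the authors' implementation: each ternary CTT is modeled as one of three normalized conductance states {0, 1, 2}, and a synaptic weight is the difference between the two devices of a pair, which spans five quantized levels {-2, ..., +2}.

```python
import numpy as np

# Minimal sketch (assumptions): each ternary CTT holds one of three
# normalized conductance states {0, 1, 2}; a synapse is a differential
# pair, so the effective weight is g_plus - g_minus, spanning the five
# quantized levels {-2, -1, 0, +1, +2} times a conductance step.

def quantize_differential(w, step):
    """Map real-valued weights onto the nearest differential-pair level."""
    q = np.clip(np.round(w / step), -2, 2)   # 5 representable levels
    g_plus = np.where(q > 0, q, 0.0)         # state of the "+" device
    g_minus = np.where(q < 0, -q, 0.0)       # state of the "-" device
    return g_plus, g_minus

def synaptic_layer(x, g_plus, g_minus, step):
    """In-memory MAC: currents of the two device columns are subtracted."""
    return step * (x @ g_plus - x @ g_minus)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 3))       # toy trained weights
gp, gm = quantize_differential(W, step=0.25)
x = rng.random(4)                            # toy input activations
print(synaptic_layer(x, gp, gm, step=0.25))  # quantized-layer output
```

Subtracting the two device columns keeps the representable weights symmetric around zero, which is what allows a three-state device to encode signed quantized weights.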
Award ID(s):
1942868
PAR ID:
10452908
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Advanced Electronic Materials
Volume:
9
Issue:
11
ISSN:
2199-160X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Quantum Neural Networks (QNNs), also known as variational quantum circuits, are important quantum applications both because they hold promise similar to that of classical neural networks and because they are feasible to implement on near-term Noisy Intermediate-Scale Quantum (NISQ) machines. However, the training of QNNs is challenging and much less understood. We conduct a quantitative investigation of the loss landscapes of QNNs and identify a class of simple yet extremely hard QNN instances for training. Specifically, we show that for typical under-parameterized QNNs, there exists a dataset that induces a loss function whose number of spurious local minima depends exponentially on the number of parameters. Moreover, we show the optimality of our construction by providing an almost matching upper bound on this dependence. Whereas local minima in classical neural networks arise from non-linear activations, in quantum neural networks they appear as a result of quantum interference. Finally, we empirically confirm that our constructions can indeed be hard instances in practice for typical gradient-based optimizers, which demonstrates the practical value of our findings.
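    The flavor of such non-convex landscapes can be previewed with a toy experiment. The sketch below is illustrative only, not the paper's construction: it fits a single-qubit variational circuit to a few data points and scans the two-parameter loss surface for grid-local minima. Because the circuit output is a trigonometric polynomial in the parameters, interference terms alone can already produce several distinct minima.

```python
import numpy as np

# Toy landscape probe (illustration only, not the paper's construction):
# a single-qubit circuit R_y(t2) R_z(x) R_y(t1) |0>, measured in Z.
def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

Z = np.diag([1.0, -1.0])

def model(t1, t2, x):
    psi = ry(t2) @ rz(x) @ ry(t1) @ np.array([1.0, 0.0])
    return np.real(np.conj(psi) @ Z @ psi)      # expectation <Z>

xs = np.array([0.3, 1.1, 2.0])                  # toy inputs
ys = np.array([0.9, -0.2, -0.8])                # toy labels

def loss(t1, t2):
    preds = np.array([model(t1, t2, x) for x in xs])
    return np.mean((preds - ys) ** 2)

ts = np.linspace(-np.pi, np.pi, 121)
L = np.array([[loss(a, b) for b in ts] for a in ts])

# Count strict interior local minima on the grid as a rough probe.
C = L[1:-1, 1:-1]
is_min = ((C < L[:-2, 1:-1]) & (C < L[2:, 1:-1]) &
          (C < L[1:-1, :-2]) & (C < L[1:-1, 2:]))
print("grid-local minima found:", is_min.sum())
```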
  2. Abstract: Recent breakthroughs in artificial neural networks (ANNs) have spurred interest in efficient computational paradigms that reduce the energy and time costs of training and inference. One promising contender for efficient ANN implementation is crossbar arrays of resistive memory elements that emulate the synaptic strength between neurons within the ANN. Organic nonvolatile redox memory has recently been demonstrated as a promising device for neuromorphic computing, offering a continuous range of linearly programmable resistance states and tunable electronic and electrochemical properties, opening a path toward massively parallel and energy-efficient ANN implementation. However, one of the key issues with implementations relying on electrochemical gating of organic materials is state‐retention time and device stability. Here, the mechanisms leading to state loss and cycling instability in redox‐gated neuromorphic devices are revealed: parasitic redox reactions and out‐diffusion of reducing additives. The results of this study are used to design an encapsulation structure that shows an order-of-magnitude improvement in state retention and cycling stability for poly(3,4‐ethylenedioxythiophene)/polyethyleneimine:poly(styrene sulfonate) devices, achieved by tuning the concentration of additives, implementing a solid‐state electrolyte, and encapsulating the devices in an inert environment. Finally, programming range and state retention are compared to optimize device operation.
  3. A quantum neural network (QNN) is a parameterized mapping that can be implemented efficiently on near-term Noisy Intermediate-Scale Quantum (NISQ) computers. It can be used for supervised learning when combined with classical gradient-based optimizers. Despite existing empirical and theoretical investigations, the convergence of QNN training is not fully understood. Inspired by the success of neural tangent kernels (NTKs) in probing the dynamics of classical neural networks, a recent line of work proposes to study over-parameterized QNNs by examining a quantum version of tangent kernels. In this work, we study the dynamics of QNNs and show that, contrary to popular belief, they are qualitatively different from those of any kernel regression: due to the unitarity of quantum operations, there is a non-negligible deviation from the tangent-kernel regression derived at random initialization. As a result of this deviation, we prove at-most sublinear convergence for QNNs with Pauli measurements, which is beyond the explanatory power of any kernel-regression dynamics. We then present the actual dynamics of QNNs in the limit of over-parameterization. The new dynamics capture the change of convergence rate during training and imply that the range of measurements is crucial to fast QNN convergence.
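    The tangent-kernel object can be made concrete with a small hedged sketch, again using a toy single-qubit circuit rather than anything from the paper: the empirical tangent kernel at a random initialization is the Gram matrix of output gradients with respect to the parameters, estimated below by finite differences. Kernel-regression dynamics would freeze this matrix at initialization; the point above is that unitarity makes the QNN's effective kernel drift away from it during training.

```python
import numpy as np

# Hedged sketch: empirical tangent kernel of a toy single-qubit circuit
# at random initialization, via finite differences (illustration only).
def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

Z = np.diag([1.0, -1.0])

def f(theta, x):
    """Model output <Z> of R_y(theta[1]) R_z(x) R_y(theta[0]) |0>."""
    psi = ry(theta[1]) @ rz(x) @ ry(theta[0]) @ np.array([1.0, 0.0])
    return np.real(np.conj(psi) @ Z @ psi)

def grad(theta, x, eps=1e-6):
    """Central finite-difference gradient of f with respect to theta."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e, x) - f(theta - e, x)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
theta0 = rng.uniform(-np.pi, np.pi, size=2)   # random initialization
xs = np.array([0.3, 1.1, 2.0])                # toy inputs

# Tangent kernel K[i, j] = grad f(x_i) . grad f(x_j), evaluated at theta0.
G = np.stack([grad(theta0, x) for x in xs])
print(np.round(G @ G.T, 4))
```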
  4. The integration of the Internet of Things (IoT) and modern Artificial Intelligence (AI) has given rise to a new paradigm known as the Artificial Intelligence of Things (AIoT). In this survey, we provide a systematic and comprehensive review of AIoT research. We examine AIoT literature related to sensing, computing, and networking & communication, which form the three key components of AIoT. In addition to advancements in these areas, we review domain-specific AIoT systems that are designed for various important application domains. We have also created an accompanying GitHub repository, where we compile the papers included in this survey: https://github.com/AIoT-MLSys-Lab/AIoT-Survey. This repository will be actively maintained and updated with new research as it becomes available. As both IoT and AI become increasingly critical to our society, we believe that AIoT is emerging as an essential research field at the intersection of IoT and modern AI. It is our hope that this survey will serve as a valuable resource for those engaged in AIoT research and act as a catalyst for future explorations to bridge gaps and drive advancements in this exciting field. 
  5. Abstract: Neuromorphic computing mimics the organizational principles of the brain in its quest to replicate the brain’s intellectual abilities. An impressive ability of the brain is its adaptive intelligence, which allows it to regulate its functions “on the fly” to cope with myriad, ever-changing situations. In particular, the brain displays three adaptive and advanced intelligence abilities: context-awareness, cross-frequency coupling, and feature binding. To mimic these adaptive cognitive abilities, we design and simulate a novel, hardware-based adaptive oscillatory neuron using a lattice of magnetic skyrmions. Charge current fed to the neuron reconfigures the skyrmion lattice, thereby modulating the neuron’s state, its dynamics, and its transfer function “on the fly.” This adaptive neuron is used to demonstrate the three cognitive abilities, of which context-awareness and cross-frequency coupling have not previously been realized in hardware neurons. Additionally, the neuron is used to construct an adaptive artificial neural network (ANN) and perform context-aware diagnosis of breast cancer. Simulations show that the adaptive ANN diagnoses cancer with higher accuracy, learns faster, and uses a more compact and more energy-efficient network than a nonadaptive ANN. The work further describes how hardware-based adaptive neurons can mitigate several critical challenges facing contemporary ANNs: modern ANNs require large amounts of training data, energy, and chip area, and are highly task-specific; conversely, hardware-based ANNs built with adaptive neurons learn faster, have compact architectures, are energy-efficient and fault-tolerant, and can lead to the realization of broader artificial intelligence.