

Search for: All records

Creators/Authors contains: "Orlowski, Marius K."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Deep Neural Networks (DNNs), a brain-inspired learning methodology, require tremendous amounts of data for training before performing inference tasks. Recent studies demonstrate a strong positive correlation between inference accuracy and the size of DNNs and datasets, which leads to an inevitable demand for large DNNs. However, conventional memory technologies are not adequate to handle the drastic growth in dataset and neural network size. Recently, the resistive memristor has been widely considered a next-generation memory device owing to its high density and low power consumption. Nevertheless, its high cycle-to-cycle switching resistance variation restricts its feasibility in deep learning. In this work, a novel memristor configuration with an enhanced heat-dissipation feature is fabricated and evaluated to address this challenge. Our experimental results demonstrate that our memristor reduces resistance variation by 30%, and inference accuracy increases correspondingly by a similar margin. The accuracy improvement is evaluated with our Deep Delay-feed-back (Deep-DFR) reservoir computing model. The design area, power consumption, and latency are reduced by 48%, 42%, and 67%, respectively, compared to the conventional 6T SRAM memory technology. The performance of our memristor is improved by 13%-73% compared to state-of-the-art memristors.
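To make the variation-accuracy relationship described above concrete, the following is a minimal, self-contained Python sketch (not the authors' Deep-DFR evaluation) of how cycle-to-cycle resistance variation in a memristive crossbar perturbs an analog matrix-vector multiply, and how a 30% reduction in that variation tightens the resulting predictions. The Gaussian variation model, the assumed baseline sigma of 0.20, and the toy linear classifier are illustrative assumptions only.

# Illustrative sketch: cycle-to-cycle resistance variation perturbing an
# analog matrix-vector multiply, compared at a baseline variation level and
# at a level reduced by 30% (mirroring the reduction reported above).
# All numeric values are assumptions; this is not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

def noisy_matvec(weights, x, rel_sigma):
    """Analog dot product where each stored conductance deviates
    cycle-to-cycle by a zero-mean Gaussian with relative std rel_sigma."""
    perturbed = weights * (1.0 + rng.normal(0.0, rel_sigma, weights.shape))
    return perturbed @ x

# Toy linear classifier: 10 classes, 64 inputs (stand-in for a trained layer).
W = rng.normal(0.0, 1.0, (10, 64))
X = rng.normal(0.0, 1.0, (1000, 64))
labels = np.argmax(X @ W.T, axis=1)           # ideal (noise-free) predictions

def accuracy(rel_sigma):
    preds = [np.argmax(noisy_matvec(W, x, rel_sigma)) for x in X]
    return np.mean(np.array(preds) == labels)

baseline_sigma = 0.20                          # assumed baseline variation
improved_sigma = baseline_sigma * (1 - 0.30)   # 30% lower variation

print(f"accuracy @ sigma={baseline_sigma:.2f}: {accuracy(baseline_sigma):.3f}")
print(f"accuracy @ sigma={improved_sigma:.2f}: {accuracy(improved_sigma):.3f}")

Running the sketch prints a higher match rate against the noise-free predictions at the reduced sigma, mirroring the direction of the reported accuracy gain; the exact numbers depend entirely on the assumed values.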
  2. To accelerate the training efficiency of neural network-based machine learning, a memristor-based nonlinear computing module is designed and analyzed. Nonlinear computing operations are widely needed in neuromorphic computing and deep learning. The proposed nonlinear computing module can potentially realize a monotonic nonlinear function by placing memristors successively in series, combined with a simple amplifier. The module is evaluated and optimized through a Long Short-Term Memory (LSTM) network on a digit-recognition application. The proposed nonlinear computing module can reduce the chip area from the microscale to the nanoscale and potentially enhance computing efficiency to O(1) while guaranteeing accuracy. Furthermore, the impact of memristor switching resistance variation on training accuracy is simulated and analyzed using the LSTM as a benchmark.
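As a companion to the series-memristor idea in item 2, the following is a minimal Python sketch, under assumed parameters, of how a chain of threshold-switching memristors read out through a simple voltage divider (standing in for the amplifier stage) could yield a monotonic nonlinear transfer curve. The resistance states, switching thresholds, and readout resistor below are hypothetical values, not the authors' circuit.

# Illustrative sketch: memristors in series realizing a monotonic nonlinear
# transfer function through a simple voltage-divider readout. The threshold
# switching behavior and all component values are assumptions for illustration.
import numpy as np

R_ON, R_OFF = 1e3, 100e3                 # assumed low/high resistance states (ohms)
R_LOAD = 10e3                            # assumed readout resistor (ohms)
THRESHOLDS = np.linspace(0.2, 1.0, 5)    # assumed switching thresholds (volts)

def series_resistance(v_in):
    """Each memristor drops to R_ON once the input exceeds its threshold."""
    states = np.where(v_in > THRESHOLDS, R_ON, R_OFF)
    return states.sum()

def transfer(v_in):
    """Voltage-divider readout: output rises nonlinearly but monotonically
    as successive memristors switch and the series resistance falls."""
    r_series = series_resistance(v_in)
    return v_in * R_LOAD / (R_LOAD + r_series)

for v in np.linspace(0.0, 1.2, 7):
    print(f"v_in={v:.2f} V -> v_out={transfer(v):.3f} V")

Sweeping the input shows the output rising monotonically in progressively larger steps as successive memristors switch to their low-resistance state, illustrating how a series chain plus a simple readout stage could approximate a monotonic activation function.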