Search Results
Search for: All records
Total resources: 3
- Author / Contributor
  - Zhao, Yilong (3)
  - Jiang, Li (2)
  - Boloor, Adith (1)
  - Cao, Weidong (1)
  - Ceze, Luis (1)
  - Chen, Lequn (1)
  - Chen, Tianqi (1)
  - Ding, Caiwen (1)
  - Han, Yinhe (1)
  - Jalali, Zeinab S. (1)
  - Kasikci, Baris (1)
  - Krishnamurthy, Arvind (1)
  - Lin, Chien-Yu (1)
  - Lin, Sheng (1)
  - Ma, Xiaolong (1)
  - Soundarajan, Sucheta (1)
  - Wang, Yanzhi (1)
  - Ye, Zihao (1)
  - Yuan, Geng (1)
  - Zhang, Tianyun (1)
- Free, publicly-accessible full text available May 13, 2025
- Cao, Weidong; Zhao, Yilong; Boloor, Adith; Han, Yinhe; Zhang, Xuan; Jiang, Li (IEEE Transactions on Computers)
- Yuan, Geng; Ma, Xiaolong; Ding, Caiwen; Lin, Sheng; Zhang, Tianyun; Jalali, Zeinab S.; Zhao, Yilong; Jiang, Li; Soundarajan, Sucheta; Wang, Yanzhi (IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED))

  The high computation and memory storage requirements of large deep neural network (DNN) models pose intensive challenges to the conventional Von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been investigated separately and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems that considers both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework that combines structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We account for hardware constraints such as crossbar-block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power, and a small area footprint. Our framework consists of three main steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that the proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) power reduction and 98.29% (97.47%) area reduction on the VGG-16 (ResNet-18) network, with only 0.5% (0.76%) accuracy loss compared to the original DNN models. We share our models at http://bit.ly/2Jp5LHJ.
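  The abstract's first step, ADMM-regularized optimization for structured (crossbar-block) pruning, follows a standard pattern: alternate between gradient steps on a penalized task loss and a projection of the weights onto the structured-sparse set. Below is a minimal sketch of that pattern, assuming PyTorch; the helper `project_structured`, the block size, `keep_ratio`, and `rho` are illustrative assumptions, not the paper's exact method or settings.

  ```python
  # Sketch of ADMM-regularized structured pruning (step 1 of the framework
  # above). Illustrative only: block size, keep_ratio, and rho are assumed.
  import torch


  def project_structured(weight: torch.Tensor, block: int, keep_ratio: float) -> torch.Tensor:
      """Keep only the largest-norm (block x block) crossbar blocks, zero the rest."""
      out_c, in_c = weight.shape  # assumes both dims divisible by `block`
      blocks = weight.reshape(out_c // block, block, in_c // block, block)
      norms = blocks.norm(dim=(1, 3))                     # one norm per block
      k = max(1, int(keep_ratio * norms.numel()))
      thresh = norms.flatten().topk(k).values.min()
      mask = (norms >= thresh).float()[:, None, :, None]  # broadcast over blocks
      return (blocks * mask).reshape(out_c, in_c)


  def admm_loss(model, loss_fn, x, y, Z, U, rho=1e-3):
      """Task loss plus the ADMM penalty (rho/2) * ||W - Z + U||^2 per layer."""
      loss = loss_fn(model(x), y)
      for name, W in model.named_parameters():
          if name in Z:
              loss = loss + (rho / 2) * (W - Z[name] + U[name]).pow(2).sum()
      return loss


  @torch.no_grad()
  def admm_update(model, Z, U, block=16, keep_ratio=0.05):
      """Periodic ADMM update: project W + U onto the structured-sparse set,
      then take a dual ascent step on U."""
      for name, W in model.named_parameters():
          if name in Z:
              Z[name] = project_structured(W + U[name], block, keep_ratio)
              U[name] = U[name] + W - Z[name]
  ```

  In this sketch, `Z` and `U` would be initialized per layer as `Z[name] = project_structured(W.detach().clone(), block, keep_ratio)` and `U[name] = torch.zeros_like(W)`; once ADMM converges, the zero pattern of `Z` is frozen as a mask (the abstract's masked-mapping step) and the masked model is retrained.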