

Search for: All records

Creators/Authors contains: "Wen, W."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available October 29, 2024
  2. Free, publicly-accessible full text available August 9, 2024
  3. Free, publicly-accessible full text available August 9, 2024
  4. Altbach, P.G.; de Wit, H.; Schendel, R.; Blanco, G.; Glass, C. (Eds.)
    Social networks based on Chinese culture, or guanxi, played an important role in scientists' capacity to produce knowledge, in their collaboration experiences, and in navigating the securitized research environment targeting collaboration between the United States and China.
    Free, publicly-accessible full text available July 1, 2024
  5. Free, publicly-accessible full text available July 9, 2024
  6.
  7. Abstract OC44B-1512 
  8. Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) both in user devices with limited resources and in business clusters that must respond quickly to large-scale service requests. This work aims to learn structurally sparse Long Short-Term Memory (LSTM) by reducing the sizes of the basic structures within LSTM units, including input updates, gates, hidden states, cell states, and outputs. Independently reducing the sizes of these basic structures can result in inconsistent dimensions among them and, consequently, invalid LSTM units. To overcome this problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS simultaneously decreases the sizes of all basic structures by one, thereby always maintaining dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59x speedup without losing any perplexity on Penn TreeBank language modeling. It is also successfully evaluated through a compact model with only 2.69M weights for machine question answering on the SQuAD dataset. Our approach extends to non-LSTM RNNs, such as Recurrent Highway Networks (RHNs). Our source code is available. A minimal sketch of the ISS group-Lasso penalty appears after this list.
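
The following sketch illustrates the group-Lasso penalty over ISS groups described in item 8. It assumes a single-layer PyTorch nn.LSTM feeding a linear decoder; the function name iss_group_lasso, the model sizes, and the 1e-3 regularization coefficient are illustrative assumptions, biases are omitted for brevity, and the grouping follows the abstract's description rather than the authors' released code, which remains the authoritative implementation.

import torch
import torch.nn as nn

def iss_group_lasso(lstm: nn.LSTM, decoder: nn.Linear) -> torch.Tensor:
    """Sum of L2 norms over ISS groups: one group per hidden unit."""
    h = lstm.hidden_size
    w_ih = lstm.weight_ih_l0   # shape (4h, input_size); PyTorch gate row order: i, f, g, o
    w_hh = lstm.weight_hh_l0   # shape (4h, h)
    w_dec = decoder.weight     # shape (output_size, h); reads the hidden state

    penalty = w_ih.new_zeros(())
    for k in range(h):
        gate_rows = [k, k + h, k + 2 * h, k + 3 * h]   # unit k's row in each of the 4 gates
        group = torch.cat([
            w_ih[gate_rows, :].reshape(-1),   # input weights producing unit k
            w_hh[gate_rows, :].reshape(-1),   # recurrent weights producing unit k
            w_hh[:, k],                       # recurrent weights consuming unit k
            w_dec[:, k],                      # decoder weights consuming unit k
        ])
        penalty = penalty + group.norm(p=2)   # group Lasso: sum of per-group L2 norms
    return penalty

# Illustrative usage: add the penalty to the task loss so that whole ISS groups
# (one hidden/cell dimension each) are driven toward zero during training.
lstm = nn.LSTM(input_size=100, hidden_size=128, batch_first=True)
decoder = nn.Linear(128, 10000)
reg = 1e-3 * iss_group_lasso(lstm, decoder)   # 1e-3 is an assumed coefficient
reg.backward()

Each group ties together every weight that produces or consumes one hidden unit, so when group Lasso drives a group to zero the gates, hidden state, cell state, and outputs all shrink by one dimension together, which is the dimension-consistency property the abstract describes.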