
Search for: All records

Creators/Authors contains: "Hu, B."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without charge during the embargo (administrative interval).

  1. Free, publicly-accessible full text available October 17, 2023
  2. Techniques for reducing the variance of gradient estimates used in stochastic programming algorithms for convex finite-sum problems have received a great deal of attention in recent years. By leveraging dissipativity theory from control, we provide a new perspective on two important variance-reduction algorithms: SVRG and its direct accelerated variant Katyusha. Our perspective provides a physically intuitive understanding of the behavior of SVRG-like methods via a principle of energy conservation. The tools discussed here allow us to automate the convergence analysis of SVRG-like methods by capturing their essential properties in small semidefinite programs amenable to standard analysis and computational techniques. Our approach recovers existing convergence results for SVRG and Katyusha and generalizes the theory to alternative parameter choices. We also discuss how our approach complements the linear coupling technique. Our combination of perspectives leads to a better understanding of accelerated variance-reduced stochastic methods for finite-sum problems.
  3. Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) both in user devices with limited resources and in business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally sparse Long Short-Term Memory (LSTM) networks by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them and, consequently, invalid LSTM units. To overcome this problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS simultaneously decreases the sizes of all basic structures by one, thereby always maintaining dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59x speedup without any perplexity loss on a Penn TreeBank language modeling task. It is also successfully evaluated via a compact model with only 2.69M weights for machine Question Answering on the SQuAD dataset. Our approach extends to non-LSTM RNNs, such as Recurrent Highway Networks (RHNs). Our source code is available.
  4. The prediction of reactor antineutrino spectra will play a crucial role as reactor experiments enter the precision era. The positron energy spectrum of 3.5 million antineutrino inverse beta decay reactions observed by the Daya Bay experiment, in combination with the fission rates of fissile isotopes in the reactor, is used to extract the positron energy spectra resulting from the fission of specific isotopes. This information can be used to produce a precise, data-based prediction of the antineutrino energy spectrum in other reactor antineutrino experiments with fission fractions different from those of Daya Bay. The positron energy spectra are unfolded to obtain the antineutrino energy spectra by removing the contribution from detector response with the Wiener-SVD unfolding method. Consistent results are obtained with other unfolding methods. A technique to construct a data-based prediction of the reactor antineutrino energy spectrum is proposed and investigated. Given the reactor fission fractions, the technique can predict the energy spectrum to 2% precision. In addition, we illustrate how to perform a rigorous comparison between the unfolded antineutrino spectrum and a theoretical model prediction that avoids the input model bias of the unfolding method.
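
The SVRG scheme discussed in result 2 can be illustrated with a minimal sketch. This is not the paper's analyzed parameterization — the problem, step size, and epoch counts below are illustrative assumptions — but it shows the core variance-reduction step: correct each stochastic gradient using a gradient computed at a periodic snapshot plus the full gradient at that snapshot.

```python
import numpy as np

# Minimal SVRG sketch for a convex finite-sum least-squares problem
#   min_x (1/n) * sum_i (a_i^T x - b_i)^2
# Step size and epoch counts are illustrative, not the tuned choices
# analyzed in the abstract above.

def svrg(A, b, x0, step=0.01, epochs=50, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    inner = inner or n
    x = x0.copy()
    for _ in range(epochs):
        x_ref = x.copy()                                 # snapshot point
        full_grad = 2 * A.T @ (A @ x_ref - b) / n        # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            gi = 2 * A[i] * (A[i] @ x - b[i])            # stochastic gradient at x
            gi_ref = 2 * A[i] * (A[i] @ x_ref - b[i])    # same sample at snapshot
            # Variance-reduced update: gi - gi_ref + full_grad is an
            # unbiased gradient estimate whose variance vanishes near
            # the optimum, enabling a constant step size.
            x = x - step * (gi - gi_ref + full_grad)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                       # noiseless data: optimum is x_true
x = svrg(A, b, np.zeros(3))
```

Because the data are noiseless here, the iterates converge to the exact minimizer; with noisy data, SVRG still converges linearly to the minimizer of the finite sum.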
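
The group-Lasso mechanism behind the ISS approach in result 3 can be sketched in a few lines. This is not the paper's implementation — the matrix shapes, grouping, and proximal step below are illustrative assumptions — but it shows the key idea: penalize the L2 norm of each group of weights so that whole groups (here, whole hidden dimensions) are driven exactly to zero together, keeping dimensions consistent.

```python
import numpy as np

# Illustrative group-Lasso sketch (not the paper's code): each group
# gathers the weight columns tied to one hidden dimension, so zeroing
# a whole group removes that dimension everywhere at once.

def group_lasso_penalty(W, groups):
    # Sum of L2 norms of the groups: the group-Lasso regularizer.
    return sum(np.linalg.norm(W[:, g]) for g in groups)

def proximal_step(W, groups, lam):
    # Proximal operator of the group-Lasso penalty: shrink each group's
    # norm by lam; groups whose norm falls below lam are zeroed out,
    # i.e. the corresponding hidden dimension is removed.
    W = W.copy()
    for g in groups:
        norm = np.linalg.norm(W[:, g])
        W[:, g] *= max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
    return W

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(8, 8))           # mostly small weights
W[:, 0:2] = rng.normal(scale=1.0, size=(8, 2))    # two "important" dims
groups = [np.array([i]) for i in range(8)]        # one group per column
W_sparse = proximal_step(W, groups, lam=0.5)
```

After the step, the six low-magnitude columns are exactly zero while the two strong columns survive (shrunk), so the surviving structure is a smaller but still regular matrix — the dimension-consistency property the abstract describes.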
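
The unfolding step in result 4 can be illustrated with a simplified SVD-based sketch. This is not the full Wiener-SVD method used by Daya Bay (which also pre-whitens with the measurement covariance and builds the filter from an expected-signal model); the toy response matrix and the noise parameter below are assumptions. It shows the basic mechanism: expand the measurement in SVD modes of the response matrix and damp poorly constrained modes with a Wiener-like weight instead of dividing by small singular values.

```python
import numpy as np

# Simplified SVD unfolding sketch (not the full Wiener-SVD method):
# weight each SVD mode by s / (s^2 + noise2), which approaches the
# pseudoinverse 1/s for well-measured modes and suppresses modes with
# small singular values that would otherwise amplify noise.

def unfold_svd(R, m, noise2=1e-2):
    # R: detector response matrix (measured bins x true bins)
    # m: measured spectrum
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    coeffs = (s / (s**2 + noise2)) * (U.T @ m)   # filtered mode amplitudes
    return Vt.T @ coeffs                         # back to true-energy bins

# Toy example: a smearing matrix that leaks 10% into neighboring bins.
true_spec = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
R = 0.8 * np.eye(5) + 0.1 * (np.eye(5, k=1) + np.eye(5, k=-1))
measured = R @ true_spec
est = unfold_svd(R, measured, noise2=1e-4)
```

With noiseless input and a tiny `noise2`, the filter is nearly the exact inverse and the true spectrum is recovered; with real, noisy data a larger `noise2` (or the data-driven Wiener filter) trades a small bias for a large variance reduction.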