Search for: All records

Creators/Authors contains: "Song, J."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

  1. Free, publicly-accessible full text available June 1, 2024
  2. The continuous growth of CNN complexity not only intensifies the need for hardware acceleration but also presents a huge challenge: the solution space for CNN hardware design and dataflow mapping becomes enormous, and it is discrete and lacks a well-behaved structure. Most previous works either use stochastic metaheuristics, such as genetic algorithms, which are typically very slow on large problems, or rely on expensive sampling, e.g., Gumbel-Softmax-based differentiable optimization and Bayesian optimization. We propose an analytical model for evaluating the power and performance of CNN hardware design and dataflow solutions. Based on this model, we introduce a co-optimization method consisting of nonlinear programming and parallel local search. A key innovation in this model is its matrix form, which enables the use of deep learning toolkits for highly efficient computation of power/performance values and their gradients during optimization. In handling the power-performance tradeoff, our method can lead to better solutions than minimizing a weighted sum of power and latency. The average relative error of our model compared with Timeloop is as small as 1%. Compared to state-of-the-art methods, our approach achieves solutions with up to 1.7× shorter inference latency, 37.5% less power consumption, and 3× less area on ResNet-18. Moreover, it provides a 6.2× speedup of optimization.
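The gradient-based co-optimization idea above can be illustrated with a toy sketch. This is not the paper's actual analytical model; the latency/power formulas, constants, and design variables below are all invented for illustration. The sketch shows the overall shape of the method: a smooth cost model over a continuous relaxation of the design vector gives cheap gradients for a nonlinear-programming-style descent, after which rounding plus a small local search recovers a discrete design point.

```python
import numpy as np

# Toy differentiable cost model over x = (num_PEs, buffer_size).
# All constants are illustrative, not from the paper.
def latency(x):
    pes, buf = x
    return 1e6 / pes + 1e4 / buf      # more PEs / bigger buffer -> faster

def power(x):
    pes, buf = x
    return 5.0 * pes + 0.5 * buf      # but both cost power

def objective(x, w=0.5):
    return w * latency(x) + (1.0 - w) * power(x)

def grad(f, x, eps=1e-3):
    # Central finite differences (a DL toolkit would give exact gradients).
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
    return g

# Continuous relaxation: plain gradient descent, clipped to valid sizes.
x0 = np.array([100.0, 100.0])
x = x0.copy()
for _ in range(2000):
    x = np.clip(x - 1.0 * grad(objective, x), 1.0, None)

# Discretize, then a one-step local search over the 3x3 neighborhood.
best = tuple(int(round(v)) for v in x)
for d0 in (-1, 0, 1):
    for d1 in (-1, 0, 1):
        cand = (best[0] + d0, best[1] + d1)
        if min(cand) >= 1 and \
           objective(np.array(cand, float)) < objective(np.array(best, float)):
            best = cand
print(best)
```

The paper's method differs in scale and substance (a matrix-form model, exact gradients from a deep learning toolkit, and parallel rather than single-point local search), but the relax-descend-discretize-refine loop is the same shape.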
  3. Free, publicly-accessible full text available October 1, 2024
  4. Free, publicly-accessible full text available August 1, 2024
  5. Using a three-wave longitudinal data set of Mexican-origin adolescents (N = 602, Mage = 12.92, SD = 0.91 at Wave 1), this study examines parallel pathways from early exposure to ethnic discrimination and drug-using peers, separately, to underage drinking status by late adolescence. Negative affect was expected to mediate the link from ethnic discrimination to underage drinking status (the stress-induced pathway), whereas social alcohol expectancy was expected to mediate the link from drug-using peers to underage drinking status (the socialization pathway). Our findings lend support to the stress-induced pathway while controlling for the socialization pathway. For the stress-induced pathway, we found that early ethnic discrimination experiences were related to a higher likelihood of having engaged in underage drinking by late adolescence through elevated negative affect sustained across adolescence. For the socialization pathway, we found no association between affiliation with drug-using peers in early adolescence and underage drinking status, either directly or indirectly. The present findings highlight the unique role of early ethnic discrimination experiences in underage drinking among Mexican-origin adolescents, over and above the effect of drug-using peers. Alcohol use interventions targeting ethnic minority adolescents should account for adolescents' ethnic discrimination experiences by helping adolescents develop adaptive coping strategies to handle negative affect induced by discrimination (e.g., reappraisal) rather than using alcohol to self-medicate.
  6. Free, publicly-accessible full text available July 1, 2024
  7. Many sequential decision-making tasks can be viewed as combinatorial optimization problems over a large number of actions. When the cost of evaluating an action is high, even a greedy algorithm, which iteratively picks the best action given the history, is prohibitive to run. In this paper, we aim to learn a greedy heuristic for sequentially selecting actions as a surrogate for invoking the expensive oracle when evaluating an action. In particular, we focus on a class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via submodular surrogates). We introduce a data-driven optimization framework based on the submodular-norm loss, a novel loss function that encourages the resulting objective to exhibit diminishing returns. Our framework outputs a surrogate objective that is efficient to train, approximately submodular, and can be made permutation-invariant. The latter two properties allow us to prove strong approximation guarantees for the learned greedy heuristic. Furthermore, our model is easily integrated with modern deep imitation learning pipelines for sequential prediction tasks. We demonstrate the performance of our algorithm on a variety of batched and sequential optimization tasks, including set cover, active learning, and data-driven protein engineering.
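The diminishing-returns property that the submodular-norm loss encourages can be made concrete with a small sketch. This is not the paper's exact loss; it is an illustrative hinge penalty on violations of the diminishing-returns inequality f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B) for A ⊆ B and v outside B, which is zero exactly when f is submodular on the ground set.

```python
from itertools import combinations

def dr_penalty(f, items):
    # Average hinge violation of diminishing returns over all triples
    # (A, B, v) with A subset of B and v outside B. Zero iff f is
    # submodular on this ground set (illustrative, not the paper's loss).
    pen, count = 0.0, 0
    for v in items:
        rest = [u for u in items if u != v]
        for nb in range(len(rest) + 1):
            for B in combinations(rest, nb):
                for na in range(len(B) + 1):
                    for A in combinations(B, na):
                        gain_A = f(list(A) + [v]) - f(list(A))
                        gain_B = f(list(B) + [v]) - f(list(B))
                        pen += max(0.0, gain_B - gain_A)  # hinge on violation
                        count += 1
    return pen / count

# A modular function (sum of per-item weights) is submodular: penalty 0.
weights = {0: 3.0, 1: 1.0, 2: 2.0}
f_mod = lambda S: sum(weights[v] for v in S)
print(dr_penalty(f_mod, list(weights)))   # 0.0

# A supermodular "AND bonus" (+5 once both 0 and 1 are in the set)
# has increasing gains and is penalized.
f_sup = lambda S: len(S) + (5.0 if {0, 1} <= set(S) else 0.0)
print(dr_penalty(f_sup, list(weights)))   # positive
```

Used as a training loss on a learned set function, terms of this kind push the surrogate objective toward approximate submodularity, which is what underwrites the approximation guarantees for the learned greedy heuristic.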
  8.
  9.
    We study the problem of learning sequential decision-making policies in settings with multiple state-action representations. Such settings naturally arise in many domains, such as planning (e.g., multiple integer programming formulations) and various combinatorial optimization problems (e.g., those with both integer programming and graph-based formulations). Inspired by the classical co-training framework for classification, we study the problem of co-training for policy learning. We present sufficient conditions under which learning from two views can improve upon learning from a single view alone. Motivated by these theoretical insights, we present a meta-algorithm for co-training for sequential decision making. Our framework is compatible with both reinforcement learning and imitation learning. We validate the effectiveness of our approach across a wide range of tasks, including discrete/continuous control and combinatorial optimization. 
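The two-view idea above follows the classical co-training loop: each view's learner labels unlabeled data for the other where it is confident. Below is a toy supervised stand-in for that loop; the data, the 1-D threshold "learners", and the confidence margin are all invented for illustration and are not the paper's meta-algorithm.

```python
import numpy as np

# Two noisy views of the same latent signal; the "correct action" is
# the sign of the latent. A small labeled seed is grown by letting each
# view pseudo-label pool points it is confident about (classic co-training).
rng = np.random.default_rng(0)
n = 400
latent = rng.normal(size=n)
view1 = latent + 0.3 * rng.normal(size=n)   # representation / view A
view2 = latent + 0.3 * rng.normal(size=n)   # representation / view B
y = (latent > 0).astype(int)

labeled = list(range(20))                   # small labeled seed
pool = list(range(20, n))

def fit_threshold(x, t):
    # 1-D learner: threshold halfway between the two class means.
    return (x[t == 0].mean() + x[t == 1].mean()) / 2.0

def co_train(rounds=3, margin=1.0):
    idx, lab = list(labeled), [int(v) for v in y[labeled]]
    for _ in range(rounds):
        for view in (view1, view2):
            thr = fit_threshold(view[idx], np.array(lab))
            for i in pool:
                if i not in idx and abs(view[i] - thr) > margin:
                    idx.append(i)                   # confident pseudo-label
                    lab.append(int(view[i] > thr))
    # Final joint predictor combines both views' scores.
    thr1 = fit_threshold(view1[idx], np.array(lab))
    thr2 = fit_threshold(view2[idx], np.array(lab))
    pred = ((view1 - thr1) + (view2 - thr2) > 0).astype(int)
    return (pred == y).mean()

print(co_train())
```

The paper lifts this loop from classification to sequential decision making, where the two "views" are alternative state-action representations (e.g., an integer-programming and a graph-based formulation) and the learners are policies trained by reinforcement or imitation learning.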