

Search for: All records

Creators/Authors contains: "Lin, S."


  1. Bilevel optimization has become a powerful tool in a wide variety of machine learning problems. However, current work on nonconvex bilevel optimization considers an offline dataset and static functions, which may not work well in emerging online applications with streaming data and time-varying functions. In this work, we study online bilevel optimization (OBO), where the functions can be time-varying and the agent continuously updates the decisions with online streaming data. To deal with the function variations and the unavailability of the true hypergradients in OBO, we propose a single-loop online bilevel optimizer with window averaging (SOBOW), which updates the outer-level decision based on a window average of the most recent hypergradient estimates stored in memory. Compared to existing algorithms, SOBOW is computationally efficient and does not need to know previous functions. To handle the unique technical difficulties rooted in the single-loop update and function variations for OBO, we develop a novel analytical technique that disentangles the complex couplings between decision variables and carefully controls the hypergradient estimation error. We show that SOBOW can achieve a sublinear bilevel local regret under mild conditions. Extensive experiments across multiple domains corroborate the effectiveness of SOBOW. (A hedged sketch of the window-averaging update appears after this list.)
    Free, publicly-accessible full text available February 13, 2025
  2. Free, publicly-accessible full text available January 1, 2025
  3. The association between student motivation and learning, and changes in motivation across a course, were evaluated for students enrolled in one-semester foundation-level inorganic chemistry courses at multiple postsecondary institutions across the United States. The Academic Motivation Scale for Chemistry (AMS-Chemistry) and the Foundations of Inorganic Chemistry American Chemical Society Exam (i.e., a content knowledge measure) were used in this study. Evidence of validity, reliability, and longitudinal measurement invariance for data obtained from the AMS-Chemistry instrument with this population was found using methodologies appropriate for ordinal, non-parametric data. Positive and significant associations between intrinsic motivation measures and academic performance corroborate theoretical and empirical investigations; however, a lack of pre/post changes in motivation suggests that motivation may be less malleable in courses primarily populated by chemistry majors. Implications for inorganic chemistry instructors include paths for incorporating engaging pedagogies known to promote intrinsic motivation and methods for incorporating affect measures into assessment practices. Implications for researchers include a need for more work that disaggregates chemistry majors when evaluating relationships between affect and learning and when making pre/post comparisons. Additionally, this work provides an example of how to implement more appropriate methods for treating data in studies using Likert-type responses and nested data. (An illustrative sketch of this kind of rank-based analysis appears after this list.)
  4.
  5. Graph Neural Networks (GNNs) are based on repeated aggregations of information from nodes’ neighbors in a graph. However, because nodes share many neighbors, a naive implementation performs the same aggregations over and over, which represents significant computational overhead. Here we propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN representation technique that explicitly avoids this redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN training and inference. HAGs perform the same computations and give the same models/accuracy as traditional GNNs, but in a much shorter time due to optimized computations. To identify redundant computations, we introduce an accurate cost function and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN representation, increasing end-to-end training throughput by up to 2.8× and reducing aggregations and data transfers in GNN training by up to 6.3× and 5.6×, respectively, with only 0.1% memory overhead. Overall, our results represent an important advancement in speeding up and scaling up GNNs without any loss in model predictive performance. (A toy sketch of this aggregation-reuse idea appears after this list.)
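As a rough illustration of the window-averaging idea described in entry 1, the sketch below shows a single-loop update in which the inner variable takes one gradient step per round and the outer decision follows the average of the last few hypergradient estimates. It is only a sketch under assumptions, not the SOBOW algorithm itself: the loss handles `f_grad_x` and `g_grad_y`, the step sizes, and the crude hypergradient estimate (which ignores the second-order correction a real estimator would include) are hypothetical placeholders.

```python
import numpy as np
from collections import deque

def window_averaged_bilevel_step(f_grad_x, g_grad_y, x0, y0,
                                 rounds=100, window=5, alpha=0.01, beta=0.05):
    """Single-loop sketch: one inner gradient step and one window-averaged
    outer step per round. f_grad_x and g_grad_y are hypothetical handles to
    the time-varying outer/inner losses revealed by the online environment."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    recent = deque(maxlen=window)   # sliding window of hypergradient estimates
    for t in range(rounds):
        # Inner (lower-level) variable: a single gradient step, no inner loop.
        y = y - beta * g_grad_y(t, x, y)
        # Placeholder hypergradient estimate at the current pair (x, y).
        recent.append(f_grad_x(t, x, y))
        # Outer decision follows the average of the estimates kept in memory.
        x = x - alpha * np.mean(recent, axis=0)
    return x, y
```

The `deque(maxlen=window)` is what keeps the memory footprint bounded: once the buffer is full, appending a new estimate silently discards the oldest one, so the outer update always averages only the most recent estimates.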
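Entry 3 emphasizes rank-based, non-parametric treatment of Likert-type pre/post data. The snippet below is a generic, hypothetical illustration of that style of analysis on simulated numbers; it is not the study's actual pipeline, which additionally involved longitudinal measurement-invariance testing and nested data.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data (not from the study): 1-5 Likert-type motivation
# scores for 40 students at the start and end of a course, plus an exam score.
rng = np.random.default_rng(0)
pre = rng.integers(1, 6, size=40)
post = np.clip(pre + rng.integers(-1, 2, size=40), 1, 5)
exam = 50 + 8 * post + rng.normal(0, 5, size=40)

# Paired, rank-based pre/post comparison: the Wilcoxon signed-rank test avoids
# treating ordinal Likert responses as if they were interval-scaled.
w = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: statistic={w.statistic:.1f}, p={w.pvalue:.3f}")

# Rank-based association between end-of-course motivation and performance.
rho, p = stats.spearmanr(post, exam)
print(f"Spearman correlation: rho={rho:.2f}, p={p:.3f}")
```

For the nested (students-within-institutions) structure the abstract mentions, a multilevel model would be the natural extension; the point here is only that the comparisons stay on ranks rather than treating Likert responses as interval data.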
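The redundancy that entry 5 targets is easy to see in a toy example: if several nodes aggregate over the same pair of neighbors, a naive sum recomputes that pair's partial sum for each of them. The sketch below is plain Python/NumPy with one hard-coded shared pair; the actual HAG approach instead searches over many candidate intermediate aggregations using a cost model. It only shows how factoring out a shared partial sum removes repeated work while producing the same results.

```python
import numpy as np

# Toy graph: node -> neighbor list. Nodes 0, 1, and 2 all aggregate over the
# shared pair {3, 4}, so a naive sum recomputes h[3] + h[4] three times.
neighbors = {0: [3, 4, 5], 1: [3, 4], 2: [3, 4, 6],
             3: [0], 4: [0, 1], 5: [0], 6: [2]}
h = {v: np.random.rand(8) for v in neighbors}   # toy node features

def naive_aggregate(neighbors, h):
    """Sum-aggregation that recomputes every partial sum from scratch."""
    return {v: sum((h[u] for u in nbrs), np.zeros(8))
            for v, nbrs in neighbors.items()}

def reuse_aggregate(neighbors, h, shared=(3, 4)):
    """Hierarchical sketch with one intermediate result: the shared pair's sum
    is computed once and reused by every node whose neighbor list contains it."""
    partial = h[shared[0]] + h[shared[1]]
    out = {}
    for v, nbrs in neighbors.items():
        if set(shared) <= set(nbrs):
            rest = (h[u] for u in nbrs if u not in shared)
            out[v] = sum(rest, partial.copy())
        else:
            out[v] = sum((h[u] for u in nbrs), np.zeros(8))
    return out

# Both variants produce numerically identical aggregations (up to float rounding).
a, b = naive_aggregate(neighbors, h), reuse_aggregate(neighbors, h)
assert all(np.allclose(a[v], b[v]) for v in neighbors)
```

In this tiny graph the saving is only a few vector additions, but the abstract's reported reductions come from applying the same factoring systematically across a large computation graph.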