

Search for: All records

Creators/Authors contains: "Huang, Wen"



  1. Solving Rank Constrained Least Squares via Recursive Importance Sketching

    In statistics and machine learning, we often encounter rank-constrained least squares problems, in which we seek the best low-rank fit to the data; examples include identifying the factors that drive the data, filling in missing entries, and finding connections between different sets of data. This paper introduces a new method for solving this problem, the recursive importance sketching algorithm (RISRO), whose central idea is to break the problem into smaller, easier subproblems using a technique called "recursive importance sketching." The method is easy to implement, computationally efficient, and accurate. We prove that RISRO achieves a local quadratic-linear and quadratic rate of convergence under mild conditions. Simulation studies also demonstrate the superior performance of RISRO. (An illustrative code sketch of the rank-constrained least squares setup appears after this list.)

     
    Free, publicly-accessible full text available January 1, 2025
  2. Free, publicly-accessible full text available December 15, 2024
  3. In online recommendation, customers arrive sequentially and stochastically from an underlying distribution, and the online decision model recommends an item to each arriving individual according to some strategy. We study how to recommend an item at each step so as to maximize the expected reward while achieving user-side fairness, i.e., customers who share similar profiles receive similar rewards regardless of their sensitive attributes and of the items recommended to them. By incorporating causal inference into bandits and adopting soft intervention to model the arm-selection strategy, we first propose the d-separation based UCB algorithm (D-UCB), which exploits the d-separation set to reduce the amount of exploration needed to achieve low cumulative regret. Building on D-UCB, we then propose the fair causal bandit algorithm (F-UCB) to achieve counterfactual individual fairness. Both theoretical analysis and empirical evaluation demonstrate the effectiveness of our algorithms. (A minimal sketch of the underlying UCB exploration principle appears after this list.)
  4. Virtual reality (VR) has high potential to facilitate education. However, the design of many VR learning applications has been criticized for lacking the guidance of explicit and appropriate learning theories. To advance the use of VR in effective instruction, this study proposed a model that extends the cognitive-affective theory of learning with media (CATLM) to a VR learning context and evaluated the model using a structural equation modeling (SEM) approach. Undergraduate students (n = 77) learned about the solar system in a VR environment over three sessions. Overall, the results supported the core principles and assumptions of CATLM in a VR context (CATLM-VR). In addition, the CATLM-VR model illustrated how immersive VR may impact learning. Specifically, immersion had an overall positive impact on user experience and motivation. However, the impact of immersion on cognitive load was uncertain, and that uncertainty made the final learning outcomes less predictable. Enhancing students' motivation and cognitive engagement may increase learning achievement more directly than increasing the level of immersion, and may be more universally applicable in VR instruction.
  5. Personalized recommendation based on multi-armed bandit (MAB) algorithms has been shown to achieve high utility and efficiency, as it can dynamically adapt the recommendation strategy based on feedback. However, unfairness can arise in personalized recommendation. In this paper, we study how to achieve user-side fairness in personalized recommendation. We formulate fair personalized recommendation as a modified contextual bandit and focus on achieving fairness for the individuals being recommended items, as opposed to fairness among the items being recommended. We introduce and define a metric that captures fairness in terms of the rewards received by both the privileged and the protected groups. We develop a fair contextual bandit algorithm, Fair-LinUCB, that improves upon the traditional LinUCB algorithm to achieve group-level fairness across users. Our algorithm detects and monitors unfairness while learning to recommend personalized videos to students with high efficiency. We provide a theoretical regret analysis and show that our algorithm has a slightly higher regret bound than LinUCB. We conduct extensive experimental evaluations comparing our fair contextual bandit with LinUCB and show that our approach achieves group-level fairness while maintaining high utility. (A minimal sketch of the standard LinUCB baseline appears after this list.)
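For context on item 1: the abstract describes RISRO only at a high level, and the paper's actual algorithm is not reproduced here. Below is a minimal sketch of the rank-constrained least squares setup, solved with a generic projected-gradient (iterative hard thresholding) baseline rather than RISRO; all dimensions, data, step size, and iteration counts are illustrative assumptions.

```python
import numpy as np

# Rank-constrained least squares: min_{rank(X) <= r} sum_i (y_i - <A_i, X>)^2.
# This is NOT the RISRO algorithm from the paper; it is a generic projected-gradient
# (iterative hard thresholding) baseline that only illustrates the problem setup.
# Dimensions, data, step size, and iteration counts are illustrative assumptions.

rng = np.random.default_rng(0)
p1, p2, r, n = 20, 15, 2, 800
X_true = rng.normal(size=(p1, r)) @ rng.normal(size=(r, p2))   # rank-r ground truth
A = rng.normal(size=(n, p1, p2))                               # Gaussian measurement matrices
y = np.einsum('nij,ij->n', A, X_true)                          # y_i = <A_i, X_true>

def project_rank(M, rank):
    """Best rank-`rank` approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

X = np.zeros((p1, p2))
for _ in range(200):
    resid = y - np.einsum('nij,ij->n', A, X)     # residuals y_i - <A_i, X>
    grad = -np.einsum('n,nij->ij', resid, A)     # gradient of the loss (up to a factor of 2)
    X = project_rank(X - grad / n, r)            # gradient step + rank-r projection

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

Each iteration takes a gradient step on the squared loss and projects back onto the rank-r set with a truncated SVD; RISRO, per the abstract, instead reduces the problem to smaller, easier subproblems through recursive importance sketching.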
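For context on item 3: D-UCB and F-UCB build on upper-confidence-bound (UCB) exploration, adding causal d-separation sets and fairness constraints that are not reproduced here. The following is a minimal UCB1-style sketch of that exploration principle only; the arm means and horizon are illustrative assumptions.

```python
import numpy as np

# Minimal UCB1-style bandit sketch. This is NOT the D-UCB / F-UCB algorithms from
# the abstract (which add causal d-separation sets and fairness constraints); it only
# illustrates the upper-confidence-bound exploration principle they build on.

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.7])   # illustrative Bernoulli arm means (assumption)
T = 5000                                 # horizon (assumption)
K = len(true_means)

counts = np.zeros(K)
sums = np.zeros(K)
regret = 0.0
for t in range(1, T + 1):
    if t <= K:                                       # pull each arm once to initialize
        arm = t - 1
    else:
        means = sums / counts
        bonus = np.sqrt(2.0 * np.log(t) / counts)    # confidence radius
        arm = int(np.argmax(means + bonus))
    reward = float(rng.random() < true_means[arm])   # Bernoulli reward
    counts[arm] += 1
    sums[arm] += reward
    regret += true_means.max() - true_means[arm]     # pseudo-regret vs best arm

print(f"cumulative pseudo-regret after {T} rounds: {regret:.1f}")
```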
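For context on item 5: Fair-LinUCB modifies the standard disjoint LinUCB algorithm (Li et al., 2010). The sketch below shows only that LinUCB baseline, without the fairness-aware adjustment described in the abstract; the context distribution, reward model, and exploration parameter alpha are illustrative assumptions.

```python
import numpy as np

# Standard disjoint LinUCB (Li et al., 2010), the baseline that Fair-LinUCB modifies.
# The fairness-aware adjustment from the abstract is NOT reproduced here; the contexts,
# reward model, and exploration parameter alpha are illustrative assumptions.

rng = np.random.default_rng(2)
d, K, T, alpha = 5, 4, 3000, 1.0
theta_true = rng.normal(size=(K, d))           # hidden per-arm reward parameters

A = np.stack([np.eye(d) for _ in range(K)])    # per-arm ridge design matrices
b = np.zeros((K, d))                           # per-arm reward-weighted context sums
regret = 0.0

for t in range(T):
    x = rng.normal(size=d)                     # context (user features) for this round
    scores = np.empty(K)
    for a in range(K):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]               # ridge estimate of arm a's parameters
        scores[a] = theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x)   # UCB score
    arm = int(np.argmax(scores))
    reward = theta_true[arm] @ x + 0.1 * rng.normal()                # noisy linear reward
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
    regret += np.max(theta_true @ x) - theta_true[arm] @ x           # pseudo-regret vs oracle

print(f"cumulative pseudo-regret after {T} rounds: {regret:.1f}")
```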