Search results: All records where Creators/Authors contains "Blanchet, Jose H."


  1. The utility of a match in a two-sided matching market often depends on a variety of characteristics of the two agents (e.g., a buyer and a seller) to be matched. In contrast to much of the matching market literature, this utility may best be modeled by a general matching utility distribution. In “Asymptotically Optimal Control of a Centralized Dynamic Matching Market with General Utilities,” Blanchet, Reiman, Shah, Wein, and Wu consider general matching utilities in the context of a centralized dynamic matching market. To analyze this difficult problem, they combine two asymptotic techniques: extreme value theory (and regularly varying functions) and fluid asymptotics of queueing systems. A key trade-off in this problem is market thickness: do we myopically make the best match that is currently available, or do we allow the market to thicken in the hope of making a better match in the future while avoiding agent abandonment? Their asymptotic analysis derives quite explicit results for this problem and reveals how the optimal amount of market thickness increases with the right tail of the matching utility distribution and the amount of market imbalance. Their use of regularly varying functions also allows them to consider correlated matching utilities (e.g., buyers have positively correlated utilities with a given seller), which are ubiquitous in matching markets.

     
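A minimal simulation of the market-thickness trade-off described in this abstract (an illustrative sketch only, not the paper's model or policy; the Pareto utility distribution, arrival pattern, abandonment probability, and threshold rule are assumptions chosen for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(threshold, horizon=5000, abandon_prob=0.05, alpha=2.5):
    """Toy two-sided market: one seller and one buyer arrive each period.
    The buyer is matched to the best waiting seller only once at least
    `threshold` sellers are waiting; pairwise utilities are Pareto(alpha),
    a regularly varying (heavy-tailed) distribution."""
    n_waiting = 0
    total_utility, matches = 0.0, 0
    for _ in range(horizon):
        n_waiting += 1                               # a new seller arrives
        if n_waiting >= threshold:                   # market is thick enough
            utilities = rng.pareto(alpha, size=n_waiting) + 1.0
            total_utility += utilities.max()         # best available match
            matches += 1
            n_waiting -= 1
        # each waiting seller abandons independently with small probability
        n_waiting -= rng.binomial(n_waiting, abandon_prob)
    return matches, total_utility / max(matches, 1)

for k in (1, 5, 20, 50):
    m, avg = simulate(k)
    print(f"threshold={k:3d}  matches={m:5d}  avg matched utility={avg:.2f}")
```

Raising the threshold trades fewer completed matches (more abandonment) for a better best-available utility draw, which is the tension the paper's asymptotic analysis quantifies.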
  2. Distributionally robust optimization (DRO) has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e., if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provides a unified viewpoint on a class of existing robust methods but also leads to new regularization tools. To realize these novel tools, we propose provably efficient computational algorithms. As a byproduct, the strong duality theorem proved in this paper can potentially be applied to other problems of independent interest.
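A hedged numerical illustration of the regularization-robustness connection above (not the paper's formulation; the data sizes, penalty level, and the Gaussian feature perturbation used as a crude stand-in for an optimal-transport adversary are assumptions): the closed-form Tikhonov (ridge) estimator is compared with ordinary least squares on a test set whose features have been perturbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and noise levels (assumptions, not from the paper)
n, d, sigma = 50, 30, 1.0
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + sigma * rng.normal(size=n)

def tikhonov(X, y, lam):
    """Closed-form Tikhonov (ridge) estimator: argmin ||y - Xb||^2 + lam*||b||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_ols = tikhonov(X, y, lam=0.0)     # ordinary least squares
beta_tik = tikhonov(X, y, lam=10.0)    # regularized fit

# Evaluate on a test set whose features are perturbed -- a crude stand-in
# for an adversary choosing a nearby distribution around the data
X_test = rng.normal(size=(2000, d))
y_test = X_test @ beta_true + sigma * rng.normal(size=2000)
X_pert = X_test + 0.5 * rng.normal(size=X_test.shape)
for name, b in [("OLS", beta_ols), ("Tikhonov", beta_tik)]:
    mse = np.mean((y_test - X_pert @ b) ** 2)
    print(f"{name:9s} test MSE under feature perturbation: {mse:.2f}")
```

Because independent feature perturbations inflate the expected loss roughly in proportion to the squared norm of the coefficient vector, shrinking that norm is what buys robustness in this toy setting.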
  3. We study the problem of transfer learning, observing that previous efforts to understand its information-theoretic limits do not fully exploit the geometric structure of the source and target domains. In contrast, our study first illustrates the benefits of incorporating a natural geometric structure within a linear regression model, which corresponds to the generalized eigenvalue problem formed by the Gram matrices of both domains. We next establish a finite-sample minimax lower bound, propose a refined model interpolation estimator that enjoys a matching upper bound, and then extend our framework to multiple source domains and generalized linear models. Surprisingly, as long as information is available on the distance between the source and target parameters, negative transfer does not occur. Simulation studies show that our proposed interpolation estimator outperforms state-of-the-art transfer learning methods in both moderate- and high-dimensional settings.
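A simplified sketch of transfer by pulling the target fit toward the source estimate (this is not the paper's interpolation estimator, which exploits the generalized eigenvalue structure of the two Gram matrices; the shrink-toward-source penalty, sample sizes, and source-target parameter distance below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 20
beta_src = rng.normal(size=d)
beta_tgt = beta_src + 0.3 * rng.normal(size=d)   # target parameter near source

def make_data(beta, n, sigma=1.0):
    X = rng.normal(size=(n, d))
    return X, X @ beta + sigma * rng.normal(size=n)

X_s, y_s = make_data(beta_src, n=2000)   # abundant source data
X_t, y_t = make_data(beta_tgt, n=40)     # scarce target data

beta_hat_src = np.linalg.lstsq(X_s, y_s, rcond=None)[0]

def interpolate(X_t, y_t, beta_anchor, lam):
    """Shrink the target least-squares fit toward the source estimate:
    argmin_b ||y_t - X_t b||^2 + lam * ||b - beta_anchor||^2."""
    A = X_t.T @ X_t + lam * np.eye(d)
    return np.linalg.solve(A, X_t.T @ y_t + lam * beta_anchor)

for lam in (0.0, 5.0, 50.0):
    b = interpolate(X_t, y_t, beta_hat_src, lam)
    print(f"lam={lam:5.1f}  ||b_hat - beta_target|| = {np.linalg.norm(b - beta_tgt):.3f}")
```

Setting lam = 0 recovers the target-only least-squares fit, while a large lam defers to the source estimate; intermediate values borrow source information in proportion to how close the two parameter vectors are believed to be, which is the role that knowledge of the source-target distance plays in the abstract above.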
  4. Chaudhuri, Kamalika ; Jegelka, Stefanie ; Song, Le ; Szepesvari, Csaba ; Niu, Gang ; Sabato, Sivan (Ed.)
    Reinforcement learning (RL) has demonstrated remarkable achievements in simulated environments. However, carrying this success over to real environments requires the important attribute of robustness, which existing RL algorithms often lack, as they assume that the future deployment environment is the same as the training environment (i.e., the simulator) in which the policy is learned. This assumption often does not hold due to the discrepancy between the simulator and the real environment and, as a result, renders the learned policy fragile when deployed. In this paper, we propose a novel distributionally robust Q-learning algorithm that learns the best policy under the worst distributional perturbation of the environment. Our algorithm first transforms the infinite-dimensional learning problem (since the environment MDP perturbation lies in an infinite-dimensional space) into a finite-dimensional dual problem and subsequently uses a multi-level Monte-Carlo scheme to approximate the dual value using samples from the simulator. Despite the complexity, we show that the resulting distributionally robust Q-learning algorithm asymptotically converges to the optimal worst-case policy, thus making it robust to future environment changes. Simulation results further demonstrate its strong empirical robustness.
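A simplified, model-based sketch of the robust Bellman backup behind distributionally robust Q-learning (not the paper's algorithm: this sketch uses a KL uncertainty set with the standard one-dimensional dual of the worst-case expectation, the exact transition kernel instead of simulator samples, and plain value iteration rather than the multi-level Monte-Carlo Q-learning scheme; the toy MDP itself is an assumption):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Tiny illustrative tabular MDP (transition kernel and rewards are made up)
n_states, n_actions, gamma, delta = 4, 2, 0.9, 0.1   # delta = KL-ball radius
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0, 1, size=(n_states, n_actions))

def robust_expectation(values, probs, delta):
    """Worst-case (smallest) expectation of `values` over distributions in a
    KL ball of radius delta around `probs`, via the standard scalar dual
    sup_{beta>0} { -beta*log E_p[exp(-values/beta)] - beta*delta }."""
    def neg_dual(beta):
        log_mgf = logsumexp(-values / beta, b=probs)   # log E_p[exp(-V/beta)]
        return -(-beta * log_mgf - beta * delta)
    res = minimize_scalar(neg_dual, bounds=(1e-3, 1e3), method="bounded")
    return -res.fun

# Robust value iteration: each backup uses the worst-case next-state value
Q = np.zeros((n_states, n_actions))
for _ in range(100):
    V = Q.max(axis=1)
    Q = np.array([[R[s, a] + gamma * robust_expectation(V, P[s, a], delta)
                   for a in range(n_actions)]
                  for s in range(n_states)])

print("Robust Q-values:\n", np.round(Q, 3))
```

The scalar dual replaces the infinite-dimensional search over perturbed transition kernels with a one-dimensional optimization per backup, mirroring in simplified form the finite-dimensional dual reduction that the abstract describes (there estimated from simulator samples via multi-level Monte Carlo).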