-
Promoting External and Internal Equities under Ex-Ante/Ex-Post Metrics in Online Resource Allocation (free, publicly accessible full text available July 21, 2025)
-
Bipartite-matching markets pair agents on one side of a market with agents, items, or contracts on the opposing side. Prior work addresses online bipartite-matching markets, where agents arrive over time and are dynamically matched to a known set of disposable resources. In this article, we propose a new model, Online Matching with (offline) Reusable Resources under Known Adversarial Distributions (OM-RR-KAD), in which resources on the offline side are reusable instead of disposable; that is, once matched, a resource becomes available again at some point in the future. We show that our model is tractable by presenting an LP-based non-adaptive algorithm that achieves an online competitive ratio of 1/2 - ε for any given constant ε > 0. We also show that no adaptive algorithm can achieve a ratio of 1/2 + o(1) based on the same benchmark LP. Through a data-driven analysis of a massive openly available dataset, we show that our model is robust enough to capture applications such as taxi dispatching and ride-sharing systems. We also present heuristics that perform well in practice.
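To make the reusable-resource setting concrete, here is a minimal simulation sketch (names are hypothetical; the greedy matching rule is used purely for illustration and is not the paper's LP-based non-adaptive algorithm): once a resource is matched, it stays unavailable for an occupation time and then returns to the pool for future arrivals.

```python
import heapq
import random

def simulate_reusable_matching(arrivals, resources, occupy_time, seed=0):
    """Toy simulation of online matching with reusable offline resources.

    arrivals: list of (time, set of compatible resource ids), sorted by time
    resources: list of resource ids available at time 0
    occupy_time: function(resource_id) -> how long the resource stays busy after a match
    The greedy rule below is illustrative only; the paper analyzes an LP-based policy.
    """
    rng = random.Random(seed)
    available = set(resources)
    busy = []          # min-heap of (release_time, resource_id)
    matches = []

    for t, compatible in arrivals:
        # Resources whose occupation interval has ended become available again.
        while busy and busy[0][0] <= t:
            _, r = heapq.heappop(busy)
            available.add(r)
        # Match the arrival to a random available compatible resource, if any.
        candidates = list(compatible & available)
        if candidates:
            r = rng.choice(candidates)
            available.remove(r)
            heapq.heappush(busy, (t + occupy_time(r), r))
            matches.append((t, r))
    return matches

# Example: two taxis, each busy for 2 time units after a dispatch.
arrivals = [(0, {"taxi_1", "taxi_2"}), (1, {"taxi_1"}), (3, {"taxi_1", "taxi_2"})]
print(simulate_reusable_matching(arrivals, ["taxi_1", "taxi_2"], lambda r: 2))
```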
-
This paper studies how neural network architecture affects the speed of training. We introduce a simple concept called gradient confusion to help formally analyze this. When gradient confusion is high, stochastic gradients produced by different data samples may be negatively correlated, slowing down convergence; but when gradient confusion is low, data samples interact harmoniously, and training proceeds quickly. Through theoretical and experimental results, we demonstrate how neural network architecture affects gradient confusion, and thus the efficiency of training. Our results show that, for popular initialization techniques, increasing the width of neural networks leads to lower gradient confusion and thus faster model training. On the other hand, increasing the depth of neural networks has the opposite effect. Our results indicate that alternative initialization techniques, or networks that use both batch normalization and skip connections, help reduce the training burden of very deep networks.
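As a rough illustration of how gradient confusion could be measured empirically, the sketch below computes per-sample gradients for a small batch and reports the most negative pairwise inner product. The function name, model, and loss are assumptions made for the example; this is a proxy inspired by the abstract's description, not the paper's code.

```python
import torch

def gradient_confusion(model, loss_fn, xs, ys):
    """Proxy for gradient confusion: the most negative inner product between
    per-sample gradients (0.0 if all pairs are non-negatively correlated).
    Illustrative sketch only, not the paper's implementation."""
    grads = []
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        g = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    confusion = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            confusion = max(confusion, float(-torch.dot(grads[i], grads[j])))
    return confusion  # larger values mean more negatively correlated sample gradients

# Example with a small fully connected network and random data (illustrative only).
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
xs, ys = torch.randn(8, 10), torch.randn(8, 1)
print(gradient_confusion(model, torch.nn.MSELoss(), xs, ys))
```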