We devise an online learning algorithm -- titled Switching via Monotone Adapted Regret Traces (SMART) -- that adapts to the data and achieves regret that is instance optimal, i.e., simultaneously competitive on every input sequence compared to both the performance of the follow-the-leader (FTL) policy and the worst-case guarantee of any other given policy. We show that the regret of the SMART policy on any input sequence is within a multiplicative factor e/(e-1) ≈ 1.58 of the smaller of: 1) the regret obtained by FTL on the sequence, and 2) the upper bound on regret guaranteed by the given worst-case policy. This implies a strictly stronger guarantee than typical "best-of-both-worlds" bounds, as the guarantee holds for every input sequence regardless of how it is generated. SMART is simple to implement as it begins by playing FTL and switches at most once during the time horizon to the worst-case algorithm. Our approach and results follow from an operational reduction of instance-optimal online learning to competitive analysis for the ski-rental problem. We complement our competitive-ratio upper bounds with a fundamental lower bound showing that over all input sequences, no algorithm can get better than a 1.43-fraction of the minimum regret achieved by FTL and the minimax-optimal policy. We also present a modification of SMART that combines FTL with a "small-loss" algorithm to achieve instance optimality between the regret of FTL and the small-loss regret bound.
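The single-switch structure described above can be sketched in a few lines. This is an illustrative toy, not the authors' exact rule: it follows FTL while FTL's realized regret stays below a threshold tied to the worst-case bound B (the e/(e-1) factor in the abstract comes from exactly this ski-rental-style trade-off), then switches once and permanently to a standard worst-case algorithm. Here Hedge (multiplicative weights) stands in for the generic worst-case policy, and playing the highest-weight action is a deterministic stand-in for Hedge's randomized sampling; `eta`, `B`, and the threshold constant are assumed parameters.

```python
import math

def smart_style(losses, n, eta, B):
    """Illustrative single-switch combiner: play FTL until its realized
    regret reaches a threshold tied to the worst-case bound B, then switch
    (at most once) to Hedge for the rest of the horizon."""
    cum = [0.0] * n                    # cumulative loss of each action
    w = [1.0] * n                      # Hedge weights, maintained throughout
    my_loss, switched = 0.0, False
    threshold = (1 - 1 / math.e) * B   # ski-rental-style switching point (illustrative)
    for loss_vec in losses:
        if not switched:
            a = min(range(n), key=lambda i: cum[i])   # FTL: play the empirical leader
        else:
            a = max(range(n), key=lambda i: w[i])     # deterministic stand-in for Hedge
        my_loss += loss_vec[a]
        for i in range(n):
            cum[i] += loss_vec[i]
            w[i] *= math.exp(-eta * loss_vec[i])
        if not switched and my_loss - min(cum) >= threshold:
            switched = True            # the one and only switch
    return my_loss, switched
```

On a benign sequence where one action is always best, the policy never leaves FTL and incurs zero loss; on an adversarial alternating sequence, FTL's regret grows linearly, the threshold is crossed, and the policy hands off to Hedge.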
-
We initiate the study of smoothed analysis for the sequential probability assignment problem with contexts. We study information-theoretically optimal minimax rates as well as a framework for algorithmic reduction involving the maximum likelihood estimator (MLE) oracle. Our approach establishes a general-purpose reduction from minimax rates for sequential probability assignment for smoothed adversaries to minimax rates for transductive learning. This leads to optimal (logarithmic) fast rates for parametric classes and classes with finite VC dimension. On the algorithmic front, we develop an algorithm that efficiently taps into the MLE oracle for general classes of functions. We show that under general conditions this algorithmic approach yields sublinear regret.
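The basic loop of sequential probability assignment with an MLE oracle can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function class `F`, the add-toward-uniform smoothing, and the toy contexts are all assumptions made here for concreteness. At each round the learner refits via the oracle on past (context, outcome) pairs, outputs a probability for the next outcome, and is scored under log loss.

```python
import math

def mle_oracle(F, history):
    """Return the f in F minimizing negative log-likelihood on past data."""
    eps = 1e-9
    def nll(f):
        return sum(-math.log(max(eps, f(x) if y == 1 else 1 - f(x)))
                   for x, y in history)
    return min(F, key=nll)

def sequential_assignment(F, stream, smoothing=0.1):
    """Predict each outcome's probability via the MLE oracle; return total log loss."""
    history, total_log_loss = [], 0.0
    for x, y in stream:
        f = mle_oracle(F, history) if history else F[0]
        p = (1 - smoothing) * f(x) + smoothing * 0.5   # mix toward uniform for stability
        total_log_loss += -math.log(p if y == 1 else 1 - p)
        history.append((x, y))
    return total_log_loss
```

With a class containing a predictor matched to the data, the oracle locks onto it after a few rounds and the cumulative log loss stays well below that of the uniform (p = 1/2) predictor.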
-
From smart work scheduling to optimal drug timing, there is enormous potential in translating circadian rhythms research results for precision medicine in the real world. However, the pursuit of such effort requires the ability to accurately estimate circadian phase outside of the laboratory. One approach is to predict circadian phase noninvasively using light and activity measurements and mathematical models of the human circadian clock. Most mathematical models take light as an input and predict the effect of light on the human circadian system. However, consumer-grade wearables that are already owned by millions of individuals record activity instead of light, which prompts an evaluation of the accuracy of predicting circadian phase using motion alone. Here, we evaluate the ability of four different models of the human circadian clock to estimate circadian phase from data acquired by wrist-worn wearable devices. Multiple datasets across populations with varying degrees of circadian disruption were used for generalizability. Though the models we test yield similar predictions, analysis of data from 27 shift workers with high levels of circadian disruption shows that activity, which is recorded in almost every wearable device, is better at predicting circadian phase than measured light levels from wrist-worn devices when processed by mathematical models. In those living under normal living conditions, circadian phase can typically be predicted to within 1 h, even with data from a widely available commercial device (the Apple Watch). These results show that circadian phase can be predicted using existing data passively collected by millions of individuals with comparable accuracy to much more invasive and expensive methods.
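The idea of feeding wearable activity into a clock model to track phase can be illustrated with a toy phase oscillator. This sketch is not one of the four models evaluated in the study: the intrinsic period `tau`, coupling strength `K`, sinusoidal sensitivity term, and activity normalization are all assumptions chosen for illustration. Activity counts act as the entraining drive, nudging an oscillator that otherwise free-runs at its intrinsic period.

```python
import math

def estimate_phase(activity, dt=1.0, tau=24.2, K=0.1):
    """Toy phase-oscillator sketch: integrate phase forward with Euler steps.
    activity: list of activity counts, one per dt hours.
    Returns final phase in radians (multiply by 24 / (2*pi) for clock hours)."""
    phi = 0.0
    for a in activity:
        drive = min(1.0, a / 100.0)                     # normalize counts to [0, 1]
        dphi = 2 * math.pi / tau + K * drive * math.sin(-phi)  # intrinsic rate + activity nudge
        phi = (phi + dphi * dt) % (2 * math.pi)
    return phi
```

With zero activity the oscillator simply free-runs, so 24 hours of input advances phase by 2*pi*(24/24.2) radians; with rhythmic activity, the nudging term pulls the modeled clock toward the rest-activity cycle, which is the mechanism the abstract's model-based phase predictions rely on.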