
We show that any memory-constrained, first-order algorithm which minimizes d-dimensional, 1-Lipschitz convex functions over the unit ball to 1/poly(d) accuracy using at most d^(1.25 - delta) bits of memory must make at least d^(1 + 4 delta / 3) first-order queries (for any constant delta in (0, 1/4)). Consequently, the performance of such memory-constrained algorithms is a polynomial factor worse than the optimal O(d polylog d) query bound for this problem obtained by cutting plane methods that use more than d^2 memory. This resolves one of the open problems in the COLT 2019 open problem publication of Woodworth and Srebro.

We provide new gradient-based methods for efficiently solving a broad class of ill-conditioned optimization problems. We consider the problem of minimizing a function f : R^d -> R which is implicitly decomposable as the sum of m unknown non-interacting smooth, strongly convex functions, and provide a method which solves this problem with a number of gradient evaluations that scales (up to logarithmic factors) as the product of the square roots of the condition numbers of the components. This complexity bound (which we prove is nearly optimal) can improve almost exponentially on that of accelerated gradient methods, which grows as the square root of the condition number of f. Additionally, we provide efficient methods for solving stochastic, quadratic variants of this multiscale optimization problem. Rather than learn the decomposition of f (which would be prohibitively expensive), our methods apply a clean recursive "Big-Step-Little-Step" interleaving of standard methods. The resulting algorithms use Õ(dm) space, are numerically stable, and open the door to a more fine-grained understanding of the complexity of convex optimization beyond condition number.
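The "Big-Step-Little-Step" interleaving can be caricatured on a two-coordinate quadratic with wildly different curvatures. This is only a toy sketch of the idea (the paper's actual method handles m unknown components recursively with accelerated sub-solvers); all names and constants below are illustrative:

```python
import numpy as np

# Toy objective f(x) = 0.5 * (a * x1^2 + b * x2^2) with curvatures a >> b.
# Powers of two keep the floating-point arithmetic exact.
a, b = 2.0 ** 14, 2.0 ** -8

def grad(x):
    return np.array([a * x[0], b * x[1]])

x = np.array([1.0, 1.0])
x = x - (1.0 / a) * grad(x)  # "little step": safely zeroes the stiff coordinate
x = x - (1.0 / b) * grad(x)  # "big step": would blow up the stiff coordinate,
                             # but it is already zero, so it zeroes the flat one
print(x)  # → [0. 0.]
```

A single little/big pass converges exactly here because the quadratic is diagonal; plain gradient descent restricted to the stable step size 1/a would need on the order of a/b iterations for the flat coordinate.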

Understanding cellular stress response pathways is challenging because of the complexity of regulatory mechanisms and response dynamics, which can vary with both time and the type of stress. We developed a reverse genetic method called ReporterSeq to comprehensively identify genes regulating a stress-induced transcription factor under multiple conditions in a time-resolved manner. ReporterSeq links RNA-encoded barcode levels to pathway-specific output under genetic perturbations, allowing pooled pathway activity measurements via DNA sequencing alone, without cell enrichment or single-cell isolation. We used ReporterSeq to identify regulators of the heat shock response (HSR), a conserved, poorly understood transcriptional program that protects cells from proteotoxicity and is misregulated in disease. Genome-wide HSR regulation in budding yeast was assessed across 15 stress conditions, uncovering novel stress-specific, time-specific, and constitutive regulators. ReporterSeq can assess the genetic regulators of any transcriptional pathway with the scale of pooled genetic screens and the precision of pathway-specific readouts.

We consider an online binary prediction setting where a forecaster observes a sequence of T bits one by one. Before each bit is revealed, the forecaster predicts the probability that the bit is 1. The forecaster is called well-calibrated if for each p in [0,1], among the n_p bits for which the forecaster predicts probability p, the actual number of ones, m_p, is indeed equal to p*n_p. The calibration error, defined as \sum_p |m_p - p*n_p|, quantifies the extent to which the forecaster deviates from being well-calibrated. It has long been known that an O(T^(2/3)) calibration error is achievable even when the bits are chosen adversarially, and possibly based on the previous predictions. However, little is known on the lower bound side, except an Omega(sqrt(T)) bound that follows from the trivial example of independent fair coin flips. In this paper, we prove an Omega(T^(0.528)) bound on the calibration error, which is the first bound above the trivial sqrt(T) lower bound for this setting. The technical contributions of our work include two lower bound techniques, early stopping and sidestepping, which circumvent the obstacles that have previously hindered strong calibration lower bounds. We also propose an abstraction of the prediction setting, termed …
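As a concrete reading of the definition above, here is a minimal sketch (the function name is ours, not the paper's) that computes \sum_p |m_p - p*n_p| from a transcript of predictions and observed bits:

```python
from collections import defaultdict

def calibration_error(predictions, bits):
    """Compute sum over distinct predicted probabilities p of |m_p - p * n_p|.

    predictions: forecast probabilities p_t in [0, 1], one per round
    bits: observed outcomes b_t in {0, 1}, one per round
    """
    ones = defaultdict(int)    # m_p: number of ones among rounds predicted p
    counts = defaultdict(int)  # n_p: number of rounds predicted p
    for p, b in zip(predictions, bits):
        counts[p] += 1
        ones[p] += b
    return sum(abs(ones[p] - p * counts[p]) for p in counts)

# Forecaster predicts 0.5 every round; 3 ones among 4 bits gives |3 - 2| = 1.
print(calibration_error([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1]))  # → 1.0
```

Note that a constant forecast of the empirical frequency of ones is perfectly calibrated in hindsight; the difficulty in the paper's setting is that the bits are chosen adversarially and adaptively.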

The famous "laurel/yanny" phenomenon references an audio clip that elicits dramatically different responses from different listeners. For the original clip, roughly half the population hears the word "laurel," while the other half hears "yanny." How common are such "polyperceivable" audio clips? In this paper we apply ML techniques to study the prevalence of polyperceivability in spoken language. We devise a metric that correlates with polyperceivability of audio clips, use it to efficiently find new "laurel/yanny"-type examples, and validate these results with human experiments. Our results suggest that polyperceivable examples are surprisingly prevalent, existing for more than 2% of English words.

We study probabilistic prediction games when the underlying model is misspecified, investigating the consequences of predicting using an incorrect parametric model. We show that for a broad class of loss functions and parametric families of distributions, the regret of playing a ``proper'' predictor (one from the putative model class) relative to the best predictor in the same model class has a lower bound scaling at least as sqrt(c n), where c is a measure of the misspecification of the model relative to the true distribution in terms of total variation distance. In contrast, using an aggregation-based (improper) learner, one can obtain regret d log n for any underlying generating distribution, where d is the dimension of the parameter; we exhibit instances in which this is unimprovable even over the family of all learners that may play distributions in the convex hull of the parametric family. These results suggest that simple strategies for aggregating multiple learners together should be more robust, and several experiments conform to this hypothesis.
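A standard instance of such an aggregation-based improper learner is a Bayesian mixture (exponential weights) over a finite grid of Bernoulli experts under log loss. The sketch below is ours and assumes this simple setting; the mixture forecast generally lies outside the expert class (which is what makes it improper), and its cumulative log loss exceeds that of the best single expert by at most log K for K experts, rather than growing like sqrt(n):

```python
import numpy as np

thetas = np.linspace(0.05, 0.95, 19)  # candidate Bernoulli parameters (experts)
log_w = np.zeros_like(thetas)         # uniform prior over experts, in log space
bits = [1, 1, 0, 1, 1, 1, 0, 1]       # observed binary sequence

mixture_loss = 0.0
for b in bits:
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    p = float(w @ thetas)             # posterior-mixture prediction P(bit = 1)
    mixture_loss += -np.log(p if b == 1 else 1.0 - p)
    # Bayes / exponential-weights update: reweight by each expert's likelihood.
    log_w += np.log(np.where(b == 1, thetas, 1.0 - thetas))

# Log loss of the best single Bernoulli expert in hindsight.
n, k = len(bits), sum(bits)
best_loss = min(-(k * np.log(t) + (n - k) * np.log(1.0 - t)) for t in thetas)
print(mixture_loss - best_loss)  # regret vs. best expert; at most log(19) ~ 2.94
```

The log K bound follows because the mixture's predictive probability of the whole sequence is at least 1/K times the best expert's; by contrast, the lower bound in the abstract shows that any proper predictor, forced to play a single grid point each round, can be made to suffer sqrt(c n) regret under misspecification.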