Search for: All records

Award ID contains: 2006737


  1. Mining pools decrease the variance in the income of cryptocurrency miners (compared to solo mining) by distributing rewards to participating miners according to the shares submitted over a period of time. The most common definition of a “share” is a proof-of-work for a difficulty level lower than that required for block authorization—for example, a hash with at least 65 leading zeroes (in binary) rather than at least 75. The first contribution of this paper is to investigate more sophisticated approaches to pool reward distribution that use multiple classes of shares—for example, corresponding to differing numbers of leading zeroes—and assign different rewards to shares from different classes. What’s the best way to use such finer-grained information, and how much can it help? We prove that the answer is not at all: using the additional information can only increase the variance in rewards experienced by every miner. Our second contribution is to identify variance-optimal reward-sharing schemes. Here, we first prove that pay-per-share rewards simultaneously minimize the variance of all miners over all reward-sharing schemes with long-run rewards proportional to miners’ hash rates. We then show that, if we impose natural restrictions including a no-deficit condition on reward-sharing schemes, then the pay-per-last-N-shares method is optimal. 
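    A minimal Monte Carlo sketch of the variance comparison discussed above. All parameters (block and share difficulties, the miner's hash-rate fraction, the per-period hash budget) are illustrative assumptions, not values from the paper: with the same expected reward, pay-per-share pays a fixed amount per submitted share and so exhibits far lower variance than collecting full block rewards solo.

    ```python
    import random

    # Illustrative parameters (assumed, not from the paper).
    BLOCK_REWARD = 1.0
    P_BLOCK = 1e-4        # per-hash probability of meeting the block target
    P_SHARE = 1e-2        # per-hash probability of meeting the (easier) share target
    HASHES_PER_PERIOD = 10_000
    MY_FRACTION = 0.1     # miner's share of the pool's total hash rate

    def solo_reward(rng):
        # Solo mining: collect the full block reward only when you find a block.
        my_hashes = int(MY_FRACTION * HASHES_PER_PERIOD)
        blocks = sum(rng.random() < P_BLOCK for _ in range(my_hashes))
        return blocks * BLOCK_REWARD

    def pps_reward(rng):
        # Pay-per-share: a fixed payment per share, equal to the share's
        # expected contribution to a block, whether or not a block is found.
        my_hashes = int(MY_FRACTION * HASHES_PER_PERIOD)
        shares = sum(rng.random() < P_SHARE for _ in range(my_hashes))
        return shares * BLOCK_REWARD * (P_BLOCK / P_SHARE)

    rng = random.Random(0)
    for name, f in [("solo", solo_reward), ("pps", pps_reward)]:
        xs = [f(rng) for _ in range(2000)]
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        print(f"{name}: mean={mean:.4f} variance={var:.6f}")
    ```

    Both schemes have the same expected per-period reward; the simulation should show the PPS variance roughly two orders of magnitude below solo mining at these parameters.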
  2. Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms. The primary quantity that characterizes learnability in these settings is the Littlestone dimension of the class of hypotheses [Alon et al., 2019, Ben-David et al., 2009]. This characterization is often interpreted as an impossibility result because classes such as linear thresholds and neural networks have infinite Littlestone dimension. In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature. We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries. In particular, we obtain regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations. 
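    A small demo of the smoothed-adversary effect (my construction, not the paper's algorithm): one-dimensional thresholds have infinite Littlestone dimension, yet once each adversarially chosen point is perturbed by uniform noise of width sigma, a finite grid of thresholds with spacing sigma suffices, and plain multiplicative weights over that grid already achieves low average loss. All parameters below are assumptions for the demo.

    ```python
    import math
    import random

    SIGMA = 0.05                # magnitude of nature's perturbation (assumed)
    T = 2000                    # number of online rounds
    GRID_N = int(round(1 / SIGMA))
    grid = [i / GRID_N for i in range(GRID_N + 1)]   # candidate thresholds
    weights = [1.0] * len(grid)
    eta = math.sqrt(8 * math.log(len(grid)) / T)     # standard Hedge step size

    rng = random.Random(1)
    true_threshold = 0.62       # labels: y = 1 iff x >= true_threshold
    total_loss = 0.0
    for t in range(T):
        base = rng.random()                                        # adversary's point
        x = min(1.0, max(0.0, base + rng.uniform(-SIGMA, SIGMA)))  # smoothing by nature
        y = 1 if x >= true_threshold else 0
        total_w = sum(weights)
        # Randomized prediction: weighted fraction of grid thresholds voting 1.
        p1 = sum(w for g, w in zip(grid, weights) if x >= g) / total_w
        total_loss += abs(p1 - y)                                  # expected 0/1 loss
        # Multiplicative-weights update against each threshold's loss.
        for i, g in enumerate(grid):
            loss = abs((1 if x >= g else 0) - y)
            weights[i] *= math.exp(-eta * loss)

    print(f"average loss after {T} rounds: {total_loss / T:.3f}")
    ```

    Against a worst-case (unsmoothed) adversary no finite grid suffices; the smoothing is what makes the effective hypothesis class finite.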
  3. Most results in revenue-maximizing mechanism design hinge on “getting the price right”—selling goods to bidders at prices low enough to encourage a sale but high enough to garner nontrivial revenue. This approach is difficult to implement when the seller has little or no a priori information about bidder valuations or when the setting is sufficiently complex, such as matching markets with heterogeneous goods. In this paper, we apply a robust approach to designing auctions for revenue. Instead of relying on prior knowledge regarding bidder valuations, we “let the market do the work” and let prices emerge from competition for scarce goods. We analyze the revenue guarantees of one of the simplest imaginable implementations of this idea: first, we enhance competition in the market by increasing demand (or alternatively, by limiting supply), and second, we run a standard second-price (Vickrey) auction. In their renowned work from 1996, Bulow and Klemperer [Bulow J, Klemperer P (1996) Auctions vs. negotiations. Amer. Econom. Rev. 86(1):180–194.] apply this method to markets with single goods. As our main result, we give the first application beyond single-parameter settings, proving that, simultaneously for many valuation distributions, this method achieves expected revenue at least as good as the optimal revenue in the original market. Our robust and simple approach provides a handle on the elusive optimal revenue in multi-item matching markets and shows when the use of welfare-maximizing Vickrey auctions is justified, even if revenue is a priority. By establishing quantitative tradeoffs, our work provides guidelines for a seller in choosing between two different revenue-extracting strategies: sophisticated pricing based on market research or advertising to draw additional bidders. 
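    A quick simulation of the classical single-good warm-up for this "recruit more bidders, then run Vickrey" idea (the Bulow-Klemperer setting, not the paper's matching-market extension): for i.i.d. U[0,1] values, a reserve-free second-price auction with n+1 bidders should match or beat the Myerson-optimal auction (reserve 1/2) with n bidders. All constants are illustrative.

    ```python
    import random

    def optimal_revenue(values, reserve=0.5):
        # Second-price auction with the monopoly reserve, which is the
        # revenue-optimal single-item auction for i.i.d. U[0,1] values.
        above = sorted((v for v in values if v >= reserve), reverse=True)
        if not above:
            return 0.0
        return above[1] if len(above) >= 2 else reserve

    def vickrey_revenue(values):
        # Reserve-free second-price auction: winner pays the second-highest bid.
        top2 = sorted(values, reverse=True)[:2]
        return top2[1]

    rng = random.Random(0)
    n, trials = 5, 100_000
    opt = sum(optimal_revenue([rng.random() for _ in range(n)])
              for _ in range(trials)) / trials
    vcg = sum(vickrey_revenue([rng.random() for _ in range(n + 1)])
              for _ in range(trials)) / trials
    print(f"optimal auction with {n} bidders:   {opt:.4f}")
    print(f"Vickrey auction with {n + 1} bidders: {vcg:.4f}")
    ```

    The second line should come out at least as large as the first, illustrating the tradeoff the abstract names: market research (to set the reserve) versus advertising (to attract one more bidder).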
  4. The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. Although there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem. We model the problem of identifying a good algorithm from data as a statistical learning problem. Our framework captures several state-of-the-art empirical and theoretical approaches to the problem, and our results identify conditions under which these approaches are guaranteed to perform well. We interpret our results in the contexts of learning greedy heuristics, instance feature-based algorithm selection, and parameter tuning in machine learning. 
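    A toy instantiation of the "identify a good algorithm from data" framework, with details assumed for illustration: a one-parameter family of greedy knapsack heuristics that score items by value/size**rho, where rho is selected by empirical risk minimization over sampled training instances.

    ```python
    import random

    def greedy_knapsack(items, capacity, rho):
        # Greedy heuristic parameterized by rho: score each (value, size)
        # item by value / size**rho and pack greedily while room remains.
        order = sorted(items, key=lambda it: it[0] / it[1] ** rho, reverse=True)
        total_value, used = 0.0, 0.0
        for value, size in order:
            if used + size <= capacity:
                total_value += value
                used += size
        return total_value

    def random_instance(rng, n=50, capacity=10.0):
        # Assumed instance distribution: values in [0,1), sizes in [0.1, 1.1).
        return [(rng.random(), 0.1 + rng.random()) for _ in range(n)], capacity

    rng = random.Random(0)
    train = [random_instance(rng) for _ in range(100)]
    candidates = [i / 10 for i in range(21)]        # rho in {0.0, 0.1, ..., 2.0}
    # Empirical risk minimization: pick the rho with best total training value.
    best_rho = max(candidates,
                   key=lambda r: sum(greedy_knapsack(it, c, r) for it, c in train))
    print(f"empirically best rho on training instances: {best_rho}")
    ```

    The statistical-learning question is then how many sampled instances are needed before the empirically best rho generalizes to unseen instances from the same distribution.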
  5. Kraus, Andreas (Ed.)
    In this paper we study the fundamental problems of maximizing a continuous nonmonotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, and communication systems. Our main result is the first 1/2-approximation algorithm for continuous submodular function maximization; this approximation factor of 1/2 is the best possible for algorithms that only query the objective function at polynomially many points. For the special case of DR-submodular maximization, i.e., when the submodular function is also coordinate-wise concave along all coordinates, we provide a different 1/2-approximation algorithm that runs in quasi-linear time. Both these results improve upon prior work (Bian et al., 2017a,b; Soma and Yoshida, 2017). Our first algorithm uses novel ideas such as reducing the guaranteed approximation problem to analyzing a zero-sum game for each coordinate, and incorporates the geometry of this zero-sum game to fix the value at this coordinate. Our second algorithm exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for getting the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related machine learning applications. 
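    A sketch of the binary-search idea behind the DR-submodular case, as suggested by the abstract; the test function and all constants are assumptions, and this is not the paper's exact procedure or its 1/2 guarantee. Along each coordinate a DR-submodular function is concave, so binary search on the sign of the partial derivative locates the coordinate-wise maximizer.

    ```python
    # Assumed test function: a nonmonotone quadratic f(x) = a.x - 0.5 x^T H x
    # with entrywise nonnegative H, so all second partials are <= 0 and f is
    # DR-submodular (hence concave along every coordinate).
    A = [1.0, 0.8, 0.6]
    H = [[0.5, 0.4, 0.3],
         [0.4, 0.5, 0.2],
         [0.3, 0.2, 0.5]]

    def f(x):
        lin = sum(a * xi for a, xi in zip(A, x))
        quad = sum(H[i][j] * x[i] * x[j]
                   for i in range(len(x)) for j in range(len(x)))
        return lin - 0.5 * quad

    def partial(x, i, eps=1e-6):
        # Central-difference estimate of the partial derivative at x.
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        return (f(hi) - f(lo)) / (2 * eps)

    x = [0.5] * len(A)
    for _ in range(3):                      # a few coordinate sweeps
        for i in range(len(x)):
            lo, hi = 0.0, 1.0               # binary search on the concave slice
            for _ in range(40):
                mid = (lo + hi) / 2
                x[i] = mid
                if partial(x, i) > 0:       # still increasing: maximizer is above
                    lo = mid
                else:
                    hi = mid
            x[i] = (lo + hi) / 2

    print("stationary point:", [round(v, 3) for v in x], "f(x) =", round(f(x), 4))
    ```

    Each binary search costs O(log(1/precision)) function evaluations per coordinate, which is the source of the quasi-linear running time claimed for the second algorithm.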