
Title: From Monetary to Nonmonetary Mechanism Design via Artificial Currencies
Nonmonetary mechanisms for repeated allocation and decision making are gaining widespread use in many real-world settings. Our aim in this work is to study the performance and incentive properties of simple mechanisms based on artificial currencies in such settings. To this end, we make the following contributions: For a general allocation setting, we provide two black-box approaches to convert any one-shot monetary mechanism to a dynamic nonmonetary mechanism using an artificial currency that simultaneously guarantees vanishing gains from nontruthful reporting over time and vanishing losses in performance. The two mechanisms trade off between their applicability and their computational and informational requirements. Furthermore, for settings with two agents, we show that a particular artificial currency mechanism also results in a vanishing price of anarchy.
Award ID(s): 1847393, 2002156, 1633920
Journal Name: Mathematics of Operations Research
Sponsoring Org: National Science Foundation
More Like this
  1. Non-monetary mechanisms for repeated resource allocation are gaining widespread use in many real-world settings. Our aim in this work is to study the allocative efficiency and incentive properties of simple repeated mechanisms based on artificial currencies. Within this framework, we make three main contributions: We provide a general black-box technique to convert any static monetary mechanism to a dynamic mechanism with artificial currency that simultaneously guarantees vanishing loss in efficiency and vanishing gains from non-truthful bidding over time. On the computational front, we show how such a mechanism can be implemented using only sample access to the agents' type distributions, requiring roughly twice the computation needed to run the monetary mechanism alone. For settings with two agents, we show that a particular artificial currency mechanism also results in a vanishing price of anarchy. This provides additional justification for the use of artificial currency mechanisms in practice. Moreover, we show how to leverage this result to demonstrate the existence of a Bayesian incentive-compatible mechanism with vanishing efficiency loss in this setting. Our work takes a significant step towards bridging the gap between monetary and non-monetary mechanisms, and also points to several open problems.
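To make the setup in item 1 concrete, here is a minimal simulation sketch of the kind of repeated allocation loop such mechanisms run in: each round agents report values, a one-shot rule picks a winner, and the winner pays in an artificial currency that persists across rounds. Every specific choice below (two agents, equal initial credit, uniform values, truthful reports, a second-price-style charge in credits) is an illustrative assumption, not the construction analyzed in the paper.

```python
import random

# Toy repeated-allocation loop with an artificial currency.
# All modeling choices (uniform values, truthful reports, equal credit,
# second-price-style charge) are illustrative assumptions only.

NUM_AGENTS = 2
NUM_ROUNDS = 1000
INITIAL_CREDIT = NUM_ROUNDS / 2.0  # each agent can afford to win roughly half the rounds

def run_simulation(seed=0):
    rng = random.Random(seed)
    credit = [INITIAL_CREDIT] * NUM_AGENTS
    welfare = 0.0
    for _ in range(NUM_ROUNDS):
        values = [rng.random() for _ in range(NUM_AGENTS)]  # private values this round
        reports = list(values)                              # agents report truthfully here
        # Only agents with credit remaining are eligible to win the item.
        eligible = [i for i in range(NUM_AGENTS) if credit[i] > 0]
        if not eligible:
            continue
        winner = max(eligible, key=lambda i: reports[i])
        # Charge the winner the highest competing report, in credits.
        others = [reports[i] for i in eligible if i != winner]
        price = max(others) if others else 0.0
        credit[winner] -= price
        welfare += values[winner]
    return welfare, credit

if __name__ == "__main__":
    welfare, credit = run_simulation()
    print(f"realized welfare: {welfare:.1f}, leftover credit: {credit}")
```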
  2. Public goods are often either over-consumed in the absence of regulatory mechanisms, or remain completely unused, as in the Covid-19 pandemic, where social distancing constraints are enforced to limit the number of people who can share public spaces. In this work, we address this gap through market-based mechanisms designed to efficiently allocate capacity-constrained public goods. To design these mechanisms, we leverage the theory of Fisher markets, wherein each agent in the economy is endowed with an artificial currency budget that they can spend to access public goods. While Fisher markets provide a strong methodological backbone for modeling resource allocation problems, their applicability is limited to settings involving two types of constraints: the budgets of individual buyers and the capacities of goods. Thus, we introduce a modified Fisher market in which each individual may have additional physical constraints, characterize its solution properties, and establish the existence of a market equilibrium. Furthermore, to account for the additional constraints, we introduce a social convex optimization problem in which we perturb the budgets of agents such that the KKT conditions of the perturbed social problem yield equilibrium prices. Finally, to compute the budget perturbations, we present a fixed-point scheme and illustrate its convergence guarantees through numerical experiments. Thus, our mechanism, both theoretically and computationally, overcomes a fundamental limitation of classical Fisher markets, which only consider capacity and budget constraints.
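For reference, the classical building block this abstract starts from is the Eisenberg-Gale convex program, whose optimal dual variables on the capacity constraints serve as market-clearing prices. The sketch below solves that textbook program for a tiny linear-utility Fisher market with cvxpy; the budgets, utilities, and capacities are made-up data, and the paper's actual contributions (per-agent physical constraints and the budget-perturbation fixed-point scheme) are not implemented here.

```python
import cvxpy as cp
import numpy as np

# Textbook Eisenberg-Gale program for a linear-utility Fisher market.
# Illustrative data only; this does NOT include the paper's extra
# physical constraints or budget perturbations.
budgets = np.array([1.0, 2.0])            # artificial-currency endowments
utils = np.array([[1.0, 2.0],             # agent 0's per-unit utilities for each good
                  [3.0, 1.0]])            # agent 1's per-unit utilities
capacity = np.array([1.0, 1.0])           # capacity of each good

x = cp.Variable((2, 2), nonneg=True)      # x[i, j]: amount of good j allocated to agent i
agent_utility = cp.sum(cp.multiply(utils, x), axis=1)
objective = cp.Maximize(budgets @ cp.log(agent_utility))   # budget-weighted log utilities
supply = cp.sum(x, axis=0) <= capacity
problem = cp.Problem(objective, [supply])
problem.solve()

# Equilibrium prices appear as the dual variables on the capacity constraints.
print("allocation:\n", np.round(x.value, 3))
print("prices:", np.round(supply.dual_value, 3))
```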
  3. We study gains from trade in multi-dimensional two-sided markets. Specifically, we focus on a setting with n heterogeneous items, where each item is owned by a different seller i, and there is a constrained-additive buyer with feasibility constraint ℱ. Multi-dimensional settings in one-sided markets, e.g. where a seller owns multiple heterogeneous items but also is the mechanism designer, are well-understood. In addition, single-dimensional settings in two-sided markets, e.g. where a buyer and seller each seek or own a single item, are also well-understood. Multi-dimensional two-sided markets, however, encapsulate the major challenges of both lines of work: optimizing the sale of heterogeneous items, ensuring incentive-compatibility among both sides of the market, and enforcing budget balance. We present, to the best of our knowledge, the first worst-case approximation guarantee for gains from trade in a multi-dimensional two-sided market. Our first result provides an O(log(1/r))-approximation to the first-best gains from trade for a broad class of downward-closed feasibility constraints (such as matroid, matching, knapsack, or the intersection of these). Here r is the minimum probability over all items that a buyer's value for the item exceeds the seller's cost. Our second result removes the dependence on r and provides an unconditional O(log n)-approximation to the second-best gains from trade. We extend both results to a general constrained-additive buyer, losing another O(log n)-factor en route. The first result is achieved using a fixed posted price mechanism, and the analysis involves a novel application of the prophet inequality or a new concentration inequality. Our second result follows from a stitching lemma that allows us to upper bound the second-best gains from trade by the first-best gains from trade from the “likely to trade” items (items with trade probability at least 1/n) and the optimal profit from selling the “unlikely to trade” items. We can obtain an O(log n)-approximation to the first term by invoking our O(log(1/r))-approximation on the “likely to trade” items. We introduce a generalization of the fixed posted price mechanism—seller-adjusted posted price—to obtain an O(log n)-approximation to the optimal profit for the “unlikely to trade” items. Unlike fixed posted price mechanisms, not all seller-adjusted posted price mechanisms are incentive compatible and budget balanced. We develop a new argument based on “allocation coupling” to show that the seller-adjusted posted price mechanism used in our approximation is indeed budget balanced and incentive-compatible.
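As a toy illustration of the fixed posted-price idea above, the sketch below estimates realized gains from trade for the simplest case: an additive buyer (no feasibility constraint), independent uniform values and costs, and an arbitrary common price of 0.5. None of these choices reflects the paper's carefully chosen prices or its general constrained-additive setting.

```python
import random

# Toy fixed posted-price mechanism: n items, each held by its own seller,
# facing a single additive buyer.  The price rule and distributions are
# illustrative assumptions, not the paper's construction.

def fixed_posted_price_gft(n=5, trials=10_000, seed=0):
    rng = random.Random(seed)
    prices = [0.5] * n                                 # one fixed posted price per item
    total_gft = 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(n)]      # buyer's per-item values
        costs = [rng.random() for _ in range(n)]       # sellers' costs
        for j in range(n):
            # Item j trades only if the seller accepts the price and the
            # buyer's value covers it; the trade is budget balanced by design.
            if costs[j] <= prices[j] <= values[j]:
                total_gft += values[j] - costs[j]
    return total_gft / trials

if __name__ == "__main__":
    print(f"average realized gains from trade: {fixed_posted_price_gft():.3f}")
```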
  4. Despite extensive investigation of job scheduling in data-intensive computation frameworks, less consideration has been given to optimizing job partitioning for resource utilization and efficient processing. Instead, partitioning and job sizing are a form of dark art, typically left to developer intuition and trial-and-error style experimentation. In this work, we propose that just as job scheduling and resource allocation are outsourced to a trusted mechanism external to the workload, so too should be the responsibility for partitioning data, which determines task size. Job partitioning essentially involves determining the partition sizes to match the resource allocation at the finest granularity. This is a complex, multi-dimensional problem that is highly application-specific: resource allocation, computational runtime, shuffle and reduce communication requirements, and task startup overheads all have a strong influence on the most effective task size for efficient processing. Depending on the partition size, the job completion time can differ by as much as a factor of 10! Fortunately, we observe a general trend underlying the tradeoff between full resource utilization and system overhead across different settings; the optimal job partition size balances these two conflicting forces. Given this trend, we design Libra to automate job partitioning as a framework extension. We integrate Libra with Spark and evaluate its performance on EC2. Compared to state-of-the-art techniques, Libra can reduce individual job execution time by 25% to 70%.
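The utilization-versus-overhead tradeoff described above can be reproduced with a toy cost model: tiny partitions pay the per-task startup cost many times over, while huge partitions leave executor slots idle. The formula and constants below are illustrative assumptions, not Libra's model or measured numbers.

```python
import math

# Toy cost model for the partition-size tradeoff (assumed constants,
# not Libra's model): tasks run in waves across a fixed number of
# executor slots, and each task pays a fixed startup overhead.
DATA_MB = 10_000          # total input size
SLOTS = 16                # parallel executor slots
THROUGHPUT_MB_S = 50.0    # per-task processing rate
STARTUP_S = 2.0           # per-task scheduling/startup overhead

def completion_time(partition_mb):
    tasks = math.ceil(DATA_MB / partition_mb)
    waves = math.ceil(tasks / SLOTS)                   # tasks execute in waves of SLOTS
    per_task = partition_mb / THROUGHPUT_MB_S + STARTUP_S
    return waves * per_task

if __name__ == "__main__":
    candidates = [32, 64, 128, 256, 512, 1024, 2048]
    for p in candidates:
        print(f"partition {p:5d} MB -> estimated completion {completion_time(p):7.1f} s")
    best = min(candidates, key=completion_time)
    print(f"best partition size under this toy model: {best} MB")
```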
  5. Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Eds.)
    The goal of this work was to design a low-cost computing facility that can support the development of an open-source digital pathology corpus containing 1M images [1]. A single image from a clinical-grade digital pathology scanner can range in size from hundreds of megabytes to five gigabytes. A 1M image database requires over a petabyte (PB) of disk space. To do meaningful work in this problem space requires a significant allocation of computing resources. The improvements and expansions to our HPC (high-performance computing) cluster, known as Neuronix [2], required to support working with digital pathology fall into two broad categories: computation and storage. To handle the increased computational burden and increase job throughput, we are using Slurm [3] as our scheduler and resource manager. For storage, we have designed and implemented a multi-layer filesystem architecture to distribute a filesystem across multiple machines. These enhancements, which are entirely based on open source software, have extended the capabilities of our cluster and increased its cost-effectiveness. Slurm has numerous features that allow it to generalize to a number of different scenarios. Among the most notable is its support for GPU (graphics processing unit) scheduling. GPUs can offer a tremendous performance increase in machine learning applications [4], and Slurm's built-in mechanisms for handling them were a key factor in making this choice. Slurm has a general resource (GRES) mechanism that can be used to configure and enable support for resources beyond the ones provided by the traditional HPC scheduler (e.g. memory, wall-clock time), and GPUs are among the GRES types that can be supported by Slurm [5]. In addition to being able to track resources, Slurm strictly enforces resource allocations. This becomes very important as the computational demands of jobs increase: it ensures that each job has all the resources it needs and does not take resources away from other jobs. It is a common practice among GPU-enabled frameworks to query the CUDA runtime library/drivers and iterate over the list of GPUs, attempting to establish a context on all of them. Slurm is able to affect the hardware discovery process of these jobs, which enables a number of these jobs to run alongside each other, even if the GPUs are in exclusive-process mode. To store large quantities of digital pathology slides, we developed a robust, extensible distributed storage solution. We utilized a number of open source tools to create a single filesystem, which can be mounted by any machine on the network. At the lowest layer of abstraction are the hard drives, which were split into four 60-disk chassis of 8 TB drives. To support these disks, we have two server units, each equipped with Intel Xeon CPUs and 128 GB of RAM. At the filesystem level, we have implemented a multi-layer solution that (1) connects the disks together into a single filesystem/mountpoint using ZFS (the Zettabyte File System) [6], and (2) connects filesystems on multiple machines together to form a single mountpoint using Gluster [7]. ZFS, initially developed by Sun Microsystems, provides disk-level awareness and a filesystem that takes advantage of that awareness to provide fault tolerance. At the filesystem level, ZFS protects against data corruption and the infamous RAID write-hole bug by implementing a journaling scheme (the ZFS intent log, or ZIL) and copy-on-write functionality.
    Each machine (1 controller + 2 disk chassis) has its own separate ZFS filesystem. Gluster, essentially a meta-filesystem, takes each of these and provides the means to connect them together over the network, using distributed (similar to RAID 0, but without striping individual files) and mirrored (similar to RAID 1) configurations [8]. By implementing these improvements, it has been possible to expand the storage and computational power of the Neuronix cluster arbitrarily to support the most computationally intensive endeavors by scaling horizontally. We have greatly improved the scalability of the cluster while maintaining its excellent price/performance ratio [1].
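As a quick sanity check on the figures quoted above, the raw capacity of the disks described (4 chassis of 60 disks at 8 TB each) comes to roughly 1.9 PB, comfortably above the one-petabyte estimate for a 1M-image corpus. The snippet below just carries out that arithmetic; usable capacity after ZFS redundancy would be lower and depends on the pool layout, which the text does not specify.

```python
# Raw-capacity arithmetic for the storage described above (raw disk
# space only; usable space after ZFS redundancy depends on the pool
# layout, which is not specified in the text).
chassis = 4
disks_per_chassis = 60
drive_tb = 8

raw_tb = chassis * disks_per_chassis * drive_tb
print(f"raw capacity: {raw_tb} TB (~{raw_tb / 1000:.1f} PB)")  # vs. the >1 PB needed for 1M images
```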