

Title: Empowering Collective Impact: Introducing SWAP for Resource Sharing
Nonprofit organizations (NPOs) often lack resources, which limits the quality and quantity of services they can deliver. At the same time, NPOs sometimes have underutilized or even spare resources because they cannot scale staffing expertise and tangible resources to meet temporally shifting service demands. These observations motivate us to propose a novel resource sharing system, SWAP, which, to the best of our knowledge, is the first resource sharing system that facilitates exchanges in which NPOs obtain resources by offering their own. SWAP consists of four elements: (1) a collaborative auction-based sharing process, complete with an offering mechanism, a bidding mechanism, and a virtual currency, SWAPcredit, that provides liquidity in exchange; (2) a central technology that represents the award determination problem as a multilateral exchange optimization model, generating resource exchange outcomes; (3) an online platform, the SWAP Hub, where NPOs can offer and bid on available resources and receive exchange results; and (4) human-centric co-design, which shapes the understanding and design decisions of a research collective that includes the authors and NPO professionals. We conduct a series of experiments using both empirical and simulated data to illustrate the benefits and potential of SWAP. Our results demonstrate that SWAP can address temporal resource needs in practice, show that optimal exchange outcomes can be generated even for large-scale SWAP markets, and provide strong evidence to guide future versions of SWAP. The SWAP system is presently implemented in Howard County, MD, USA, with ongoing enhancements and potential for future expansion.
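As an illustration of what the award-determination step in such an exchange might look like, the following Python sketch matches offered resources to SWAPcredit bids with a simple greedy rule and per-organization credit balances. The data shapes, the reserve-price field, and the greedy rule are assumptions made for this sketch; it does not reproduce the paper's actual multilateral exchange optimization model.

# Minimal sketch of a SWAP-style award determination step (illustrative only;
# names, data shapes, and the greedy rule are assumptions, not the paper's
# optimization model).
from dataclasses import dataclass

@dataclass
class Offer:
    org: str          # offering NPO
    resource: str     # e.g. "van, 1 day" or "grant writer, 4 hrs"
    reserve: int      # minimum SWAPcredit the offerer will accept

@dataclass
class Bid:
    org: str          # bidding NPO
    resource: str     # resource being bid on
    amount: int       # SWAPcredit offered

def determine_awards(offers, bids, balances):
    """Greedy stand-in for the exchange optimization: award each offered
    resource to the highest feasible bid, never letting a bidder's
    SWAPcredit balance go negative; offerers earn credit they can spend
    on their own bids in the same round."""
    balances = dict(balances)
    awards = []
    for offer in offers:
        candidates = sorted(
            (b for b in bids
             if b.resource == offer.resource
             and b.org != offer.org
             and b.amount >= offer.reserve
             and balances.get(b.org, 0) >= b.amount),
            key=lambda b: b.amount, reverse=True)
        if candidates:
            winner = candidates[0]
            balances[winner.org] -= winner.amount                 # bidder pays
            balances[offer.org] = balances.get(offer.org, 0) + winner.amount
            awards.append((offer.org, winner.org, offer.resource, winner.amount))
    return awards, balances

if __name__ == "__main__":
    offers = [Offer("FoodBank", "van, 1 day", reserve=5)]
    bids = [Bid("Shelter", "van, 1 day", amount=8),
            Bid("Clinic", "van, 1 day", amount=6)]
    print(determine_awards(offers, bids, {"Shelter": 10, "Clinic": 10}))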
Award ID(s):
2222697
PAR ID:
10542911
Author(s) / Creator(s):
Publisher / Repository:
ACM Equity and Access in Algorithms, Mechanisms, and Optimization
Date Published:
ISBN:
9798400703812
Format(s):
Medium: X
Location:
Boston MA USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Although mutualisms are often studied as simple pairwise interactions, they typically involve complex networks of interacting species. How multiple mutualistic partners that provide the same service and compete for resources are maintained in mutualistic networks is an open question. We use a model bacterial community in which multiple ‘partner strains’ of Escherichia coli compete for a carbon source and exchange resources with a ‘shared mutualist’ strain of Salmonella enterica. In laboratory experiments, competing E. coli strains readily coexist in the presence of S. enterica, despite differences in their competitive abilities. We use ecological modeling to demonstrate that a shared mutualist can create temporary resource niche partitioning by limiting growth rates, even if yield is set by a resource external to the mutualism. This mechanism can extend to maintain multiple competing partner species. Our results improve our understanding of complex mutualistic communities and aid efforts to design stable microbial communities. (An illustrative toy simulation of this cross-feeding structure follows below.)

     
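    As a rough illustration of the cross-feeding structure described above, the toy Euler-integrated model below lets two partner strains compete for a shared carbon source while their growth is co-limited by a metabolite supplied by the mutualist. All parameter values and the Monod-style co-limitation terms are assumptions for illustration only; this is not the authors' model and does not reproduce their coexistence result.

# Toy consumer-resource sketch: two partner strains (E1, E2) compete for a
# carbon source C, while growth is co-limited by a metabolite M supplied by a
# shared mutualist S. Parameters and functional forms are illustrative
# assumptions, not the published model.

def step(state, dt=0.01):
    E1, E2, S, C, M = state
    # growth co-limited by shared carbon and the mutualist-supplied metabolite
    mu1 = 1.0 * (C / (C + 1.0)) * (M / (M + 1.0))
    mu2 = 1.2 * (C / (C + 1.0)) * (M / (M + 1.0))   # E2 is the better competitor
    muS = 0.8 * (C / (C + 1.0))
    dE1 = mu1 * E1
    dE2 = mu2 * E2
    dS  = muS * S
    dC  = -(mu1 * E1 + mu2 * E2 + muS * S)          # shared carbon is consumed
    dM  = 0.5 * S - (mu1 * E1 + mu2 * E2)           # mutualist supplies M
    # simple Euler step, clamped at zero to keep states non-negative
    return [max(x + dt * dx, 0.0)
            for x, dx in zip(state, (dE1, dE2, dS, dC, dM))]

state = [0.1, 0.1, 0.1, 10.0, 0.1]   # E1, E2, S, C, M
for _ in range(3000):
    state = step(state)
print("final E1, E2, S:", [round(x, 3) for x in state[:3]])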
  2. Resource sharing is fundamental to the design of telecommunication networks. The technological, economic, and policy forces shaping the transition to next-generation digital networking infrastructure—characterized here as “5G+” (for 5G and beyond)—make new and evolved forms of edge sharing a necessity. Despite this necessity, most of the economic and policy research on Network Sharing Agreements (NSAs) has focused on sharing among service providers offering retail services via networks owned and operated by legacy fixed and mobile network operators (MNOs). In this essay, we make the case for why increased and more dynamic options for sharing, in particular of end-user-owned network infrastructure, should be embraced for the future of NSAs. Furthermore, we explain how such a novel sharing paradigm must be matched by appropriate regulatory policies.

     
  3. Containerization is becoming increasingly popular, but unfortunately, containers often fail to deliver the anticipated performance with the allocated resources. In this paper, we first demonstrate that performance variance and degradation are significant (by up to 5x) in a multi-tenant environment where containers are co-located. We then investigate the root cause of this degradation. Contrary to the common belief that such degradation is caused by resource contention and interference, we find that there is a gap between the amount of CPU a container reserves and the amount it actually gets. The root cause lies in the design choices of today's Linux scheduling mechanism, which we call Forced Runqueue Sharing and Phantom CPU Time. In fact, there are fundamental conflicts between the need to reserve CPU resources and the Completely Fair Scheduler's work-conserving nature, and this contradiction prevents a container from fully utilizing its requested CPU resources. As a proof of concept, we implement a new resource configuration mechanism atop the widely used Kubernetes and Linux to demonstrate its potential benefits and shed light on future scheduler redesign. Compared to the existing scheduler, our proof of concept improves the performance of batch and interactive containerized apps by up to 5.6x and 13.7x, respectively. (A rough measurement sketch of the reservation gap follows below.)
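    One rough way to observe the kind of reservation gap described above is to compare a container's CFS-based CPU reservation with the CPU time it actually accrues. The sketch below samples cgroup v2 accounting files; the cgroup path and the use of cpu.max as the "reservation" are assumptions, and the paper's specific mechanisms (Forced Runqueue Sharing, Phantom CPU Time) and its Kubernetes-level fix are not reproduced here.

# Sketch: estimate the gap between a container's CPU reservation and the CPU
# it actually receives, by sampling cgroup v2 accounting files. Assumes cgroup
# v2 with the CPU controller enabled; adjust CGROUP to the container's cgroup.
import time

CGROUP = "/sys/fs/cgroup"   # placeholder path; point this at the container's cgroup

def cpu_usage_usec():
    # cpu.stat exposes cumulative CPU time in microseconds under cgroup v2
    with open(f"{CGROUP}/cpu.stat") as f:
        for line in f:
            key, value = line.split()
            if key == "usage_usec":
                return int(value)
    raise RuntimeError("usage_usec not found in cpu.stat")

def reserved_cpus():
    # cpu.max holds "<quota> <period>" (or "max <period>" for no limit)
    with open(f"{CGROUP}/cpu.max") as f:
        quota, period = f.read().split()
    return float("inf") if quota == "max" else int(quota) / int(period)

start = cpu_usage_usec()
time.sleep(5)
used_cpus = (cpu_usage_usec() - start) / (5 * 1_000_000)
print(f"reserved ~{reserved_cpus():.2f} CPUs, observed ~{used_cpus:.2f} CPUs")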
  4. In the Internet of Things (IoT) era, edge computing is a promising paradigm to improve the quality of service for latency-sensitive applications by filling gaps between IoT devices and the cloud infrastructure. Highly geo-distributed edge computing resources that are managed by independent and competing service providers pose new challenges in terms of resource allocation and effective resource sharing to achieve a globally efficient allocation. In this paper, we propose a novel blockchain-based model for allocating computing resources in an edge computing platform that allows service providers to establish resource sharing contracts with edge infrastructure providers a priori using smart contracts in Ethereum. The smart contract in the proposed model acts as the auctioneer and replaces the trusted third party in handling the auction. The blockchain-based auctioning protocol increases the transparency of the auction-based resource allocation for the participating edge service and infrastructure providers. The sealed-bid and bid-revealing methods in the proposed protocol make it possible for the participating bidders to place their bids without revealing their true valuation of the goods. The truthful auction design and the utility-aware bidding strategies incorporated in the proposed model enable the edge service providers and edge infrastructure providers to maximize their utilities. We implement a prototype of the model on a real blockchain test bed, and our extensive experiments demonstrate the effectiveness, scalability, and performance efficiency of the proposed approach. (A minimal commit-reveal sketch of the sealed-bid step follows below.)
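    The commit-reveal pattern behind sealed bidding can be illustrated off-chain. In the Python mock below, bidders first publish hashes of their bids, later reveal them for verification, and the auction settles with a second-price rule so that truthful bidding is a best response. In the paper this logic lives in an Ethereum smart contract that also acts as the auctioneer; the hash scheme, pricing rule, and names used here are assumptions rather than the authors' exact design.

# Off-chain mock of a commit-reveal sealed-bid auction with a second-price
# award rule. Illustrative only; not the paper's smart contract.
import hashlib, secrets

def commit(bid: int, nonce: bytes) -> str:
    """Commitment published during the bidding phase."""
    return hashlib.sha256(str(bid).encode() + nonce).hexdigest()

def reveal_valid(commitment: str, bid: int, nonce: bytes) -> bool:
    """Checked during the reveal phase before a bid is admitted."""
    return commit(bid, nonce) == commitment

# bidding phase: edge service providers publish only hashes of their bids
nonces = {"SP-A": secrets.token_bytes(16), "SP-B": secrets.token_bytes(16)}
true_values = {"SP-A": 70, "SP-B": 55}   # with a second-price rule, bidding
commitments = {sp: commit(v, nonces[sp])  # one's true value is a best response
               for sp, v in true_values.items()}

# reveal phase: bids are opened and verified against the earlier commitments
revealed = {sp: v for sp, v in true_values.items()
            if reveal_valid(commitments[sp], v, nonces[sp])}

# award: highest revealed bid wins, pays the second-highest bid (Vickrey rule)
winner = max(revealed, key=revealed.get)
price = sorted(revealed.values())[-2] if len(revealed) > 1 else revealed[winner]
print(f"{winner} wins the edge resource and pays {price} (second price)")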
  5. We consider a large-scale service system where incoming tasks have to be instantaneously dispatched to one out of many parallel server pools. The user-perceived performance degrades with the number of concurrent tasks, and the dispatcher aims at maximizing the overall quality of service by balancing the load through a simple threshold policy. We demonstrate that such a policy is optimal on the fluid and diffusion scales, while only involving a small communication overhead, which is crucial for large-scale deployments. In order to set the threshold optimally, however, it is important to learn the load of the system, which may be unknown. For that purpose, we design a control rule for tuning the threshold in an online manner. We derive conditions that guarantee that this adaptive threshold settles at the optimal value, along with estimates for the time until this happens. In addition, we provide numerical experiments that support the theoretical results and further indicate that our policy copes effectively with time-varying demand patterns.

    Summary of Contribution: Data centers and cloud computing platforms are the digital factories of the world, and managing resources and workloads in these systems involves operations research challenges of an unprecedented scale. Due to the massive size, complex dynamics, and wide range of time scales, the design and implementation of optimal resource-allocation strategies is prohibitively demanding from a computation and communication perspective. These resource-allocation strategies are essential for certain interactive applications, for which the available computing resources need to be distributed optimally among users in order to provide the best overall experienced performance. This is the subject of the present article, which considers the problem of distributing tasks among the various server pools of a large-scale service system, with the objective of optimizing the overall quality of service provided to users. A solution to this load-balancing problem cannot rely on maintaining complete state information at the gateway of the system, since this is computationally infeasible due to the magnitude and complexity of modern data centers and cloud computing platforms. Therefore, we examine a computationally light load-balancing algorithm that is nevertheless asymptotically optimal in a regime where the size of the system approaches infinity. The analysis is based on a Markovian stochastic model, which is studied through fluid and diffusion limits in the aforementioned large-scale regime. The article analyzes the load-balancing algorithm theoretically and provides numerical experiments that support and extend the theoretical results. (A toy simulation of the threshold dispatching rule follows below.)
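    A threshold dispatching rule of the kind analyzed above can be illustrated with a very small simulation: send each arriving task to some pool with fewer than a fixed number of concurrent tasks if one exists, otherwise to the least-loaded pool. The arrival and service model, the parameter values, and the tie-breaking below are assumptions for illustration; the paper's adaptive threshold-learning rule and its asymptotic analysis are not reproduced.

# Toy discrete-time simulation of a threshold dispatching rule. Illustrative
# assumptions throughout: Bernoulli arrivals, per-task geometric service, and
# random choice among pools below the threshold.
import random

def simulate(pools=50, threshold=2, arrival_prob=0.9, service_prob=0.02,
             steps=50_000, seed=1):
    random.seed(seed)
    load = [0] * pools            # concurrent tasks per server pool
    arrivals = overflow = 0
    for _ in range(steps):
        # each in-service task completes independently this step
        for i in range(pools):
            load[i] -= sum(random.random() < service_prob for _ in range(load[i]))
        # at most one arriving task per step
        if random.random() < arrival_prob:
            below = [i for i in range(pools) if load[i] < threshold]
            target = (random.choice(below) if below
                      else min(range(pools), key=load.__getitem__))
            arrivals += 1
            overflow += load[target] >= threshold   # sent to an already-full pool
            load[target] += 1
    return sum(load) / pools, overflow / arrivals

mean_load, overflow_frac = simulate()
print(f"mean tasks per pool: {mean_load:.2f}, "
      f"fraction of tasks sent above the threshold: {overflow_frac:.3f}")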