

Title: You Get What You Share: Incentives for a Sharing Economy
In recent years, a range of online applications have facilitated resource sharing among users, resulting in a significant increase in resource utilization. In all such applications, sharing one’s resources or skills with other agents increases social welfare. In general, each agent will look for other agents whose available resources complement hers, thereby forming natural sharing groups. In this paper, we study settings where a large population self-organizes into sharing groups. In many cases, centralized optimization approaches for creating an optimal partition of the user population are infeasible, because the central authority either lacks the information needed to compute an optimal partition or lacks the power to enforce one. Instead, the central authority puts in place an incentive structure in the form of a utility-sharing method before letting the participants form sharing groups by themselves. We first analyze a simple equal-sharing method, the one most typically encountered in practice, and show that it can lead to highly inefficient equilibria. We then propose a Shapley-sharing method and show that it significantly improves overall social welfare.
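The contrast between equal sharing and Shapley sharing can be made concrete. Below is a minimal sketch, not the paper's implementation, that computes exact Shapley shares for a small sharing group by enumerating all orderings of its members; the characteristic function `values` is a hypothetical example of a group-value function.

```python
from itertools import permutations

def shapley_shares(agents, v):
    """Exact Shapley values: average each agent's marginal contribution
    over all join orders of the group (feasible only for small groups)."""
    shares = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = set()
        for a in order:
            before = v(frozenset(coalition))
            coalition.add(a)
            shares[a] += v(frozenset(coalition)) - before
    return {a: s / len(orders) for a, s in shares.items()}

# Hypothetical 3-agent group: agents 'a' and 'b' complement each other,
# agent 'c' contributes less on its own.
values = {frozenset(): 0, frozenset({'a'}): 1, frozenset({'b'}): 1,
          frozenset({'c'}): 0, frozenset({'a', 'b'}): 4,
          frozenset({'a', 'c'}): 2, frozenset({'b', 'c'}): 2,
          frozenset({'a', 'b', 'c'}): 6}
shares = shapley_shares(['a', 'b', 'c'], values.get)
```

Under equal sharing each agent would receive 2, while the Shapley shares reward 'a' and 'b' for their complementarity, so agents face the right incentives when choosing which group to join.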
Award ID(s):
1750140
NSF-PAR ID:
10139472
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the AAAI Conference on Artificial Intelligence
Volume:
33
ISSN:
2159-5399
Page Range / eLocation ID:
2004 to 2011
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior might offer little benefit to human welfare over networks when it ignores the specific network mechanism of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally, and stingy bots, which share no resources at all. We also manipulate the bot’s network position. We show that reciprocal bots do little to change the unequal resource distribution among people. On the other hand, stingy bots balance structural power and improve collective welfare in human groups when placed in a specific network position, although they bestow no wealth on people. Our findings highlight the need to incorporate the human nature of reciprocity and relational interdependence when designing machine behavior in sharing networks. Conscientious machines do not always work for human welfare; their effect depends on the network structure in which they interact.

  2. We consider information design in spatial resource competition, motivated by ride sharing platforms sharing information with drivers about rider demand. Each of N co-located agents (drivers) decides whether to move to another location with an uncertain and possibly higher resource level (rider demand), where the utility for moving increases in the resource level and decreases in the number of other agents that move. A principal who can observe the resource level wishes to share this information in a way that ensures a welfare-maximizing number of agents move. Analyzing the principal’s information design problem using the Bayesian persuasion framework, we study both private signaling mechanisms, where the principal sends personalized signals to each agent, and public signaling mechanisms, where the principal sends the same information to all agents. We show: 1) For private signaling, computing the optimal mechanism using the standard approach leads to a linear program with 2^N variables, rendering the computation challenging. We instead describe a computationally efficient two-step approach to finding the optimal private signaling mechanism. First, we perform a change of variables to solve a linear program with O(N^2) variables that provides the marginal probabilities of recommending each agent move. Second, we describe an efficient sampling procedure over sets of agents consistent with these optimal marginal probabilities; the optimal private mechanism then asks the sampled set of agents to move and the rest to stay. 2) For public signaling, we first show the welfare-maximizing equilibrium given any common belief has a threshold structure. Using this, we show that the optimal public mechanism with respect to the sender-preferred equilibrium can be computed in polynomial time. 
3) We support our analytical results with numerical computations that show the optimal private and public signaling mechanisms achieve substantially higher social welfare when compared with no-information and full-information benchmarks. 
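The threshold idea can be illustrated with a toy model (a hypothetical utility form, not the paper's): if each of m movers earns the known resource level minus a crowding cost proportional to m, the welfare-maximizing number of movers is found by direct search over m, and more agents move when the resource level is higher.

```python
def best_num_movers(n, r, cost=1.0):
    """Toy welfare: each of m movers gets r - cost*m; stayers get 0.
    The welfare-maximizing m trades the resource level off against
    crowding, mirroring the threshold structure of the equilibrium."""
    def welfare(m):
        return m * (r - cost * m)
    return max(range(n + 1), key=welfare)
```

For instance, with n = 10 agents and resource level r = 6 the welfare m(6 - m) peaks at m = 3, while at r = 1 no one should move.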
  3. We consider an InterDependent Security (IDS) game with networked agents and positive externality where each agent chooses an effort/investment level for securing itself. The agents are interdependent in that the state of security of one agent depends not only on its own investment but also on the other agents' effort/investment. Due to the positive externality, the agents under-invest in security, which leads to an inefficient Nash equilibrium (NE). While much has been analyzed in the literature on the under-investment issue, in this study we take a different angle. Specifically, we consider the possibility of allowing agents to pool their resources, i.e., allowing agents to have the ability to both invest in themselves as well as in other agents. We show that the interaction of strategic and selfish agents under resource pooling (RP) improves the agents' effort/investment level as well as their utility as compared to a scenario without resource pooling. We show that the social welfare (total utility) at the NE of the game with resource pooling is higher than the maximum social welfare attainable in a game without resource pooling, even when the latter uses an optimal incentive mechanism. Furthermore, we show that while voluntary participation is not generally guaranteed in this latter scenario, it is guaranteed under resource pooling. 
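The under-investment at the NE can be seen in a toy symmetric two-agent IDS game (a hypothetical payoff, not the paper's model): each agent earns sqrt(own effort + b * other's effort) minus a linear cost, and best-response dynamics converge to an effort level below the planner's symmetric optimum.

```python
def best_response(x_other, b=0.5, c=0.5):
    """Toy IDS payoff sqrt(x_i + b*x_other) - c*x_i; the first-order
    condition sets the agent's effective security to 1/(4*c*c)."""
    target = 1.0 / (4 * c * c)
    return max(0.0, target - b * x_other)

def nash_efforts(b=0.5, c=0.5, iters=200):
    """Best-response dynamics; the map is a contraction, so it converges."""
    x1 = x2 = 0.0
    for _ in range(iters):
        x1 = best_response(x2, b, c)
        x2 = best_response(x1, b, c)
    return x1, x2

def social_optimum_effort(b=0.5, c=0.5):
    """Symmetric planner effort (1+b)/(4*c*c), which internalizes the
    externality and exceeds the NE effort."""
    return (1 + b) / (4 * c * c)
```

With b = 0.5 and c = 0.5 the NE effort is 2/3 per agent, while the planner would choose 1.5, illustrating the externality-driven under-investment.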
  4. In this work we consider online decision-making in settings where players want to guard against possible adversarial attacks or other catastrophic failures. To address this, we propose a solution concept in which players have an additional constraint that at each time step they must play a diversified mixed strategy: one that does not put too much weight on any one action. This constraint is motivated by applications such as finance, routing, and resource allocation, where one would like to limit one’s exposure to adversarial or catastrophic events while still performing well in typical cases. We explore properties of diversified strategies in both zero-sum and general-sum games, and provide algorithms for minimizing regret within the family of diversified strategies as well as methods for using taxes or fees to guide standard regret-minimizing players towards diversified strategies. We also analyze equilibria produced by diversified strategies in general-sum games. We show that, surprisingly, requiring diversification can actually lead to higher-welfare equilibria, and give strong guarantees on both the price of anarchy and the social welfare produced by regret-minimizing diversified agents. We additionally give algorithms for finding optimal diversified strategies in distributed settings where one must limit communication overhead. 
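One natural way to turn an arbitrary mixed strategy into a diversified one (a sketch under the assumption that "diversified" means a per-action weight cap, which matches the constraint described above; this is not necessarily the paper's algorithm) is a water-filling step: cap any overweight actions and rescale the rest proportionally until the cap holds everywhere.

```python
def diversify(p, cap):
    """Rescale a mixed strategy so no action's weight exceeds `cap`,
    keeping the uncapped weights proportional to the original ones."""
    n = len(p)
    if cap * n < 1.0:
        raise ValueError("cap too small to admit any distribution")
    q = list(p)
    capped = [False] * n
    while True:
        over = [i for i in range(n) if not capped[i] and q[i] > cap]
        if not over:
            return q
        for i in over:
            q[i] = cap
            capped[i] = True
        free = [i for i in range(n) if not capped[i]]
        mass_free = sum(q[i] for i in free)
        remaining = 1.0 - cap * sum(capped)
        if not free or mass_free == 0.0:
            return q  # nothing left to rescale
        scale = remaining / mass_free
        for i in free:
            q[i] *= scale
```

For example, with cap 0.4 the strategy (0.7, 0.2, 0.1) becomes (0.4, 0.4, 0.2): the excess weight on the first action is spread over the others in proportion to their original weights.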
  5. Shared/buy-in computing systems offer users the option to select between buy-in and shared services. In such systems, idle buy-in resources are made available to other users for sharing. With strategic users, resource purchase and allocation in such systems can be cast as a non-cooperative game, whose corresponding Nash equilibrium does not necessarily result in the optimal social cost. In this study, we first derive the optimal social cost of the game in closed form, by casting it as a convex optimization problem and establishing related properties. Next, we derive a closed-form expression for the social cost at the Nash equilibrium, and show that it can be computed in linear time. We further show that the strategy profiles of users at the optimum and the Nash equilibrium are directly proportional. We measure the inefficiency of the Nash equilibrium through the price of anarchy, and show that it can be quite large in certain cases, e.g., when the operating expense ratio is low or when the distribution of user workloads is relatively homogeneous. To improve the efficiency of the system, we propose and analyze two subsidy policies, which are shown to converge using best-response dynamics. 
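The price of anarchy mentioned above is the ratio of the social cost at the worst Nash equilibrium to the optimal social cost. A minimal brute-force illustration (a generic shared-vs-buy toy game with hypothetical costs, not the paper's model, whose NE admits a closed form):

```python
from itertools import product

def social_cost(profile, cost):
    return sum(cost(profile, i) for i in range(len(profile)))

def pure_nash_profiles(strategies, n, cost):
    """Brute-force the pure Nash equilibria of a small finite cost game:
    a profile is stable if no agent has a strictly cheaper deviation."""
    nash = []
    for profile in product(strategies, repeat=n):
        stable = all(
            cost(profile, i) <= cost(profile[:i] + (s,) + profile[i + 1:], i)
            for i in range(n) for s in strategies
        )
        if stable:
            nash.append(profile)
    return nash

# Hypothetical costs: sharing costs the number of sharers (congestion),
# buying in costs a flat 2.5.
def cost(profile, i):
    return profile.count('shared') if profile[i] == 'shared' else 2.5

profiles = list(product(['shared', 'buy'], repeat=2))
nash = pure_nash_profiles(['shared', 'buy'], 2, cost)
opt = min(social_cost(p, cost) for p in profiles)
poa = max(social_cost(p, cost) for p in nash) / opt  # price of anarchy
```

Here both agents sharing is the unique equilibrium (cost 4), while the optimum splits them across the two options (cost 3.5), giving a price of anarchy of 8/7; subsidy policies like those proposed above aim to close exactly this kind of gap.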