Title: Scalable Target Marketing: Distributed Markov Chain Monte Carlo for Bayesian Hierarchical Models

Many problems in marketing and economics require firms to make targeted consumer-specific decisions, but current estimation methods are not designed to scale to the size of modern data sets. In this article, the authors propose a new algorithm to close that gap. They develop a distributed Markov chain Monte Carlo (MCMC) algorithm for estimating Bayesian hierarchical models when the number of consumers is very large and the objects of interest are the consumer-level parameters. The two-stage, embarrassingly parallel algorithm is asymptotically unbiased in the number of consumers, retains the flexibility of a standard MCMC algorithm, and is easy to implement. The authors show that the distributed MCMC algorithm is faster and more efficient than a single-machine algorithm by at least an order of magnitude. They illustrate the approach with simulations of up to 100 million consumers and with data on 1,088,310 donors to a charitable organization. Applied to a modern large-scale data set rather than a typical-size one, the algorithm enables an estimated $1.6 million to $4.6 million in additional donations.
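To make the two-stage idea concrete, here is a minimal sketch in Python. It assumes a conjugate Normal-Normal hierarchical model and a generic division of labor (stage 1 draws population-level parameters from a subsample on one machine; stage 2 draws the conditionally independent consumer-level parameters on parallel workers); the model, the subsampling step, and all names are illustrative choices, not the authors' exact algorithm.

```python
# Hedged sketch: a generic two-stage, embarrassingly parallel MCMC for a
# Normal-Normal hierarchical model. Illustrative only, not the paper's code.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)

# Synthetic data: n consumers, each with m observations.
n, m = 10_000, 20
theta_true = rng.normal(1.0, 0.5, size=n)            # consumer-level params
y = theta_true[:, None] + rng.normal(0, 1.0, (n, m))

def stage1_hyper_draws(y_sub, n_draws=500, sigma=1.0):
    """Stage 1: Gibbs draws of the population parameters (mu, tau),
    here run on a subsample on a single machine."""
    ybar = y_sub.mean(axis=1)
    mu, tau = 0.0, 1.0
    draws = []
    for _ in range(n_draws):
        # theta_i | mu, tau, y  (conjugate normal update)
        prec = m / sigma**2 + 1 / tau**2
        mean = (m * ybar / sigma**2 + mu / tau**2) / prec
        theta = rng.normal(mean, np.sqrt(1 / prec))
        # mu | theta, tau  and  tau | theta, mu  (flat-ish priors)
        mu = rng.normal(theta.mean(), tau / np.sqrt(len(theta)))
        tau = np.sqrt(((theta - mu) ** 2).sum() / rng.chisquare(len(theta) - 1))
        draws.append((mu, tau))
    return np.array(draws[n_draws // 2:])             # discard burn-in

# (On platforms that spawn rather than fork, each worker re-runs this setup.)
hyper = stage1_hyper_draws(y[rng.choice(n, 1_000, replace=False)])

def stage2_shard(y_shard, sigma=1.0):
    """Stage 2: given the fixed hyper-parameter draws, the consumer-level
    posteriors are conditionally independent, so shards need no
    communication ("embarrassingly parallel")."""
    ybar = y_shard.mean(axis=1)
    out = np.empty((len(hyper), len(ybar)))
    for d, (mu, tau) in enumerate(hyper):
        prec = m / sigma**2 + 1 / tau**2
        mean = (m * ybar / sigma**2 + mu / tau**2) / prec
        out[d] = rng.normal(mean, np.sqrt(1 / prec))
    return out

if __name__ == "__main__":
    shards = np.array_split(y, 8)                     # one shard per worker
    with Pool(8) as pool:
        theta_draws = np.concatenate(pool.map(stage2_shard, shards), axis=1)
    print(theta_draws.shape)                          # (draws, consumers)
```

Because stage 2 involves no communication between workers, adding machines scales the consumer-level sampling almost linearly, which is what makes this style of algorithm attractive for data sets with millions of consumers.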

 
NSF-PAR ID: 10195896
Author(s) / Creator(s): ; ;
Publisher / Repository: SAGE Publications
Date Published:
Journal Name: Journal of Marketing Research
Volume: 57
Issue: 6
ISSN: 0022-2437
Page Range / eLocation ID: p. 999-1018
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We consider a large-scale service system where incoming tasks have to be instantaneously dispatched to one out of many parallel server pools. The user-perceived performance degrades with the number of concurrent tasks, and the dispatcher aims to maximize the overall quality of service by balancing the load through a simple threshold policy. We demonstrate that such a policy is optimal on the fluid and diffusion scales, while only involving a small communication overhead, which is crucial for large-scale deployments. In order to set the threshold optimally, however, it is important to learn the load of the system, which may be unknown. For that purpose, we design a control rule for tuning the threshold in an online manner. We derive conditions that guarantee that this adaptive threshold settles at the optimal value, along with estimates for the time until this happens. In addition, we provide numerical experiments that support the theoretical results and further indicate that our policy copes effectively with time-varying demand patterns.

     Summary of Contribution: Data centers and cloud computing platforms are the digital factories of the world, and managing resources and workloads in these systems involves operations research challenges of an unprecedented scale. Due to the massive size, complex dynamics, and wide range of time scales, the design and implementation of optimal resource-allocation strategies is prohibitively demanding from a computation and communication perspective. These resource-allocation strategies are essential for certain interactive applications, for which the available computing resources need to be distributed optimally among users in order to provide the best overall experienced performance. This is the subject of the present article, which considers the problem of distributing tasks among the various server pools of a large-scale service system, with the objective of optimizing the overall quality of service provided to users. A solution to this load-balancing problem cannot rely on maintaining complete state information at the gateway of the system, since this is computationally infeasible due to the magnitude and complexity of modern data centers and cloud computing platforms. Therefore, we examine a computationally light load-balancing algorithm that is nevertheless asymptotically optimal in a regime where the size of the system approaches infinity. The analysis is based on a Markovian stochastic model, which is studied through fluid and diffusion limits in the aforementioned large-scale regime. The article analyzes the load-balancing algorithm theoretically and provides numerical experiments that support and extend the theoretical results.
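As a rough illustration of the mechanics described above, the following toy simulation (not the paper's model or control rule) dispatches each task to a randomly chosen pool below a threshold when one exists, and periodically nudges the threshold up or down depending on how often such a pool was available; the adjustment rule and all constants are stand-ins.

```python
# Hedged sketch: threshold dispatching with a naive online threshold
# adjustment. The update rule is a stand-in for the paper's control rule.
import random

N_POOLS = 100
pools = [0] * N_POOLS            # number of concurrent tasks per pool
threshold = 2

def dispatch():
    """Send a task to a random pool below the threshold, else least loaded."""
    below = [i for i, load in enumerate(pools) if load < threshold]
    i = random.choice(below) if below else min(range(N_POOLS), key=pools.__getitem__)
    pools[i] += 1
    return bool(below)           # did a below-threshold pool exist?

def adjust_threshold(hit_rate):
    """Naive adaptation: raise the threshold when below-threshold pools are
    scarce, lower it when they are plentiful."""
    global threshold
    if hit_rate < 0.05:
        threshold += 1
    elif hit_rate > 0.95 and threshold > 1:
        threshold -= 1

# Toy discrete-event loop: Bernoulli arrivals and departures per step.
random.seed(1)
hits = arrivals = 0
for step in range(1, 100_001):
    if random.random() < 0.9:                   # arrival
        hits += dispatch()
        arrivals += 1
    busy = [i for i, load in enumerate(pools) if load > 0]
    if busy and random.random() < 0.9:          # departure
        pools[random.choice(busy)] -= 1
    if step % 1000 == 0 and arrivals:           # periodic online adjustment
        adjust_threshold(hits / arrivals)
        hits = arrivals = 0
print("final threshold:", threshold, "max load:", max(pools))
```

The key property the sketch mimics is that the dispatcher only needs to know which pools are below the threshold, not the full system state, which keeps the communication overhead small.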
  2. Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial, because it requires a neighbour search for each individual point in the potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. The algorithm introduced in this paper therefore aims to provide both the computational efficiency and the scalability needed to compute this metric for large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level, parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuity into the final results. In stage 2, the buffered tiles are distributed to different processors, which compute the local point density index in parallel. That tile-level parallel processing step is performed using a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted using a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16: the computational time was reduced by 14.8 times when the number of processes (i.e., executors) increased by 16 times. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor, a reduction in computational time of nearly 70 times compared to the 6 hours required without parallelism.
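The two-stage tiling strategy can be sketched as follows. This is an illustrative reimplementation, not the paper's code: a k-d tree stands in for the R-tree, the cloud is 2D, and the density metric (neighbours within a radius divided by the search-disc area) is a generic choice rather than the exact local point density index.

```python
# Hedged sketch: two-stage tiled computation of a local point density
# metric. Tiles carry a buffer so per-tile neighbour searches near tile
# edges still see all true neighbours.
import numpy as np
from multiprocessing import Pool
from scipy.spatial import cKDTree

R = 1.0                                    # neighbourhood radius
TILES = 4                                  # tiles per axis (16 tiles total)

def make_tiles(points, buffer=R):
    """Stage 1: partition points into partially overlapping, buffered tiles."""
    xs = np.linspace(points[:, 0].min(), points[:, 0].max(), TILES + 1)
    ys = np.linspace(points[:, 1].min(), points[:, 1].max(), TILES + 1)
    tiles = []
    for i in range(TILES):
        for j in range(TILES):
            core = ((points[:, 0] >= xs[i]) & (points[:, 0] < xs[i + 1]) &
                    (points[:, 1] >= ys[j]) & (points[:, 1] < ys[j + 1]))
            buffered = ((points[:, 0] >= xs[i] - buffer) &
                        (points[:, 0] < xs[i + 1] + buffer) &
                        (points[:, 1] >= ys[j] - buffer) &
                        (points[:, 1] < ys[j + 1] + buffer))
            # Keep the buffer points for searching, but report core points only.
            tiles.append((points[buffered], core[buffered]))
    return tiles

def tile_density(args):
    """Stage 2: per-tile neighbour counts (k-d tree standing in for an R-tree)."""
    pts, is_core = args
    tree = cKDTree(pts)
    counts = tree.query_ball_point(pts[is_core], r=R, return_length=True)
    return np.asarray(counts) / (np.pi * R**2)   # neighbours per unit area

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0, 50, size=(200_000, 2))
    with Pool() as pool:
        densities = np.concatenate(pool.map(tile_density, make_tiles(cloud)))
    print(densities.mean())
```

The buffer width only needs to match the search radius, so the overlap, and hence the duplicated work, stays small relative to the tile size.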
  3. We consider a variation on the classical finance problem of optimal portfolio design. In our setting, a large population of consumers is drawn from some distribution over risk tolerances, and each consumer must be assigned to a portfolio of lower risk than her tolerance. The consumers may also belong to underlying groups (for instance, of demographic properties or wealth), and the goal is to design a small number of portfolios that are fair across groups in a particular and natural technical sense. Our main results are algorithms for optimal and near-optimal portfolio design for both social welfare and fairness objectives, both with and without assumptions on the underlying group structure. We describe an efficient algorithm based on an internal two-player zero-sum game that learns near-optimal fair portfolios ex ante and show experimentally that it can be used to obtain a small set of fair portfolios ex post as well. For the special but natural case in which group structure coincides with risk tolerances (which models the reality that wealthy consumers generally tolerate greater risk), we give an efficient and optimal fair algorithm. We also provide generalization guarantees for the underlying risk distribution that have no dependence on the number of portfolios and illustrate the theory with simulation results.
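For flavour, here is a small dynamic program for a toy version of the welfare objective only (ignoring groups and fairness): choose k portfolio risk levels so that each consumer holds the riskiest feasible portfolio, with utility equal to that risk level. The objective and the O(n²k) recursion are illustrative simplifications, not the paper's game-based fair algorithm.

```python
# Hedged sketch: pick k portfolio risk levels maximizing a toy social
# welfare objective. Illustrative simplification of the design problem.
import numpy as np

def best_portfolios(tolerances, k):
    t = np.sort(tolerances)
    n = len(t)
    # seg[i][j]: welfare if consumers i..j-1 all hold the portfolio with
    # risk t[i] (the lowest tolerance in the group, so all can hold it).
    seg = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg[i][j] = t[i] * (j - i)
    # dp[j, c]: max welfare covering the first j consumers with c portfolios.
    NEG = -1e18
    dp = np.full((n + 1, k + 1), NEG)
    dp[0, 0] = 0.0
    choice = {}
    for j in range(1, n + 1):
        for c in range(1, k + 1):
            for i in range(j):
                v = dp[i, c - 1] + seg[i][j]
                if v > dp[j, c]:
                    dp[j, c], choice[j, c] = v, i
    # Recover the chosen risk levels by walking the choices backwards.
    levels, j, c = [], n, int(np.argmax(dp[n]))
    while c > 0:
        i = choice[j, c]
        levels.append(t[i])
        j, c = i, c - 1
    return sorted(levels), dp[n].max()

rng = np.random.default_rng(0)
levels, welfare = best_portfolios(rng.uniform(0, 1, 200), k=5)
print([round(x, 3) for x in levels], round(welfare, 2))
```

The fair version in the paper replaces this single-objective recursion with an internal two-player zero-sum game over group-level regrets, which the sketch deliberately omits.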
  4. Yang, Junyuan (Ed.)
    In this work, we develop a new set of Bayesian models to perform registration of real-valued functions. A Gaussian process prior is assigned to the parameter space of time warping functions, and a Markov chain Monte Carlo (MCMC) algorithm is utilized to explore the posterior distribution. While the proposed model can be defined on the infinite-dimensional function space in theory, dimension reduction is needed in practice because one cannot store an infinite-dimensional function on a computer. Existing Bayesian models often rely on some pre-specified, fixed truncation rule to achieve dimension reduction, either by fixing the grid size or the number of basis functions used to represent a functional object. In comparison, the new models in this paper randomize the truncation rule. Benefits of the new models include the ability to make inference on the smoothness of the functional parameters, a truncation rule that is informed by the data, and the flexibility to control the amount of shape-alteration in the registration process. For instance, using both simulated and real data, we show that when the observed functions exhibit more local features, the posterior distribution on the warping functions automatically concentrates on a larger number of basis functions. Supporting materials including code and data to perform registration and reproduce some of the results presented herein are available online.
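A minimal sketch of the randomized-truncation idea, registering one observed function to a template: the number of basis functions J for the warping function is itself sampled via birth/death moves inside a Metropolis sampler. The warp construction, priors, and proposal moves are illustrative (and the birth/death proposal asymmetry is ignored for brevity), so this is not the paper's exact model.

```python
# Hedged sketch: MCMC function registration with a randomly truncated
# basis for the warping function. Illustrative model choices throughout.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * t)
# Observed function: template composed with an unknown warp, plus noise.
observed = np.sin(2 * np.pi * t**1.5) + rng.normal(0, 0.05, t.size)

def warp(coefs):
    """Monotone warp gamma(t): the normalized cumulative integral of
    exp(sum_j c_j sin(pi j t)), guaranteeing gamma(0)=0, gamma(1)=1."""
    h = sum(c * np.sin(np.pi * (j + 1) * t) for j, c in enumerate(coefs))
    g = np.cumsum(np.exp(h))
    return (g - g[0]) / (g[-1] - g[0])

def log_post(coefs, sigma=0.05, lam=3.0):
    fit = np.interp(warp(coefs), t, template)
    loglik = -0.5 * np.sum((observed - fit) ** 2) / sigma**2
    J = len(coefs)
    # Poisson(lam) prior on the truncation level J, N(0,1) prior on coefs.
    logprior = (J * np.log(lam) - np.sum(np.log(np.arange(1, J + 1)))
                - 0.5 * np.sum(np.asarray(coefs) ** 2))
    return loglik + logprior

coefs, trace_J = [0.0], []
for it in range(20_000):
    u = rng.random()
    if u < 0.6:                               # perturb one coefficient
        prop = list(coefs)
        prop[rng.integers(len(prop))] += 0.1 * rng.normal()
    elif u < 0.8:                             # birth: add a basis function
        prop = list(coefs) + [0.1 * rng.normal()]
    elif len(coefs) > 1:                      # death: drop the last one
        prop = list(coefs[:-1])
    else:
        prop = list(coefs)
    # Metropolis accept/reject (proposal asymmetry ignored for brevity).
    if np.log(rng.random()) < log_post(prop) - log_post(coefs):
        coefs = prop
    trace_J.append(len(coefs))
print("posterior mean number of basis functions:", np.mean(trace_J[10_000:]))
```

Running the sampler on functions with more local features should shift the posterior on J upward, mirroring the data-informed truncation behaviour the abstract describes.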
  5. Abstract

    Functional responses describe how consumer foraging rates change with resource density. Despite extensive research looking at the factors underlying foraging interactions, there remains ongoing controversy about how temperature and body size control the two functional response parameters: space clearance (or attack) rate and handling time. Here, we investigate the effects of temperature, consumer mass, and resource mass using the largest compilation of functional responses yet assembled. This compilation contains 2,083 functional response curves covering a wide range of foragers and prey types, environmental conditions, and habitats. After accounting for experimental arena size, dimensionality of the foraging interaction, and consumer taxon, we find that both space clearance rate and handling time are optimized at intermediate temperatures (a unimodal rather than monotonic response), suggesting that the response to global climate change depends on the location of the consumer’s current temperature relative to the optimum. We further confirm that functional responses are higher and steeper for large consumers and small resources, and models using consumer and resource masses separately outperformed models using consumer:resource mass ratios, suggesting that consumer and resource body mass act independently to set interaction strengths. Lastly, we show that the extent to which foraging is affected by temperature or mass depends on the taxonomic identity of the consumer and the dimensionality of the consumer–resource interaction. We thus argue that although overall body size and temperature effects can be identified, they are not universal, and therefore food web and community modeling approaches could be improved by considering taxonomic identity along with body size and unimodal temperature effects.

     
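As a hedged illustration of fitting such a response, the sketch below combines a Holling type II functional response with a Gaussian-shaped (unimodal) temperature dependence for the space clearance rate and recovers the parameters by nonlinear least squares on simulated data; the functional forms and parameter values are assumptions, not taken from the paper.

```python
# Hedged sketch: Holling type II functional response with a unimodal
# temperature dependence for the space clearance rate a(T), fit by
# nonlinear least squares. Functional forms are illustrative choices.
import numpy as np
from scipy.optimize import curve_fit

def foraging_rate(X, a0, T_opt, width, h):
    """f = a(T)*R / (1 + a(T)*h*R), with a(T) peaking at an intermediate
    optimal temperature T_opt and h the handling time."""
    R, T = X
    a = a0 * np.exp(-((T - T_opt) ** 2) / (2 * width**2))
    return a * R / (1 + a * h * R)

rng = np.random.default_rng(0)
R = rng.uniform(1, 100, 300)                  # resource density
T = rng.uniform(5, 35, 300)                   # temperature (deg C)
# Simulated foraging rates with multiplicative lognormal noise.
y = foraging_rate((R, T), 0.8, 22.0, 6.0, 0.05) * rng.lognormal(0, 0.1, 300)

params, _ = curve_fit(foraging_rate, (R, T), y, p0=[1.0, 20.0, 5.0, 0.1])
print(dict(zip(["a0", "T_opt", "width", "h"], params.round(3))))
```

Under this parameterization, a consumer's sensitivity to warming depends on where its current temperature sits relative to T_opt, which is the point the abstract makes about responses to climate change.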