Given a set of facilities and clients, and costs to open facilities, the classic facility location problem seeks to open a set of facilities and assign each client to one open facility so as to minimize the cost of opening the chosen facilities plus the total distance of the clients to their assigned open facilities. Such an objective may induce an unequal cost over certain socioeconomic groups of clients (i.e., the total distance traveled by clients in such a group). This is important when planning the location of socially relevant facilities such as emergency rooms. In this work, we consider a fair version of the problem where we are given r client groups that partition the set of clients, and the distance of a given group is defined as the average distance of clients in the group to their respective open facilities. The objective is to minimize the Minkowski p-norm of the vector of group distances, to penalize high access costs to open facilities across the r groups of clients. This generalizes classic facility location (p = 1) and the minimization of the maximum group distance (p = ∞). However, in practice, fairness criteria may not be explicit or even known to a decision maker, and it is often unclear how to select a specific "p" to model the cost of unfairness. To get around this, we study the notion of solution portfolios: for a fixed problem instance, we seek a small portfolio of solutions such that for any Minkowski norm p, one of these solutions is an O(1)-approximation. Using the geometric relationship between various p-norms, we show the existence of a portfolio of cardinality O(log r), and a lower bound of Ω(√(log r)).
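To make the objective concrete, here is a minimal sketch (with hypothetical distances and groups, not taken from the paper) of the Minkowski p-norm of the vector of per-group average distances, showing how p = 1 and p = ∞ recover the two classic objectives:

```python
import math

def group_norm_objective(group_distances, p):
    """Minkowski p-norm of the vector of per-group average distances.

    group_distances: list of lists; entry g holds each client's distance
    to its assigned open facility within group g.
    """
    avg = [sum(d) / len(d) for d in group_distances]
    if math.isinf(p):
        return max(avg)                      # p = inf: worst-off group
    return sum(a ** p for a in avg) ** (1 / p)

# Two groups with average distances 3.0 and 4.0.
groups = [[2.0, 4.0], [4.0, 4.0]]
print(group_norm_objective(groups, 1))          # sum of group averages
print(group_norm_objective(groups, 2))          # sqrt(9 + 16) = 5.0
print(group_norm_objective(groups, math.inf))   # max group average = 4.0
```

Note that p = 1 sums the group averages, which matches the classic total-distance objective only up to per-group normalization; the sketch is meant to illustrate the norm, not the exact scaling used in the paper.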
There may not be common structure across different solutions in this portfolio, which can make planning difficult if the notion of fairness changes over time or if the budget to open facilities is disbursed over time. For example, small changes in p could lead to a completely different set of open facilities in the portfolio. Inspired by this, we introduce the notion of refinement, which is a family of solutions, one for each p-norm, satisfying a combinatorial property. This property requires that (1) the set of facilities open for a higher p-norm must be a subset of the facilities open for a lower p-norm, and (2) all clients assigned to the same open facility for a lower p-norm must be assigned to the same open facility for any higher p-norm. A refinement is α-approximate if the solution for each p-norm problem is an α-approximation for it. We show that it is sufficient to consider only O(log r) norms instead of all p-norms, p ∈ [1, ∞], to construct refinements. A natural greedy algorithm for the problem gives a poly(r)-approximate refinement, which we improve to poly(r^{1/√(log r)})-approximate using a recursive algorithm. We improve this ratio to O(log r) for the special case of tree metrics with uniform facility opening costs. Our recursive algorithm extends to other settings, including a hierarchical facility location problem that models facility location at several levels, such as public works departments and schools.
A full version of this paper can be found at https://arxiv.org/abs/2211.14873.
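The two conditions of the refinement property can be checked mechanically. A minimal sketch, with hypothetical clients c1–c3 and facilities f1, f2 (names are illustrative, not from the paper):

```python
def is_refinement(lower, higher):
    """Check the combinatorial refinement property between two solutions.

    Each solution maps client -> open facility. `lower` is the solution
    for the lower p-norm, `higher` for the higher one. Requires:
    (1) facilities open in `higher` are a subset of those open in `lower`;
    (2) clients sharing a facility in `lower` still share one in `higher`.
    """
    if not set(higher.values()) <= set(lower.values()):
        return False
    merged = {}
    for client, fac in lower.items():
        h = higher[client]
        if fac in merged and merged[fac] != h:
            return False  # clients co-assigned at lower norm were split
        merged[fac] = h
    return True

low  = {"c1": "f1", "c2": "f1", "c3": "f2"}   # solution for a lower p-norm
high = {"c1": "f1", "c2": "f1", "c3": "f1"}   # solution for a higher p-norm
print(is_refinement(low, high))               # f2's clients merged into f1
```

Intuitively, moving to a higher norm can only close facilities and merge client clusters, never split them, which is what makes refinements stable under changing fairness criteria.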
Algorithms and Learning for Fair Portfolio Design
We consider a variation on the classical finance problem of optimal portfolio design. In our setting, a large population of consumers is drawn from some distribution over risk tolerances, and each consumer must be assigned to a portfolio of lower risk than her tolerance. The consumers may also belong to underlying groups (for instance, of demographic properties or wealth), and the goal is to design a small number of portfolios that are fair across groups in a particular and natural technical sense.
Our main results are algorithms for optimal and near-optimal portfolio design for both social welfare and fairness objectives, both with and without assumptions on the underlying group structure. We describe an efficient algorithm based on an internal two-player zero-sum game that learns near-optimal fair portfolios ex ante, and show experimentally that it can be used to obtain a small set of fair portfolios ex post as well. For the special but natural case in which group structure coincides with risk tolerances (which models the reality that wealthy consumers generally tolerate greater risk), we give an efficient and optimal fair algorithm. We also provide generalization guarantees for the underlying risk distribution that have no dependence on the number of portfolios, and illustrate the theory with simulation results.
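The abstract does not spell out the fairness criterion, so this sketch only illustrates the feasibility constraint it describes: each consumer may only hold a portfolio of strictly lower risk than her tolerance. The menu of portfolio risk levels and the assignment rule below are hypothetical, not the paper's algorithm:

```python
def assign_portfolios(tolerances, portfolio_risks):
    """Assign each consumer the riskiest portfolio strictly below her
    tolerance, or None if no offered portfolio is feasible for her."""
    risks = sorted(portfolio_risks)
    assignment = []
    for t in tolerances:
        feasible = [r for r in risks if r < t]
        assignment.append(max(feasible) if feasible else None)
    return assignment

# Three consumers, menu of three portfolio risk levels (hypothetical).
print(assign_portfolios([0.1, 0.35, 0.8], [0.05, 0.3, 0.6]))
# [0.05, 0.3, 0.6]
```

With a small menu, low-tolerance consumers are pushed far below their tolerance, which is one way a coarse portfolio design can disadvantage a group; the paper's algorithms choose the menu itself to control this.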
 NSF-PAR ID: 10267306
 Journal Name: Economics and Computation (EC) 2021
 Sponsoring Org: National Science Foundation
More Like this


This paper constructs optimal brokerage contracts for multiple (heterogeneous) clients trading a single asset whose price follows the Almgren-Chriss model. The distinctive features of this work are as follows: (i) the reservation values of the clients are determined endogenously, and (ii) the broker is allowed to not offer a contract to some of the potential clients, thus choosing her portfolio of clients strategically. We find a computationally tractable characterization of the optimal portfolios of clients (up to a digital optimization problem, which can be solved efficiently if the number of potential clients is small) and conduct numerical experiments which illustrate how these portfolios, as well as the equilibrium profits of all market participants, depend on the price impact coefficients.
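The "digital optimization problem" mentioned above is a discrete choice over subsets of clients, which is tractable by enumeration when the number of potential clients is small. A minimal sketch; the `profit` function here is a hypothetical stand-in, since the broker's true profit in the paper comes from the equilibrium analysis under the Almgren-Chriss model:

```python
from itertools import combinations

def best_client_portfolio(clients, profit):
    """Brute-force the broker's discrete choice of which clients to serve.
    `profit(subset)` scores a candidate subset; enumeration over all 2^n
    subsets is feasible only for a small number of potential clients."""
    best, best_val = (), float("-inf")
    for k in range(len(clients) + 1):
        for subset in combinations(clients, k):
            v = profit(subset)
            if v > best_val:
                best, best_val = subset, v
    return best, best_val

# Toy profit with diminishing returns in the number of clients served.
toy = lambda s: len(s) * 2 - len(s) ** 2 * 0.3
print(best_client_portfolio(["a", "b", "c"], toy))
```

The exponential enumeration is exactly why the characterization is described as efficient only when the number of potential clients is small.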

In the problem of online portfolio selection as formulated by Cover (1991), the trader repeatedly distributes her capital over d assets in each of T > 1 rounds, with the goal of maximizing the total return. Cover proposed an algorithm, termed Universal Portfolios, that performs nearly as well as the best (in hindsight) static assignment of a portfolio, with an O(d log(T)) regret in terms of the logarithmic return. Without imposing any restrictions on the market, this guarantee is known to be worst-case optimal, and no other algorithm attaining it has been discovered so far. Unfortunately, Cover's algorithm crucially relies on computing certain d-dimensional integrals which must be approximated in any implementation; this results in a prohibitive O(d^4(T+d)^14) per-round runtime for the fastest known implementation, due to Kalai and Vempala (2002). We propose an algorithm for online portfolio selection that admits essentially the same regret guarantee as Universal Portfolios (up to a constant factor and replacement of log(T) with log(T+d)), yet has a drastically reduced runtime of O(d^(T+d)) per round. The selected portfolio minimizes the current logarithmic loss regularized by the log-determinant of its Hessian; equivalently, the hybrid logarithmic-volumetric barrier of the polytope specified by the asset return vectors. As such, our work reveals surprising connections of online portfolio selection with two classical topics in optimization theory: cutting-plane and interior-point algorithms.
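To ground the benchmark: a static (constantly rebalanced) portfolio's performance is its cumulative logarithmic return, and the best such portfolio in hindsight is what Universal Portfolios competes against. A minimal sketch with a hypothetical two-asset market (this is the benchmark computation only, not Cover's algorithm or the paper's method):

```python
import math

def log_wealth(weights, returns):
    """Cumulative logarithmic return of a constantly rebalanced portfolio
    with fixed `weights` over per-round asset return vectors."""
    return sum(math.log(sum(w * x for w, x in zip(weights, r)))
               for r in returns)

# Toy market: d = 2 assets over T = 3 rounds (hypothetical returns).
returns = [(1.1, 0.9), (0.8, 1.2), (1.05, 1.0)]

# Best constantly rebalanced portfolio in hindsight, via a coarse grid.
grid = [(a / 10, 1 - a / 10) for a in range(11)]
best = max(grid, key=lambda w: log_wealth(w, returns))
print(best, log_wealth(best, returns))
```

The regret of an online algorithm is the gap between its cumulative log return and this hindsight-optimal value; the point of the paper is achieving the O(d log T)-scale gap at a far lower per-round cost than Cover's integral-based scheme.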

Accurate detection of infected individuals is one of the critical steps in stopping any pandemic. When the underlying infection rate of the disease is low, testing people in groups, instead of testing each individual in the population, can be more efficient. In this work, we consider a noisy adaptive group testing design with specific test sensitivity and specificity that selects the optimal group, given previous test results, based on a pre-selected utility function. As in prior studies on group testing, we model this problem as a sequential Bayesian Optimal Experimental Design (BOED) to adaptively design the groups for each test. We analyze the required number of group tests when using the updated posterior on the infection status and the corresponding Mutual Information (MI) as our utility function for selecting new groups. More importantly, we study how potential bias in the ground-truth noise of group tests may affect the group testing sample complexity.
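The core primitive in such a design is the Bayes update after one noisy pooled test. A minimal sketch assuming independent per-member priors (a simplification; the paper's BOED tracks a full posterior and scores candidate groups by mutual information):

```python
import math

def posterior_any_infected(prior_probs, test_positive,
                           sensitivity, specificity):
    """Posterior probability that a pooled group contains an infection
    after one noisy group test.

    The test is positive with prob `sensitivity` if at least one member
    is infected, and with prob 1 - `specificity` otherwise. Members are
    assumed independent a priori (a simplifying assumption).
    """
    p_any = 1.0 - math.prod(1.0 - p for p in prior_probs)
    if test_positive:
        num, alt = sensitivity * p_any, (1.0 - specificity) * (1.0 - p_any)
    else:
        num, alt = (1.0 - sensitivity) * p_any, specificity * (1.0 - p_any)
    return num / (num + alt)

# Group of 4 with a 2% prior infection rate each; positive pooled test.
print(posterior_any_infected([0.02] * 4, True,
                             sensitivity=0.95, specificity=0.98))
```

A misspecified sensitivity or specificity here directly skews every posterior update, which is the bias effect on sample complexity that the abstract highlights.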

Kunal Talwar (Ed.)
This paper studies grading algorithms for randomized exams. In a randomized exam, each student is asked a small number of random questions from a large question bank. The predominant grading rule is simple averaging, i.e., calculating grades by averaging scores on the questions each student is asked, which is fair ex ante, over the randomized questions, but not fair ex post, on the realized questions. The fair grading problem is to estimate the average grade of each student on the full question bank. The maximum-likelihood estimator for the Bradley-Terry-Luce model on the bipartite student-question graph is shown to be consistent with high probability when the number of questions asked to each student is at least the cubed logarithm of the number of students. In an empirical study on exam data and in simulations, our algorithm based on the maximum-likelihood estimator significantly outperforms simple averaging in prediction accuracy and ex-post fairness, even with a small class and exam size.
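To see the ex-post unfairness of the baseline, here is a minimal sketch of simple averaging on hypothetical scores (the paper's actual estimator is the Bradley-Terry-Luce maximum-likelihood fit on the student-question graph, which is not implemented here):

```python
def simple_average_grades(scores):
    """Simple averaging: each student's grade is the mean score on the
    (random) questions she was actually asked."""
    return {student: sum(qs.values()) / len(qs)
            for student, qs in scores.items()}

# Two students who, hypothetically, are equally able; student "b" drew
# harder questions, so simple averaging penalizes her ex post.
scores = {"a": {"q1": 1, "q2": 1},
          "b": {"q3": 0, "q4": 1}}
print(simple_average_grades(scores))   # {'a': 1.0, 'b': 0.5}
```

The fair grading target is instead each student's average over the full question bank, which requires disentangling student ability from question difficulty, exactly what the BTL maximum-likelihood estimator does.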