<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Strategyproof Mechanisms for Group-Fair Obnoxious Facility Location Problems</title></titleStmt>
			<publicationStmt>
				<publisher>AAAI Conference on Artificial Intelligence</publisher>
				<date>03/25/2024</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10538444</idno>
					<idno type="doi">10.1609/aaai.v38i9.28843</idno>
					<title level='j'>Proceedings of the AAAI Conference on Artificial Intelligence</title>
<idno>2159-5399</idno>
<biblScope unit="volume">38</biblScope>
<biblScope unit="issue">9</biblScope>					

					<author>Jiaqian Li</author><author>Minming Li</author><author>Hau Chan</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[<p>We study the group-fair obnoxious facility location problems from the mechanism design perspective where agents belong to different groups and have private location preferences on the undesirable locations of the facility. Our main goal is to design strategyproof mechanisms that elicit the true location preferences from the agents and determine a facility location that approximately optimizes several group-fair objectives. We first consider the maximum total and average group cost (group-fair) objectives. For these objectives, we propose deterministic mechanisms that achieve 3-approximation ratios and provide matching lower bounds. We then provide the characterization of 2-candidate strategyproof randomized mechanisms. Leveraging the characterization, we design randomized mechanisms with improved approximation ratios of 2 for both objectives. We also provide randomized lower bounds of 5/4 for both objectives. Moreover, we investigate intergroup and intragroup fairness (IIF) objectives, addressing fairness between groups and within each group. We present a mechanism that achieves a 4-approximation for the IIF objectives and provide tight lower bounds.</p>]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>In recent decades, facility location problems have received considerable attention in operations research, computer science, and economic communities due to their applicability to address real-world challenges and theoretical interests (see, e.g., <ref type="bibr">(Hakimi 1964;</ref><ref type="bibr">Church and ReVelle 1976;</ref><ref type="bibr">Jain et al. 2003)</ref>). In a typical facility location problem, a social planner is tasked with identifying a suitable geographical location for a facility to best serve the targeted agent population according to a given optimization objective, defined based on the distances of agents' locations or preferences to the facility location <ref type="bibr">(Hakimi 1964)</ref>. These problems can be used to address locating a public school to provide education to students living in the surrounding area, locating a public park to provide recreational space for local residents, and locating a hospital to provide health services to citizens in the communities.</p><p>In artificial intelligence and economics, the studies of facility location problems have largely focused on the mechanism design perspective where the facility location preferences of the agents are private <ref type="bibr">(Chan et al. 2021)</ref>. The primary goal of these studies is to design (strategyproof) mechanisms that elicit agents' locations truthfully and determine a facility location to (approximately) optimize certain objectives <ref type="bibr">(Moulin 1980;</ref><ref type="bibr">Procaccia and Tennenholtz 2013)</ref>.</p><p>While the majority of the related mechanism design studies consider facilities that are pleasant for the agents, a few studies <ref type="bibr">(Cheng, Yu, and Zhang 2013;</ref><ref type="bibr">Ibara and Nagamochi 2012;</ref><ref type="bibr">Ye, Mei, and Zhang 2015)</ref> consider locating obnoxious facilities which are undesirable for agents. 
Indeed, facilities such as nuclear power reactors, landfill sites, and chemical plants are often unpleasant and do not provide service directly to the agents. For instance, landfill sites are unpleasant for the nearby agents because of the potential exposure to odors/gases/chemicals (e.g., lead, ammonia, and hydrogen sulfide) and other health risks (e.g., cancers and developmental disabilities). Similarly, exposure to radiation from nuclear power plants can cause cancer and leukemia. As a result, agents have preferences on the undesirable locations of the obnoxious facility. Therefore, existing studies design mechanisms to locate the facility far away from the agents' undesirable location preferences.</p><p>Group Fairness and Obnoxious Facilities. In real-world situations, it is well-documented that groups of agents (e.g., environmental and community activists) fight to advocate for the location of an undesirable facility (e.g., a coal ash landfill, nuclear plant, or petrochemical plant) through public and legal channels. Moreover, when locating obnoxious facilities, unfairness can unexpectedly occur for groups of agents. For instance, for any obnoxious facility location, we can create a problem instance that is good for one group of agents and bad for another group of agents who dislike locations near the obnoxious facility. Therefore, obnoxious facility location problems should carefully consider groups of agents and the potential unfairness incurred by these groups.</p><p>While recent studies propose to enforce fairness in facility location by designing strategyproof mechanisms subject to group fairness objectives <ref type="bibr">(Zhou, Li, and Chan 2022)</ref> or criteria/constraints <ref type="bibr">(Aziz et al. 2022a,b, 2023)</ref>, no previous studies consider group fairness for obnoxious facility location problems. 
Therefore, following the work of <ref type="bibr">Zhou, Li, and Chan (2022)</ref>, our focus is to consider natural group-fair objectives that quantify the unfairness of groups of agents and design strategyproof mechanisms that approximately optimize the objectives in obnoxious facility location problems.</p><p>Table 1: Summary of the approximation ratios and lower bounds for group-fair objectives (the Group-Fair Cost Objectives section) and IIF objectives (the Intergroup and Intragroup Fairness section). UB and LB stand for the upper bound and lower bound, respectively. For mtgc: deterministic UB 3 (Thm. 1) and LB 3 (Thm. 2); randomized UB 2 (Thm. 4) and LB 5/4 (Thm. 5). For magc: deterministic UB 3 (Thm. 7) and LB 3 (Thm. 8); randomized UB 2 (Thm. 9) and LB 5/4 (Thm. 10). For IIF_1 and IIF_2: the BGMV mechanism achieves UB 4 (Thm. 11), with LB 4 (Thm. 12).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Our Contributions</head><p>We investigate the group-fair obnoxious facility location problems where agents are located on a closed interval. We design strategyproof mechanisms which (approximately) optimize the maximum total and average group cost (group-fair) objectives, along with the intergroup and intragroup fairness (IIF) objectives. Our results are summarized in Table 1. More specifically:</p><p>&#8226; We introduce deterministic mechanisms (LHGV and BGMV) for minimizing the maximum total group cost (mtgc) or the maximum average group cost (magc). These mechanisms achieve 3-approximation ratios. We also provide tight lower bounds for these objectives.</p><p>&#8226; We provide a characterization of (group) strategyproof 2-candidate randomized mechanisms. We define monotonicity and establish the equivalence between strategyproofness and group strategyproofness within the context of a 2-candidate mechanism. Building on this insight, we propose optimal randomized 2-candidate mechanisms (PBPM and NPBPM) for both group-fair objectives, effectively improving the approximation ratio to 2.</p><p>&#8226; Lastly, we investigate two objectives that capture intergroup and intragroup fairness (IIF), considering group fairness within and among groups. We show that the BGMV mechanism achieves a 4-approximation for the IIF objectives. Additionally, we establish a tight lower bound that applies to all mechanisms for these objectives.</p><p>We note that all the designed mechanisms are group strategyproof. Due to the page limit, we include only some proof sketches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Related Work</head><p>We discuss related work on facility location problems that consider fairness or obnoxious facilities from the mechanism design perspective.</p><p>Fairness in Facility Location Problems A growing body of research is exploring fairness from the mechanism design perspective. <ref type="bibr">Procaccia and Tennenholtz (2013)</ref> initiate the study on approximate mechanism design without money. They examine the fairness objective of minimizing the maximum cost among agents for facility location problems. More recently, various envy-related concepts have been investigated, including minimax envy <ref type="bibr">(Cai, Filos-Ratsikas, and Tang 2016)</ref> and envy ratio <ref type="bibr">(Ding et al. 2020)</ref>. These concepts aim to minimize the maximum normalized cost difference between any two agents and the maximum ratio of utilities for any pair of agents, respectively. A very recent study <ref type="bibr">(Zhou, Li, and Chan 2022)</ref> incorporates group fairness into the facility location problems while adopting the approximate mechanism design approach. Beyond the objective-centric mechanism design, <ref type="bibr">Aziz et al. (2022a,b, 2023)</ref> investigate mechanism design for the proportional fairness notions (Individual Fair Share and Unanimous Fair Share) for both standard and obnoxious facility location problems.</p><p>Obnoxious Facility Location Problems In certain situations, agents may express displeasure towards the presence of a facility, preferring greater distance from it. These scenarios align with obnoxious facility location problems, where the objective shifts to maximizing the sum of distances from agents to the facility <ref type="bibr">(Church and Garfinkel 1978)</ref>. 
From the mechanism design perspective, the first work is by <ref type="bibr">Cheng, Yu, and Zhang (2013)</ref>, following Procaccia and Tennenholtz's seminal work <ref type="bibr">(Procaccia and Tennenholtz 2013)</ref>. In their context, the distance between agents and the facility is defined as the utility, and the objective becomes maximizing social welfare. Later on, <ref type="bibr">Ibara and Nagamochi (2012)</ref> provide characterizations of deterministic 2-candidate strategyproof mechanisms for the obnoxious facility game and <ref type="bibr">Ye, Mei, and Zhang (2015)</ref> study some other objectives (e.g., the sum of squares of agents' utility). A recent survey <ref type="bibr">(Chan et al. 2021</ref>) offers a comprehensive overview of recent progress from the mechanism design perspective.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Preliminaries</head><p>In this section, we provide necessary notations, objectives, and definitions for the group-fair obnoxious facility location problems and the corresponding mechanism design settings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Notations</head><p>Consider a set of n agents denoted as N = {1, 2, ..., n}, located along the interval I = [0, 1]. Each agent i ∈ N belongs to a specific group based on their affiliation (e.g., different groups of activists). Let G = {G_1, G_2, ..., G_m} represent the set of mutually exclusive groups of agents, with |G_j| indicating the number of agents in group G_j.</p><p>The profile of each agent i is denoted as r_i = (x_i, g_i), where x_i ∈ I is the location of agent i, and g_i ∈ {1, 2, ..., m} corresponds to the group affiliation of agent i. The full profile r = {r_1, r_2, ..., r_n} represents the location and group information for all agents, and x = {x_1, x_2, ..., x_n} is the associated location profile. We refer to the set x_g = {x_i : i ∈ G_g} as the location profile of agents within group G_g.</p><p>A deterministic mechanism is defined as a function f that maps a profile r to a facility location y ∈ I. Alternatively, a randomized mechanism is a function f that assigns a profile r to a probability distribution Y over I, where the facility location y is a potential realization of Y. The distance between two points a, b ∈ I is calculated as d(a, b) = |a - b|. Given a deterministic (or randomized) mechanism f and a profile r, the utility of an agent i ∈ N is represented as u(f(r), r_i) = d(f(r), x_i), and the corresponding cost is c(f(r), r_i) = 1 - u(f(r), r_i).</p><p>In the context of this obnoxious facility location problem, each agent aims to increase their distance from the facility to enhance their utility and reduce their cost.</p><p>As each agent is aware of the mechanism, they may be incentivized to falsely report their location in an attempt to manipulate the outcome to benefit themselves. The challenge is to design mechanisms that promote truthful reporting while (approximately) optimizing a specific cost objective.</p><p>In our setting, we assume an agent can only misreport their location. 
The group information is known publicly.</p><p>Definition 1 (Strategyproofness). A mechanism f is strategyproof (SP) if no agent can gain an advantage by misreporting their location, regardless of the locations reported by other agents. Formally, given any profile r = {r_1, r_2, ..., r_n}, let r_{-i} denote the profile excluding agent i. Then, for every i ∈ N and every misreport x′_i ∈ I with r′_i = (x′_i, g_i), it holds that c(f(r), r_i) ≤ c(f(r′_i, r_{-i}), r_i).</p><p>Definition 2 (Group Strategyproofness). A mechanism f is group strategyproof (GSP) if, for any group of agents, at least one agent cannot benefit from false reporting. Formally, given any profile r = {r_1, r_2, ..., r_n}, for any S ⊆ N and for any x′_S ∈ I^S, let r_{-S} be the profile excluding the set of agents S, and r′_S = (x′_i, g_i)_{i∈S}. Then, at least one agent i ∈ S must satisfy c(f(r), r_i) ≤ c(f(r′_S, r_{-S}), r_i).</p></div>
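The two definitions above can be checked mechanically on small discretized instances. The sketch below is our illustration (not part of the paper): it assumes the cost c(y, x) = 1 - |y - x| of the obnoxious setting and brute-forces Definition 1 for a deterministic mechanism, here the Majority Vote mechanism that the paper introduces later.

```python
from itertools import product

# Illustrative sketch (ours, not from the paper): brute-force check of
# Definition 1 on a discretized interval, assuming cost(y, x) = 1 - |y - x|
# so that agents want the facility far away from them.

def cost(y, x):
    return 1.0 - abs(y - x)

def is_strategyproof(mechanism, grid, n_agents, groups):
    """Check that no single agent ever gains by misreporting a grid point."""
    for locs in product(grid, repeat=n_agents):
        r = list(zip(locs, groups))
        y = mechanism(r)
        for i in range(n_agents):
            for lie in grid:  # agent i misreports location `lie`
                r_lie = list(r)
                r_lie[i] = (lie, groups[i])
                y_lie = mechanism(r_lie)
                # the truthful cost must never exceed the cost after lying
                if cost(y, locs[i]) > cost(y_lie, locs[i]) + 1e-12:
                    return False
    return True

def majority_vote(r):
    """Mechanism 1 (MV): 0 if at most half of the agents lie on [0, 1/2]."""
    n2 = sum(1 for x, _ in r if x > 0.5)   # agents on (1/2, 1]
    n1 = len(r) - n2                       # agents on [0, 1/2]
    return 1.0 if n1 > n2 else 0.0

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
print(is_strategyproof(majority_vote, grid, 3, [1, 1, 2]))  # True
```

On this grid the check confirms the observation made later in the paper that MV remains strategyproof even though it ignores the group information.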
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Optimization Objectives</head><p>We adopt two group-fair objectives proposed by <ref type="bibr">Zhou, Li, and Chan (2022)</ref>. The first objective considers total group cost, which is the cumulative cost of all members of a group. Our goal is to prevent any group from bearing a high total cost. Hence, the first group-fair objective is to minimize the maximum total group cost (mtgc). Formally, the maximum total group cost (mtgc) of a facility location y with respect to the profile r is defined as mtgc(y, r) = max_{1≤j≤m} Σ_{i∈G_j} c(y, r_i). Similarly, the maximum total group cost of a distribution Y with respect to r is defined as mtgc(Y, r) = E_{y∼Y}[mtgc(y, r)].</p><p>Table 2: Summary of the approximation ratios and lower bounds for deterministic and randomized mechanisms (mtgc).</p><p>The second group-fair objective aims to minimize the maximum average group cost (magc). This objective aims to alleviate the impact of group size and normalizes each total group cost by the group size. It is defined as magc(y, r) = max_{1≤j≤m} (1/|G_j|) Σ_{i∈G_j} c(y, r_i).</p><p>Let y* ∈ argmin_y mtgc(y, r) denote the optimal location for minimizing the objective mtgc. We measure the performance of a mechanism f by comparing the objective value achieved by f with that of the optimal solution. Specifically, f is an α-approximation if for any profile r, it holds that mtgc(f(r), r) ≤ α·mtgc(y*, r). The approximation performance of a mechanism for magc can be defined similarly.</p></div>
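As a concrete illustration, the two objectives can be computed directly from a profile. The sketch below is ours and assumes the cost c(y, r_i) = 1 - |y - x_i| described in the Preliminaries.

```python
from collections import defaultdict

# Illustrative sketch (ours): the mtgc and magc objectives on a profile of
# (location, group) pairs, assuming cost(y, x) = 1 - |y - x|.

def group_totals(y, profile):
    """Total cost and size per group for a facility placed at y."""
    totals, sizes = defaultdict(float), defaultdict(int)
    for x, g in profile:
        totals[g] += 1.0 - abs(y - x)
        sizes[g] += 1
    return totals, sizes

def mtgc(y, profile):
    """Maximum total group cost."""
    totals, _ = group_totals(y, profile)
    return max(totals.values())

def magc(y, profile):
    """Maximum average group cost (each total normalized by group size)."""
    totals, sizes = group_totals(y, profile)
    return max(totals[g] / sizes[g] for g in totals)

# Group 1: one agent at 0; group 2: two agents at 1.
r = [(0.0, 1), (1.0, 2), (1.0, 2)]
print(mtgc(0.0, r))  # 1.0  (group 1's agent sits on the facility)
print(mtgc(1.0, r))  # 2.0  (both agents of group 2 sit on the facility)
print(magc(1.0, r))  # 1.0  (group 2's total is averaged over its 2 agents)
```

The example shows why the two objectives can disagree: placing the facility at 1 is worst for mtgc but no worse than placing it at 0 under magc.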
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Group-Fair Cost Objectives</head><p>In this section, we develop both deterministic and randomized strategyproof mechanisms that approximately optimize the group-fair objectives defined in the previous section. Additionally, we provide lower bounds applicable to all strategyproof mechanisms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The Maximum Total Group Cost</head><p>In this subsection, we focus on minimizing the maximum total group cost (mtgc). Table <ref type="table">2</ref> summarizes our results. We begin by examining the mechanism proposed by <ref type="bibr">Cheng, Yu, and Zhang (2013)</ref>, which investigates the classical obnoxious facility location problems. This mechanism aims to maximize social welfare (the sum of each agent's utility) without considering any group-fair objective.</p><p>Mechanism 1 (Majority Vote (MV)). Given a location profile x on I, let n_1 be the number of agents located on [0, 1/2] and n_2 be the number of agents on (1/2, 1]. If n_1 ≤ n_2, the mechanism returns the left endpoint 0; otherwise, it returns the right endpoint 1.</p><p>Since Mechanism 1 does not leverage the group information, the strategyproofness still holds in our model. However, for the mtgc objective, it may perform poorly, as demonstrated in the following example.</p><p>Example 1. There are (2m - 3) agents from m groups. For 1 ≤ j ≤ m-1, group G_j only contains one agent located at 0, while all other (m - 2) agents belong to group G_m and are located at 1. In this case, the optimal solution places the facility at 1/(m-1) with a total group cost of (m-2)/(m-1) in each group. However, Mechanism 1 outputs 1 with a total group cost of (m - 2) in group G_m, which implies an (m-1)-approximation for the objective mtgc.</p><p>We propose the following deterministic mechanism, which incorporates the group information. Mechanism 2 (Largest Half-Group Vote (LHGV)). For each group G_j, let n_{j,1} and n_{j,2} be the number of agents on [0, 1/2] and (1/2, 1], respectively. We further denote the larger one of n_{j,1} or n_{j,2} by n_j. Let g ∈ argmax_{1≤j≤m} n_j, which represents the group containing the most agents on half of the interval among all groups (break ties by choosing the smallest index). 
The mechanism places the facility y at 1 if n_{g,1} > n_{g,2}. Otherwise, it places the facility at 0.</p><p>Notice that Mechanism 2 first selects a group and then performs Mechanism 1 on this subset of agents. We summarize this as a two-step mechanism design model which will be frequently used later:</p><p>1. Select a group G_g. 2. Design a mechanism for x_g. Recall that x_g is the location profile of agents in G_g.</p><p>The following proposition and theorem show that Mechanism 2 is GSP and has a good constant approximation ratio. Proposition 1. Mechanism 2 is group strategyproof. Theorem 1. Mechanism 2 is a 3-approximation for the objective mtgc.</p><p>Proof Sketch. For any profile r, let ALG and OPT denote the maximum total group cost given by LHGV and the optimal solution, respectively. Recall that in Mechanism 2, G_g represents the group with the largest half-group.</p><p>Without loss of generality, we assume that LHGV places the facility at 1. When y* ∈ (1/2, 1], y* is close to 1, and we prove that a slight movement from y* to 1 will not make a big difference between OPT and ALG, as the cost of an agent can be reduced by at most |1 - y*|. When y* ∈ [0, 1/2], we prove that OPT ≥ (1/2)·n_{g,1} and ALG ≤ (3/2)·n_{g,1}. Next, we provide a tight lower bound to complement our upper bound result. Theorem 2. Any deterministic strategyproof mechanism has an approximation ratio of at least 3 for the objective mtgc.</p><p>Before the proof, we introduce a lemma based on Ibara and Nagamochi (2012)'s result. Lemma 1. Any deterministic strategyproof mechanism is a 2-candidate mechanism.</p><p>Ibara and Nagamochi prove Lemma 1 for the classical obnoxious facility problem (single-group case), and the idea can be extended to the multiple-group case, which is used to prove two later theorems (Theorems 12 and 13). Based on Lemma 1, we provide the following proof for the theorem.</p><p>Proof of Theorem 2. 
In the proof, we always consider the profiles with two agents from a single group. Hence, we use the location profile to represent the agent profile since the agents' group identity is always 1.</p><p>For the profile (0, 0) (two agents located at 0), the maximum total group cost is 0 when the facility is placed at 1. Hence, the mechanism must place the facility at 1 to ensure a finite approximation ratio. Similarly, for the profile (1, 1), the mechanism must place the facility at 0. Therefore, the candidate set C_f = {0, 1} and there is no space to include other candidates due to Lemma 1.</p><p>Next, we show any deterministic mechanism with C_f = {0, 1} has an approximation ratio of at least 3. We start with the profile (0, 1). Without loss of generality, we assume that the output is 0. Then, for the profile (0, 1/2 + ε) with sufficiently small ε > 0, the facility needs to stay at 0 with a total group cost of 3/2 - ε. Otherwise, the agent at 1/2 + ε can deviate to 1 to obtain a lower cost. However, the optimal solution places the facility at 1 with a total group cost of 1/2 + ε. This leads to an approximation ratio of 3 as ε goes to 0.</p><p>Next, we study whether randomized mechanisms can help improve the approximation ratio. We define the following monotonicity property. Definition 3 (2-candidate Monotonicity). Let C_f = {c_1, c_2} ⊆ I be the candidate set of a randomized mechanism f. Without loss of generality, we assume c_1 &lt; c_2. Let α be the probability of placing the facility at c_1; then the probability of placing the facility at c_2 is 1 - α. f is monotone if</p><p>&#8226; α is a function with respect to n = (n_{j,1}, n_{j,2}, n_{j,3})_{j=1}^m, where n_{j,1}, n_{j,2}, n_{j,3} are the numbers of agents in G_j located in [0, (c_1+c_2)/2), at (c_1+c_2)/2, and in ((c_1+c_2)/2, 1], respectively.</p><p>&#8226; Let n_{-j} be the input except group G_j's information. 
We have that (1) α((n_{j,1}, n_{j,2}, n_{j,3}), n_{-j}) ≤ α((n_{j,1} - 1, n_{j,2}, n_{j,3} + 1), n_{-j}), and (2) α((n_{j,1}, n_{j,2} + 1, n_{j,3} - 1), n_{-j}) ≤ α((n_{j,1}, n_{j,2}, n_{j,3}), n_{-j}) ≤ α((n_{j,1} - 1, n_{j,2} + 1, n_{j,3}), n_{-j}).</p><p>Inequality (1) relates to the case where an agent misreports from [0, (c_1+c_2)/2) to ((c_1+c_2)/2, 1] or vice versa. Inequality (2) relates to the case where an agent misreports from either the left or right half-interval to the midpoint at (c_1+c_2)/2. Theorem 3. For any randomized 2-candidate mechanism f with C_f = {c_1, c_2}, the following statements are equivalent:</p><p>(1) f is strategyproof. (2) f is group strategyproof. (3) f is monotone.</p><p>Proof Sketch. We prove the equivalence using Fig. <ref type="figure">1</ref>.</p><p>From SP to Monotonicity. By SP, the outcome does not change when agents within [0, (c_1+c_2)/2) or [(c_1+c_2)/2, 1] misreport to any location within their respective intervals. Hence, we can prove that α depends solely on n. Then, considering any profile r, we prove the validity of Inequalities (<ref type="formula">1</ref>) and (<ref type="formula">2</ref>) using strategyproofness for the case that an agent misreports across the midpoint (c_1+c_2)/2.</p><p>From Monotonicity to GSP. Let S be the set of agents that misreport their locations. If every agent in set S is located within the interval [0, (c_1+c_2)/2), or alternatively, if they all reside within the interval ((c_1+c_2)/2, 1], we inductively use Inequalities (<ref type="formula">1</ref>) and (<ref type="formula">2</ref>) to prove they will not benefit from deviating. Otherwise, let i and j be two agents in S such that x_i &lt; (c_1+c_2)/2 ≤ x_j. We can prove either agent i or j will not benefit from deviating.</p><p>From GSP to SP. Considering a single individual as a group of size 1, the agent cannot benefit by misreporting their location due to group strategyproofness. 
This implies strategyproofness.</p><p>To facilitate our mechanism design, we define the simplified monotonicity property using a simple tie-breaking rule for agents at (c_1+c_2)/2 inspired by the above characterization.</p><p>Definition 4 (Simplified Monotonicity). f is simplified monotone if</p><p>&#8226; α, the probability of placing the facility at c_1, is a function with respect to n = (n_{j,1}, n_{j,2})_{j=1}^m, where n_{j,1}, n_{j,2} are the numbers of agents in G_j located in [0, (c_1+c_2)/2] and in ((c_1+c_2)/2, 1], respectively. &#8226; Let n_{-j} be the input excluding group G_j's information.</p><p>α(n) ≤ α((n_{j,1} - 1, n_{j,2} + 1), n_{-j}) holds true.</p><p>Corollary 1. Any simplified monotone mechanism is GSP.</p><p>Simplified monotonicity is a stronger concept than monotonicity, although the inequality constraint becomes simpler. Consider the "toy" mechanism that takes a single agent's location, denoted by x, as input. If x ∈ [0, 1/2) or x ∈ (1/2, 1], the facility is placed at 1 or 0 with probability 1, respectively. Otherwise, the facility is placed at 0 or 1 with equal probability. While this mechanism adheres to monotonicity, it does not satisfy simplified monotonicity.</p><p>Theorem 3 and Corollary 1 indicate how to better utilize the group information. We propose the following 2-candidate mechanism.</p><p>Mechanism 3 (Probabilistic Balanced Placement Mechanism (PBPM)). Let C_f = {0, 1}. Given an input profile r, the mechanism counts the numbers of agents in [0, 1/2] and (1/2, 1] for each group G_j, denoted by a count tuple n = (n_{j,1}, n_{j,2})_{j=1}^m.</p><p>Let α = (max_{1≤j≤m} n_{j,2})^2 / ((max_{1≤j≤m} n_{j,1})^2 + (max_{1≤j≤m} n_{j,2})^2). Specially, set α = 1 when max_{1≤j≤m} n_{j,1} = 0, and set α = 0 when max_{1≤j≤m} n_{j,2} = 0. The mechanism places the facility at 0 (resp. 1) with probability α (resp. 1 - α). Proposition 2. 
Mechanism 3 is group strategyproof.</p><p>The idea behind Mechanism 3 is based on minimizing the worst ratio among profiles where the optimal solution places the facility at one endpoint. Specifically, when the optimal facility location is at 0, placing the facility at 1 results in a ratio of at most 1 + 2·max_{1≤j≤m} n_{j,2} / max_{1≤j≤m} n_{j,1}. Conversely, if the optimal facility location is at 1, placing the facility at 0 incurs a ratio of at most 1 + 2·max_{1≤j≤m} n_{j,1} / max_{1≤j≤m} n_{j,2}. Mechanism 3 identifies the best value of α that achieves a balance in both cases.</p><p>The next theorem shows that Mechanism 3 improves the approximation ratio of the earlier mechanism even for profiles where the optimal location is not at 0 or 1. Theorem 4. Mechanism 3 is a 2-approximation for the objective mtgc.</p><p>Proof Sketch. For any profile r, let ALG and OPT denote the expected maximum total group cost given by PBPM and the optimal solution, respectively. Let c_0 and c_1 be the maximum total group cost when the facility is placed at 0 or 1, respectively. By definition, ALG = α·c_0 + (1 - α)·c_1. Without loss of generality, we assume that y* ∈ [0, 1/2]. We can prove that c_0 ≤ 2·OPT. Thus, we can further assume that c_1 ≥ 2·OPT. Otherwise, the approximation ratio is at most 2. We notice that α, c_0 and c_1 are determined by a constant number of groups. We aim to construct a succinct profile r′ with the property that c′_0 ≥ c_0, c′_1 ≥ c_1 and α′ ≤ α. Then, the approximation ratio of r′ is greater than or equal to that of r as ALG ≤ ALG′ and OPT ≥ OPT′.</p><p>The construction of r′ depends on the range of y*. For example, when y* ∈ [1/4, 1/2], the construction is as follows: &#8226; In profile r, there is a group that achieves the highest total group cost at location 1. 
Within this group, we relocate agents on [2y* - 1/2, 1/2] to the position 1/2 and agents on (1/2, 1] to 1. We denote this group by G_g. We introduce group 1 in r′, which contains n′_{1,1} agents located at 1/2 and n′_{1,2} agents located at 1, where n′_{1,2} is the same as the number of agents in G_g on (1/2, 1], and n′_{1,1} is determined by equating the cost at y* incurred by the agents at 1/2 in the new group 1 with the cost at y* incurred by the agents in G_g on [0, 1/2] after relocation.</p><p>Now we prove Theorem 9 based on the lemmas above.</p><p>Proof of Theorem 9. Consider r as any feasible profile. We duplicate each group G_j exactly ∏_{k≠j} |G_k| times, resulting in a new profile, r′. In this new profile r′, each group contains an equal number of agents, and it maintains the same approximation ratio as r due to Lemma 2. PBPM and NPBPM yield the same approximation ratio on r′ (Lemma 3), which is at most 2 due to Theorem 4. Consequently, the approximation ratio for the input profile r is also at most 2.</p><p>The proof of Theorem 5 uses profiles consisting only of agents from a single group. In such scenarios, the approximation ratios of mtgc and magc are identical. Hence, the magc objective maintains the lower bound result.</p><p>Theorem 10. Any randomized strategyproof mechanism has an approximation ratio of at least 5/4 for the objective magc.</p></div>
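To make this section's mechanisms concrete, the sketch below (ours) implements LHGV (Mechanism 2) and PBPM (Mechanism 3) and replays Example 1 with m = 5. The closed-form probability used for PBPM, α = M_2²/(M_1² + M_2²) with M_1 = max_j n_{j,1} and M_2 = max_j n_{j,2}, is our reconstruction: it matches the special cases α = 1 when M_1 = 0 and α = 0 when M_2 = 0, and it balances the two endpoint ratios, but the paper's exact expression should be taken from the original.

```python
# Illustrative sketches (ours) of LHGV and PBPM; profiles are lists of
# (location, group) pairs with cost(y, x) = 1 - |y - x|.

def half_counts(profile):
    """Per-group counts (n_j1 on [0, 1/2], n_j2 on (1/2, 1])."""
    counts = {}
    for x, g in profile:
        n1, n2 = counts.get(g, (0, 0))
        if x > 0.5:
            counts[g] = (n1, n2 + 1)
        else:
            counts[g] = (n1 + 1, n2)
    return counts

def lhgv(profile):
    """Mechanism 2: pick the group with the largest half (ties: smallest
    group index), then place the facility opposite that half."""
    counts = half_counts(profile)
    g = min(counts, key=lambda j: (-max(counts[j]), j))
    n1, n2 = counts[g]
    return 1.0 if n1 > n2 else 0.0

def pbpm(profile):
    """Mechanism 3: return (prob. of endpoint 0, prob. of endpoint 1).
    alpha = M2^2 / (M1^2 + M2^2) is an assumed reconstruction (see text)."""
    counts = half_counts(profile)
    m1 = max(n1 for n1, _ in counts.values())
    m2 = max(n2 for _, n2 in counts.values())
    if m1 == 0:
        return (1.0, 0.0)
    if m2 == 0:
        return (0.0, 1.0)
    alpha = m2 ** 2 / (m1 ** 2 + m2 ** 2)
    return (alpha, 1.0 - alpha)

# Example 1 with m = 5: groups 1..4 hold one agent each at 0, and
# group 5 holds m - 2 = 3 agents at 1.
r = [(0.0, j) for j in range(1, 5)] + [(1.0, 5)] * 3
print(lhgv(r))               # 0.0 (MV would output 1, paying m - 2 in group 5)
print(round(pbpm(r)[0], 3))  # 0.9
```

On Example 1, LHGV places the facility at 0 with an mtgc of 1 against the optimum (m-2)/(m-1) = 3/4, well within the 3-approximation of Theorem 1.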
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Intergroup and Intragroup Fairness</head><p>In this section, we investigate Intergroup and Intragroup Fairness (IIF) proposed by <ref type="bibr">Zhou, Li, and Chan (2022)</ref>, which captures not only the fairness between groups but also fairness within groups.</p><p>To facilitate our discussion, given profile r, group G_g, and facility location y, let avgc(r, g, y), maxc(r, g, y) and minc(r, g, y) be the average cost, maximum cost and minimum cost among agents in group G_g, respectively. We define below our group-fair IIF objectives, which measure both intergroup and intragroup fairness: IIF_1(y, r) = max_{1≤j≤m} avgc(r, j, y) + max_{1≤j≤m} {maxc(r, j, y) - minc(r, j, y)} and IIF_2(y, r) = max_{1≤j≤m} {avgc(r, j, y) + maxc(r, j, y) - minc(r, j, y)}.</p><p>Our goal is to minimize IIF_1 or IIF_2. Here, the avgc(r, j, y) term measures the intergroup fairness, and the (maxc(r, j, y) - minc(r, j, y)) term measures the intragroup fairness. Surprisingly, a mechanism we introduced earlier performs well for the two objectives. Theorem 11. Mechanism 4 is a 4-approximation for minimizing both IIF_1 and IIF_2. Proof Sketch. Let lm(x) (resp. rm(x)) be the leftmost (resp. rightmost) location of a location profile x. Let y be the solution given by BGMV. We discuss two cases based on the optimal location y* to build the facility.</p><p>Suppose y* is outside of all groups. In this case, we can prove that y* is at either 0 or 1. Comparing y with y*, the intragroup fairness term of each group (maxc(r, j, y) - minc(r, j, y)) remains unchanged. Recall that BGMV achieves a 3-approximation ratio for minimizing magc; hence we have max_{1≤j≤m} {avgc(r, j, y)} ≤ 3 max_{1≤j≤m} {avgc(r, j, y*)}. We can combine these two facts to show the approximation ratio for both IIF_1 and IIF_2 is 4.</p><p>Otherwise, there is a group g such that lm(x_g) ≤ y* ≤ rm(x_g). We can prove avgc(r, g, y*) + maxc(r, g, y*) - minc(r, g, y*) ≥ 1/2, and hence IIF_1(y*, r) ≥ IIF_2(y*, r) ≥ 1/2. 
We notice that IIF_2(y, r) ≤ IIF_1(y, r) ≤ 2 always holds. Therefore, the approximation ratio is at most 4.</p><p>Next, we establish lower bounds for the IIF objectives. Theorem 12. Any deterministic strategyproof mechanism has an approximation ratio of at least 4 for minimizing either IIF_1 or IIF_2.</p></div>
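For concreteness, the sketch below (ours) evaluates the two IIF objectives on a small profile. The exact forms of IIF_1 and IIF_2 are assumed from the surrounding discussion: IIF_1 decouples the intergroup and intragroup maxima, so IIF_1 ≥ IIF_2 always holds, as used in the proof sketch above.

```python
# Illustrative sketch (ours) of the IIF objectives, assuming
# cost(y, x) = 1 - |y - x| and the objective forms discussed in the text.

def per_group_costs(y, profile):
    """Map each group to the list of its agents' costs at location y."""
    groups = {}
    for x, g in profile:
        groups.setdefault(g, []).append(1.0 - abs(y - x))
    return groups

def iif1(y, profile):
    """Assumed form: max over groups of avgc, plus max over groups of
    (maxc - minc); the two maxima may come from different groups."""
    groups = per_group_costs(y, profile)
    avg = max(sum(c) / len(c) for c in groups.values())
    spread = max(max(c) - min(c) for c in groups.values())
    return avg + spread

def iif2(y, profile):
    """Assumed form: max over groups of avgc + (maxc - minc) jointly."""
    groups = per_group_costs(y, profile)
    return max(sum(c) / len(c) + max(c) - min(c) for c in groups.values())

# Group 1 has agents at 0 and 1/2; group 2 has one agent at 1.
r = [(0.0, 1), (0.5, 1), (1.0, 2)]
print(iif2(0.0, r))                  # 1.25 (group 1: avg 0.75, spread 0.5)
print(iif1(0.0, r) >= iif2(0.0, r))  # True (holds for every profile)
```

Since every avgc term and every (maxc - minc) term lies in [0, 1], both objectives are bounded by 2, matching the bound used in the proof of Theorem 11.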
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Discussion and Future Work</head><p>In this section, we discuss interesting future directions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Alternative Group-Fair Objectives</head><p>One could alternatively consider the minimum total group utility as a measure of fairness amongst groups. This would translate into group-fair objectives aiming to maximize either the minimum total or average group utility (mtgu or magu). However, when more than one group is present, achieving a finite approximation ratio becomes impossible. Theorem 13. Any deterministic strategyproof mechanism cannot achieve a finite approximation ratio for maximizing the minimum total or average group utility.</p><p>Following a similar approach to that of <ref type="bibr">(Fong et al. 2018)</ref>, we define the cost of an agent as one minus their utility, which then shifts the focus towards minimizing the maximum total or average group cost. However, the randomized strategyproof mechanism design for both mtgu and magu remains open.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Beyond 2-Candidate Randomized Mechanisms</head><p>This work focuses on 2-candidate randomized mechanisms. By confining the mechanism's output to a 2-candidate set, PBPM and NPBPM deliver the optimal approximation ratio for the mtgc and magc objectives, respectively. Theorem 14. Any randomized 2-candidate strategyproof mechanism has an approximation ratio of at least 2 for minimizing either mtgc or magc.</p><p>The domain of mechanisms involving more candidates remains largely unexplored. The existence of a strategyproof mechanism involving more than two candidates with a constant approximation ratio is an interesting direction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>We studied group-fair obnoxious facility location problems. We developed several (group) strategyproof mechanisms that minimize costs among different groups measured by the mtgc and magc objectives. For these objectives, the designed deterministic and 2-candidate randomized mechanisms obtain approximation ratios of 3 and 2, respectively. We also designed mechanisms for two objectives capturing both intergroup and intragroup fairness that obtain 4-approximation ratios. Finally, we provided lower bounds for these objectives to complement our results.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0"><p>The Thirty-Eighth AAAI Conference on Artificial Intelligence </p></note>
		</body>
		</text>
</TEI>
