

Search for: All records

Creators/Authors contains: "Dixon, Peter"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In the context of a national movement to defund police departments, many American cities are starting to reimagine public safety, as activists demand new practices that maintain safety while minimizing harm and ensuring accountability when harms occur. Drawing on Everyday Peace Indicators methodologies, we argue that “community-centered” measurement, combined with researcher-practitioner partnerships, can help move both researchers and policymakers toward a more meaningful approach to policy design and evaluation. However, applying community-centered measurement to the context of American policing raises important theoretical and practical concerns, in particular the question of how community is defined, and who gets to define it. In this article, we ask: how do we define “community” in participatory research contexts where the concept of community is overlapping and contested? Using the example of a recent study carried out in the City of Oakland, we illustrate the complexities of applying a community-centered measurement process to the case of public safety and, more broadly, to police reform in American cities. We conclude with a discussion of both the benefits and limitations of our own approach, as well as a set of considerations for those engaging in participatory research.

     
    Free, publicly accessible full text available February 1, 2025
  2. Oh, A.; Naumann, T.; Globerson, A.; Saenko, K.; Hardt, M.; Levine, S. (Eds.)
    We investigate replicable learning algorithms. Informally, a learning algorithm is replicable if it outputs the same canonical hypothesis over multiple runs with high probability, even when different runs observe different sets of samples from the unknown data distribution. In general, such a strong notion of replicability is not achievable, so we consider two feasible notions called list replicability and certificate replicability. Intuitively, these notions capture the degree of (non)replicability. The goal is to design learning algorithms with optimal list and certificate complexities while minimizing the sample complexity. Our contributions are the following. 1. We first study the task of estimating the biases of $d$ coins, up to an additive error of $\varepsilon$, from samples. For this task, we design a $(d+1)$-list replicable algorithm, and we establish that this list complexity is optimal: no learning algorithm for this task has a list size smaller than $d+1$. We also design learning algorithms with certificate complexity $\tilde{O}(\log d)$. The sample complexity of both algorithms is $\tilde{O}(\frac{d^2}{\varepsilon^2})$, where $\varepsilon$ is the approximation error parameter (for a constant error probability). 2. In the PAC model, we show that any hypothesis class learnable with $d$ nonadaptive statistical queries can be learned via a $(d+1)$-list replicable algorithm and also via a $\tilde{O}(\log d)$-certificate replicable algorithm. The sample complexity of both algorithms is $\tilde{O}(\frac{d^2}{\nu^2})$, where $\nu$ is the approximation error of the statistical query. We also show that for the concept class \dtep, the list complexity is exactly $d+1$ with respect to the uniform distribution. To establish our upper bounds we use rounding schemes induced by geometric partitions with certain properties, and we use the Sperner/KKM lemma to establish the lower bounds. A minimal sketch of the rounding idea appears below.
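    To make the rounding idea concrete, here is a minimal Python sketch (our illustration, not the paper's construction) of the $d=1$ case: the empirical bias is estimated to within $\varepsilon/2$ and snapped to a grid of spacing $\varepsilon$, so across independent runs the output lands on one of at most two canonical grid points, each within $\varepsilon$ of the true bias. The function names and parameters are illustrative assumptions.

```python
import math
import random

def estimate_bias_list_replicable(coin, eps=0.05, delta=0.01):
    """2-list-replicable bias estimation for one coin (the d = 1 case).

    A Chernoff bound gives an empirical mean within eps/2 of the true
    bias with probability 1 - delta; snapping that mean to a fixed grid
    of spacing eps then forces the output onto one of at most
    d + 1 = 2 canonical grid points, each within eps of the true bias.
    """
    n = math.ceil(math.log(2 / delta) / (2 * (eps / 2) ** 2))
    empirical = sum(coin() for _ in range(n)) / n
    return round(empirical / eps) * eps  # snap to the canonical grid

# Independent runs draw fresh samples, yet (w.h.p.) the outputs take
# at most two distinct values.
coin = lambda: random.random() < 0.37
print({estimate_bias_list_replicable(coin) for _ in range(20)})
```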
  3. In conflicts involving non-state armed groups, individualized approaches to transitional justice and disarmament, demobilization, and reintegration (DDR) face significant shortcomings for both victims and ex-combatants. Yet both transitional justice and DDR have had trouble interrogating the tensions between individualized and collective approaches, focusing more on normative debates over the proper balance than on the actual experiences of communities in transition. This paper seeks to advance the debate through empirical research on the local experience of justice and coexistence amid civilian and ex-combatant communities in Colombia. The authors utilized a unique dataset of ‘everyday’ community-based indicators of coexistence and justice in eight civilian and ex-combatant communities in rural Colombia. Drawing on the country’s web of reparations mechanisms, the paper distinguishes between ‘state-led’ and ‘community-led’ collective justice and proposes that the latter can speak to both civilian and ex-combatant communities’ priorities and lived experiences and, ultimately, provide a pathway to coexistence. In contrast to the Colombian state’s current collective reparations projects, a community-led model speaks directly to the concerns and priorities of both groups: on the civilian side, to be compensated directly by those who committed acts of violence; and on the ex-combatant side, to be recognized and accepted as a community. Such a community-led model, integrating transitional justice and DDR processes, would serve as a pragmatic middle ground between the individual and state-led collective approaches that dominate transitional justice.

     
  4. Leonardi, Stefano; Gupta, Anupam (Eds.)
    A probabilistic algorithm A is pseudodeterministic if, on every input, there exists a canonical value that is output with high probability. If the algorithm outputs one of k canonical values with high probability, then it is called a k-pseudodeterministic algorithm. In the study of pseudodeterminism, the Acceptance Probability Estimation Problem (APEP), which is to additively approximate the acceptance probability of a Boolean circuit, is emerging as a central computational problem. This problem admits a 2-pseudodeterministic algorithm. Recently, it was shown that a pseudodeterministic algorithm for this problem would imply that any multi-valued function that admits a k-pseudodeterministic algorithm for a constant k (including approximation algorithms) also admits a pseudodeterministic algorithm (Dixon, Pavan, Vinodchandran; ITCS 2021). The contribution of the present work is two-fold. First, as our main conceptual contribution, we establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally related to the gap between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: APEP has a pseudodeterministic approximation algorithm if and only if every promise problem in PromiseBPP has a solution in BPP. A conceptual interpretation of this equivalence is that the algorithmic gap between 2-pseudodeterminism and pseudodeterminism is equivalent to the gap between PromiseBPP and BPP. Based on this connection, we show that designing pseudodeterministic algorithms for APEP leads to the solution of some open problems in complexity theory, including new Boolean circuit lower bounds. This equivalence also explains how multi-pseudodeterminism is connected to problems in SearchBPP. In particular, we show that if APEP has a pseudodeterministic algorithm, then every problem that admits a k(n)-pseudodeterministic algorithm (for any polynomial k) is in SearchBPP and admits a pseudodeterministic algorithm. Motivated by this connection, we also explore its connection to probabilistic search problems and establish that APEP is complete for certain notions of search problems in the context of pseudodeterminism. Our second contribution is establishing query complexity lower bounds for multi-pseudodeterministic computations. We prove that for every k ≥ 1, there exists a problem whose (k+1)-pseudodeterministic query complexity, in the uniform query model, is O(1) but has a k-pseudodeterministic query complexity of Ω(n), even in the more general nonadaptive query model. A key contribution of this part of the work is the utilization of Sperner’s lemma in establishing query complexity lower bounds. 
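    As a toy illustration of the grid-rounding idea behind 2-pseudodeterminism (a sketch under our own assumptions, not the construction from the paper), the following Python snippet estimates a circuit's acceptance probability by sampling and snaps the estimate to a fixed eps-grid: plain sampling returns a different value on almost every run, while the rounded version concentrates on at most two canonical grid values.

```python
import random

def accept_prob_naive(circuit, n_inputs, samples=4000):
    """Plain sampling: a different estimate on almost every run,
    so this is *not* pseudodeterministic."""
    hits = sum(circuit([random.getrandbits(1) for _ in range(n_inputs)])
               for _ in range(samples))
    return hits / samples

def accept_prob_2pd(circuit, n_inputs, eps=0.1, samples=4000):
    """Snap the estimate to a fixed eps-grid: with high probability the
    output is one of at most two neighboring grid points, which is
    the 2-pseudodeterministic behavior described above."""
    return round(accept_prob_naive(circuit, n_inputs, samples) / eps) * eps

# Toy "circuit": accepts iff at least two of its three input bits are 1.
majority3 = lambda bits: sum(bits) >= 2
print({accept_prob_naive(majority3, 3) for _ in range(5)})  # many distinct values
print({accept_prob_2pd(majority3, 3) for _ in range(5)})    # at most two values
```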
  5. The Acceptance Probability Estimation Problem (APEP) is to additively approximate the acceptance probability of a Boolean circuit. This problem admits a probabilistic approximation scheme. A central question is whether we can design a pseudodeterministic approximation algorithm for this problem: a probabilistic polynomial-time algorithm that outputs a canonical approximation with high probability. Recently, it was shown that such an algorithm would imply that every approximation algorithm can be made pseudodeterministic (Dixon, Pavan, Vinodchandran; ITCS 2021). The main conceptual contribution of this work is to establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally connected to the relationship between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: every promise problem in PromiseBPP has a solution in BPP if and only if APEP has a pseudodeterministic algorithm. Based on this connection, we show that pseudodeterministic algorithms for APEP can shed light on several central topics in complexity theory, such as circuit lower bounds, probabilistic hierarchy theorems, and multi-pseudodeterminism.
  6. Lee, James (Ed.)
    We exhibit several computational problems that are complete for multi-pseudodeterministic computations in the following sense: (1) these problems admit 2-pseudodeterministic algorithms, and (2) if any of these problems admits a pseudodeterministic algorithm, then every multi-valued function that admits a $k$-pseudodeterministic algorithm for a constant $k$ also admits a pseudodeterministic algorithm. We also show that these computational problems are complete for Search-BPP: a pseudodeterministic algorithm for any of these problems implies a pseudodeterministic algorithm for all problems in Search-BPP.
  7. Pass, Rafael; Pietrzak, Krzysztof (Eds.)
    We investigate the complexity of problems that admit perfect zero-knowledge interactive protocols and establish new unconditional upper bounds and oracle separation results. We establish our results by investigating certain distribution testing problems: computational problems over high-dimensional distributions represented by succinct Boolean circuits. A relatively less-investigated complexity class, SBP, emerged as significant in this study. The main results we establish are: (1) an unconditional inclusion, NIPZK ⊆ CoSBP; (2) the construction of a relativized world containing a distribution testing problem that lies in NIPZK but not in SBP, giving a relativized separation of NIPZK (and hence PZK) from SBP; (3) the construction of a relativized world containing a distribution testing problem that lies in PZK but not in CoSBP, giving a relativized separation of PZK from CoSBP. Results (1) and (3) together imply an oracle separating PZK from NIPZK. Our results refine the landscape of perfect zero-knowledge classes in relation to traditional complexity classes.