-
In New Mexico and many other jurisdictions, judges may detain defendants pretrial if the prosecutor proves, by clear and convincing evidence, that releasing them would pose a danger to the public. However, some policymakers argue that certain classes of defendants should have a “rebuttable presumption” of dangerousness, shifting the burden of proof to the defense. Using data on over 15,000 felony defendants who were released pretrial in a 4-year period in New Mexico, we measure how many of them would have been detained by various presumptions, and what fraction of these defendants in fact posed a danger in the sense that they were charged with a new crime during pretrial supervision. We consider presumptions based on the current charge, past convictions, past failures to appear, past violations of conditions of release, and combinations of these drawn from recent legislative proposals. We find that for all these criteria, at most 8% of the defendants they identify are charged pretrial with a new violent crime (felony or misdemeanor), and at most 5% are charged with a new violent felony. The false-positive rate, that is, the fraction of defendants these policies would detain who are not charged with any new crime pretrial, ranges from 71% to 90%. The broadest legislative proposals, such as detaining all defendants charged with a violent felony, are little more accurate than detaining a random sample of defendants released under the current system, and would jail 20 or more people to prevent a single violent felony. We also consider detention recommendations based on risk scores from the Arnold Public Safety Assessment (PSA). Among released defendants with the highest risk score and the “violence flag,” 7% are charged with a new violent felony and 71% are false positives. We conclude that these criteria for rebuttable presumptions do not accurately target dangerous defendants: they cast wide nets and recommend detention for many pretrial defendants who do not pose a danger to the public.
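To make the arithmetic behind that last figure concrete, here is a minimal sketch in Python. The two rates are the ones quoted above; the cohort size is an illustrative assumption, not the paper's data.

```python
# Illustrative arithmetic for the accuracy of a detention presumption.
# Only the two rates come from the abstract; the cohort size is made up.
flagged = 1000                 # hypothetical defendants a presumption would detain
violent_felony_rate = 0.05     # at most 5% are charged with a new violent felony
false_positive_rate = 0.71     # best case quoted: 71% face no new charge at all

felonies_potentially_averted = flagged * violent_felony_rate   # at most 50
needlessly_detained = flagged * false_positive_rate            # at least 710

# "Number needed to detain": people jailed per violent felony prevented,
# under the generous assumption that detention averts every such charge.
print(1 / violent_felony_rate)                                 # 20.0
print(felonies_potentially_averted, needlessly_detained)
```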
-
Embedding graphs in a geographical or latent space, i.e., inferring locations for vertices in Euclidean space or on a smooth manifold or submanifold, is a common task in network analysis, statistical inference, and graph visualization. We consider the classic model of random geometric graphs where n points are scattered uniformly in a square of area n, and two points have an edge between them if and only if their Euclidean distance is less than r. The reconstruction problem then consists of inferring the vertex positions, up to the symmetries of the square, given only the adjacency matrix of the resulting graph. We give an algorithm that, if r = n^α for α > 0, with high probability reconstructs the vertex positions with a maximum error of O(n^β) where β = 1/2 − (4/3)α, until α ≥ 3/8 where β = 0 and the error becomes O(√{log n}). This improves over earlier results, which were unable to reconstruct with error less than r. Our method estimates Euclidean distances using a hybrid of graph distances and short-range estimates based on the number of common neighbors. We extend our results to the surface of the sphere in ℝ³ and to hypercubes in any constant dimension.
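To illustrate the short-range, common-neighbor idea in isolation (this is not the authors' full algorithm, and the function names are ours): when two vertices lie at distance d < 2r in a unit-density geometric graph, their expected number of common neighbors equals the area of the lens where two radius-r disks overlap, so inverting that area numerically turns a common-neighbor count into a distance estimate.

```python
import numpy as np

def lens_area(d, r):
    """Area of the intersection of two radius-r disks whose centers are d apart."""
    if d >= 2 * r:
        return 0.0
    return 2 * r**2 * np.arccos(d / (2 * r)) - (d / 2) * np.sqrt(4 * r**2 - d**2)

def estimate_distance(common_neighbors, r, density=1.0):
    """Invert the expected count (density * lens area) by bisection;
    lens_area is strictly decreasing in d on [0, 2r]."""
    lo, hi = 0.0, 2 * r
    for _ in range(60):
        mid = (lo + hi) / 2
        if density * lens_area(mid, r) > common_neighbors:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity check: n points uniform in a square of area n (density 1),
# two probe locations exactly 0.6 * r apart.
rng = np.random.default_rng(0)
n, r = 40000, 10.0
pts = rng.uniform(0, np.sqrt(n), size=(n, 2))
a = np.array([100.0, 100.0])
b = a + np.array([0.6 * r, 0.0])
common = np.sum((np.linalg.norm(pts - a, axis=1) < r) &
                (np.linalg.norm(pts - b, axis=1) < r))
print(estimate_distance(common, r))   # close to the true distance 6.0
```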
-
We study the problem of efficiently refuting the k-colorability of a graph, or equivalently, certifying a lower bound on its chromatic number. We give formal evidence of average-case computational hardness for this problem in sparse random regular graphs, suggesting that there is no polynomial-time algorithm that improves upon a classical spectral algorithm. Our evidence takes the form of a "computationally-quiet planting": we construct a distribution of d-regular graphs that has significantly smaller chromatic number than a typical regular graph drawn uniformly at random, while providing evidence that these two distributions are indistinguishable by a large class of algorithms. We generalize our results to the more general problem of certifying an upper bound on the maximum k-cut. This quiet planting is achieved by minimizing the effect of the planted structure (e.g. colorings or cuts) on the graph spectrum. Specifically, the planted structure corresponds exactly to eigenvectors of the adjacency matrix. This avoids the pushout effect of random matrix theory, and delays the point at which the planting becomes visible in the spectrum or local statistics. To illustrate this further, we give similar results for a Gaussian analogue of this problem: a quiet version of the spiked model, where we plant an eigenspace rather than adding a generic low-rank perturbation. Our evidence for computational hardness of distinguishing two distributions is based on three different heuristics: stability of belief propagation, the local statistics hierarchy, and the low-degree likelihood ratio. Of independent interest, our results include general-purpose bounds on the low-degree likelihood ratio for multi-spiked matrix models, and an improved low-degree analysis of the stochastic block model.
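For context, the classical spectral certificate in question is, as far as we can tell, of the Hoffman type: for adjacency eigenvalues λ₁ ≥ … ≥ λₙ, every graph satisfies χ ≥ 1 + λ₁/|λₙ|, so computing two extreme eigenvalues refutes k-colorability whenever this bound exceeds k. A minimal sketch:

```python
import numpy as np
import networkx as nx

def hoffman_bound(G):
    """Hoffman's spectral lower bound on the chromatic number:
    chi(G) >= 1 + lambda_max / |lambda_min| (adjacency eigenvalues)."""
    eig = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return 1 + eig[-1] / abs(eig[0])

# On a random d-regular graph, lambda_max = d and lambda_min is about
# -2*sqrt(d-1), so the certified bound is roughly 1 + sqrt(d)/2.
G = nx.random_regular_graph(16, 2000, seed=1)
print(hoffman_bound(G))   # refutes k-colorability whenever this exceeds k
```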
-
Belief propagation is a widely used message passing method for the solution of probabilistic models on networks such as epidemic models, spin models, and Bayesian graphical models, but it suffers from the serious shortcoming that it works poorly in the common case of networks that contain short loops. Here, we provide a solution to this long-standing problem, deriving a belief propagation method that allows for fast calculation of probability distributions in systems with short loops, potentially with high density, as well as giving expressions for the entropy and partition function, which are notoriously difficult quantities to compute. Using the Ising model as an example, we show that our approach gives excellent results on both real and synthetic networks, improving substantially on standard message passing methods. We also discuss potential applications of our method to a variety of other problems.
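For readers who want the baseline in code, here is a minimal sketch of standard belief propagation for the Ising model, i.e., the plain message-passing scheme that is exact on trees and degrades on loopy graphs; it is not the loop-corrected method described above, and all names are ours.

```python
import numpy as np
import networkx as nx

def ising_bp(G, beta, J=1.0, h=0.0, iters=200, damping=0.5):
    """Standard BP for the Ising model. theta[(i, j)] is the cavity field
    (times beta) that vertex i feels with j removed; exact on trees,
    only approximate when the graph contains short loops."""
    w = np.tanh(beta * J)
    theta = {(i, j): 0.01 for e in G.edges for (i, j) in (e, e[::-1])}
    for _ in range(iters):
        for i, j in list(theta):
            new = beta * h + sum(np.arctanh(w * np.tanh(theta[k, i]))
                                 for k in G.neighbors(i) if k != j)
            theta[i, j] = damping * theta[i, j] + (1 - damping) * new
    # marginal magnetizations m_i = <s_i> from the converged messages
    return {i: np.tanh(beta * h + sum(np.arctanh(w * np.tanh(theta[k, i]))
                                      for k in G.neighbors(i)))
            for i in G.nodes}

# Ferromagnet on a random 3-regular graph, inside the ordered phase
# (tanh(beta) > 1/(d-1)); the small initial messages break the symmetry.
G = nx.random_regular_graph(3, 200, seed=0)
print(np.mean(list(ising_bp(G, beta=1.0).values())))
```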
-
For the tensor PCA (principal component analysis) problem, we propose a new hierarchy of increasingly powerful algorithms with increasing runtime. Our hierarchy is analogous to the sum-of-squares (SOS) hierarchy but is instead inspired by statistical physics and related algorithms such as belief propagation and AMP (approximate message passing). Our level-ℓ algorithm can be thought of as a linearized message-passing algorithm that keeps track of ℓ-wise dependencies among the hidden variables. Specifically, our algorithms are spectral methods based on the Kikuchi Hessian, which generalizes the well-studied Bethe Hessian to the higher-order Kikuchi free energies. It is known that AMP, the flagship algorithm of statistical physics, has substantially worse performance than SOS for tensor PCA. In this work we ‘redeem’ the statistical physics approach by showing that our hierarchy gives a polynomial-time algorithm matching the performance of SOS. Our hierarchy also yields a continuum of subexponential-time algorithms, and we prove that these achieve the same (conjecturally optimal) tradeoff between runtime and statistical power as SOS. Our proofs are much simpler than prior work, and also apply to the related problem of refuting random k-XOR formulas. The results we present here apply to tensor PCA for tensors of all orders, and to k-XOR when k is even. Our methods suggest a new avenue for systematically obtaining optimal algorithms for Bayesian inference problems, and our results constitute a step toward unifying the statistical physics and sum-of-squares approaches to algorithm design.
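To convey the flavor of such spectral methods, here is a toy construction at the lowest interesting level for an order-4 spiked tensor (a symmetric-difference matrix in the spirit of the hierarchy, not the paper's actual Kikuchi Hessian; all names and the signal strength are our assumptions): index a matrix by pairs {i, j} and place T_{ijkl} at entry ({i, j}, {k, l}) when the pairs are disjoint. The planted term λ v⊗⁴ then contributes an approximately rank-one component whose top eigenvector correlates with the pair products v_i v_j.

```python
import numpy as np
from itertools import combinations

def pair_matrix(T):
    """Toy level-2 spectral matrix for an order-4 tensor: rows and columns
    are indexed by pairs {i,j}, with entry T[i,j,k,l] when {i,j} and {k,l}
    are disjoint, and 0 otherwise."""
    n = T.shape[0]
    pairs = list(combinations(range(n), 2))
    idx = {p: a for a, p in enumerate(pairs)}
    M = np.zeros((len(pairs), len(pairs)))
    for i, j in pairs:
        for k, l in pairs:
            if len({i, j, k, l}) == 4:
                M[idx[i, j], idx[k, l]] = T[i, j, k, l]
    return M, pairs

# Spiked order-4 tensor: lam * v^(x4) plus Gaussian noise. lam is chosen
# large so this small example sits clearly above the spectral threshold.
rng = np.random.default_rng(0)
n, lam = 14, 80.0
v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
T = lam * np.einsum('i,j,k,l->ijkl', v, v, v, v) + rng.normal(size=(n,) * 4)

M, pairs = pair_matrix(T)
M = (M + M.T) / 2                         # symmetrize the noise contribution
top = np.linalg.eigh(M)[1][:, -1]         # top eigenvector
w = np.array([v[i] * v[j] for i, j in pairs])
print(abs(top @ w) / np.linalg.norm(w))   # correlation with the planted pair signal
```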