

This content will become publicly available on May 8, 2026

Title: Who’s Afraid of the Base-Rate Fallacy?
Abstract: This paper evaluates the back-and-forth between Mayo, Howson, and Achinstein over whether classical statistics commits the base-rate fallacy. I show that Mayo is correct to claim that Howson’s arguments rely on a misunderstanding of classical theory. I then argue that Achinstein’s refined version of the argument turns on largely undefended epistemic assumptions about “what we care about” when evaluating hypotheses. I end by suggesting that Mayo’s positive arguments are no more decisive than her opponents’: even if correct, they are unlikely to compel anyone not already sympathetic to the classical picture.
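The base-rate fallacy at issue can be made concrete with a quick Bayesian calculation (the numbers below are hypothetical, chosen only for illustration): evidence that would be "significant" by a conventional error-rate standard can still leave a low-base-rate hypothesis very improbable.

```python
# Hypothetical numbers illustrating the base-rate fallacy:
# a sensitive, fairly specific test applied to a rare hypothesis.
prior = 0.001        # base rate P(H)
sensitivity = 0.99   # P(E | H)
false_pos = 0.05     # P(E | not H), i.e. a 5% "significance level"

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
posterior = (prior * sensitivity) / (
    prior * sensitivity + (1 - prior) * false_pos
)
print(round(posterior, 3))  # about 0.019: E leaves H very improbable
```

The dispute the paper examines is, in part, over whether this posterior probability is what classical error-statistical methods were ever supposed to track.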
Award ID(s): 2042366
PAR ID: 10618622
Author(s) / Creator(s):
Publisher / Repository: Cambridge University Press
Date Published:
Journal Name: Philosophy of Science
Volume: 92
Issue: 2
ISSN: 0031-8248
Page Range / eLocation ID: 453 to 469
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Amir Hashemi (Ed.)
    We present Hermite polynomial interpolation algorithms that, for a sparse univariate polynomial f with coefficients from a field, compute the polynomial from fewer points than the classical algorithms. If the interpolating polynomial f has t terms, our algorithms require argument/value triples (w^i, f(w^i), f'(w^i)) for i=0,...,t + ceiling( (t+1)/2 ) - 1, where w is randomly sampled and the probability of a correct output is determined from a degree bound for f. With f' we denote the derivative of f. Our algorithms generalize to multivariate polynomials, higher derivatives, and sparsity with respect to Chebyshev polynomial bases. We have algorithms that can correct errors in the points by oversampling at a limited number of good values. If an upper bound B >= t for the number of terms is given, our algorithms use a randomly selected w and, with high probability, ceiling( t/2 ) + B triples, but then never return an incorrect output. The algorithms are based on Prony's sparse interpolation algorithm. While Prony's algorithm and its variants use fewer values, namely 2t+1 and t+B values f(w^i), respectively, they need more arguments w^i. The situation mirrors that in algebraic error-correcting codes, where the Reed-Solomon code requires fewer values than the multiplicity code, which is based on Hermite interpolation, but the Reed-Solomon code requires more distinct arguments. Our sparse Hermite interpolation algorithms can interpolate polynomials over finite fields and over the complex numbers, and from floating point data. Our Prony-based approach does not encounter the Birkhoff phenomenon of Hermite interpolation, when a gap in the derivative values causes multiple interpolants. We can interpolate from t+1 values of f and 2t-1 values of f'.
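Classical Prony interpolation, which the Hermite variants above build on, can be sketched in a few lines over the reals: 2t values f(w^0), ..., f(w^(2t-1)) determine a t-sparse polynomial via a Hankel solve, a root-finding step, and a transposed-Vandermonde solve. This is a numpy-based sketch of the classical method only (function names and the use of floating point are my choices; it does not handle derivative triples, error correction, or finite fields):

```python
import numpy as np

def prony_sparse(values, w, t):
    """Recover a t-sparse polynomial sum_j c_j x^(e_j) from the
    2t values f(w^0), ..., f(w^(2t-1)) via Prony's method."""
    a = np.asarray(values, dtype=float)
    # Solve the t x t Hankel system for the generator polynomial
    # Lambda(z) = prod_j (z - w^(e_j)) = z^t + lam[t-1] z^(t-1) + ... + lam[0]
    H = np.array([[a[i + k] for k in range(t)] for i in range(t)])
    lam = np.linalg.solve(H, -a[t:2 * t])
    # Roots of Lambda are w^(e_j); read off each exponent by a logarithm
    roots = np.roots(np.concatenate(([1.0], lam[::-1])))
    exps = np.rint(np.log(roots.real) / np.log(w)).astype(int)
    # Transposed-Vandermonde solve for the coefficients c_j
    V = np.array([[float(w) ** (e * i) for e in exps] for i in range(t)])
    coeffs = np.linalg.solve(V, a[:t])
    return sorted(zip(exps.tolist(), coeffs.tolist()))

# Example: f(x) = 3x^5 + 2x^2 (t = 2 terms) from f(1), f(2), f(4), f(8) with w = 2
terms = prony_sparse([5.0, 104.0, 3104.0, 98432.0], 2, 2)
print(terms)  # [(2, 2.0), (5, 3.0)]
```

Note that 2t = 4 evaluation points suffice here, independent of the degree 5; that degree-independence is what makes sparse interpolation attractive, and the paper's contribution is to lower the point count further by consuming derivative values as well.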
  2. We generalize Hermite interpolation with error correction, which is the methodology for multiplicity algebraic error correction codes, to Hermite interpolation of a rational function over a field K from function and function derivative values. We present an interpolation algorithm that can locate and correct <= E errors at distinct arguments y in K where at least one of the values or values of a derivative is incorrect. The upper bound E for the number of such y is input. Our algorithm sufficiently oversamples the rational function to guarantee a unique interpolant. We sample (f/g)^(j)(y[i]) for 0 <= j <= L[i], 1 <= i <= n, y[i] distinct, where (f/g)^(j) is the j-th derivative of the rational function f/g, f, g in K[x], GCD(f,g)=1, g <> 0, and where N = (L[1]+1)+...+(L[n]+1) >= C + D + 1 + 2(L[1]+1) + ... + 2(L[E]+1), where C is an upper bound for deg(f) and D an upper bound for deg(g), which are input to our algorithm. The arguments y[i] can be poles, which is truly or falsely indicated by a function value infinity with the corresponding L[i]=0. Our results remain valid for fields K of characteristic >= 1 + max L[i]. Our algorithm has the same asymptotic arithmetic complexity as that for classical Hermite interpolation, namely soft-O(N). For polynomials, that is, g=1, and a uniform derivative profile L[1] = ... = L[n], our algorithm specializes to the univariate multiplicity code decoder that is based on the 1986 Welch-Berlekamp algorithm.
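Setting aside derivatives and error correction, the core linearization behind rational interpolation — find f and g with f(y_i) = v_i g(y_i), deg f <= C, deg g <= D — reduces to a nullspace computation on a homogeneous linear system. This is a minimal numpy sketch of that linearized step only (my own naming; the paper's algorithm additionally handles derivative values, poles, and up to E erroneous points):

```python
import numpy as np

def rational_interp(ys, vs, C, D):
    """Find coefficient vectors (f, g), lowest degree first, satisfying
    f(y_i) - v_i * g(y_i) = 0 with deg f <= C, deg g <= D, by taking a
    nullspace vector of the linearized homogeneous system."""
    ys = np.asarray(ys, dtype=float)
    vs = np.asarray(vs, dtype=float)
    A = np.zeros((len(ys), C + D + 2))
    for j in range(C + 1):
        A[:, j] = ys ** j                 # columns multiplying f's coefficients
    for j in range(D + 1):
        A[:, C + 1 + j] = -vs * ys ** j   # columns multiplying -v * g's coefficients
    _, _, Vt = np.linalg.svd(A)
    sol = Vt[-1]                          # right singular vector of smallest singular value
    return sol[:C + 1], sol[C + 1:]

# Example: samples of (x + 1)/(x - 2) at five arguments, C = D = 1
f, g = rational_interp([0, 1, 3, 4, 5], [-0.5, -2.0, 4.0, 2.5, 2.0], 1, 1)
value_at_10 = (f[0] + 10 * f[1]) / (g[0] + 10 * g[1])
print(value_at_10)  # 11/8 = 1.375
```

The solution is only determined up to a common scalar on (f, g), which is why the sketch evaluates the ratio rather than the raw coefficient vectors; with N >= C + D + 1 clean samples the nullspace is one-dimensional and the interpolant unique.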
  3. Abstract If contextual values can play necessary and beneficial roles in scientific research, to what extent should science communicators be transparent about such values? This question is particularly pressing in contexts where there appears to be significant resistance among some non-experts to accept certain scientific claims or adopt science-based policies or recommendations. This paper examines whether value transparency can help promote non-experts’ warranted epistemic trust in experts. I argue that there is a prima facie case in favor of transparency because it can promote four conditions that are thought to be required for epistemic trustworthiness. I then consider three main arguments that transparency about values is likely to be ineffective in promoting such trust (and may undermine it). While these arguments establish that value transparency is not sufficient for promoting epistemic trust, they fail to show that rejecting value transparency as a norm for science communicators is more likely to promote warranted epistemic trust than a qualified norm of value transparency (along with other strategies). Finally, I endorse a tempered understanding of value transparency and consider what this might require in practice.
  4. Abstract Isotonic regression is a standard problem in shape-constrained estimation where the goal is to estimate an unknown non-decreasing regression function $f$ from independent pairs $(x_i, y_i)$ where ${\mathbb{E}}[y_i]=f(x_i), i=1, \ldots n$. While this problem is well understood both statistically and computationally, much less is known about its uncoupled counterpart, where one is given only the unordered sets $\{x_1, \ldots , x_n\}$ and $\{y_1, \ldots , y_n\}$. In this work, we leverage tools from optimal transport theory to derive minimax rates under weak moment conditions on $y_i$ and to give an efficient algorithm achieving optimal rates. Both upper and lower bounds employ moment-matching arguments that are also pertinent to learning mixtures of distributions and deconvolution.
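For contrast with the uncoupled setting, the standard (coupled) problem has a simple exact solver: the pool-adjacent-violators algorithm (PAVA). A minimal unweighted sketch (my own illustration, not the paper's method):

```python
def pava(y):
    """Least-squares non-decreasing fit to the sequence y via
    pool-adjacent-violators: merge adjacent blocks whose means
    violate monotonicity, replacing them by their pooled mean."""
    blocks = []  # each block is [mean, count]
    for v in y:
        blocks.append([float(v), 1])
        # restore monotonicity of the block means by pooling
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    fit = []
    for m, c in blocks:
        fit.extend([m] * c)
    return fit

print(pava([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

PAVA depends crucially on knowing which $y_i$ belongs to which $x_i$; in the uncoupled problem that pairing is unavailable, which is why the paper turns to optimal-transport and moment-matching tools instead.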
  5. Abstract There is a complex inclination structure present in the trans-Neptunian object (TNO) orbital distribution in the main classical-belt region (between orbital semimajor axes of 39 and 48 au). The long-term gravitational effects of the giant planets make TNO orbits precess, but nonresonant objects maintain a nearly constant “free” inclination (Ifree) with respect to a local forced precession pole. Because of the likely cosmogonic importance of the distribution of this quantity, we tabulate free inclinations for all main-belt TNOs, each individually computed using barycentric orbital elements with respect to each object’s local forcing pole. We show that the simplest method, based on the Laplace–Lagrange secular theory, is unable to give correct forcing poles for objects near the ν18 secular resonance, resulting in poorly conserved Ifree values in much of the main belt. We thus instead implemented an averaged Hamiltonian to obtain the expected nodal precession for each TNO, yielding significantly more accurate free inclinations for nonresonant objects. For the vast majority (96%) of classical-belt TNOs, these Ifree values are conserved to < 1° over 4 Gyr numerical simulations, demonstrating the advantage of using this well-conserved quantity in studies of the TNO population and its primordial inclination profile; our computed distributions only reinforce the idea of a very coplanar surviving “cold” primordial population, overlain by a large I-width implanted “hot” population.
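The idea of a free inclination measured from a local forced pole, rather than from a fixed reference plane, is just spherical geometry once the forced pole is known. The sketch below shows only that final angular-separation step (my own illustration; it does not reproduce the paper's Laplace–Lagrange or averaged-Hamiltonian computation of the forced pole itself):

```python
import math

def orbit_pole(inc, node):
    """Unit vector normal to an orbit plane, from inclination and
    longitude of ascending node (both in radians)."""
    return (math.sin(inc) * math.sin(node),
            -math.sin(inc) * math.cos(node),
            math.cos(inc))

def free_inclination(inc, node, inc_forced, node_forced):
    """Angle between the object's osculating orbit pole and the
    local forced pole; radians in, radians out."""
    p = orbit_pole(inc, node)
    q = orbit_pole(inc_forced, node_forced)
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for roundoff

# If the forced pole coincides with the reference pole, the free
# inclination reduces to the ordinary inclination.
print(free_inclination(0.1, 1.3, 0.0, 0.0))  # 0.1
```

The paper's point is that the accuracy of Ifree as a conserved quantity hinges entirely on using the correct local forced pole as the second argument, which the simple secular theory gets wrong near the ν18 resonance.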