

Title: Meta-Analysis Framework for Exact Inferences with Application to the Analysis of Rare Events
Summary

The usefulness of meta-analysis has been recognized in the evaluation of drug safety, as a single trial usually yields few adverse events and offers limited information. For rare events, conventional meta-analysis methods may yield an invalid inference, as they often rely on large sample theories and require empirical corrections for zero events. These problems motivate research in developing exact methods, including Tian et al.'s method of combining confidence intervals (2009, Biostatistics, 10, 275–281) and Liu et al.'s method of combining p-value functions (2014, JASA, 109, 1450–1465). This article shows that these two exact methods can be unified under the framework of combining confidence distributions (CDs). Furthermore, we show that the CD method generalizes Tian et al.'s method in several aspects. Given that the CD framework also subsumes the Mantel–Haenszel and Peto methods, we conclude that the CD method offers a general framework for meta-analysis of rare events. We illustrate the CD framework using two real data sets collected for the safety analysis of diabetes drugs.
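To make the combining recipe concrete, here is a minimal sketch, assuming a single-arm binomial setting, exact mid-p significance functions as the study-level confidence distributions, and a normal-score combiner with square-root-of-sample-size weights. The event counts are hypothetical, and this is not the authors' two-arm procedure (which would use exact CDs for, e.g., the odds ratio); it only illustrates how study-level CDs are combined and then inverted for estimates and intervals.

```python
# Illustrative sketch of combining confidence distributions (CDs) across studies.
# Hypothetical single-arm rare-event data; not the paper's exact two-arm method.
import numpy as np
from scipy.stats import binom, norm
from scipy.optimize import brentq

# Hypothetical data: events x_i out of n_i patients in K studies (assumed values).
events = np.array([0, 1, 0, 2])
sizes = np.array([120, 250, 90, 300])
weights = np.sqrt(sizes)               # one common weighting choice

def study_cd(p, x, n):
    """Exact binomial mid-p significance function, used here as a CD for p."""
    return binom.sf(x, n, p) + 0.5 * binom.pmf(x, n, p)

def combined_cd(p):
    """H_c(p) = Phi( sum_i w_i Phi^{-1}(H_i(p)) / sqrt(sum_i w_i^2) )."""
    h = np.clip([study_cd(p, x, n) for x, n in zip(events, sizes)], 1e-12, 1 - 1e-12)
    return norm.cdf(np.dot(weights, norm.ppf(h)) / np.linalg.norm(weights))

def cd_quantile(level, lo=1e-10, hi=0.5):
    """Invert the combined CD to obtain point estimates and interval endpoints."""
    return brentq(lambda p: combined_cd(p) - level, lo, hi)

print("median-unbiased estimate:", cd_quantile(0.5))
print("95% CD interval:", (cd_quantile(0.025), cd_quantile(0.975)))
```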

 
NSF-PAR ID: 10485058
Publisher / Repository: Oxford University Press
Journal Name: Biometrics
Volume: 72
Issue: 4
ISSN: 0006-341X
Pages: 1378–1386
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract The research on gradual typing has led to many variations on the Gradually Typed Lambda Calculus (GTLC) of Siek & Taha (2006) and its underlying cast calculus. For example, Wadler and Findler (2009) added blame tracking, Siek et al. (2009) investigated alternate cast evaluation strategies, and Herman et al. (2010) replaced casts with coercions for space efficiency. The meta-theory for the GTLC has also expanded beyond type safety to include blame safety (Tobin-Hochstadt & Felleisen, 2006), space consumption (Herman et al., 2010), and the gradual guarantees (Siek et al., 2015). These results have been proven for some variations of the GTLC but not others. Furthermore, researchers continue to develop variations on the GTLC, but establishing all of the meta-theory for new variations is time-consuming. This article identifies abstractions that capture similarities between many cast calculi in the form of two parameterized cast calculi, one for the purposes of language specification and the other to guide space-efficient implementations. The article then develops reusable meta-theory for these two calculi, proving type safety, blame safety, the gradual guarantees, and space consumption. Finally, the article instantiates this meta-theory for eight cast calculi, including five from the literature and three new calculi. All of these definitions and theorems, including the two parameterized calculi, the reusable meta-theory, and the eight instantiations, are mechanized in Agda, making extensive use of module parameters and dependent records to define the abstractions.
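As a rough illustration of the casts and blame labels discussed in this abstract, here is a minimal sketch under strong simplifying assumptions: only base types and a dynamic type, no function casts, coercions, or space efficiency. The paper's calculi are mechanized in Agda and are far richer; nothing below reflects their definitions.

```python
# Minimal sketch of run-time casts with blame labels, in the spirit of a cast calculus.
class Blame(Exception):
    """Raised when a cast fails; carries the blame label of the offending cast."""
    def __init__(self, label):
        super().__init__(f"blame {label}")

DYN, INT, BOOL = "dyn", "int", "bool"   # the dynamic (unknown) type and two base types

def tag_of(value):
    """Run-time type tag of a value (bool must be checked before int in Python)."""
    return BOOL if isinstance(value, bool) else INT if isinstance(value, int) else DYN

def cast(value, source, target, label):
    """Cast `value` from `source` to `target`, blaming `label` on failure."""
    if source == target or target == DYN:
        return value                      # identity and injection casts always succeed
    if source == DYN and tag_of(value) == target:
        return value                      # projection succeeds when the tag matches
    raise Blame(label)                    # otherwise the cast labelled `label` is blamed

x = cast(42, INT, DYN, "l1")              # inject an int into the dynamic type
print(cast(x, DYN, INT, "l2"))            # projection succeeds: prints 42
try:
    cast(cast(True, BOOL, DYN, "l3"), DYN, INT, "l4")
except Blame as err:
    print(err)                            # projection fails: "blame l4"
```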
  2. Abstract

    Matrix population models are frequently built and used by ecologists to analyse demography and elucidate the processes driving population growth or decline. Life Table Response Experiments (LTREs) are comparative analyses that decompose the realized difference or variance in population growth rate (λ) into contributions from the differences or variances in the vital rates (i.e. the matrix elements). Since their introduction, LTREs have been based on approximations and have not included biologically relevant interaction terms.

    We used the functional analysis of variance framework to derive an exact LTRE method, which calculates the exact response of λ to the difference or variance in a given vital rate, for all interactions among vital rates, including higher‐order interactions neglected by the classical methods. We used the publicly available COMADRE and COMPADRE databases to perform a meta‐analysis comparing the results of exact and classical LTRE methods. We analysed 186 and 1487 LTREs for animal and plant matrix population models, respectively.

    We found that the classical methods often had small errors, but that very high errors were possible. Overall error was related to the difference or variance in the matrices being analysed, consistent with the Taylor series basis of the classical method. Neglected interaction terms accounted for most of the errors in fixed‐design LTREs, highlighting the importance of two‐way interaction terms. For random‐design LTREs, errors in the contribution terms present in both classical and exact methods were comparable to errors due to neglected interaction terms. In most examples we analysed, evaluating exact contributions up to three‐way interaction terms was sufficient for interpreting 90% or more of the difference or variance in λ.

    Relative error, previously used to evaluate the accuracy of classical LTREs, is not a reliable metric of how closely the classical and exact methods agree. Error compensation between estimated contribution terms and neglected contribution terms can lead to low relative error despite faulty biological interpretation. Trade‐offs or negative covariances among matrix elements can lead to high relative error despite accurate biological interpretation. Exact LTRE provides reliable and accurate biological interpretation, and the R package exactLTRE makes the exact method accessible to ecologists.
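The following minimal sketch uses two hypothetical 2x2 projection matrices to show the classical first-order fixed-design LTRE contributions and the interaction remainder they leave unexplained. The exactLTRE package implements the full functional-ANOVA decomposition described above; nothing here reproduces the paper's analyses or data.

```python
# Classical fixed-design LTRE for two hypothetical matrix population models,
# compared against the exact difference in lambda.
import numpy as np

def lam(A):
    """Population growth rate: dominant eigenvalue of the projection matrix."""
    return np.max(np.real(np.linalg.eigvals(A)))

def sensitivity(A):
    """Sensitivity matrix d(lambda)/d(a_ij) = v_i * w_j / <v, w>."""
    vals, W = np.linalg.eig(A)
    w = np.real(W[:, np.argmax(np.real(vals))])       # right eigenvector (stable structure)
    vals_t, V = np.linalg.eig(A.T)
    v = np.real(V[:, np.argmax(np.real(vals_t))])     # left eigenvector (reproductive values)
    return np.outer(v, w) / (v @ w)

# Hypothetical "reference" and "treatment" projection matrices (assumed values).
A1 = np.array([[0.0, 1.5],
               [0.5, 0.8]])
A2 = np.array([[0.0, 2.0],
               [0.3, 0.9]])

exact_diff = lam(A2) - lam(A1)
S_mid = sensitivity((A1 + A2) / 2)                    # sensitivities at the midpoint matrix
contributions = (A2 - A1) * S_mid                     # classical first-order contributions
print("exact difference in lambda:", exact_diff)
print("classical LTRE contributions:\n", contributions)
print("sum of contributions:", contributions.sum())
print("neglected (interaction) remainder:", exact_diff - contributions.sum())
```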

     
  3. Abstract

    Despite the wide application of meta‐analysis in ecology, some of the traditional methods used for meta‐analysis may not perform well given the type of data characteristic of ecological meta‐analyses.

    We reviewed published meta‐analyses on the ecological impacts of global climate change, evaluating the number of replicates used in the primary studies (n_i) and the number of studies or records (k) that were aggregated to calculate a mean effect size. We used the results of the review in a simulation experiment to assess the performance of conventional frequentist and Bayesian meta‐analysis methods for estimating a mean effect size and its uncertainty interval.

    Our literature review showed that n_i and k were highly variable, with right‐skewed distributions, and were generally small (median n_i = 5, median k = 44). Our simulations show that the choice of method for calculating uncertainty intervals was critical for obtaining appropriate coverage (close to the nominal value of 0.95). When k was low (<40), 95% coverage was achieved by a confidence interval (CI) based on the t distribution that uses an adjusted standard error (the Hartung–Knapp–Sidik–Jonkman, HKSJ), or by a Bayesian credible interval, whereas bootstrap or z-distribution CIs had lower coverage. Despite the importance of the method used to calculate the uncertainty interval, 39% of the meta‐analyses reviewed did not report the method used, and of the 61% that did, 94% used a potentially problematic method, which may be a consequence of software defaults.

    In general, for a simple random‐effects meta‐analysis, the performance of the best frequentist and Bayesian methods was similar for the same combinations of factors (k and mean replication), though the Bayesian approach had higher than nominal (>95%) coverage for the mean effect when k was very low (k < 15). Our literature review suggests that many meta‐analyses that used z-distribution or bootstrapping CIs may have overestimated the statistical significance of their results when the number of studies was low; more appropriate methods need to be adopted in ecological meta‐analyses.
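To illustrate why the interval construction matters, here is a minimal random-effects sketch with hypothetical effect sizes and variances, contrasting the conventional z (Wald) interval with the HKSJ-adjusted t interval; the DerSimonian–Laird estimator of the between-study variance is one common choice assumed here, not necessarily the one used in the paper's simulations.

```python
# z-based vs. HKSJ-adjusted t-based confidence intervals for a random-effects mean.
import numpy as np
from scipy.stats import norm, t

y = np.array([0.40, 0.10, 0.65, -0.05, 0.30])   # assumed study effect sizes
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])    # assumed within-study variances
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2
w_fixed = 1 / v
mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - mu_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled mean
w = 1 / (v + tau2)
mu = np.sum(w * y) / np.sum(w)

# Conventional z (Wald) interval
se_z = np.sqrt(1 / np.sum(w))
ci_z = mu + np.array([-1, 1]) * norm.ppf(0.975) * se_z

# Hartung-Knapp-Sidik-Jonkman interval: adjusted standard error and t_{k-1} quantile
se_hksj = np.sqrt(np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w)))
ci_hksj = mu + np.array([-1, 1]) * t.ppf(0.975, k - 1) * se_hksj

print("pooled mean:", mu)
print("z interval:", ci_z)
print("HKSJ interval:", ci_hksj)   # typically wider when k is small
```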

     
  4. This paper introduces improved methods for statistically assessing birth seasonality and intra‐annual variation in δ18O from faunal tooth enamel. The first method estimates input parameters for use with a previously developed parametric approach by C. Tornero et al. The second method uses a non‐parametric clustering procedure to group individuals with similar time‐series data and estimate birth seasonality. This method was successful in analysing data from a modern sample with known season of birth, as well as two heterogeneous archaeological data sets. Modelling indicates that the non‐parametric approach estimates birth seasonality more successfully than the parametric method when less of the tooth row is preserved. The new approach offers a high level of statistical rigour and flexibility in dealing with the time‐series data produced through intra‐individual sampling in isotopic analysis.
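As a sketch of the kind of parametric modelling referred to above, the following fits a simple cosine model to hypothetical intra-tooth δ18O measurements and reports the position of the isotopic peak along the crown, which is the quantity compared across individuals to assess birth seasonality. This is an illustration of the general approach only, not the paper's estimation procedure or data.

```python
# Fit a cosine model to hypothetical intra-tooth delta-18-O measurements.
import numpy as np
from scipy.optimize import curve_fit

def cosine_model(x, amplitude, period, peak_pos, mean_level):
    """delta-18-O as a cosine of position x (mm from the cervix) along the tooth."""
    return amplitude * np.cos(2 * np.pi * (x - peak_pos) / period) + mean_level

# Hypothetical sampling positions (mm) and delta-18-O values (per mil)
x = np.array([2, 6, 10, 14, 18, 22, 26, 30, 34, 38], dtype=float)
d18o = np.array([-6.1, -4.0, -1.8, -0.9, -1.5, -3.4, -5.8, -7.2, -6.9, -5.3])

# Initial guesses: half-range amplitude, ~35 mm period, peak near the observed maximum
p0 = [(d18o.max() - d18o.min()) / 2, 35.0, x[np.argmax(d18o)], d18o.mean()]
params, _ = curve_fit(cosine_model, x, d18o, p0=p0)
amplitude, period, peak_pos, mean_level = params

print("estimated peak position (mm):", peak_pos)
print("as a fraction of the annual cycle:", (peak_pos % period) / period)
```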

     
  5. Summary

    This paper introduces an assumption-lean method that constructs valid and efficient lower predictive bounds for survival times with censored data. We build on recent work by Candès et al. (2023), whose approach first subsets the data to discard any data points with early censoring times and then uses a reweighting technique, namely, weighted conformal inference (Tibshirani et al., 2019), to correct for the distribution shift introduced by this subsetting procedure. For our new method, instead of constraining to a fixed threshold for the censoring time when subsetting the data, we allow for a covariate-dependent and data-adaptive subsetting step, which is better able to capture the heterogeneity of the censoring mechanism. As a result, our method can lead to lower predictive bounds that are less conservative and give more accurate information. We show that in the Type-I right-censoring setting, if either the censoring mechanism or the conditional quantile of the survival time is well estimated, our proposed procedure achieves nearly exact marginal coverage, where in the latter case we additionally have approximate conditional coverage. We evaluate the validity and efficiency of our proposed algorithm in numerical experiments, illustrating its advantage when compared with other competing methods. Finally, our method is applied to a real dataset to generate lower predictive bounds for users’ active times on a mobile app.
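The reweighting ingredient can be sketched generically as a weighted split-conformal lower bound. The example below assumes the inverse-probability weights and a fitted lower-quantile model are already available (both hypothetical stand-ins here), and it omits the survival-specific subsetting and the test-point weight handled by the full method; it is a schematic illustration, not the paper's algorithm.

```python
# Schematic weighted split-conformal lower predictive bound on hypothetical data.
import numpy as np

def weighted_quantile(scores, weights, level):
    """Smallest s such that the normalized weight of {scores <= s} is >= level."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return s[np.searchsorted(cum, level)]

rng = np.random.default_rng(0)

# Hypothetical calibration data: covariate x, outcome y, and inverse-probability
# weights correcting for a covariate-dependent selection step.
n = 500
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
weights = 1.0 / (0.3 + 0.3 * x)            # assumed known inclusion probabilities

def predict_lower(x_val):
    """Stand-in for a fitted lower-quantile regression model."""
    return 1.0 + 2.0 * x_val - 0.8

# Conformity scores: how far the predicted lower bound sits above the outcome.
scores = predict_lower(x) - y

# Weighted calibration at miscoverage level alpha = 0.1, then a new prediction.
alpha = 0.1
correction = weighted_quantile(scores, weights, 1 - alpha)
x_new = 1.5
print("lower predictive bound at x_new:", predict_lower(x_new) - correction)
```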

     