We describe Bayes factor functions based on the sampling distributions of z, t, χ2, and F statistics, using a class of inverse-moment prior distributions to define alternative hypotheses. These non-local alternative prior distributions are centered on standardized effects, which serve as indices for the Bayes factor function. We compare the conclusions drawn from the resulting Bayes factor functions to those drawn from Bayes factors defined using local alternative prior specifications and examine their frequentist operating characteristics. Finally, an application of Bayes factor functions to replicated experimental designs in psychology is provided.
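The abstract's inverse-moment priors have no simple closed-form Bayes factor, but the shape of a Bayes factor function can be sketched with a related non-local prior that does: a normal moment prior with scale tau on the noncentrality of a z-statistic. The function name `bff_z` and the grid of scales below are illustrative choices, not notation from the paper.

```python
import numpy as np

def bff_z(z, tau):
    """BF10 for a z-statistic under a normal moment (non-local) prior
    with scale tau on the noncentrality parameter.  Closed form:
    BF10 = (1+t2)^(-3/2) * (1 + s*z^2) * exp(s*z^2/2),  s = t2/(1+t2)."""
    t2 = np.asarray(tau, dtype=float) ** 2
    s = t2 / (1.0 + t2)
    return (1.0 + t2) ** -1.5 * (1.0 + s * z ** 2) * np.exp(0.5 * s * z ** 2)

# Evaluate the Bayes factor function over a grid of prior scales
taus = np.linspace(0.1, 5.0, 50)
bff = bff_z(z=2.5, tau=taus)
print(taus[np.argmax(bff)], bff.max())
```

Note the non-local behavior: at z = 0 the Bayes factor falls below 1, so the function can accumulate evidence *for* the null, which local priors cannot do as sharply.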
                    This content will become publicly available on March 13, 2026
                            
                            Bayes Factor Functions for Partial Correlation Coefficients
                        
                    
    
            Partial correlation coefficients are widely applied in the social sciences to evaluate the relationship between two variables after accounting for the influence of others. In this article, we present Bayes Factor Functions (BFFs) for assessing the presence of partial correlation. BFFs represent Bayes factors derived from test statistics and are expressed as functions of a standardized effect size. While traditional frequentist methods based on p-values have been criticized for their inability to provide cumulative evidence in favor of the true hypothesis, Bayesian approaches are often challenged due to their computational demands and sensitivity to prior distributions. BFFs overcome these limitations and offer summaries of hypothesis tests as alternative hypotheses are varied over a range of prior distributions on standardized effects. They also enable the integration of evidence across multiple studies. 
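The cross-study integration mentioned above can be sketched by multiplying study-level Bayes factors at each value of the common standardized effect. Everything below is illustrative: the study summaries are made-up z-statistics and sample sizes, the normal moment prior stands in for the priors used for partial correlations (it has a closed form), and the indexing tau_i = sqrt(n_i) * omega is an assumed mapping from standardized effect to prior scale.

```python
import numpy as np

def bf_z(z, tau):
    # BF10 for a z-statistic under a normal moment prior with scale tau
    t2 = np.asarray(tau, dtype=float) ** 2
    s = t2 / (1.0 + t2)
    return (1.0 + t2) ** -1.5 * (1.0 + s * z ** 2) * np.exp(0.5 * s * z ** 2)

# Hypothetical summary statistics (z_i, n_i) from three studies of one effect
studies = [(2.1, 40), (1.7, 25), (2.8, 60)]
omega = np.linspace(0.01, 1.0, 200)      # grid of standardized effect sizes

# At each omega, total evidence is the product of study-level Bayes factors,
# with the prior scale tied to omega via tau_i = sqrt(n_i) * omega (assumption)
combined = np.ones_like(omega)
for z, n in studies:
    combined *= bf_z(z, np.sqrt(n) * omega)

print(omega[np.argmax(combined)], combined.max())
```

The product form is what lets BFFs report evidence across studies as a single function of the standardized effect rather than a single number.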
        
    
- Award ID(s): 2311005
- PAR ID: 10603951
- Publisher / Repository: arXiv; https://arxiv.org/html/2503.10787v1
- Date Published:
- Format(s): Medium: X
- Institution: Texas A&M University
- Sponsoring Org: National Science Foundation
More Like this
- 
The law expects jurors to weigh the facts and evidence of a case to inform the decision with which they are charged. However, evidence in legal cases is becoming increasingly complicated, and studies have raised questions about laypeople's abilities to understand and use complex evidence to inform decisions. Compared to other studies that have looked at general evidence comprehension and expert credibility (e.g. Schweitzer & Saks, 2012), this experimental study investigated whether jurors can appropriately weigh strong vs. weak DNA evidence without special assistance. That is, without help to understand when DNA evidence is relatively weak, are jurors sensitive to the strength of weak DNA evidence as compared to strong DNA evidence? Responses from jury-eligible participants (N=346) were collected from Amazon Mechanical Turk (MTurk). Participants were presented with a summary of a robbery case before completing a short questionnaire on verdict preference and evidence comprehension. (Data are from the pilot of experiment 2 for the grant project.) We hypothesized participants would not be able to distinguish high- from low-quality DNA evidence. We analyzed the data using Bayes factors, which allow for directly testing the null hypothesis (Zyphur & Oswald, 2013). A Bayes factor of 4-8 (depending on the priors used) was found supporting the null for participants' ratings of low vs. high quality scientific evidence. A Bayes factor of 4 means that the data are four times as probable under the null as under the alternative hypothesis. Participants tended to rate the DNA evidence as "high quality" no matter the condition they were in. The Bayes factor of 4-8 in this case gives good reason to believe that jury members are unable to discern what constitutes low quality DNA evidence without assistance.
If jurors are unable to distinguish between different qualities of evidence, or if they are unaware that they may have to, they could give greater weight to low quality scientific evidence than is warranted. The current study supports the hypothesis that jurors have trouble distinguishing between complicated high vs. low quality evidence without help. Further attempts will be made to discover ways of presenting DNA evidence that could better calibrate jurors in their decisions. These future directions involve larger sample sizes in which jury-eligible participants will complete the study in person. Instead of reading about the evidence, they will watch a filmed mock jury trial. This plan also involves jury deliberation, which will provide additional knowledge about how jurors come to conclusions as a group about different qualities of evidence. Acknowledging the potential issues in jury trials and working to solve these problems is a vital step in improving our justice system.
- 
Testing for Granger causality relies on estimating the capacity of dynamics in one time series to forecast dynamics in another. The canonical test for such temporal predictive causality is based on fitting multivariate time series models and is cast in the classical null hypothesis testing framework. In this framework, we are limited to rejecting the null hypothesis or failing to reject the null -- we can never validly accept the null hypothesis of no Granger causality. This is poorly suited for many common purposes, including evidence integration, feature selection, and other cases where it is useful to express evidence against, rather than for, the existence of an association. Here we derive and implement the Bayes factor for Granger causality in a multilevel modeling framework. This Bayes factor summarizes information in the data in terms of a continuously scaled evidence ratio between the presence of Granger causality and its absence. We also introduce this procedure for the multilevel generalization of Granger causality testing. This facilitates inference when information is scarce or noisy or if we are interested primarily in population-level trends. We illustrate our approach with an application on exploring causal relationships in affect using a daily life study.
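The abstract derives an exact multilevel Bayes factor; a rough, commonly used stand-in that conveys the idea is the BIC approximation BF10 ≈ exp((BIC0 − BIC1)/2), comparing an autoregressive model of y with and without a lag of x. The simulated data and single-lag models below are a minimal sketch, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y Granger-caused by x: y_t depends on x_{t-1}
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

def bic(resid, k, nobs):
    # Gaussian BIC up to a constant: n*log(RSS/n) + k*log(n)
    rss = np.sum(resid ** 2)
    return nobs * np.log(rss / nobs) + k * np.log(nobs)

Y = y[1:]
X0 = np.column_stack([np.ones(n - 1), y[:-1]])   # restricted: AR(1) in y only
X1 = np.column_stack([X0, x[:-1]])                # full: adds lagged x

b0, *_ = np.linalg.lstsq(X0, Y, rcond=None)
b1, *_ = np.linalg.lstsq(X1, Y, rcond=None)

bic0 = bic(Y - X0 @ b0, 2, n - 1)
bic1 = bic(Y - X1 @ b1, 3, n - 1)

bf10 = np.exp((bic0 - bic1) / 2)  # evidence ratio for Granger causality
print(bf10)
```

Unlike a p-value, this ratio can also fall below 1, quantifying evidence for the *absence* of Granger causality.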
- 
Abstract We develop alternative families of Bayes factors for use in hypothesis tests as alternatives to the popular default Bayes factors. The alternative Bayes factors are derived for the statistical analyses most commonly used in psychological research – one-sample and two-sample t tests, regression, and ANOVA analyses. They possess the same desirable theoretical and practical properties as the default Bayes factors and satisfy additional theoretical desiderata while mitigating two features of the default priors that we consider implausible. They can be conveniently computed via an R package that we provide. Furthermore, hypothesis tests based on Bayes factors and those based on significance tests are juxtaposed. This discussion leads to the insight that default Bayes factors as well as the alternative Bayes factors are equivalent to test-statistic-based Bayes factors as proposed by Johnson (2005, Journal of the Royal Statistical Society Series B: Statistical Methodology, 67, 689–701). We highlight test-statistic-based Bayes factors as a general approach to Bayes-factor computation that is applicable to many hypothesis-testing problems for which an effect-size measure has been proposed and for which test power can be computed.
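The test-statistic-based construction can be illustrated for a t statistic, where the evidence ratio at a fixed alternative is the noncentral t density over the central t density at the observed value. This is a minimal sketch: a point alternative with noncentrality `lam` stands in for a prior on the noncentrality, and `bf10_t` and the numeric values are ours.

```python
from scipy.stats import nct, t as t_dist

def bf10_t(t_obs, df, lam):
    """Ratio of noncentral to central t density at the observed statistic:
    evidence for a point alternative with noncentrality lam vs. the null."""
    return nct.pdf(t_obs, df, lam) / t_dist.pdf(t_obs, df)

# Observed t = 2.4 on 28 df; alternative noncentrality 2 (hypothetical values)
print(bf10_t(2.4, 28, 2.0))
```

Because only the test statistic and degrees of freedom enter, the same recipe applies wherever an effect-size measure and the statistic's null and alternative sampling distributions are available.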
- 
Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set Γ of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity is defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments. Accompanying convergence guarantees are also provided. We also introduce a neural network class that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.
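The minimax-over-priors idea can be shown in a toy parametric setting far simpler than the paper's: affine estimators of a Bernoulli mean, a small hypothetical set Γ of Beta priors, and a grid search standing in for the adversarial meta-learning algorithm. The Bayes risks are in closed form from the first two prior moments of p.

```python
import numpy as np

n = 20  # sample size for the Bernoulli experiment

def bayes_risk(a, b, alpha, beta):
    # Bayes risk (expected MSE) of the affine estimator a*Xbar + b for a
    # Bernoulli mean, averaged over p ~ Beta(alpha, beta):
    # E[ a^2 p(1-p)/n + ((a-1)p + b)^2 ] via E[p] and E[p^2]
    m1 = alpha / (alpha + beta)
    m2 = alpha * (alpha + 1) / ((alpha + beta) * (alpha + beta + 1))
    return a**2 * (m1 - m2) / n + (a - 1)**2 * m2 + 2 * (a - 1) * b * m1 + b**2

priors = [(1, 1), (2, 5), (5, 2)]      # hypothetical set Gamma of Beta priors
a_grid = np.linspace(0.5, 1.2, 71)
b_grid = np.linspace(-0.1, 0.3, 81)

# Minimize the worst-case Bayes risk over the estimator grid
best, argbest = np.inf, None
for a in a_grid:
    for b in b_grid:
        worst = max(bayes_risk(a, b, al, be) for al, be in priors)
        if worst < best:
            best, argbest = worst, (a, b)
print(argbest, best)
```

The selected estimator shrinks the sample mean, and its worst-case Bayes risk is no larger than that of the unshrunk estimator (a = 1, b = 0), which lies on the grid.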
 An official website of the United States government