Ly and Wagenmakers (Computational Brain & Behavior:1–8, in press) critiqued the Full Bayesian Significance Test (FBST) and the associated statistic FBST ev: similar to the frequentist p-value, FBST ev cannot quantify evidence for the null hypothesis, allows sampling to a foregone conclusion, and suffers from the Jeffreys-Lindley paradox. In response, Kelter (Computational Brain & Behavior:1–11, 2022) suggested that the critique is based on a measure-theoretic premise that is often inappropriate in practice, namely the assignment of non-zero prior mass to a point-null hypothesis. Here we argue that the key aspects of our initial critique remain intact when the point-null hypothesis is replaced either by a peri-null hypothesis or by an interval-null hypothesis; hence, the debate over the validity of a point-null hypothesis is a red herring. We suggest that it is tempting yet fallacious to test a hypothesis by estimating a parameter that is part of a different model. By rejecting any null hypothesis before it is tested, the FBST begs the question. Although the FBST may be useful as a measure of surprise under a single model, we believe that the concept of evidence is inherently relative; consequently, evidence for competing hypotheses ought to be quantified by examining the relative adequacy of their predictions. This philosophy is fundamentally at odds with the FBST.
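
For readers who want the two contrasted quantities side by side, here is a brief sketch using the standard definitions (our notation; the abstract itself does not spell these out). The Pereira–Stern e-value underlying the FBST is computed entirely within a single encompassing model: it equals one minus the posterior mass of the tangential set \(T(x)\) of parameter values whose posterior density exceeds the supremum attained on the null set \(\Theta_0\),
\[
\operatorname{ev}(H_0 \mid x) \;=\; 1 - \int_{T(x)} p(\theta \mid x)\, \mathrm{d}\theta,
\qquad
T(x) \;=\; \Bigl\{\theta \in \Theta : p(\theta \mid x) > \sup_{\theta_0 \in \Theta_0} p(\theta_0 \mid x)\Bigr\}.
\]
By contrast, the Bayes factor quantifies the relative predictive adequacy of two rival hypotheses by comparing their marginal likelihoods,
\[
\operatorname{BF}_{01}(x) \;=\; \frac{p(x \mid H_0)}{p(x \mid H_1)}
\;=\; \frac{\int_{\Theta_0} f(x \mid \theta)\, \pi_0(\theta)\, \mathrm{d}\theta}{\int_{\Theta_1} f(x \mid \theta)\, \pi_1(\theta)\, \mathrm{d}\theta}.
\]
The first expression involves only one posterior distribution and therefore acts as a measure of surprise under a single model; the second explicitly pits the predictions of two models against each other, which is the relative notion of evidence that the abstract argues for.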

Ly, A., & Wagenmakers, E.-J. (2022). Measure-theoretic musings cannot salvage the Full Bayesian Significance Test as a measure of evidence: Rejoinder to Kelter. Computational Brain & Behavior, 5, 583–589. https://doi.org/10.1007/s42113-022-00154-1