In 2009, a paper estimated that 85% of global health research investment is wasted each year. It recommended reducing this waste by basing the design and reporting of new studies on meta-analyses of earlier ones, which involves both decision making (which issues to research and how) and interpretation (how new trial results relate to previous research). However, conventional meta-analysis reporting (p-values and confidence intervals) is neither suitable for such decisions nor straightforwardly interpretable. As a decision procedure, it treats a sequence of trials as an independent sample, while in reality both (1) whether additional trials are performed, and how many, and (2) the timing of the meta-analysis (the ‘stopping rule’) often depend on previous trial results. Ignoring these dependencies introduces (1) ‘Accumulation Bias’ and (2) optional stopping problems, and that such dependencies exist has been demonstrated empirically, e.g. in the ‘Proteus effect’ and ‘citation bias’. To solve both (1) and (2), we propose ‘Safe Tests’ together with a reporting framework. Which tests are ‘Safe’ (e.g. the Bayesian t-test) and which are not (a category that includes both Bayesian and frequentist tests) is discussed intuitively in the meta-analysis context; mathematical detail is postponed to a forthcoming paper. The reporting framework is likewise focused on the meta-analysis context, but builds on an individual-study setup put forward by Bayarri et al. (2016) as ‘Rejection Odds and Rejection Ratios’. Apart from supporting decision making, our proposal also improves the interpretation of meta-analyses: the reported values can be related to gambling earnings and to both frequentist and Bayesian analyses, and thus, apart from reducing waste, the proposal also contributes to the recently revived p-value debate.
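To give a rough feel for why the optional stopping issue matters, the sketch below is a minimal simulation, not the Safe Tests of the forthcoming paper: it uses a standard likelihood-ratio martingale under illustrative assumptions (normal data with known variance, a fixed alternative mean `delta`, level `alpha`; all parameter values are chosen for illustration only). It monitors accumulating data generated under the null hypothesis. Stopping at the first ‘significant’ conventional p-value inflates the type-I error far above alpha, whereas rejecting only when the likelihood ratio exceeds 1/alpha keeps the error below alpha at any data-dependent stopping time (Ville's inequality), which is the kind of guarantee the gambling-earnings interpretation alludes to.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05                 # illustrative significance level (assumption)
n_sims, max_n = 5000, 500    # number of simulated research lines and maximum sample size (assumptions)
delta = 0.5                  # alternative mean used by the likelihood ratio (illustrative choice)

naive_rejections = 0
lr_rejections = 0
for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, max_n)   # data generated under the null: N(0, 1)
    csum = np.cumsum(x)
    n = np.arange(1, max_n + 1)

    # Naive monitoring: two-sided z-test after every observation, reject at the first p < alpha.
    z = csum / np.sqrt(n)
    p = 2 * stats.norm.sf(np.abs(z))
    naive_rejections += np.any(p < alpha)

    # Likelihood ratio of N(delta, 1) vs N(0, 1): a nonnegative martingale under the null,
    # so P(it ever exceeds 1/alpha) <= alpha no matter when we look (Ville's inequality).
    log_lr = delta * csum - n * delta**2 / 2
    lr_rejections += np.any(log_lr >= np.log(1 / alpha))

print(f"type-I error with optional stopping, naive p-values:   {naive_rejections / n_sims:.3f}")
print(f"type-I error with optional stopping, likelihood ratio: {lr_rejections / n_sims:.3f}")
```

In a typical run the first rate lands well above 0.05 while the second stays below it. The Safe Tests and reporting framework described in the abstract are aimed at extending this kind of guarantee to the meta-analysis setting, where both the number of trials and the timing of the analysis may depend on earlier results.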

ter Schure, J., & Grünwald, P. (2018, April 20). Past and Future Dependencies in Meta-Analysis: Safe Statistics for Reducing Health Research Waste. Presentation at the 2nd NRIN Research Conference.