2020-02-21
Reducing Waste with Meta-Analysis/Replications: Why We Must and Can Do Better than All-or-Nothing Statistics
Publication
P-values and confidence intervals depend strictly on the experimental design. For meta-analysis this means that, to guarantee type-I error control and coverage, the number and size of studies must be fixed before any study is performed. All ‘alpha’ is spent as soon as the planned studies are analyzed, so p-values and confidence intervals for pooled outcomes allow only one all-or-nothing analysis.
Hence it is not allowed to add more studies, nor to make intermediate decisions; p-values and confidence intervals cannot build on existing evidence. By ignoring this, conventional meta-analysis introduces Accumulation Bias into estimates and inflates type-I errors. This directly conflicts with the goal of reducing research waste by letting meta-analyses of past results inform new research. So we need better statistical methods if we want funders and researchers to implement these goals.
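To see why adding studies breaks the classical guarantees, consider an illustrative simulation (mine, not part of the poster): a series of studies on a true null effect is continued whenever the pooled z-test is not yet significant and stopped as soon as it is, a caricature of evidence-driven replication. The study size, cap, and seed below are arbitrary choices for illustration.

```r
set.seed(2020)

n.per.study <- 20    # observations per study (illustrative choice)
max.studies <- 10    # cap on the length of the series
alpha       <- 0.05
n.sim       <- 10000

false.positives <- 0
for (s in 1:n.sim) {
  z.sum <- 0; n.obs <- 0
  for (k in 1:max.studies) {
    x <- rnorm(n.per.study)          # data generated under the null
    z.sum <- z.sum + sum(x)
    n.obs <- n.obs + n.per.study
    z.pooled <- z.sum / sqrt(n.obs)  # pooled (fixed-effect) z-statistic
    if (abs(z.pooled) > qnorm(1 - alpha / 2)) {
      false.positives <- false.positives + 1  # 'significant': series stops here
      break
    }
  }
}
false.positives / n.sim  # far above the nominal 0.05
```

With ten data-dependent looks at nominal alpha = 0.05, the realized type-I error is roughly four times the nominal level: exactly the inflation that conventional meta-analysis ignores.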
A very promising approach that preserves error control and coverage while enabling intermediate judgments is being developed at CWI under the names Safe Testing and Safe Estimation. My poster shows, through re-analysis of an animal experiment, how Safe Testing can reduce research waste as well as improve the 3Rs, by including fewer animals in a series of replications.
Safe Tests are now available in the R package SafeStats.
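As a minimal sketch of the idea underlying Safe Tests (not the package's API), the running product of likelihood ratios against the null is a nonnegative test martingale: by Ville's inequality it exceeds 1/alpha with probability at most alpha under the null, no matter when, how often, or based on what we look. The alternative mean delta = 1 below is an illustrative assumption, not a prescribed value.

```r
set.seed(2020)

alpha <- 0.05
delta <- 1      # effect size of the alternative (illustrative assumption)
n.sim <- 10000
max.n <- 200    # observations, monitored after every single one

rejections <- 0
for (s in 1:n.sim) {
  x <- rnorm(max.n)                      # data under the null N(0, 1)
  # running log-likelihood ratio of N(delta, 1) versus N(0, 1)
  log.e <- cumsum(dnorm(x, mean = delta, log = TRUE) -
                  dnorm(x, mean = 0,     log = TRUE))
  if (any(log.e >= log(1 / alpha))) rejections <- rejections + 1
}
rejections / n.sim  # stays below alpha = 0.05 despite continuous monitoring
```

Because the bound holds uniformly over time, new studies can simply be multiplied into the running product as they arrive, which is what allows Safe Tests to build on existing evidence without spending all alpha in one analysis.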
| Additional Metadata | |
|---|---|
| Conference | REWARD \| EQUATOR Conference |
| Organisation | Machine Learning |
| Citation | ter Schure, J. (2020). Reducing Waste with Meta-Analysis/Replications: Why We Must and Can Do Better than All-or-Nothing Statistics. |