Research Reveals Underestimated Uncertainty In Published Research
New research involving the University of Sydney Business School has found researchers underestimate the degree of uncertainty in their findings.
In empirical science, researchers analyse samples to test hypotheses, which creates within-researcher variation due to sampling error. Re-sampling yields different values of the estimator, and the standard deviation of this sampling distribution is referred to as the standard error.
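As a rough illustration of this within-researcher variation, the short Python sketch below repeatedly draws samples from the same hypothetical population and shows that the spread of the resulting estimates matches the textbook standard-error formula. The population, sample size and number of resamples are arbitrary assumptions for illustration, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: the "true" quantity a researcher tries to estimate.
true_mean, true_sd, n = 0.0, 1.0, 200

# Re-sampling yields a different estimate of the mean each time.
estimates = [rng.normal(true_mean, true_sd, n).mean() for _ in range(10_000)]

# The spread of estimates across re-samples is the standard error.
empirical_se = np.std(estimates)
theoretical_se = true_sd / np.sqrt(n)

print(f"empirical standard error:  {empirical_se:.4f}")
print(f"theoretical sd/sqrt(n):    {theoretical_se:.4f}")
```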
Researchers are less aware, however, of an additional layer of uncertainty that arises because there is no single standard analysis path.
Researchers vary in what they deem the most reasonable path, so estimates may differ across researchers who pick different paths. This is referred to as non-standard error.
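To make the distinction concrete, the hedged sketch below runs a few hypothetical but defensible analysis paths (mean versus median, trimming, winsorising) over one shared simulated dataset; these are illustrative choices, not the actual paths taken by the 164 teams. The dispersion of estimates across paths is a non-standard error in the study's sense.

```python
import numpy as np

rng = np.random.default_rng(1)

# One shared dataset, as in the study: every "team" analyses the same data.
data = rng.normal(loc=0.5, scale=1.0, size=500)
data[:5] += 8.0  # a few extreme observations that teams may treat differently

# Hypothetical analysis paths: each is a defensible way to estimate the effect.
paths = {
    "raw mean": lambda x: x.mean(),
    "median": lambda x: np.median(x),
    "trim top/bottom 1%": lambda x: x[(x > np.quantile(x, 0.01))
                                      & (x < np.quantile(x, 0.99))].mean(),
    "winsorise at 3 SD": lambda x: np.clip(x, x.mean() - 3 * x.std(),
                                           x.mean() + 3 * x.std()).mean(),
}

estimates = {name: fn(data) for name, fn in paths.items()}
for name, est in estimates.items():
    print(f"{name:>20s}: {est:.3f}")

# The spread of estimates across analysis paths is the non-standard error.
print(f"\nnon-standard error (SD across paths): {np.std(list(estimates.values())):.3f}")
```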
The study, led by Professor Albert Menkveld of Vrije Universiteit Amsterdam and nine other academics, involved 164 teams testing the same hypotheses on the same data to measure the impact of non-standard errors.
A separate team of highly experienced researchers was engaged to peer review the work of each of the 164 teams.
The research, to be published in the Journal of Finance, found that such non-standard errors were substantial and similar in magnitude to standard errors.
A relatively straightforward hypothesis about market share produced a non-standard error of 1.2 percent. For a more complex hypothesis about market efficiency, the non-standard error rose to 6.7 percent.
Non-standard errors were smaller for more reproducible or higher-rated research, and were cut in half by adding a peer-review stage.
Study participant Professor Joakim Westerholm from the University of Sydney Business School said the research highlights how important it is for researchers to account for the dispersion in estimates that their choice of analysis path can introduce when testing hypotheses.
“If researchers are not aligned on key decisions, such as selecting a statistical model or treating outliers, their estimates are likely to differ – adding uncertainty to the estimate reported by a single team,” Professor Westerholm said.
“This type of uncertainty is often underestimated by researchers, which is why we need to be aware of our own bias and the steps we can take to minimise its impact,” he said.
“While we cannot expect every question to be investigated by 160 seasoned research teams, we can design approaches that take non-standard errors into account – for example, each member of a team could perform independent tests that are then compared and evaluated.”
Professor Westerholm said the next stage in the research may be to replicate the study using artificial intelligence and machine learning to see whether this has any impact on the rate of non-standard errors.