The more research I do (and I have not done much), the less I trust other research, and the less I trust my own. When you read a paper in finance you will invariably find significant results that support the author’s conclusions. When I run regressions, my results are weak, sometimes contradicting my hypothesis and sometimes confirming it.
Significance bias
Weak results don’t get published (and negative results, i.e. results that do not find evidence for a hypothesis, don’t get published either). Academics have an incentive to try things until they find something significant. Of course, they then report only the significant results. The problem is firstly that this is a form of data mining, and secondly that there is information in the insignificant results that is now lost. Future researchers may well duplicate much of the work, not knowing it has already been tried.
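To see why trying things until something is significant is dangerous, here is a minimal sketch (not from the original post, and purely illustrative) that regresses pure noise on pure noise twenty times. Even though no real relationship exists, roughly one regression in twenty will clear the 5% significance threshold by chance alone.

```python
# Illustrative simulation: run many regressions of noise on noise and count
# how many come out "significant" at the 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_tests = 100, 20
significant = 0

for _ in range(n_tests):
    x = rng.normal(size=n_obs)        # "explanatory" variable: pure noise
    y = rng.normal(size=n_obs)        # "dependent" variable: unrelated noise
    result = stats.linregress(x, y)   # simple OLS regression of y on x
    if result.pvalue < 0.05:
        significant += 1

print(f"{significant} of {n_tests} regressions were 'significant' at the 5% level")
# On average about one in twenty will be, despite there being nothing to find.
```

If only the "significant" run is reported, the published record looks like evidence; the nineteen uninformative runs, which are the real context, are lost.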
The story bias
We like stories that fit together. In research (particularly research in finance) this can be problematic. At some point during your research (or beforehand) you will develop a theory, a kind of story that puts all your results together like pieces of a puzzle. This will take shape long before you have all your results. It’s a good thing to have a hypothesis before you start running tests on your data – it helps to avoid data mining. The danger, however, is that as you look at your results, you discount those that do not fit your story. This may not even be conscious; it’s just the way your brain works. And so you end up reporting only the results that fit your story.
As a
researcher you are likely to have more results at your disposal than you can
reasonably report in an article or thesis. You must condense, summarise, omit. More
than that, you must give the
appearance of a coherent structure, an evidence-based explanation of the
underlying forces you have uncovered. You can’t just say, “well, it’s a bit of
a mess.” Your story has to make sense. But sometimes the world is a mess and
stories don’t work.
Self-examination
I wrote this post not because I found these tendencies in other researchers, but because I found their beginnings in myself. I have careful supervision that will prevent abuses, and in theory peer review should uncover these things as well. In practice, I don’t trust the peer review system either. So this is a public reminder to myself: be wary of cherry-picking results to suit your prejudices.