
2013/07/09

Self-critique

I recently finished my master's thesis and I would now like to do something that most people never do: I am going to critique my own work. I believe that being self-critical is essential not only in research, but in life in general. In research it is necessary to further the truth. It is not enough that others are critical of you (though that too is necessary). You must be critical of yourself – only then will you be willing to remedy your flaws, change your convictions and pursue truth and goodness rather than your own prejudiced agenda.

I write this post as much for myself as for any reader who may come across it. It is another public reminder of a lesson I fear I may forget in the future, and of which I may need to be reminded. Even if you are not interested in my thesis, some of the points of criticism here may be useful for your own writing. I found in writing this post that self-criticism is hard. It’s really hard to come up with anything but weak flaws in your own work. I reckon it will take time to cultivate a truly self-critical nature.

My thesis was about momentum. I looked at time-series and cross-sectional momentum and their relationship with volatility and cross-sectional dispersion, specifically considering volatility weighting as a means of improving momentum strategies. If this sounds like Greek, do not fear, for I will be explaining all of this in later posts. Today I just want to list my critiques, starting with the more serious ones.
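Since later posts will explain these terms properly, here is only a rough sketch of the basic idea, using made-up data and not the thesis's actual methodology: a time-series momentum signal (go long after positive trailing returns, short after negative ones) whose position size is scaled by an inverse-volatility weight. All the numbers and window lengths here are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical daily returns for one asset (a stand-in for real data)
returns = pd.Series(rng.normal(0.0003, 0.01, 2500))

# Time-series momentum signal: sign of the trailing ~12-month return,
# lagged one day so the position uses only past information
signal = np.sign(returns.rolling(252).sum()).shift(1)

# Volatility weighting: scale the position by target vol / realized vol,
# capped so that quiet periods don't produce extreme leverage
realized_vol = returns.rolling(63).std() * np.sqrt(252)
target_vol = 0.10
weight = (target_vol / realized_vol).shift(1).clip(upper=2.0)

# Plain vs volatility-weighted strategy returns
plain = (signal * returns).dropna()
weighted = (signal * weight * returns).dropna()
```

The intuition behind the weighting is that momentum strategies tend to perform worst in turbulent markets, so shrinking the position when recent volatility is high can stabilise returns.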

Subjective conclusions drawn from mixed results
I often found when I ran some numbers that the results were not clear-cut for any one conclusion. Then I had to be satisfied with making a qualified conclusion if I thought there was enough support for it in the data. “Enough support” is subjective, and others may feel differently. It might be wiser not to draw any conclusion at all, but that may also be far too conservative.

A lack of focus
Good academic research tends to focus on one thing and examine it thoroughly. My thesis, I think, tried to cover rather too much, and as a result ended up huge, with no one aspect treated quite as it deserved.

Strict assumptions (that I violate)
In order to prove things easily you need to make assumptions. Often you end up assuming independence or normality where it is clearly not the case in the data. In my case I needed to make such assumptions to prove things about volatility weighting and in at least one case I am not even certain there is a non-trivial (that is to say, interesting) process that satisfies my assumptions. I have little choice but to violate my assumptions (there are for instance no volatility estimators that satisfy the assumptions I had to make).

Overly detailed results that are hard to interpret
I have lots and lots of tables in my thesis. They are big and make your eyes sore. Ideally one should find a way of presenting just the right numbers, without hiding ambiguity or evidence that doesn’t support your conclusions. It is hard to read text referring to specific numbers in very large tables and keep track of what is happening. A graph is even better, as it gives an immediate impression (though you may still want to report the numbers so that people can check them). I had relatively few graphs, as I could think of no good way to convert my tables into something visual. This is a weakness. You may think that academics should be able to read such dense material, that they should take the time and effort. It is, however, a simple fact that academics are human and that they do not. Even if they do, they are less likely to get the right picture if the information is not presented in an accessible manner.

Use of advanced techniques without necessarily having the appropriate understanding
I used what are called “robust regressions” in my thesis to cope with the fact that financial data contains so many extreme values. I had never used robust regressions before; I only briefly looked up what they did, then used them. I did not take the time to get well acquainted with their theory (as this would have been quite a task, I think) and I simply used a standard weighting function with a standard parameter. Most likely this is still better than simply using OLS regressions (which I think are absolutely a no-go in financial research, except as a baseline comparison), but it is still possible that the version of robust regression I used was not the most appropriate (deciding what is appropriate is of course more an art than a science) and it would have been preferable to have had more training in using them.

Linearity
I used linear models for the theoretical and empirical investigations. One thing that is clear from finance, though, is that nothing is linear and so results from linear models can be misleading. We have, however, I think, only poor substitutes and thus linearity is still common in academia. This is thus only a weak criticism on my part, but I would like to see a move away from linear models, if only we could find an accessible and preferably tractable alternative.

Little thought for practicalities
I did not consider that I was basing my results on markets that closed at different times (essentially I assumed they closed at the same time); I did not include transaction costs, commissions, taxes, etc. This is not unreasonable. To include all these things meticulously would detract from the main purpose of the study. But they are important and their inclusion could potentially change the nature of the relationships found (though this is unlikely).

If you read my thesis and you think there are other criticisms, then please let me know. Perhaps I'll include them in a further post.

2013/03/11

Don't trust research, not even your own


The more research I do (and I have not done much), the less I trust other research, and the less I trust my own. When you read a paper in finance you will invariably find significant results that support the author’s conclusions. When I run regressions, my results are weak, sometimes contradicting my hypothesis, sometimes confirming it.

Significance bias

Weak results don’t get published (and neither do negative results, i.e. results that do not find evidence for a hypothesis). Academics have an incentive to try things until they find something significant. Of course, they then report only the significant results. The problem is firstly that this is a form of data mining, and secondly that there is information in the insignificant results that is now lost. Future researchers may well duplicate much of the work, not knowing it has already been tried.
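The mechanics of this are easy to demonstrate: run enough tests on pure noise and a steady trickle of "significant" findings appears. A small illustrative simulation (the numbers are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Run many "studies" on pure noise and count how many reach p < 0.05
n_studies, n_obs = 1000, 100
significant = 0
for _ in range(n_studies):
    x = rng.normal(size=n_obs)
    y = rng.normal(size=n_obs)  # no true relationship at all
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        significant += 1

# Roughly 5% of null studies look "significant" by construction;
# if only those get written up, the literature overstates the evidence.
print(significant / n_studies)
```

If each researcher quietly tries twenty specifications and publishes the one that works, the published result is precisely this kind of false positive, and nobody downstream can tell.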

The story bias

We like stories that fit together. In research (particularly research in finance) this can be problematic. At some point during your research (or beforehand) you will develop a theory, a kind of story that puts all your results together like pieces of a puzzle. This will develop long before you have all your results. It’s a good thing to have a hypothesis before you start running tests on your data – it helps to avoid data mining. However, the danger is that as you look at your results, you discount those that do not fit your story. This may not even be conscious. It’s just the way your brain works. The danger, then, is that you only report results that fit your story.

As a researcher you are likely to have more results at your disposal than you can reasonably report in an article or thesis. You must condense, summarise, omit. More than that, you must give the appearance of a coherent structure, an evidence-based explanation of the underlying forces you have uncovered. You can’t just say, “well, it’s a bit of a mess.” Your story has to make sense. But sometimes the world is a mess and stories don’t work.

Self-examination

I wrote this post not because I found these tendencies in other researchers, but because I found their beginnings in myself. I have careful supervision that will prevent abuses. In theory, peer review should uncover these things as well. In practice, I don’t trust the peer review system either. So this is a public reminder to myself: be wary of cherry-picking results to suit your prejudices.

2013/01/31

The anonymous referee


In academic articles, you sometimes come across a paragraph that makes no sense (even on a second or third reading). Very often you will find, attached to this paragraph, a footnote that says something like the following:

“We thank the anonymous referee for drawing our attention to this.”

To the anonymous referee, that is exactly what is being said. But to anyone else reading the article it translates as

“We had to put this in our article in order to get it published because the anonymous referee is an idiot and insisted on it.”

One can then proceed to read the paper, ignoring that paragraph. This seems wrong to me. On the one hand, I understand the trade-off: no publication versus one little, albeit nonsensical, change. I would like to think I have more honour, more backbone than that, but I probably do not.

I think the anonymous peer review system is flawed. And it appears I’m not the only one who thinks so.
Peer review is currently double blind (at least in some fields and journals, though not all). In theory the referee does not know whose paper they are reviewing, and the author does not know who the referee is. In practice, I think, anonymity is hardly guaranteed, especially in fields with a small number of specialists: your writing and the references you choose (especially to your own papers) can give you away.

It is one side of this double blindness that bothers me – that referees are anonymous. The other side may have flaws too, but at least it should prevent a paper being accepted merely because it is written by a bigshot academic.

Referee anonymity absolves journals of the responsibility of explaining their choice of referee (say, one obviously opposed to the line of research, or too obviously biased in its favour). It also means referees are not held accountable for their reviews. They may not take the process seriously and, out of sheer laziness rather than malice, block good research or let bad research pass.

I think that if referees are made known it will allow for greater dialogue. Referees can be challenged. Their reputations depend on being thorough. The feedback process may in fact lead to better research.

I think I, like one Dr Bertrand Meyer (see below), will always insist on signing my reviews (if I am ever in the position to review work for publication). My reputation is important to me. I want my reviews to reflect on my reputation (otherwise I will not take them seriously) and I want them to be thorough and thoughtful (otherwise they will reflect badly on my reputation).





2013/01/22

Humble academics


(The following is based on my initial and brief impressions of the quality of academic writing in finance. I may change my mind later.)

I’ve been reading through the introductions of a great many articles in finance these past two weeks. The more I read, the more I realise that in finance the truth is a very murky prospect. In physics it seems the truth is more stable (although physicists have a nasty habit of confusing their “theories” with reality; when their theories are shown to be wrong, they seem to forget that they ever regarded them as Gospel). But in finance, if you find two papers that agree, they probably share an author.

I am pretty sure that all these papers have one thing in common: they are all wrong. But every author is confident of his conclusions. References to why their results may be spurious are rare. Hardly ever do authors mention that their underlying assumptions are completely wrong – it seems standard to just rely on run-of-the-mill statistical methods, which I cannot believe take into account the wild randomness of the markets. Very few seem to care.

Academics in finance need to be a little more humble. I think every paper should contain a disclaimer:
“The results in this paper are only valid under the assumptions of the methods used. These assumptions are almost certainly violated. The conclusions in this paper are disputed. Please do not confuse what is presented here with the truth.”

Some tips for academics:
  • Write very clearly what the underlying assumptions are – don’t just use methods without being very clear about what it is they assume. 
  • If you’re using a method outside of an area in which it is (proven) valid, write it in CAPS LOCK, because otherwise you’re a fraud, a charlatan.
  • Show how the assumptions are violated (note I used “how”, not “if”) – not just speculation; I want to see statistical tests and diagrams. 
  • Please reference everyone who disagrees with you. They’re not right either, but at least we know where to look for alternatives. 
  • Stop being so sure of yourself.

Readers of anything in finance (of academic journals, of The Economist, etc.) should consider that anything can be challenged. There is no absolute truth. If there is, we cannot discover it, which amounts to the same thing. Live in a state of scepticism of everything you read. It isn’t fun – but the alternative, as Voltaire would say, is absurd.