There is a trade-off between the explanatory power of a model and its complexity: the more a model explains, the more complex it will be.
Would you disagree with that?
Before you answer, let me make a claim:
Mathematics is about revealing patterns that simplify.
The mathematician’s work is largely about producing simplicity: showing how complex problems can be reduced to simpler component pieces.
Think about all the beautiful proofs you’ve known. They are beautiful because they show how a simple and harmonious vision can achieve a target better than alternatives that seem like hacks in comparison. Wikipedia explains this well.
I would claim that:
model development is not a linear process.
Any model or theory that starts out as a beautiful response to a given question or context will get bloated over time as it is developed to deal with other problems.
Think of how lovely the Vasicek model is as a basic model of yield curves, and how overworked it has become as the central construction for fixed-income derivatives risk management in recent years.
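As a reminder of how lean the original model is: the Vasicek short rate follows dr = a(b - r)dt + sigma dW, which a few lines of Python can simulate. A minimal Euler-Maruyama sketch, with all parameter values purely illustrative:

```python
import numpy as np

def simulate_vasicek(r0, a, b, sigma, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of the Vasicek short rate:
    dr = a * (b - r) * dt + sigma * dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    rates = np.empty((n_paths, n_steps + 1))
    rates[:, 0] = r0
    for t in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        rates[:, t + 1] = rates[:, t] + a * (b - rates[:, t]) * dt + sigma * dW
    return rates

# Illustrative parameters: rates mean-revert toward b = 3% at speed a = 0.5
paths = simulate_vasicek(r0=0.01, a=0.5, b=0.03, sigma=0.01,
                         T=5.0, n_steps=500, n_paths=10_000)
print(paths[:, -1].mean())  # drifts toward the long-run mean b
```

The entire model fits in one update rule; that economy is exactly the harmony the original construction had.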
It’s as simple as that: model development proceeds until the level of complexity makes it increasingly obvious that the original idea has been pushed too far.
At this point we need a break from the linear development: we need a new paradigm.
The new approach will offer a better trade-off between explanatory power and complexity, and to do that it will probably need to be more harmonious. Without that element of harmony, or simplifying power, it will probably lose explanatory power.
In the investment world, the efficient frontier represents the best trade-off between expected return and risk: move your portfolio closer to the efficient frontier and you get the same returns for less risk.
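To make the analogy concrete, here is a minimal sketch of how a point on the mean-variance efficient frontier is found: the minimum-variance, fully invested portfolio hitting a target return solves a small linear (KKT) system. The three-asset universe and all numbers below are made up for illustration:

```python
import numpy as np

def min_variance_weights(mu, cov, target_return):
    """Weights of the minimum-variance portfolio achieving a target
    expected return (fully invested, short sales allowed).
    Solves the KKT system of: min w'Cw  s.t.  w'mu = target, w'1 = 1."""
    n = len(mu)
    ones = np.ones(n)
    # KKT matrix: [2C  mu  1; mu' 0 0; 1' 0 0]
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2 * cov
    kkt[:n, n] = mu
    kkt[:n, n + 1] = ones
    kkt[n, :n] = mu
    kkt[n + 1, :n] = ones
    rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
    return np.linalg.solve(kkt, rhs)[:n]

# Illustrative three-asset universe
mu = np.array([0.05, 0.08, 0.12])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(mu, cov, target_return=0.08)
print(w)  # weights sum to 1 and hit the 8% target with minimal variance
```

Sweeping the target return traces out the whole frontier; any portfolio off it is dominated, which is exactly the situation an over-complex model is in.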
For model developers and quants the same principle should apply:
work hard to get your models as close to the efficient frontier of simplicity/explanatory power as you can.
The future for derivatives risk management
Classic quant modellers need a paradigm shift.
Over the years, yield-curve models have become substantially more sophisticated in terms of modelling smile and volatility processes, but they have not tried to tackle trading risks like supply, liquidity, and squeezes (and plenty of others) that really determine the daily PnL of a trading desk.
Experience tells me that it is not at all easy to include these features into these kinds of models.
In other words, the simplicity/explanatory-power trade-off is going to be painful (if they succeed at all).
No, my intuition is that the techniques used by quants of the systematic-trading kind (based on statistical analysis of price/volume data rather than on stochastic models) will show the way forward.
These approaches start with the data rather than with a theory.
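For a flavour of what "starting with the data" means, here is a toy sketch that asks a statistical question of price/volume data directly, rather than positing a model first. The data is entirely synthetic and the effect is built in by construction; the point is only the shape of the workflow:

```python
import numpy as np

# Toy data-first analysis: instead of writing down a stochastic model,
# measure directly whether high-volume days come with larger price moves.
# The data below is synthetic and purely illustrative.
rng = np.random.default_rng(42)
n = 2_000
volume = rng.lognormal(mean=10.0, sigma=0.5, size=n)
# Construct daily returns whose magnitude is, by design, linked to volume
returns = rng.normal(size=n) * (0.01 + 0.02 * volume / volume.mean())

spike = volume > np.quantile(volume, 0.9)  # top-decile volume days
abs_ret = np.abs(returns)
print(abs_ret[spike].mean(), abs_ret[~spike].mean())
```

No yield-curve dynamics, no calibration: just a hypothesis, the data, and a comparison. Real systematic work layers far more care (robustness, out-of-sample testing) on top of this skeleton.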
Anyone agree with me? If you need convincing, read this book and then let’s talk!