Some of you may have already heard of the “Quants”, the supposed supermen of financial engineering: the majestic modellers of risk management and liquidity management who, through their black box algorithmic alchemy, alone hold the key to unlocking the wondrous stability and growth of the late 20th and early 21st centuries.
Well, it seems that their black box “black magic” might have turned out to be little more than complete fiction:
“Scientists struggle with models in many fields—including climate science, coastal erosion and nuclear safety—in which the phenomena they describe are very complex, or information is hard to come by, or, as is the case with financial models, both. But in no area of human activity is so much faith placed in such flimsy science as finance.”
So states an article just published in Scientific American:
“Calibrating a complex model for which parameters can’t be directly measured usually involves taking historical data, and, enlisting various computational techniques, adjusting the parameters so that the model would have “predicted” that historical data. At that point the model is considered calibrated, and should predict in theory what will happen going forward.”
It seems that a scientist (of the genuine variety) ran a test: he generated some dummy data from a known underlying statistical trend model, then, using only the output data, tried to re-create the original formula; much as a financial quant studies past trends in financial data and fits a trend formula to predict the future.
So what did he find? Well, he wound up not with one correct answer, but with numerous almost-correct answers:
“while these different versions of the model might all match the historical data, they would in general generate different predictions going forward”
In other words, the models “seemed” OK when tested against the data they were “trained” on, but then failed when tested against new data: their future forecasts.
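The idea is easy to reproduce. Below is a minimal sketch in Python (my own illustration, not the article’s actual experiment): generate “historical” data from a known trend, calibrate two different candidate models that both match the history closely, then watch their forecasts part company once they leave the data behind.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" underlying trend: a gentle quadratic, plus noise.
t_hist = np.arange(20)
true_trend = 0.5 * t_hist + 0.02 * t_hist**2
history = true_trend + rng.normal(scale=0.5, size=t_hist.size)

# Calibrate two models of different complexity to the same history.
fit_low = np.polyfit(t_hist, history, deg=2)   # same form as the true trend
fit_high = np.polyfit(t_hist, history, deg=6)  # over-flexible, fits the noise too

# Both models "predict" the historical data well (the flexible one
# necessarily at least as well, in the least-squares sense)...
sse_low = np.sum((np.polyval(fit_low, t_hist) - history) ** 2)
sse_high = np.sum((np.polyval(fit_high, t_hist) - history) ** 2)
print(f"in-sample squared error, deg 2: {sse_low:.3f}")
print(f"in-sample squared error, deg 6: {sse_high:.3f}")

# ...but once we step outside the calibration window, the two
# "calibrated" models generate different predictions going forward.
t_future = np.arange(20, 30)
forecast_low = np.polyval(fit_low, t_future)
forecast_high = np.polyval(fit_high, t_future)
print("forecast divergence:", np.abs(forecast_low - forecast_high))
```

Both calibrations look fine against the history; only the out-of-sample forecasts reveal that they are not the same model at all.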
So when the new data arrives and it doesn’t match the original forecast, what do the Quants do? Well, they don’t say their model was incorrect; they just say that it needs “re-calibrating”:
“When you have to keep recalibrating a model, something is wrong with it. If you had to readjust the constant in Newton’s law of gravity every time you got out of bed in the morning in order for it to agree with your scale, it wouldn’t be much of a law. But in finance they just keep on recalibrating and pretending that the models work.”
Yes, so much “faith” is placed in these financial geeks, and their flimsy “science” and flimsy “models”, by a naïve public and gullible politicians.
In fact referring to what financial quants call “models” is an insult not just to science but to real models like Jordan and Kate Moss too.
NB: Scientific American may be a little late to the party in critiquing the Quants, as this film from 2010 explores the dubious underpinnings of high finance:
However, the beauty of the Scientific American article is the way the investigator used a pure laboratory experiment to demonstrate the folly of assuming that formulae fitted to past data are sufficient to predict the future.