Quants behind the quash

Wired has an article about David X. Li, the man who came up with a new formula for pricing CDOs (big pools of bonds and loans). It was the deficiencies of, and overreliance on, this formula that brought the financial markets crumbling down, or so sayeth Wired. The article is a little irritating – do they really have to define what “correlated” means? – but it does tell an interesting story.
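For reference, the formula at the heart of the article is Li’s Gaussian copula, which ties two companies’ individual default-time distributions together with a single correlation parameter:

$$\Pr[T_A < 1,\; T_B < 1] \;=\; \Phi_2\!\left(\Phi^{-1}(F_A(1)),\ \Phi^{-1}(F_B(1)),\ \gamma\right)$$

Here $T_A$ and $T_B$ are the times until each company defaults, $F_A$ and $F_B$ are their marginal distributions, $\Phi_2$ is the bivariate normal distribution, and $\gamma$ is the all-important correlation parameter (the one number everything below hinges on).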

One problem with pricing these bonds is that there isn’t enough historical default data to assess their risk accurately. Li’s insight was that one could instead use the correlations between the credit default swaps (CDSes) on the bonds. A CDS is insurance against a bond defaulting, so its price is a market measure of how risky the bond is. If the price of a CDS goes up for one company (meaning the market thinks it’s more likely to default) and simultaneously goes up for another company (meaning it, too, is more likely to default), the two must be responding to similar risks. Since we don’t see defaults of, say, Intel very often, we can get a better guess of whether Intel is going to default by looking at these correlations.
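Just to make that concrete, here’s a toy sketch (entirely my own, with made-up numbers, not anything from the article) of estimating that correlation from two companies’ CDS spreads:

```python
# Toy sketch: estimate how correlated two companies' default risks are
# from their CDS spreads. The spread series here are synthetic; real
# inputs would be daily market quotes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily CDS spreads (in basis points) for two companies,
# driven in part by a shared "market stress" factor.
stress = rng.normal(0, 1, 250)
spreads_a = 120 + np.cumsum(0.8 * stress + rng.normal(0, 1, 250))
spreads_b = 95 + np.cumsum(0.6 * stress + rng.normal(0, 1, 250))

# Correlate day-over-day changes, not levels, so common trends
# don't inflate the estimate.
rho = np.corrcoef(np.diff(spreads_a), np.diff(spreads_b))[0, 1]
print(f"estimated default-risk correlation: {rho:.2f}")
```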

And that’s pretty cool! It’s an elegant solution to the problem. There are, of course, a lot of assumptions baked in. Just off the top of my head: you have to assume the CDS market prices defaults correctly (no systematic biases); you have to assume historical CDS prices are a good measure of risk (they aren’t really, because liquid CDS markets have only existed for about ten years – roughly since the housing bubble started); you have to make assumptions about correlation versus causation; and so on. And of course the quants knew all this! People in the article are repeatedly quoted saying the quants gave warnings, and anyone who uses a model has to know it’s only an approximation.
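To see why those assumptions matter, here’s another sketch (again mine, with made-up 5% default probabilities) showing how hard the copula’s answer leans on that one correlation input:

```python
# Sketch of Li's Gaussian copula: vary the correlation parameter gamma
# and watch the joint default probability swing. Marginal one-year
# default probabilities of 5% are assumed purely for illustration.
from scipy.stats import norm, multivariate_normal

p_a = p_b = 0.05                          # assumed marginal default probabilities
z_a, z_b = norm.ppf(p_a), norm.ppf(p_b)   # map probabilities to normal quantiles

for gamma in (0.0, 0.3, 0.6, 0.9):
    cov = [[1.0, gamma], [gamma, 1.0]]
    # Phi_2: the bivariate normal CDF evaluated at the two quantiles.
    joint = multivariate_normal(mean=[0, 0], cov=cov).cdf([z_a, z_b])
    print(f"gamma = {gamma:.1f}: P(both default) = {joint:.4f}")
```

At gamma = 0 the defaults are independent (joint probability 0.25%); crank gamma toward 0.9 and the answer is more than an order of magnitude higher. Get the correlation wrong and everything downstream is wrong too.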

The problem was that the managers didn’t understand statistics. Or the formula. And they ignored all advice to the contrary. And the rating agencies were doing a terrible job of rating. The article describes quants giving talk after talk saying, “but there are a lot of possible problems,” while the managers just saw a single, simple “correlation” number and assumed that was all they needed to know.

It all adds up to a rather interesting mess. The lesson here, which no one will pay attention to once the economy gets going again? Make sure the people making the decisions know all the assumptions going into a model, and understand the model itself, so that they know what the hell they’re really doing.

Oh, and a quick thought – something like this happened on a much smaller scale after the Black-Scholes equation came out, I think? (The 1987 portfolio-insurance crash, maybe, or LTCM.) So maybe the problem is this: anytime a new predictive model spreads widely enough through a community, problems with the model necessarily arise. After all, the model protects against whatever risk it predicts, and if everyone uses it, the only risks left are the ones the model doesn’t predict, and those will be dragged inexorably up to the surface. OK, I’m done.