Wired has an article about David X. Li, the man who came up with a new formula to price bonds. It was because of the deficiencies of and overreliance on this formula that the stock market came crumbling down, or so sayeth Wired. The Wired article is a little irritating – do they really have to define what “correlated” means? – but it does tell an interesting story.

One problem with pricing bonds is that there is not enough historical information to accurately assess their risk. Li had the insight that one could instead use the correlations between the credit default swaps (CDSs) on the bonds. A CDS is insurance against a default on the bond, so its price measures the riskiness of that bond. By looking at the correlations between different CDSs, one can more accurately measure the risk of different bonds. If the price of a CDS went up for one company (meaning it was more likely to default) and simultaneously went up for another company (meaning it, too, was more likely to default), they must be responding to similar risks. Since we don’t see defaults of, say, Intel that often, we can get a better guess of whether Intel is going to default by using these correlations.
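The co-movement idea above boils down to a correlation coefficient between two CDS spread series. Here is a minimal sketch in Python with entirely made-up spread numbers (this illustrates the concept only; it is not Li's actual formula):

```python
# Daily CDS spreads (in basis points) for two hypothetical companies.
# The numbers are invented for illustration.
spreads_a = [120, 125, 130, 128, 140, 155, 150, 160]
spreads_b = [200, 210, 215, 212, 230, 250, 245, 260]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rho = pearson(spreads_a, spreads_b)
print(f"implied spread correlation: {rho:.3f}")  # close to 1: spreads move together
```

A correlation near 1 here would be read as the two companies facing similar default risks. Li's actual contribution went further, using a Gaussian copula to turn such correlations into joint default probabilities, but the correlation input is the piece this story turns on.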

And that’s pretty cool! It’s a very elegant solution to the problem. There are, of course, a lot of assumptions going on here. Just thinking about it quickly: you have to assume that the CDS market prices defaults correctly (no systematic biases), you have to assume that historical CDS prices are a good measurement of risk (they aren’t, really, because CDSs have only been around for about ten years – since roughly when the housing bubble started), you have to make assumptions about correlation versus causation, etc. And of course all the quants knew this! People are repeatedly quoted as saying the quants warned about this, and anyone who uses a model *has* to know that it’s only an approximation.

The problem was that the managers didn’t understand statistics. And didn’t understand the formulas. And ignored all the advice otherwise. And the ratings agencies were doing a terrible job at rating. The article talks about how all the quants would always give talks and say, “but there are a lot of possible problems” and the managers would just see the simple “correlations” and assume that’s enough to know.

It all adds up to a rather interesting mess. The lesson here that no one will pay attention to after the economy gets going again? Ensure the people making the decisions know all the assumptions going into a model and *understand the models they are using* so that they can know what the hell they’re really doing.

Oh and a quick thought – something like this happened on a much lesser scale after the Black-Scholes equation came out, I think? So maybe the problem is, anytime a new predictive model is discovered and spreads sufficiently throughout a community, problems with the model necessarily arise. After all, the model protects from whatever risk it predicts, and if that is what *everyone* uses, the only risks left are those the model *doesn’t* predict which will be dragged inexorably up to the surface. OK I’m done.

While I do not understand the explicit formulas that Mr. Li came up with, they point to a fundamental failure in most economic models: they make assumptions that do not exactly match the real world.

An old friend of mine who is a brilliant economist and became chair at Stanford would always dismiss my concerns that his assumptions didn’t match what could happen in reality. The key, to him, was developing models that worked given the assumptions. If the assumptions were changed, then that was just a different computation. The fact that this was not very useful to someone like me, who was trying to see how it could be used in real life, was irrelevant to him. I think your point is that this disconnect between the model makers and the users migrated to Wall Street.

No model exactly replicates the real world. It is always an approximation. In my work, there are a lot of assumptions and a lot of cases where the model won’t work. But we still use that type of model because, for the most part, it gives a lot of predictive power. It is very generally right, and only sometimes wrong. There are a lot of simple models of how neurons work that people use all the time. They have tons of deficiencies, but people still use them because, in the cases in which they are being used, they are generally correct. I can’t speak for Rich’s work, but I assume that his models had predictive power? If they weren’t predictively valid, I’d agree they are kind of meaningless. But the point of a model is to explain why something happens, which I assume his did.

The problem is that when you rely on a model with a lot of assumptions, and the real world’s failure to adhere to those assumptions would cause catastrophic failure in everything you’re doing, you should probably be hedging against that. These managers clearly didn’t understand the assumptions and risks in the models; otherwise they would have made sure their quants were paying attention to how reality was diverging from their modeling.