Real info on HFT

There is a great comment from a high-frequency trader that was left on a post on Marginal Revolution. It’s a few comments down on that page. Here’s the background that he gives:

I work as a quant at one of the major high frequency trading firms; this paper is definitely one of the better academic works I’ve seen on the subject. I’ll add a little more. Generally the way that HFT works is by looking for a set of predictive signals in the market. Those signals are combined with liquidity and execution constraints to try to find the most profitable set of parameters after transaction costs are taken into account.

90% of these signals fall into two major categories: 1) Looking at the price movements of related securities. A good example is the SP500 versus the Nasdaq. The correlation between the two is around 85% on a daily horizon, but over a horizon of 10 seconds or so the correlation is virtually zero. So when one moves a certain amount you bet that the other one will either follow or the first mover will fall back. 2) The other is looking at the state of the limit order book and its evolution through time. As a very simple example, if you have 20,000 size on the bid and it’s been monotonically increasing, and 5,000 size on the ask and it’s been monotonically decreasing, then it’s very likely that the level on the ask will get wiped out first and the price will go up.

In general what these two add up to is trying to distinguish noisy trades from signal trades. Speculators/investors/hedgers/etc. are the primary players in the market. Some of those trades contain high information (e.g. maybe a person with access to insider information buying up stock before some announcement), some of them contain virtually no information and are pure noise (e.g. granny liquidating some of her portfolio for monthly expenses). In a naive market with no HFT signals we have no way of assessing the informational content of individual trades; we only have an estimated aggregate or average informational content per trade. Market makers will set their spread and sizes according to this aggregated informational content.

But over any sample the estimated average informational content of trades will not be the same as the realized content; for example, one week might have more insider trading than usual, another month it might make up only a small fraction. There’s also a ton of path dependency when you work out the math, which amounts to pure randomness. Because of this securities will not perfectly track their “true price.” The deviation is still stationary, because the more out of line the prices get with the fundamentals the more speculators will step in and push them back. No one is smarter than the market 100% of the time, so every time a fundamental speculator sees a price that’s too low/high there’s some chance that the market is right and his valuation is missing something, and some chance he’s right. Speculators that aren’t very good are probably only going to be “beating the market” when the valuation on securities looks insanely out of whack, or by distributing their portfolio over a wide range of perceived mis-valuations to reduce their volatility. Only the very best speculators are going to be able to get their fundamental valuations consistently right within a small margin of error. So without HFT/Stat Arb./technical trading/whatever you want to call it/etc. the thing that keeps securities from randomly drifting too far is fundamental speculators.

Basically what HFT is doing, instead of fundamentally valuing securities, is determining the informational content of individual trades or small time frames, using the signals I mentioned earlier. A segment of the price evolution with high information content tends to look very different from noisy trades on the small scale, but when aggregated up it loses this distinguishability. It’s almost symmetrical when you think about it. Fundamental speculators estimate a price for the security and trust in the reliability of the price evolution process in bringing the market price to their estimated “true price”. HFT trusts in the reliability of the initial price as being the best estimate of the value of the security and tries to identify errors and miscalculations in the price evolution process.
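To make the commenter’s first signal a little more concrete, here’s a quick toy simulation of my own (the numbers, the 10-second tick size, and the lag are all made up; this is not anyone’s actual model). It just shows how two indices can be almost uncorrelated tick-to-tick while being tightly correlated at a daily horizon, which is exactly the gap a lead-lag bet exploits:

```python
import numpy as np

# Toy illustration: index B tracks index A with a short lag, plus its own
# noise. Over a full day the two move together; over ~10-second windows the
# contemporaneous correlation is close to zero.
rng = np.random.default_rng(0)

n_days = 250
ticks_per_day = 2340          # one "tick" per 10 seconds in a 6.5-hour session
lag = 3                       # B reacts ~30 seconds after A

common = rng.normal(0, 1e-4, size=n_days * ticks_per_day)   # shared news flow
noise_a = rng.normal(0, 2e-5, size=common.size)
noise_b = rng.normal(0, 2e-5, size=common.size)

ret_a = common + noise_a
ret_b = np.roll(common, lag) + noise_b    # B incorporates the same news later
ret_b[:lag] = noise_b[:lag]               # drop the wrapped-around values

# Correlation of 10-second returns: low, because B hasn't caught up yet.
tick_corr = np.corrcoef(ret_a, ret_b)[0, 1]

# Correlation of daily returns: high, because the short lag washes out.
daily_a = ret_a.reshape(n_days, ticks_per_day).sum(axis=1)
daily_b = ret_b.reshape(n_days, ticks_per_day).sum(axis=1)
daily_corr = np.corrcoef(daily_a, daily_b)[0, 1]

print(f"10-second correlation: {tick_corr:.2f}")   # roughly 0
print(f"daily correlation:     {daily_corr:.2f}")  # close to 1
```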

There is a lot more to that comment there, and plenty of other worthwhile comments as well. Go give them a read!
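And here’s a similarly toy version of the second signal, order-book imbalance, again my own sketch with made-up sizes and a made-up threshold rather than anything from the comment: if depth at the best bid keeps building while depth at the best ask keeps draining, lean toward the ask level getting wiped out and the price ticking up.

```python
from typing import Optional

def imbalance_signal(bid_sizes: list[int], ask_sizes: list[int],
                     ratio_threshold: float = 2.0) -> Optional[str]:
    """Toy order-book signal from successive snapshots of top-of-book depth.

    Returns "buy" if the bid has been building while the ask drains (so the
    ask level is likely to be wiped out first), "sell" for the mirror case,
    and None otherwise.
    """
    def increasing(xs): return all(a <= b for a, b in zip(xs, xs[1:]))
    def decreasing(xs): return all(a >= b for a, b in zip(xs, xs[1:]))

    bid, ask = bid_sizes[-1], ask_sizes[-1]
    if increasing(bid_sizes) and decreasing(ask_sizes) and bid >= ratio_threshold * ask:
        return "buy"
    if decreasing(bid_sizes) and increasing(ask_sizes) and ask >= ratio_threshold * bid:
        return "sell"
    return None

# The commenter's example: 20,000 building on the bid vs 5,000 draining on the ask.
print(imbalance_signal([12_000, 16_000, 20_000], [9_000, 7_000, 5_000]))  # -> "buy"
```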

Economists have already ruined everything, so why should we care what they’re doing now?


I of course read Krugman’s article in the NYT magazine on how economists got everything all wrong. It was somewhat interesting, though not incredibly so; his arguments are either a rehash of what’s been out there in other sources for a while or kind of wrong. I’m not an economist, so I certainly don’t know how in thrall economists are to mathematical perfection over ideas (though having worked in econometrics I can say, at least in the private sector, it’s not very much). I have a hard time with Krugman because even though I tend to agree with him politically, he’s kind of an asshole and a demagogue when he writes. But I read the article not as a very good attack on modern economics per se, but as a good attack on the dangers of groupthink.

A few people have offered rebuttals; some, like John Cochrane’s, read more like bitter responses than substantive critiques. He clearly misunderstands a lot of what Krugman wrote and offers plenty of incorrect assertions about how a Keynesian would or would not act – though he is correct that Krugman doesn’t offer compelling evidence as to why Keynesianism is the optimal model as opposed to the nine billion other ones.

Robert Levine offers alternative explanations for the crisis that Krugman should have considered, namely innovation and oil shocks. More interesting is Ben Gordon’s discussion of modern macroeconomics. He criticizes “modern macro”, here meaning dynamic stochastic general equilibrium (DSGE) models, and compares it with 1978-era theories. Both Gordon’s and Levine’s criticisms seem similar to me, though I don’t know enough about DSGE to really evaluate them.

How we model the economy is an important question. Krugman mentions behavioral economics (TED video here). Frankly, the thing that shocks me the most about academic economics is that it didn’t incorporate psychological and sociological research one hundred years ago when they were becoming ‘sciences’. Talk about groupthink! If we don’t understand how even small groups make economic decisions, how do we know how to generalize it? That’s what we call science. I would say the way forward in economics is behavioral economics combined with agent-based economics. Then, you derive laws from there.

Anyway, it looks like quants are doing something like that already. Here’s your bonus article with cool pictures of world trade links – though I’m not sure I’d agree with the cause they propose for the lack of links to certain countries.

Quants behind the quash

Wired has an article about David X. Li, the man who came up with a new formula to price bonds. It was because of the deficiencies of and overreliance on this formula that the stock market came crumbling down, or so sayeth Wired. The Wired article is a little irritating – do they really have to define what “correlated” means? – but it does tell an interesting story.

One problem with pricing bonds is that there is not enough historical default information to accurately assess their risk. Li had the insight that one could just use the correlations between the credit default swaps (CDSs) on the bonds. CDSs are insurance against a default on the bond, so their prices measure the riskiness of a bond. By looking at the correlations between different CDSs, one can more accurately measure the joint risk of different bonds. So if the price of a CDS went up for one company (meaning it was more likely to default) and simultaneously went up for another company (meaning it, too, was more likely to default), they must be responding to similar risks. Since we don’t see defaults of, say, Intel that often, we can get a better guess of whether Intel is going to default by using these correlations.
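For the curious, the formula in question is a Gaussian copula, and the core move is small: take each company’s individual default probability (backed out of its CDS price), map it to a normal quantile, and let a single correlation number, estimated from how the CDS prices move together, tie the two defaults to each other. Here’s a rough sketch of that joint-default calculation with made-up inputs – not Li’s full CDO pricing machinery:

```python
from scipy.stats import norm, multivariate_normal

# Rough sketch of the Gaussian-copula idea (all numbers invented): individual
# default probabilities come from CDS prices, and the correlation comes from
# how those CDS price series move together.
p_a = 0.04           # 1-year default probability implied by company A's CDS
p_b = 0.03           # same for company B
cds_corr = 0.30      # correlation estimated from the two CDS price series

# Map each default probability to a threshold on a standard normal variable...
z_a = norm.ppf(p_a)
z_b = norm.ppf(p_b)

# ...and let a bivariate normal with that correlation couple the two events.
cov = [[1.0, cds_corr], [cds_corr, 1.0]]
p_both = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([z_a, z_b])

p_independent = p_a * p_b
print(f"P(both default), copula:      {p_both:.4%}")
print(f"P(both default), independent: {p_independent:.4%}")
```

The fragile part, of course, is that one correlation number, which is exactly where the assumptions in the next paragraph come in.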

And that’s pretty cool! It’s a very elegant solution to the problem. There are, of course, a lot of assumptions going on here. Just thinking about it really quickly: you have to assume that the CDS market prices defaults correctly (no systematic biases), you have to assume that historical CDS prices are a good measurement of risk (they aren’t, really, because they’ve only been around for about ten years – since the housing bubble started), you have to make assumptions about correlation versus causation, etc., etc. And of course all the quants knew this! People are repeatedly quoted as saying the quants warned about this, and anyone who uses a model has to know that it’s only an approximation.

The problem was that the managers didn’t understand statistics. And didn’t understand the formulas. And ignored all the advice otherwise. And the ratings agencies were doing a terrible job at rating. The article talks about how all the quants would always give talks and say, “but there are a lot of possible problems” and the managers would just see the simple “correlations” and assume that’s enough to know.

It all adds up to a rather interesting mess. The lesson here that no one will pay attention to after the economy gets going again? Ensure the people making the decisions know all the assumptions going into a model and understand the models they are using so that they can know what the hell they’re really doing.

Oh and a quick thought – something like this happened on a much smaller scale after the Black-Scholes equation came out, I think? So maybe the problem is that anytime a new predictive model is discovered and spreads sufficiently throughout a community, problems with the model necessarily arise. After all, the model protects against whatever risks it predicts, and if that is what everyone uses, the only risks left are those the model doesn’t predict, which will be dragged inexorably up to the surface. OK I’m done.