Modern macroeconomics finally rears its head

Via Greg Mankiw, here’s a good article on the state of modern macroeconomic modeling by the frequently interesting Kocherlakota. There is a lot of nuance and detail in here that is missing from the conversations one usually hears about modern macroeconomics. I think it would be worthwhile to go through some of the references at some point, too.

The switch to modern macro models led to a fierce controversy within the field in the 1980s. Users of the new models (called “freshwater” economists because their universities were located on lakes and rivers) brought a new methodology. But they also had a surprising substantive finding to offer. They argued that a large fraction of aggregate fluctuations could be understood as an efficient response to shocks that affected the entire economy. As such, most, if not all, government stabilization policy was inefficient….Scholars in the opposing (“saltwater”) camp argued that in a large economy like the United States, it is implausible for the fluctuations in the efficient level of aggregate output to be as large as the fluctuations in the observed level of output. They pointed especially to downturns like the Great Depression as being obvious counterexamples.

…With the advent of better computers, better theory, and better programming, it is possible to solve a much wider class of modern macro models. As a result, the freshwater-saltwater divide has disappeared. Both camps have won (and I guess lost). On the one hand, the freshwater camp won in terms of its modeling methodology. Substantively, too, there is a general recognition that some nontrivial fraction of aggregate fluctuations is actually efficient in nature. On the other hand, the saltwater camp has also won, because it is generally agreed that some forms of stabilization policy are useful. As I will show, though, these stabilization policies take a different form from that implied by the older models (from the 1960s and 1970s).

…However, the models with asset market frictions (combined with the right kind of measurement from microeconomic data) make clear why the above analysis [of frictionless financial markets] is incomplete. During downturns, the loss of income is not spread evenly across all households, because some people lose their jobs and others don’t. Because of financial market frictions, the insurance against these outcomes is far from perfect (despite the presence of government-provided unemployment insurance). As a result, the fall in GDP from June 2008 to June 2009 does not represent a 4 percent loss of income for everyone. Instead, the aggregate downturn confronts many people with a disturbing game of chance that offers them some probability of losing an enormous amount of income (as much as 50 percent or more). It is this extra risk that makes aggregate downturns so troubling to people, not the average loss.

This way of thinking about recessions changes one’s views about the appropriate policy responses. Good social insurance (like extended unemployment benefits) becomes essential. Using GDP growth rates as a way to measure recession or recovery seems strained. Instead, unemployment rates become a useful (albeit imperfect) way to measure the concentration of aggregate shocks.
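Kocherlakota’s point about risk versus average loss is easy to make concrete with a little arithmetic. Here is a minimal sketch of my own (not from the article), assuming CRRA utility with risk aversion γ = 2 and taking his figures at face value: a 4 percent mean loss that, if concentrated, means roughly 8 percent of households lose half their income while everyone else is untouched.

```python
import math

def crra_utility(c, gamma=2.0):
    """CRRA utility; reduces to log utility when gamma == 1."""
    if gamma == 1.0:
        return math.log(c)
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

income = 1.0          # normalize pre-recession income to 1
avg_loss = 0.04       # the 4 percent aggregate fall in GDP quoted above
big_loss = 0.50       # illustrative individual loss from unemployment, per the quote
p_hit = avg_loss / big_loss  # probability of the big loss that matches the 4% mean: 0.08

# Scenario A: the loss is spread evenly -- everyone's income falls 4 percent.
eu_even = crra_utility(income * (1 - avg_loss))

# Scenario B: the same mean loss, but concentrated -- 8 percent of households
# lose half their income and the rest lose nothing.
eu_concentrated = (p_hit * crra_utility(income * (1 - big_loss))
                   + (1 - p_hit) * crra_utility(income))

print(f"P(big loss) = {p_hit:.0%}")
print(f"Expected utility, even 4% loss:      {eu_even:.4f}")
print(f"Expected utility, concentrated loss: {eu_concentrated:.4f}")
# With any concave (risk-averse) utility, scenario B is strictly worse even
# though the average loss is identical -- the "extra risk" in the quote.
```

Because the utility function is concave, any mean-preserving concentration of losses lowers expected utility. That is exactly why social insurance that spreads the loss around, like the extended unemployment benefits Kocherlakota mentions, adds value even though it does nothing to the average loss.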

