A First Meeting of Old and New Keynesian Econometric Models

Lawrence Klein, who died last October at age 93, is most remembered for the “creation of econometric models and the application to the analysis of economic fluctuations and economic policies,” as the Nobel Prize committee put it in the 1980 citation.  But in these days of “macroeconomists at war,” it is worth remembering that Klein was also a pioneer in exploring the reasons for differences between macro models and the views of the economists who build and estimate them.  The Model Comparison Seminar that he ran during the 1970s and 1980s brought macroeconomists and their models together (macroeconomists at peace?) to understand why their estimates of the impact of fiscal and monetary policy differed.   In my view there is too little of that today.

I will always be grateful to Lawrence Klein for inviting me to join his Model Comparison Seminar and enter into the mix a new kind of model, with rational expectations and sticky prices, which we were developing at Stanford in the mid-1980s.  The model was an estimated version of what would come to be called a “new Keynesian” model, and the other models in the comparison would thus logically be called “old Keynesian.” They included such famous workhorse models as the Data Resources Incorporated (DRI) model, the Federal Reserve Board’s model, the Wharton Econometric Forecasting Associates (WEFA) model, and Larry Meyer’s Macro Advisers model.  It was probably the first systematic comparison of old and new Keynesian models, and it was an invaluable opportunity for someone developing a new and untried model.

The performance comparison results were eventually collected and published in a book, Comparative Performance of U.S. Econometric Models. In the opening chapter Klein reviewed the comparative performance of the models, noting differences and similarities: “The multipliers from John Taylor’s model…are, in some cases, different from the general tendency of other models in the comparison, but not in all cases….Fiscal multipliers in his type of model appear to peak quickly and fade back toward zero. Most models have tended to underestimate the amplitude of induced price changes, while Taylor’s model shows more proneness toward inflationary movement in experiments where there is a stimulus to the economy.”

Klein was thus shedding light on why government purchases multipliers were so different, a controversial policy issue that is still of great interest to economists and policymakers as they evaluate the stimulus packages of 2008 and 2009 and other recent policies, as in the paper “New Keynesian versus Old Keynesian Government Spending Multipliers” by John Cogan, Tobias Cwik, Volker Wieland, and me.

Posted in Teaching Economics

New Research Bolsters Link from Policy Uncertainty to Economy

Some continue to blame the great recession and the weak recovery on some intrinsic failure of the market system, the latest supposed market failure being a so-called “secular stagnation” due to a dearth of investment opportunities and a glut of saving.  But the alternative view that policy, and policy uncertainty in particular, has been a key factor looks better and better as the facts roll in.

Last week a joint Princeton-Stanford conference, held in Princeton, focused on policy uncertainty and showcased new findings on the connections between policy uncertainty and political polarization and on patterns across different states, countries, and time periods.

Danny Shoag, for example, presented new work, “Uncertainty and the Geography of the Great Recession,” co-authored with Stan Veuger, showing that policy uncertainty across the United States has been highly and robustly correlated with state unemployment rates. As the authors explain, their “paper serves to counter such claims” as those made by Atif Mian and Amir Sufi that “an increase in business uncertainty at the aggregate level does not explain the stark cross-sectional patterns in employment losses,” which had cast doubt on the role of policy uncertainty. Scott Baker, Nick Bloom, and Steve Davis had written extensively on this at the national level and also presented new work at the conference. Bloom, along with Brandice Canes-Wrone and Jonathan Rodden, organized the conference.

In the policy panel at the end of the conference I argued that “Policies in Front of and in the Wake of the Great Recession Haven’t Worked,” putting policy uncertainty in the context of four other areas of policy slippage described in First Principles: Five Keys to Restoring America’s Prosperity.

 

Posted in Slow Recovery

Transparency for Policy Wonks

This week the Federal Reserve Board posted for the first time its FRB/US policy evaluation model and related explanatory material on its website. This new transparency is good news for researchers, students and practitioners of monetary policy.

Making the model available finally enables people outside the Fed to replicate and critically examine the Fed’s monetary policy evaluation methods, one recent example being Janet Yellen’s evaluations of the Taylor rule that she reported in speeches just before she became chair. This makes it possible to understand the strengths and weaknesses of the methods, compare them with other methods, and maybe even improve on them.

The ability to replicate is essential to good research, and the same is true of good policy research.  Such replication was not possible previously for the Fed’s model, as I know from working with students at Stanford who tried to replicate published results using earlier or linear versions of FRB/US from Volker Wieland’s model database and could not do so.

Having the model should also enable one to determine what the “headwinds” are that Fed officials so often say require extra-low rates for so long. It should also help explain why some Fed staff think QE worked, or why they argue that the income effects of the low interest rate do not dominate the incentive effects on investment.

The Fed’s FRB/US model is a New Keynesian model in that it combines rational expectations and sticky wages or prices. But it can also be operated in an “Old Keynesian” mode by switching off the rational expectations, as when it was used in a paper by Christina Romer and Jared Bernstein to evaluate the 2009 stimulus package. For professors who teach about monetary policy evaluation in their courses it will be interesting and useful to show students how the Fed’s New Keynesian model differs from other New Keynesian models.

The newly posted material also clarifies important technical issues, such as how the Fed staff has been solving the model in the case of rational expectations. We now know that they have been using the computer program EViews, but we also learn that, rather than the solution method built into EViews (which is the Fair-Taylor algorithm), the Fed staff has used a different version of that algorithm.  This is important because solution methods sometimes give different answers.
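For readers unfamiliar with how such methods work, here is a minimal sketch, in Python, of the iterate-on-expectations idea behind the Fair-Taylor approach: guess a path for the expected future values, solve the model period by period given that guess, and update the guess until the path converges. The one-variable model, the function names, and the damping and truncation choices below are mine for illustration, not the Fed’s or EViews’; it is precisely choices like these (damping, horizon, tolerance) that can lead different implementations to give different answers.

import numpy as np

def fair_taylor_solve(f, x_init, horizon=200, max_iter=500, tol=1e-8, damp=1.0):
    # Minimal iterate-on-expectations solver for a model of the form
    #   x_t = f(x_{t-1}, E_t x_{t+1}),
    # where the expectation is replaced by the current guess of the path.
    path = np.full(horizon, x_init, dtype=float)    # initial guess for the whole path
    for _ in range(max_iter):
        new_path = path.copy()
        x_lag = x_init
        for t in range(horizon):
            # expected next-period value: taken from the current guessed path,
            # with the last guessed value serving as the terminal condition
            x_lead = path[t + 1] if t + 1 < horizon else path[-1]
            new_path[t] = f(x_lag, x_lead)
            x_lag = new_path[t]
        if np.max(np.abs(new_path - path)) < tol:   # guessed path has converged
            return new_path
        path = damp * new_path + (1 - damp) * path  # damped update of the guess
    raise RuntimeError("iteration did not converge")

# Illustrative linear example: x_t = 0.5*x_{t-1} + 0.3*E_t x_{t+1} + 1
solution = fair_taylor_solve(lambda lag, lead: 0.5 * lag + 0.3 * lead + 1.0, x_init=0.0)
print(solution[:5])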

It is easy to criticize practical workhorse models like the FRB/US model, but as New Keynesian models go it’s OK in my view.  In his review on the Wall Street Journal blog, Jon Hilsenrath criticizes the Fed’s model because “it missed a housing bubble and financial crisis,” but I don’t think that was simply the model’s fault. Rather, it was due to policy mistakes that the model, if used properly, might have avoided. And models that include a financial sector or financial constraints do not do any better. We will see how the new models being built now do in the next crisis.

 

Posted in Monetary Policy

Where Do Policy Rules Come From?

I recently read Steve Williamson’s interpretation of what I was and was not claiming when I wrote my 1992 paper on what would come to be called the Taylor rule.  It was quite a while ago, but I have a different view.

To be specific, here is Steve’s interpretation: “When Taylor first wrote down his rule, he didn’t make any claims that there was any theory which would justify it as some welfare-maximizing policy rule. It seemed to capture the idea that the Fed should care about inflation, and that there exist some non-neutralities of money which the Fed could exploit in influencing real economic activity. He then claimed that it worked pretty well (in terms of an ad hoc welfare criterion) in some quantitative models. Woodford used the Taylor rule to obtain determinacy in NK models, and even argued that it was optimal under some special circumstances.…”

But the research that led to the Taylor rule was based on economic theory and it did use specific objective functions.  The so-called Taylor curve, which was published in 1979, made this very clear: Given a specific theory (embodied in a model fit to data) and a specific objective function, one could use optimization methods to find an optimal policy rule. The monetary theory I used then combined rational expectations and price rigidities, two key ingredients of New Keynesian theories.
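To make the setup concrete, here is a stylized version of that optimization problem (the notation is mine, not the 1979 paper’s): given the estimated model, choose the parameters of the policy rule to minimize a weighted sum of inflation and output variability,

\min_{\text{rule}} \;\; \lambda \, \mathrm{Var}(\pi_t - \pi^{*}) \;+\; (1 - \lambda)\, \mathrm{Var}(y_t), \qquad 0 \le \lambda \le 1,

where \pi_t is inflation, \pi^{*} its target, y_t the deviation of output from trend, and \lambda the weight in the objective function. Tracing out the minimized pairs of variances as \lambda runs from 0 to 1 gives the frontier that became known as the Taylor curve.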

During the 1980s, these rudimentary monetary models were developed further, largely as part of a search for better policy rules. At Stanford we extended the model globally and included shocks to term structure spreads and exchange rates, as well as a zero bound on the interest rate. By the late 1980s many such models were being built and estimated, and there was an opportunity to compare the results from these different models.  Because the models were complex, the policy evaluation method was to put different candidate simple rules into the models, simulate them, and find the rules that worked best as defined by some objective function.
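As an illustration of that evaluation procedure, here is a deliberately toy sketch in Python: a made-up two-equation economy, a quadratic loss in the variances of inflation and output, and a loop over candidate simple rules. None of this is any of the actual models from those comparisons; it is only meant to show the shape of the exercise (simulate each candidate rule, score it with the objective function, keep the best).

import numpy as np

rng = np.random.default_rng(0)

def simulate(rule, periods=2000):
    # Simulate a purely illustrative two-equation economy under a candidate
    # interest-rate rule; pi is the inflation gap and y is the output gap.
    pi, y = 0.0, 0.0
    pis, ys = [], []
    for _ in range(periods):
        i = rule(pi, y)                                    # candidate rule sets the rate
        y = 0.6 * y - 0.4 * (i - pi) + rng.normal(0, 1.0)  # higher real rate lowers the output gap
        pi = 0.7 * pi + 0.2 * y + rng.normal(0, 0.5)       # output gap feeds into inflation
        pis.append(pi)
        ys.append(y)
    return np.var(pis), np.var(ys)

def loss(rule, weight=0.5):
    # quadratic objective: weighted sum of inflation and output variability
    var_pi, var_y = simulate(rule)
    return weight * var_pi + (1 - weight) * var_y

candidates = {
    "respond to inflation only": lambda pi, y: 1.5 * pi,
    "respond to inflation and output": lambda pi, y: 1.5 * pi + 0.5 * y,
}
for name, rule in candidates.items():
    print(name, round(loss(rule), 2))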

It was through this policy research that the Taylor rule emerged. I examined many model simulations, including my own, and saw common characteristics of the best policy rules: the interest rate rather than the money supply was on the left-hand side; there were two main variables on the right-hand side, smoothed aggregate prices and a measure of the deviation of GDP from its long-run trend; the rules had to react sufficiently to inflation to get determinacy and stability; and there were usually no exchange rates or asset prices on the right-hand side (these usually increased volatility).  So that is where the rule came from.
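The specific rule that emerged from these characteristics, as written in that paper, sets the federal funds rate according to

r_t \;=\; \pi_t \;+\; 0.5\, y_t \;+\; 0.5\,(\pi_t - 2) \;+\; 2,

where \pi_t is inflation over the previous four quarters, y_t is the percent deviation of real GDP from trend, 2 percent is the implicit inflation target, and 2 percent is the assumed equilibrium real interest rate.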

Later research (which Steve mentions) was very important. The proof of exact optimality of the rule in certain simple models as shown by Mike Woodford (and also Larry Ball) helped improve people’s understanding of why the rule worked well. Finding robustness to a surprisingly wide variety of models was quite useful, as was the historical finding that when monetary policy was close to such a rule, performance was good and when it departed, performance was not so good. But this all depended strongly on the economic theory and policy optimization results in the original research.

Posted in Monetary Policy, Teaching Economics

Why the IMF’s Exceptional Access Framework is So Important

In today’s editorial on the IMF legislation before Congress, the Wall Street Journal refers to my op-ed of several weeks ago in which I strongly criticized the IMF for breaking its own rules in its “exceptional access framework” when it made loans to Greece in 2010 in an unsustainable debt situation.  Many have asked me about this framework and why I think breaking it was such a serious offense.

The framework was created in 2003 when I was Under Secretary of the U.S. Treasury for International Affairs. Its purpose was to place some sensible rules and limits on the way the IMF makes loans to support governments with debt problems—especially in emerging markets—and thereby move away from the bailout mentality that came out of the 1990s. Such a reform was essential for ending the terrible crisis atmosphere that then existed in emerging markets.  The reform was closely related to, and put in place nearly simultaneously with, the actions of several emerging market countries to place collective action clauses in their bond contracts.

I wrote about this reform in some detail in a chapter called “New Rules for the IMF” in my book Global Financial Warriors, explaining how modern economic theory, including time inconsistency and commitment issues, was used in crafting the reforms. A great deal of consensus formed around this framework at the time, and it was essential for garnering support for the IMF in the US Congress. In my view the framework played an important role in the sharp reduction of the crisis atmosphere in emerging market countries.

So when I learned that the IMF permanently abandoned the framework in 2010 so it could make loans to Greece in a clearly unsustainable situation (and under political pressure), I was greatly disappointed.

It’s not simply a matter of how one applies the framework; it’s a matter of whether there is a framework at all. It’s fundamental to the operation, credibility, and effectiveness of the IMF.  The editorial is correct to highlight the need for such rules to be reinstated and adhered to before increasing the amount of funds available for such lending.

Posted in International Economics

You Can’t Connect the [Fed’s] Dots Looking Forward

Many commentators view last week’s Fed meeting (including the FOMC statement, the Chair’s press conference, commentary from FOMC members) as another move toward more discretion and away from rules-based policy. Their reasoning is mainly based on the Fed’s change in forward guidance.  Rather than basing the future federal funds rate on a single quantitative measure—the unemployment rate—the Fed said it now would use a broader set of criteria without numerical quantities. John Cochrane wrote about this increased vagueness on his blog and Larry Kudlow and Rick Santelli asked me about it in interviews after the meeting.  They lamented the lack of rules-based policy, and so do I.

In my view, the recent Fed statements convey about the same degree of discretion that the Fed has engaged in continually since the panic of 2008 ended. That discretion is clearly revealed by the repeated changes in the forward guidance criteria every year since the recession. Here is what the Fed said about the federal funds rate over the past six years.

Dec 2008: “Exceptionally low levels…for some time…”

Mar 2009: “…for an extended period…”

Aug 2011: “…at least through mid-2013…”

Jan 2012: “…late 2014…”

Sep 2012: “…through mid-2015…”

Dec 2012: “…at least as long as the unemployment rate remains above 6 ½ percent…”

Mar 2014: “…the language that we use in this statement is considerable, period…. this is the kind of term it’s hard to define, but, you know, it probably means something on the order of around six months or that type of thing…”

With such rapid changes in operating procedures, there’s no way one can see a strategy or rule.  And this is coupled with the near impossibility of quantitative easing being conducted in a rule-like manner.

Was there any good news in this meeting for those who would like to see a return to a more rules-based policy? The Fed’s dots (individual forecasts of the future funds rate marked on a chart) are indicative. The meeting revealed some higher and earlier dots with no apparent corresponding change in forecasts for inflation or real GDP: the FOMC median forecast for the federal funds rate for the end of 2015 increased by 0.25 percentage points, and the forecast for the end of 2016 increased by 0.5 percentage points.  These increases would bring policy slightly back in the direction of a rules-based policy like the Taylor rule, which the Fed adhered to pretty closely in the past when policy worked well.

But we are still a long way off, and unfortunately there is no way, using published material, to connect the individual dots with individual forecasts of inflation and real GDP.  As Steve Jobs said (in a different context) in his famous speech to Stanford graduates in 2005, “you can’t connect the dots looking forward; you can only connect them looking backwards.”

Posted in Monetary Policy

Why Still No Real Jobs Takeoff?

For each month of this recovery, I’ve been tracking the change in the employment-to-population ratio and comparing it with the recovery from the previous deep recession in the 1980s.  Here is the latest update based on today’s release of February data:

[Chart: U.S. employment-to-population ratio, February 2014 update]

and here is a post from 2011 for comparison. It’s remarkable that there’s still no takeoff and that the percentage employed is still below what it was at the bottom of the recession.
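For anyone who wants to reproduce this kind of comparison, here is a minimal sketch in Python of the calculation behind such a chart: pull the employment-to-population ratio, align it on each recession’s trough month, and compare the change from that point forward. The use of pandas_datareader and the FRED mnemonic EMRATIO are my tooling choices for illustration, not necessarily how my chart was produced; the trough dates are the NBER ones (November 1982 and June 2009).

import pandas as pd
from pandas_datareader import data as pdr

# EMRATIO is the FRED mnemonic for the civilian employment-population ratio
emratio = pdr.DataReader("EMRATIO", "fred", start="1980-01-01")["EMRATIO"]

# NBER business-cycle trough months for the two recessions being compared
troughs = {"1980s recovery": "1982-11-01", "current recovery": "2009-06-01"}

paths = {}
for name, trough in troughs.items():
    series = emratio[emratio.index >= trough]
    # change in the ratio relative to the trough month, indexed by months elapsed
    paths[name] = (series - series.iloc[0]).reset_index(drop=True)

comparison = pd.DataFrame(paths)
print(comparison.head(60))   # first five years of each recovery, side by side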

As would be expected after so many disappointing years, some are now seeing this as a secular issue of low labor force participation unrelated to the slow recovery from the recession. But research by Chris Erceg and Andy Levin (Labor Force Participation and Monetary Policy in the Wake of the Great Recession) provides what they consider to be “compelling evidence that cyclical factors account for the bulk of the post-2007 decline in labor force participation.”  One convincing piece of evidence is their chart (see below), which shows the labor force participation rate (LFPR) projections made by the BLS and CBO before the downturn, based on demographics about which there have been no surprises.  The actual LFPR (63.0 percent as of today) is well below those projections.

[Chart: Erceg-Levin, actual versus projected labor force participation rate]

Posted in Slow Recovery