New Conference on Fed and Rules-Based Policy: A Preview

Today starts a two-day conference on monetary policy at Stanford’s Hoover Institution. It’s part of the Fed Centennial. Here is the full agenda, which includes talks and commentary by Esther George, Tom Sargent, Charles Plosser, John Williams, Jeff Lacker, Ed Prescott, Allan Meltzer, Niall Ferguson, Maury Obstfeld, Barry Eichengreen, George Shultz, Monika Piazzesi, Athanasios Orphanides, Otmar Issing, Martin Schneider, and others.

The main purpose of the conference is to put forth and discuss a set of policy recommendations that are consistent with and encourage a more rules-based policy for the Fed, and would thus improve economic performance, especially in comparison with the past decade.  The idea is to base these recommendations as much as possible on economic theory, on data, and especially on the history of the past century. It is natural to do so at the time of the Centennial of the Fed.

The recommendations in the technical papers prepared in advance of the conference have set the stage for the discussion and the panels to come. Here is a quick summary of each paper’s recommendations. Links are on the agenda.

In his paper for the first session, John Cochrane recommends three things: first, that the short-term interest rate should in the future be determined by setting the interest rate on reserves; second, that this rate should be adjusted according to a policy rule; and third, that the resulting large reserve balances at the Fed should not be used for discretionary interventionist policies, such as quantitative easing, which he argues have done little good. We may hear some discussion about whether political economy considerations render the third point hard to achieve in practice.
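To make the first two recommendations concrete, here is a minimal sketch of a floor system in which the interest rate on reserves is set by a generic Taylor-type rule. The function name, coefficients, and 2 percent targets are purely illustrative assumptions, not Cochrane’s specific proposal.

```python
# Illustrative only: a floor system in which the policy rate is the interest
# rate on reserves, adjusted by a generic Taylor-type rule.  The coefficients
# and the 2 percent targets are assumptions for the sketch, not Cochrane's
# specific proposal.
def interest_on_reserves(inflation, output_gap,
                         r_star=2.0, pi_target=2.0, a_pi=0.5, a_y=0.5):
    """Interest rate on reserves (percent) prescribed by the rule."""
    return r_star + inflation + a_pi * (inflation - pi_target) + a_y * output_gap

# Example: 1.5 percent inflation and a -1 percent output gap
print(interest_on_reserves(1.5, -1.0))   # 2.75
```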

David Papell’s paper provides a statistical foundation for the overall theme. He uses formal statistical techniques to determine when in history monetary policy was rule-like, and he finds the rule-like periods coincide remarkably well with periods of good economic performance.  A clear policy recommendation emerges directly from his statistical findings.
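Papell’s formal tests are laid out in the paper itself; the following sketch only conveys the flavor of such an exercise, flagging a quarter as rule-like when the funds rate stays within an arbitrary band around a reference-rule prescription. The band, the rule coefficients, and the data here are illustrative assumptions, not his methodology.

```python
import numpy as np

# Flavor-of-the-exercise sketch, not Papell's actual statistical tests:
# flag quarters as "rule-like" when the funds rate stays within a band
# around a reference-rule prescription.  Band and coefficients are
# illustrative assumptions.
def rule_prescription(inflation, output_gap):
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 0.5 * output_gap

def rule_like_flags(funds_rate, inflation, output_gap, band=1.0):
    deviation = np.abs(np.asarray(funds_rate)
                       - rule_prescription(np.asarray(inflation),
                                           np.asarray(output_gap)))
    return deviation <= band

# Example with made-up quarterly data
print(rule_like_flags([4.0, 1.0, 5.5], [2.0, 2.5, 3.0], [0.0, -1.0, 1.0]))
# [ True False  True]
```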

Marvin Goodfriend’s historical review of the Fed’s first century leads him to recommend a new “Fed-Treasury Credit Accord” which would have a “Treasuries only” asset acquisition policy with exceptions only in the case of well-specified lender of last resort actions.  This would deal with the recurrent mission creep problem where a limited purpose institution takes on other actions for which it was not granted independence.

Michael Bordo’s key policy recommendation nicely dovetails with Marvin Goodfriend’s. Based again on an examination of the history of the Fed, he recommends that, in order to prevent and deal with crises, the central bank lay out and announce a systematic rule for its lender of last resort actions, linking this recommendation to what has and has not worked in practice.

Lee Ohanian puts monetary policy in the context of big real economic shocks that are caused in large part by other economic policies—a situation which many have argued characterizes the economy today. He finds that discretionary Fed policy responses to these major shocks have, in some cases, negatively impacted the economy. Also timely is his warning that overestimating the risks of deflation can lead monetary policy astray.

Andrew Levin recommends that a good communications strategy for systematic monetary policy should recognize that the reference policy rule may change over time. He usefully focuses on the possibility of a change in the terminal or equilibrium federal funds rate, such as a decline from the 4 percent now assumed by most FOMC members to perhaps as low as 2 percent, as Richard Clarida has argued in a new paper with Bill Gross, his colleague at PIMCO.
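A bit of arithmetic with a standard Taylor-type rule (used here purely for illustration; it is not necessarily Levin’s reference rule) shows why the assumed equilibrium rate matters so much:

\[
i_t \;=\; r^{*} + \pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t .
\]

With inflation at the 2 percent target and a zero output gap, the prescribed terminal funds rate is simply \(r^{*} + 2\), so a terminal rate of 4 percent corresponds to an equilibrium real rate of 2 percent, while a terminal rate of 2 percent corresponds to an equilibrium real rate of zero.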

And we will also hear from Richard Clarida that policy rules work quite well in an international setting and lead to a smoother operating global monetary system with smaller spillovers. He also argues that a policy rule framework will “re-emerge as the preferred de facto if not de jure construct for conducting, evaluating, and ultimately for communicating monetary policy.” His use of the term de jure is important, for it suggests that some legislation may be needed to bring these reforms about and keep them in place. That is an important and timely issue which will be discussed over the next two days.

Posted in Monetary Policy

Market Failure and Government Failure in Leading Economics Texts

A new review of 23 leading Principles of Economics texts reveals huge differences in the coverage of government failure versus market failure. Jim Gwartney, the author of a leading text with a strong emphasis on public choice, conducted the review along with his colleague Rosemarie Fike and posted the results here.

Jim and Rosemarie went through each of the 23 introductory texts looking for and tabulating references to various types of market failure and government failure. As they explain in the paper they also categorized the explanations of market failure (say due to externalities, public goods, market power,…) and government failure (say due to special interests, short-sightedness, rent-seeking,…). Different people can have different views about the criteria and about models of public choice and government developed by James Buchanan, George Stigler, and others.  But the paper endeavors to describe the methodology carefully, and I recommend reading it to get an understanding of the results.

The following is Table 3 of the paper. It shows the ratio of page coverage on government failure to page coverage on market failure in each text. Other metrics reported in the paper give similar results.

[Gwartney Fike, Table 3]

The Paul Krugman-Robin Wells book is tied for the lowest ratio (0.00) with the Robert Hall-Marc Lieberman book. With more references to government failure, Gwartney, Cowen-Tabarrok, Arnold, and McEachern have much higher ratios. It is interesting that the ratios in Baumol-Blinder and Mankiw are quite low, especially in comparison with the Samuelson-Nordhaus ratio, which is just a bit below the average. The ratio in my book with Akila Weerapana is a bit above the average.

Reviews like this can affect future texts and revisions as authors and users become more aware of the overall coverage in comparison with the market.  My guess is that future reviews will show an increase in the average ratio and some small reduction in the variance which preserves the overall diversity.

 

Posted in Teaching Economics

Deleting Vice and Other Revisions in Monetary Lectures

Yesterday I finished my course on Monetary Theory and Policy for this year’s 1st year PhD students at Stanford. I have been teaching in the 1st year PhD core for a long time, and it gets more interesting each year. (Technically speaking, I first taught in the 1st year graduate course in 1968 as a student at Stanford. Lorie Tarshis, author of the first Keynesian textbook, was the professor, and he asked me to give the lecture on dynamic stochastic models of the business cycle, saying he did not know much about it.)

Of course the lectures have changed enormously over the years, especially during the 1970s and 1980s with the emergence of new Keynesian modeling (rational expectations with sticky prices). But the past few years of crisis and slow recovery have also brought big changes, for example, bringing preferred-habitat or affine term structure equations into the macro models in order to assess quantitative easing and forward guidance. But I also teach that the basic macro models are still OK and that it was policy that went off track in the lead-up to the crisis.
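To give a flavor of what that means, a stylized preferred-habitat-style term structure relation (a simplification for exposition, not the exact specification in the lectures) writes the n-period yield as

\[
i_t^{(n)} \;=\; \frac{1}{n}\sum_{j=0}^{n-1} E_t\, i_{t+j} \;+\; \phi_t^{(n)},
\]

where the term premium \(\phi_t^{(n)}\) depends on the relative supply of long-term bonds held by the public, so that large-scale asset purchases and forward guidance can, in principle, be analyzed within the same macro model.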

For the past two years I have been assigning and discussing some of Janet Yellen’s work, such as this April 2012 talk relating to policy rules, so little updating was required there—simply deleting “Vice” in “Vice Chair,” as in the attached slide from lecture one. I’ll be posting revised versions of all the lectures on my web page soon.

[Slide 20]

 

Posted in Teaching Economics

Debate Heats Up: Re-Normalize or New-Normalize Policy

Last week’s IMF conference on Monetary Policy in the New Normal revealed a lot of disagreement on the key issue of where policy should be headed in the future. A dispute that broke out between me and Adair Turner is one example.  I led off the first panel making the case that central banks should re-normalize rather than new-normalize monetary policy. At a later panel Turner, who headed the UK Financial Services Authority during the financial crisis, “very strongly” disagreed.

Turner took issue with the view that a departure from predictable rules-based policy has been a big factor in our recent poor economic performance, a departure which essentially reversed the move to more predictable rules-based policy that led to good performance during the Great Moderation. I used the following diagram (slide 5 from my presentation), which is an updated version of a diagram Ben Bernanke presented in a paper ten years ago.

[Slide 5: policy tradeoff diagram]

The diagram shows a policy tradeoff curve (called the Taylor Curve by Bernanke in his paper, following fairly common usage). I argued, as did Bernanke in that paper, that the improved performance from point A to point B was mainly due to monetary policy, not a shift in the curve. In my view, the recent deterioration in performance to the red dot at point C was also due to a departure from rules-based policy, rather than a shift in the curve.
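For readers who want the construction behind such a frontier, one standard way to trace it (an illustration of the general idea, not necessarily the exact computation behind the slide) is to minimize a weighted sum of the two variances over the parameters of the policy rule:

\[
\min_{\text{rule parameters}} \;\; \lambda\,\mathrm{Var}(y_t) \;+\; (1-\lambda)\,\mathrm{Var}(\pi_t - \pi^{*}), \qquad 0 \le \lambda \le 1 .
\]

Varying \(\lambda\) between 0 and 1 traces out the curve; points above the curve, such as A and C, reflect policy that is inefficient relative to the frontier, while shifts of the curve itself reflect changes in the structure of the economy or in the shocks hitting it.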

And this is what Adair Turner disputed. Here is a transcription of the relevant 1 minute of his talk (from 25.10 to 26.10 in this video): “I basically end up disagreeing very strongly with something that John Taylor said on his fifth slide. He basically argued for a rules—a fully rules-based approach—to what central banks do.  He argued that one had moved to a better tradeoff—a Bernanke tradeoff on that chart, because of rules, between the variance of output and the variance of inflation. And he suggested that we had then moved to his red dot, which was the post-2006 red dot, because we had moved away from those rules. I disagree. I think we moved to post-2006 and in particular post 2007-08 period precisely because we had those rules.  Because we fooled ourselves that there existed a simple set of rules with one objective a low and stable rate of inflation—and the inflation rate alone, we ignored a buildup of risks in our financial sector that produced the financial crisis of 2008 and the post crisis recession.”

But as I showed in my presentation (23.30-38.00 min) and in the written paper, monetary policy did not stick to those rules. The Fed deviated from its Great Moderation rules by holding interest rates too low for too long in 2003-05 thereby creating that “buildup of risks in our financial sector that produced the financial crisis of 2008” as Turner puts it. In addition, financial regulators and supervisors set aside safety and soundness rules. And in the post-panic period monetary policy has been anything but rule-like and predictable.

Turner is also incorrect to suggest that the simple rules in question, such as the Taylor rule, are so simple that they react only to the rate of inflation. They respond to developments in the real economy too.
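For reference, the 1993 version of the rule is

\[
i_t \;=\; \pi_t + 0.5\,y_t + 0.5\,(\pi_t - 2) + 2,
\]

where \(y_t\) is the percentage deviation of real GDP from trend; the 0.5 coefficient on \(y_t\) is precisely the response to the real economy.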

If the IMF conference and other events last week are any guide, this debate is heating up. At one extreme, Adam Posen argued at the IMF conference for Quantitative Easing Forever, but Jeremy Stein, Brian Sack, and Paul Tucker were skeptical. And in her speech in New York last week, Janet Yellen referred to the Taylor rule, and some commentators here and here saw signs of laying the groundwork for a return to more rules-based policies.

 

Posted in Monetary Policy

A First Meeting of Old and New Keynesian Econometric Models

Lawrence Klein, who died last October at age 93, is most remembered for the “creation of econometric models and the application to the analysis of economic fluctuations and economic policies,” as the Nobel Prize committee put it in the 1980 citation. But in these days of “macroeconomists at war” it is worth remembering that Klein was also a pioneer in exploring the reasons for differences between macro models and the views of the economists who build and estimate them. The Model Comparison Seminar that he ran during the 1970s and 1980s brought macroeconomists and their models together—macroeconomists at peace?—to understand why their estimates of the impact of fiscal and monetary policy were different. In my view there is too little of that today.

I will always be grateful to Lawrence Klein for inviting me to join his Model Comparison Seminar and enter into the mix a new kind of model with rational expectations and sticky prices which we were developing at Stanford in the mid-1980s.  The model was an estimated version of what would come to be called a “new Keynesian” model, and the other models in the comparison would thus logically be called “old Keynesian.” They included such famous workhorse models as the Data Resources Incorporated (DRI) model, the Federal Reserve Board’s model, the Wharton Econometric Forecasting Associates (WEFA) model, and Larry Meyer’s Macro Advisers model.  It was probably the first systematic comparison of old and new Keynesian models and was an invaluable opportunity for someone developing a new and untried model.

The performance comparison results were eventually collected and published in a book, Comparative Performance of U.S. Econometric Models. In the opening chapter Klein reviewed the comparative performance of the models, noting differences and similarities: “The multipliers from John Taylor’s model…are, in some cases, different from the general tendency of other models in the comparison, but not in all cases….Fiscal multipliers in his type of model appear to peak quickly and fade back toward zero. Most models have tended to underestimate the amplitude of induced price changes, while Taylor’s model shows more proneness toward inflationary movement in experiments where there is a stimulus to the economy.”

Klein was thus shedding light on why government purchases multipliers were so different—a controversial policy issue that is still of great interest to economists and policy makers as they evaluate the stimulus packages of 2008 and 2009 and other recent policies, as in the paper “New Keynesian versus Old Keynesian Government Spending Multipliers,” by John Cogan, Tobias Cwik, Volker Wieland and me.
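A back-of-the-envelope comparison shows how large the gap can be; the numbers here are textbook benchmarks, not the estimates in either paper:

\[
\text{old Keynesian impact multiplier} \;=\; \frac{1}{1-\mathrm{MPC}}, \qquad \mathrm{MPC}=0.6 \;\Rightarrow\; \frac{1}{1-0.6}=2.5,
\]

whereas in estimated new Keynesian models with forward-looking households and a monetary policy rule, the government purchases multiplier is typically far smaller and, as Klein noted of my model, peaks quickly and fades back toward zero.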

Posted in Teaching Economics

New Research Bolsters Link from Policy Uncertainty to Economy

Some continue to blame the great recession and the weak recovery on some intrinsic failure of the market system, the latest supposed market failure being a so-called “secular stagnation” due to a dearth of investment opportunities and a glut of saving. But the alternative view that policy—and policy uncertainty in particular—has been a key factor looks better and better as the facts roll in.

Last week a joint Princeton-Stanford conference held in Princeton focused on policy uncertainty and showcased new findings on connections between policy uncertainty and political polarization and on patterns in different states, countries and time periods.

Danny Shoag, for example, presented new work “Uncertainty and the Geography of the Great Recession,” co-authored with Stan Veuger, showing that  policy uncertainty across the United States has been highly and robustly correlated with state unemployment rates. As the authors explain, their “paper serves to counter such claims” as those made by Atif Mian and Amir Sufi that “an increase in business uncertainty at the aggregate level does not explain the stark cross-sectional patterns in employment losses” which had cast doubt on the role of policy uncertainty. Scott Baker, Nick Bloom and Steve Davis had written extensively on this at the national level and also presented new work at the conference. Bloom along with Brandice Canes-Wrone and Jonathan Rodden organized the conference.
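As a rough illustration of what a state-level correlation exercise looks like (the file and column names below are hypothetical; this is not the Shoag-Veuger data or methodology):

```python
import pandas as pd

# Hypothetical illustration only: the file and column names are assumptions,
# not the Shoag-Veuger dataset or their estimation approach.
panel = pd.read_csv("state_panel.csv")  # columns: state, year, uncertainty, unemployment

# Within-state correlation between policy uncertainty and the unemployment rate
corr_by_state = (
    panel.groupby("state")[["uncertainty", "unemployment"]]
    .corr()
    .xs("unemployment", level=1)["uncertainty"]
)
print(corr_by_state.describe())
```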

In the policy panel at the end of the conference I argued that “Policies in Front of and in the Wake of the Great Recession Haven’t Worked,” putting policy uncertainty in the context of four other areas of policy slippage described in First Principles: Five Keys to Restoring America’s Prosperity.

 

Posted in Slow Recovery

Transparency for Policy Wonks

This week the Federal Reserve Board posted for the first time its FRB/US policy evaluation model and related explanatory material on its website. This new transparency is good news for researchers, students and practitioners of monetary policy.

Making the model available finally enables people outside the Fed to replicate and critically examine the Fed’s monetary policy evaluation methods, one recent example being Janet Yellen’s evaluations of the Taylor rule that she reported in speeches just before she became chair. This makes it possible to understand the strengths and weaknesses of the methods, compare them with other methods, and maybe even improve on them.

The ability to replicate is essential to good research, and the same is true of good policy research.  Such replication was not possible previously for the Fed’s model, as I know from working with students at Stanford who tried to replicate published results using earlier or linear versions of FRB/US from Volker Wieland’s model data base and could not do so.

Having the model should also enable one to determine what the “headwinds” are that Fed officials so often say require extra-low rates for so long. It should also help explain why some Fed staff think QE worked, or why they argue that the income effects of the low interest rate do not dominate the incentive effects on investment.

The Fed’s FRB/US model is a New Keynesian model in that it combines rational expectations and sticky wages or prices. But it can also be operated in an “Old Keynesian” mode by switching off the rational expectations, as when it was used in a paper by Christina Romer and Jared Bernstein to evaluate the 2009 stimulus package. For professors who teach about monetary policy evaluation in their courses it will be interesting and useful to show students how the Fed’s New Keynesian model differs from other New Keynesian models.
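A single-equation illustration of the distinction (a stylized textbook version, not the FRB/US equations themselves) is the price-setting relation

\[
\pi_t \;=\; \beta\, E_t \pi_{t+1} + \kappa\, x_t
\qquad\text{versus}\qquad
\pi_t \;=\; \pi_{t-1} + \kappa\, x_t ,
\]

where the first uses model-consistent (rational) expectations of future inflation and the second replaces them with a backward-looking term; “switching off” rational expectations in a larger model amounts to substitutions of this kind throughout.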

The newly posted material also clarifies important technical issues, such as how the Fed staff has been solving the model in the case of rational expectations. We now know that they have been using the computer program EViews, but we also learn that, rather than the solution method built into EViews (which is the Fair-Taylor algorithm), the Fed staff has used a different version of that algorithm. This is important because solution methods sometimes give different answers.
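To see what such an algorithm does, here is a minimal sketch of a Fair-Taylor-style iteration on a toy forward-looking equation x_t = a·E_t[x_{t+1}] + b·z_t, solved under perfect foresight by repeatedly updating a guessed expectations path until it is consistent with the model’s solution. This is only an illustration of the general idea, not the Fed staff’s code or the EViews implementation.

```python
import numpy as np

# Toy Fair-Taylor-style iteration: guess an expectations path, solve the
# model given the guess, replace the guess with the solution, and repeat
# until the two coincide.  Illustration only, not the Fed's or EViews' code.
a, b = 0.8, 1.0
T = 50
z = np.zeros(T)
z[5] = 1.0                    # a one-period exogenous shock
x_terminal = 0.0              # terminal condition beyond the solution horizon

x_guess = np.zeros(T)         # initial guess for the expected path of x
for iteration in range(1000):
    x_new = np.empty(T)
    for t in range(T):
        x_next = x_guess[t + 1] if t + 1 < T else x_terminal
        x_new[t] = a * x_next + b * z[t]          # model equation given expectations
    if np.max(np.abs(x_new - x_guess)) < 1e-10:   # expectations now model-consistent
        break
    x_guess = x_new

print(f"Converged after {iteration + 1} iterations; peak response = {x_new.max():.3f}")
```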

It is easy to criticize practical workhorse models like the FRB/US model, but as New Keynesian models go it’s OK in my view. In his review on the Wall Street Journal blog, Jon Hilsenrath criticizes the Fed’s model because “it missed a housing bubble and financial crisis,” but I don’t think that was simply the model’s fault. Rather it was due to policy mistakes that the model, if used properly, might have helped avoid. And models that include a financial sector or financial constraints do not do any better. We will see how the new models being built now do in the next crisis.

 

Posted in Monetary Policy