A Reawakening of Monetary Policy Research

Last May a group of economists, central bankers, market participants, and financial journalists convened at Stanford’s Hoover Institution “to put forth and discuss a set of policy recommendations that are consistent with and encourage a more rules-based policy for the Federal Reserve and would thus improve economic performance…”  Here’s the agenda, the published volume, and my summary.

Since then much has happened: the House Financial Services Committee passed a policy rules bill out of committee, the Senate Banking Committee proposed a similar bill with other structural reforms (which also passed out of committee), the Bank of England instituted significant communication reforms, a slew of economists and Fed officials weighed in (both pro and con) on proposals to make central bank policy rules more transparent, and Congress held several public hearings.

To analyze these new developments, many of the experts from last year’s conference and others convened last week to present papers and discuss key issues. All the papers are posted here. They were novel, on point, and rigorous whether using equations, regressions, history, legal analysis or political theory. The discussion was candid, with new questions raised about the effectiveness of the Fed’s deliberations. In my view it was kind of a reawakening—part of a broader reawakening—of monetary policy research. A written record of the whole conference is planned. In the meantime, here’s a quick summary:

Paul Tucker opened the conference. His paper showed that a systematic strategy for setting the instruments of policy is desirable, but that integrating that strategy with a more discretionary lender-of-last-resort function is difficult and still needs to be worked out. He argued that “the central bank…should publish the Operating Principles” (the rules for the instruments), even stressing that simply doing this “is more important than that any particular set of principles or any particular instrument-rule be entrenched in a law that is justiciable via the courts.”

John Cochrane, the lead discussant of Paul’s paper, had different views about the focus on discretion in emergency lending, saying that “Crisis-response and lender-of-last-resort actions need rules, or ‘regimes,’ even more than monetary policy actions need rules.” He then went on to propose that bailout problems be addressed through reforms in which “all fixed-value demandable assets had to be backed 100% by our abundant supply of short-term Treasuries.”

Next, David Papell presented a paper that used statistical methods to explore how recent policy rules bills would work in practice. Employing a counterfactual analysis going back to the 1950s, he found that no single policy rule would have kept the federal funds rate within a 2 percentage point band of the rule in every year, though some rules can be adjusted to fit certain periods. This suggests that attempts to use rules in seemingly arbitrary ways to justify policy in one period could backfire in later periods. Apart from noting that such legislation would make policy more predictable, his paper did not draw conclusions about whether the legislation might affect the responsiveness of policy in one direction or another, such as during the early years of the Volcker disinflation or during the 2003-2005 period when the policy deviations were quite large. Nevertheless, his is the first paper to apply formal econometric methods to these legislative questions.
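The kind of counterfactual band check in Papell's paper can be sketched in a few lines. This is only an illustration of the method; the inflation, output gap, and funds rate numbers below are hypothetical stand-ins, not data from his paper.

```python
# Illustrative check of whether an actual policy rate stays within a
# 2-percentage-point band of a Taylor-type rule's prescription.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Classic Taylor (1993) rule prescription for the funds rate."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# (inflation, output gap, actual funds rate) -- hypothetical observations
observations = [
    (2.0, 0.0, 5.5),   # actual rate near the rule prescription
    (2.0, -1.0, 1.0),  # actual rate held far below the rule
]

for inflation, gap, actual in observations:
    prescribed = taylor_rule(inflation, gap)
    within_band = abs(actual - prescribed) <= 2.0
    print(f"rule: {prescribed:.2f}%  actual: {actual:.2f}%  within 2pp band: {within_band}")
```

Run over a long sample, a test like this flags the years in which a given rule fails the band, which is how one rule after another can be ruled out as a universal fit.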

In his discussion of David’s paper, Mike Dotsey of the Philadelphia Fed argued that if one considers rules with the lagged interest rate on the right-hand side, then the actual funds rate falls well within a 2 percentage point band, though some noted that using lags this way can simply perpetuate past monetary policy errors.  There was also a discussion of the interesting new report (co-authored by Dotsey) at the Philadelphia Fed on using policy rules for benchmarking without being required to do so by Congress.
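A rule with the lagged interest rate on the right-hand side is usually written as a partial-adjustment or "inertial" rule. A minimal sketch (the smoothing parameter 0.8 is an illustrative assumption, not a value from Dotsey's discussion):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Non-inertial Taylor (1993) rule prescription."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

def inertial_rule(prev_rate, inflation, output_gap, rho=0.8):
    """Weighted average of last period's rate and the Taylor prescription.
    A high rho keeps the prescription close to the recent actual rate --
    which is why such rules fit the data well, and also why lags can
    perpetuate past policy errors."""
    return rho * prev_rate + (1 - rho) * taylor_rule(inflation, output_gap)

# If the rate was held at 1% while the non-inertial rule called for 4%,
# the inertial rule prescribes only a small step back toward 4%:
print(inertial_rule(prev_rate=1.0, inflation=2.0, output_gap=0.0))  # 1.6
```

The example makes the trade-off concrete: the lagged term shrinks the gap between rule and actual rate mechanically, whether or not the starting rate was appropriate.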

Carl Walsh’s paper also used macroeconomic research methods in a novel way to assess the recent bills that require Fed reports on instrument rules. He employed both a simple calibrated new Keynesian model and a more complex estimated model to investigate whether such a rules-based requirement could improve on a goals-based requirement in which the central bank is simply required to achieve a goal of 2% inflation. More specifically, he asked how much weight should be placed on each of the two alternative requirements. In the case where the required rule is optimal (for the model used) his conclusion is to put all the weight on the rules-based requirement, but if the required rule is not optimal then the weight depends on whether shocks are demand-side or supply-side. In each case, the gain is the improvement in output stability and employment stability compared with the discretionary solution.

In discussing Carl’s paper, Andy Levin argued that the gains from using and reporting on the central bank’s strategy for the instruments go well beyond the calculations in Carl’s paper which he worried paid too little attention to model uncertainty. He argued that the Fed could do much more to clarify its strategy for the instruments, noting as evidence that the Fed’s recent Statement on Longer-Run Goals and Monetary Policy Strategy is all about goals and nothing about strategy.

Kevin Warsh’s paper and, even more so, his oral presentation on the lack of effective deliberation at the FOMC produced one of the most surprising findings of the conference, especially for people who have never attended an FOMC meeting. As Binyamin Appelbaum, attending from the New York Times, tweeted, there were “fascinating reflections from Kevin Warsh on the absence of real debate inside the FOMC.” Binyamin also asked why publication of the transcripts years later would affect what people say inside. Although Warsh, John Williams and Charles Plosser all indicated that it did not affect them, they all thought that it affected others.

Peter Fisher, former Fed, Treasury and BlackRock official, now at Dartmouth, began his discussion of Kevin’s presentation by noting the refreshing candor, and then read out a fascinating list of ways to judge whether a policy committee is functioning well. Among other things he listed the willingness of committee members to change priors in a Bayesian fashion when presented with new arguments or data. Another highlight of the discussion was a disagreement between Paul Tucker and Kevin Warsh about what went wrong on Lehman weekend. Much more investigative and financial research is needed here.

The final paper of the day was by Michael Bordo, whose broad historical sweep demonstrated the value of the diversity of opinion coming from the district Fed banks and their presidents, and the danger of concentrating more power in Washington. He argued for a more rules-based monetary policy as in the House and Senate bills, but was against having the district bank presidents appointed by the President of the United States.

In discussing Mike’s paper, Mary Karr, the General Counsel at the St. Louis Fed, reviewed the checks and balances in the current system of appointing district bank presidents, adding that the process is not a source of regulatory capture. George Shultz, in one of his many helpful interventions at the conference, said he was reassured by Mary’s explanations but still worried, especially about the process at the NY Fed. A common answer he got was “well, the NY Fed is different.”

The concluding policy panel featured John Williams, Charles Plosser and George Shultz. John Williams built on his recent speech on policy rules legislation, adding among other things that the Fed minutes were becoming too detailed, thereby detracting from Fed deliberations. Charles Plosser indicated that the FOMC deliberations were more constructive when viewed from a multi-meeting perspective than from the conversation at a single meeting. Plosser also emphasized the importance of the Fed being a limited purpose institution, to which George Shultz responded at the start of his presentation with the word “Amen.” George Shultz went on to argue that the Fed was in dire need of developing and communicating a strategy for the policy instruments, making novel analogies with foreign policy and drawing on specific examples from his experience as Secretary of Labor, State, and Treasury. I expect more to be written about this analogy.

In sum, there was broad agreement on the importance of a rules-based policy or strategy at a general level, but disagreement about how a central bank should deliberate on, implement, and communicate such policies, and hence about the need for more research. In this regard I noted at the conference a healthy respect for the good economic research that must underlie effective policy, along with a great deal of congeniality as represented, for example, by John Williams and me exchanging T-shirts at the conference.

T-shirt exchange

Posted in Monetary Policy, Regulatory Policy

The Senate Moves Ahead on a Policy Rules Bill

Today the Chairman of the Senate Banking Committee, Richard Shelby, released a draft bill entitled “The Financial Regulatory Improvement Act of 2015” covering a wide range of reforms. Like the widely-discussed House policy rules bill (Section 2 of HR 5018 of last year), this Senate bill (in the first section of Title V) would require that the Fed report on monetary policy rules. In fact, the Senate bill contains important principles regarding policy rules that are in the House bill and should be ripe for compromise in conference.

Recall that the House bill, as I described in testimony before the Senate Banking Committee in March, “would require that the Fed ‘describe the strategy or rule of the Federal Open Market Committee for the systematic quantitative adjustment’ of its policy instruments. It would be the Fed’s job to choose the strategy and how to describe it. The Fed could change its strategy or deviate from it if circumstances called for a change, but the Fed would have to explain why.”

The Senate bill is quite similar in these essentials.

First, it would require that the Fed report each quarter to Congress “a description of any monetary policy rule or rules used or considered by the Committee that provides or provide the basis for monetary policy decisions, including short-term interest rate targets set by the Committee…” with the stipulation that “such description shall include, at a minimum, for each rule, a mathematical formula that models how monetary policy instruments will be adjusted based on changes in quantitative inputs…”

Second, it would require in each quarterly report “a detailed explanation of any deviation in the rule or rules…from any rule or rules…in the most recent quarterly report.” And to emphasize that rules and strategies have similar meanings, the bill includes a corresponding requirement to report on any monetary policy strategy or strategies. In other words, as in the House bill, the Fed could change strategies or rules, but it would have to explain why.

Neither the House bill nor the Senate bill would require the Fed to follow any particular rule—mechanical or otherwise. There is precedent for both the Senate bill and the House bill in previous legal requirements for the Fed to report on the monetary aggregates. Neither bill would reduce the Fed’s independence; based on my experience in government, they would bolster the Fed’s independence.

There are, of course, some differences between the bills. For example, the Senate bill only requires the Fed to report a rule if such a rule provides the basis for policy decisions. However, as is well known from Fed transcripts, the Fed regularly uses policy rules and discusses deviations from them, so the bill would require the Fed to report them. It is mainly a matter of transparency and thus hard to object to.

The House bill, but not the Senate bill, would also require that the Fed compare its reported rule with a so-called reference rule, which turns out to be the Taylor rule in the House bill. However, since the House bill does not require the Fed to follow any rule, including the Taylor rule, this is a small difference in practice. Moreover, since it is common to compare rules or strategies with that rule, others will likely do that comparison anyway.

Another difference is that the Senate bill does not have a role for the GAO in determining whether the Fed is complying with the law. This would be the job of the members of Congress and their staffs, based on the quarterly reports submitted by the Fed. This change makes the legislation closer to what I originally proposed, and taking the GAO out of the bill will likely remove some objections.

In sum, the Senate policy rules bill endeavors to install transparency and “responsible oversight,” as Senator Shelby puts it, without trying to micromanage the Fed. It maintains the same key reporting principles that are in the House bill in a way that should encourage bipartisan discussion and constructive input from the Fed. It has made the most of an opportunity presented by extensive legislative work and commentary during the past year and moves ahead on needed reform legislation.

Posted in Monetary Policy, Regulatory Policy

Surprising Findings at the Macro Handbook Conferences

In order to further progress on the new Handbook of Macroeconomics, which will be published next year, Harald Uhlig and I, the co-editors of the Handbook, hosted two conferences at Stanford and Chicago in April. Harald and I attended both conferences—three days in each venue—where we heard distinguished macroeconomists present 35 draft chapters and critical commentary on each of those chapters. With many chapters having coauthors, there were about 85 presentations in all–way too much to summarize in a short blog, though Harald and I plan to write such a summary in the volume’s introduction. Many of the preliminary drafts are posted on the conference web sites at the Hoover Institution at Stanford and the Becker Friedman Institute at Chicago. Comments for the authors are welcome as final drafts will be prepared in the coming months.

The conferences displayed an amazing range of new and different ideas, which is understandable given all that has happened since the first Macro Handbook was published in 1999. The range of topics was surprisingly wide, extending well beyond traditional macro to such topics as Family Macro, Natural Experiments in Macro, Environmental Macro, and the Macroeconomics of Time Allocation. Of course there were both real business cycle chapters (Prescott, Ohanian, Hansen) and monetary business cycle chapters (Christiano, Eichenbaum, Trabandt), as well as treatments of macro-prudential policy and of fiscal policy at the zero lower bound on interest rates. There were also the essential chapters on the latest estimation and solution techniques (in continuous and discrete time), as well as helpful displays of the key facts of economic growth and economic fluctuations at both the aggregate and individual level. The representative agent was not the only type of agent represented!

Though the new Handbook is by no means finished, there is already a very noticeable difference from the first Handbook. Perhaps the most important is that many authors included examinations of the role of financial frictions, and the financial sector more generally, in macro models. Of course, since the Global Financial Crisis and the Great Recession most people view the lack of such frictions as a major gap in macro, but how that gap will most effectively be filled remains to be seen.

The formal models in the chapters in the Handbook can help answer that question in ways that informal policy debates cannot, and I hope that this may be an important accomplishment of the Handbook in the end. Between the two conferences I attended an IMF conference in Washington, Rethinking Macro III, and participated in such debates (one with Ben Bernanke) which, while valuable, could not settle key issues without such formal modelling work as I think Olivier Blanchard made clear in his summary.

The first Handbook had the famous chapter by Bernanke, Gertler and Gilchrist on the financial accelerator, but the ideas in that research now appear in many chapters. One surprising finding—clear in the Linde, Smets, and Wouters chapter—is that when you add such financial factors to the mainline macro models used at central banks, they do not help that much in explaining the financial crisis. To paraphrase simply, they can change the financial crisis from something like a 6-sigma event in the models to a 3-sigma event—an improvement, but not enough to help much in the next crisis. Look for more surprising and even debate-settling findings in future drafts.

Here is a group picture of authors and discussants from the conference at the Becker Friedman Institute in Chicago

Group Photo Chicago side

and here is one from the Hoover Institution at Stanford.

Group Photo Stanford

Both institutions also posted galleries of action shots, here and here.

Posted in Budget & Debt, Financial Crisis, Fiscal Policy and Reforms, International Economics

A Monetary Policy for the Future

Yesterday I spoke at a panel on “Monetary Policy in the Future,” with Ben Bernanke and Gill Marcus at an IMF event Rethinking Macro Policy.  A written version of my opening is posted below (longer than my usual post). Ben Bernanke posted his opening here. Our views are quite different and a serious debate followed (see USA Today article) about which I’ll write later.

Let me begin with a mini history of monetary policy in the United States during the past 50 years. When I first started doing monetary economics in the late 1960s and 1970s, monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century. Using the same performance measures, the results were excellent. Inflation and unemployment both came down. We got the Great Moderation, or the NICE period (non-inflationary consistently expansionary) as Mervyn King put it. Researchers like John Judd and Glenn Rudebusch at the San Francisco Fed and Richard Clarida, Mark Gertler and Jordi Gali showed that this improved performance was closely associated with more rules-based policy, which they defined as systematic changes in the instrument of policy—the federal funds rate—in response to developments in the economy.

Researchers found the same results in other countries. Stephen Cecchetti, Peter Hooper, Bruce Kasman, Kermit Schoenholtz, and Mark Watson showed that as policy became more rule-like in Germany, U.K., and Japan, economic performance improved.

Few complained about spillovers or beggar-thy-neighbor policies during the Great Moderation.  The developed economies were effectively operating in what I call a nearly international cooperative equilibrium, another NICE to join Mervyn King’s.  This was also a prediction of monetary theory which implied that if each country followed a good rules-based monetary policy then the international system would operate in a NICE way.

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation.  You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%. The results were not good. In my view this policy change brought on a search for yield, excesses in the housing market, and, along with a regulatory process which broke rules for safety and soundness, was a key factor in the financial crisis and the Great Recession.

During the ensuing panic in the fall of 2008 the Fed did a good job of providing liquidity through loans to financial firms and swaps to foreign central banks. Reserve balances at the Fed expanded sharply due to these temporary liquidity provisions. They would have declined after the panic were it not for the Fed’s initiation of its unconventional monetary policy, the large-scale purchases of securities now called Quantitative Easing. Regardless of what you think of the impact of QE, it was not rule-like or predictable, and my research shows that it was not effective. It did not deliver the economic growth that the Fed forecast and it did not lead to a good recovery. And yet another deviation from rules-based policy was the continuation of a near zero interest rate through the present, long after Great Moderation rules would have called for its end.

This deviation from rules-based monetary policy went beyond the United States, as first pointed out by researchers at the OECD, and is now obvious to any observer. Central banks followed each other down through extra low interest rates in 2003-2005 and more recently through quantitative easing. QE in the US was followed by QE in Japan and by QE in the Eurozone, with exchange rates moving as expected in each case. Researchers at the BIS showed the deviation went beyond the OECD countries and called it the Global Great Deviation. Rich Clarida commented that “QE begets QE!” Complaints about spillover and pleas for coordination grew. NICE ended in both senses of the word. World monetary policy now seems to have moved into a strategy-free zone.

This short history demonstrates that shifts toward and away from steady predictable monetary policy have made a great deal of difference for the performance of the economy, just as basic macroeconomic theory tells us. This history has now been corroborated by David Papell and his colleagues using modern statistical methods.  Allan Meltzer found nearly the same thing in his more detailed monetary history of the Fed.

The implication of this experience is clear: monetary policy should re-normalize in the sense of transitioning to a predictable rule-like strategy for the instruments of policy.  Of course, it is possible technically for the Fed to move to and stick to such a policy, but the long departures from rules-based policy show that it is difficult.

These departures suggest that some legislative backing might help. Such legislation could simply require the Fed to describe its strategy or rule for adjusting its policy instruments. It would be the Fed’s job to choose the strategy and how to describe it. The Fed could change its strategy or deviate from it if circumstances called for a change, but the Fed would have to explain why.

There is precedent for such legislation. The Federal Reserve Act used to require the Fed to report ranges of the monetary and credit aggregates. The requirements were repealed in 2000. In many ways the proposed reform would simply replace them. It would provide responsible oversight without micro-managing. It would not chain the Fed to a mechanical rule. It would not threaten the Fed’s independence. Indeed, it would give the Fed more independence from the executive branch of government.

Now let me consider some of the objections to such a monetary policy whether it is backed by legislation or not.

Some argue that the historical evidence in favor of rules is simply correlation not causation.  But this ignores the crucial timing of events:  in each case, the changes in policy occurred before the changes in performance, clear evidence for causality.  The decisions taken by Paul Volcker came before the Great Moderation.  The decisions to keep interest rates very low in 2003-2005 came before the Great Recession. And there are clear causal mechanisms, such as the search for yield, risk-taking, and the boom-bust in the housing market which were factors in the financial crisis.

Another point relates to the zero bound. Wasn’t that the reason that the central banks had to deviate from rules in recent years? Well, it was certainly not a reason in 2003-2005, and it is not a reason now, because the zero bound is not binding. It appears that there was a short period in 2009 when zero was clearly binding. But the zero bound is not a new thing in economics research. Policy rule design research took it into account long ago. The default was to move to a stable money growth regime, not to massive asset purchases.

Some argue that a rules-based policy is not enough anymore and that we need more international coordination. I believe the current spillovers are largely due to these policy deviations and to unconventional monetary policy. We heard complaints about the spillovers during the stop-go monetary policy of the 1970s. But during the 1980s and 1990s and until recently there were few such complaints. The evidence and theory indicate that rules-based policy brings about NICE results in both senses of the word.

Some argue that a rules-based policy for the instruments is not needed if you have goals for the inflation rate or other variables. They say that all you really need for effective policy making is a goal, such as an inflation target and an employment target. The rest of policymaking is doing whatever the policymakers think needs to be done with the policy instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the level implied by the rules-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, that is OK as long as you can justify it at the moment in terms of the goal.

This approach has been called “constrained discretion” by Ben Bernanke, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as a “rules versus discretion” dichotomy might suggest.  Simply having a specific numerical goal or objective is not a rule for the instruments of policy; it is not a strategy; it ends up being all tactics.  I think the evidence shows that relying solely on constrained discretion has not worked for monetary policy.

Some of the recent objections to a rules-based strategy sound like a revival of earlier debates. Larry Summers makes analogies with medicine, saying he would “rather have a doctor who most of the time didn’t tell me to take some stuff, and every once in a while said I needed to ingest some stuff into my body in response to the particular problem that I had. That would be a doctor whose [advice], believe me, would be less predictable.”

So, much like the proponents of discretion in earlier rules versus discretion debates (such as the famous one between Walter Heller and Milton Friedman), Summers argues in favor of relying on an all-knowing expert, a doctor who does not perceive the need for, and does not use, a set of guidelines.

But much of the progress in medicine over the years has been due to doctors using checklists. Experience shows that checklists are invaluable for preventing mistakes and getting good diagnoses and appropriate treatments. Of course doctors need to exercise judgment in implementing checklists, but if they start winging it or skipping steps the patients usually suffer. Checklist-free medicine is as bad as rules-free monetary policy.

Many say that macro-prudential policy of the countercyclical variety is an essential part of a monetary policy for the future. In my view it is more important to get required levels of capital and liquidity sufficiently high. We do not know enough about the impacts of cyclical movements in capital buffers to engage in fine tuning, and it puts the central bank in the middle of a very difficult political issue.

Some argue that we should have QE forever, leave the balance sheet bloated, and use interest on reserves or reverse repos to set the short term interest rate.  But the distortions caused by these massive interventions and the impossibility of such policy being rule-like indicate that QE forever should not be part of a monetary policy for the future. The goal should be to get the balance sheet back to levels where the demand and supply of reserves determine the interest rate. Of course, interest rates on reserves and reverse repos could be used during a transition. And a corridor system would work if the market interest rate was in between the upper and lower bands and not hugging one or the other.

Should forward guidance be part of a monetary policy for the future?  My answer is yes, but only if it is consistent with the rules-based strategy of the central bank, and then it is simply a way to be transparent.  If forward guidance is used to make promises for the future that will not be appropriate in the future, then it is time-inconsistent and should not be part of monetary policy.

For all these reasons monetary policy in the future should be centered on a rule or strategy for the policy instruments designed to achieve stated goals with consistent forward guidance but without cyclical macroprudential actions or quantitative easing.  

 

Posted in Monetary Policy

Was Janet Yellen Test Driving the Policy Rule Bill?

In a speech last week Fed Chair Janet Yellen made use of policy rules, and in particular the Taylor rule, to explain her views on normalizing policy. This comes on the heels of Fed Vice-Chair Stanley Fischer’s reference to the Taylor rule in a speech earlier in the week, two influential Bloomberg View columns by Clive Crook (here and here) making the case for the Fed to use such rules, the Shadow Open Market Committee’s unanimous recommendation to use rules that way, and continued discussion of a bill in Congress which would require the Fed to state its rule and compare it with a reference rule.

In fact, Janet Yellen’s discussion of how current and upcoming policy might differ from a reference rule (the Taylor rule) is not unlike what you might see if the policy rule legislation under consideration in Congress became law. So one can think of her discussion as a sort of test drive or trial run. If so, it raises a number of questions.

Let me first quote from the relevant section of Janet Yellen’s speech starting on page 7 and embedding an important explanatory footnote at the end:

“Even with core inflation running below the Committee’s 2 percent objective, Taylor’s rule now calls for the federal funds rate to be well above zero if the unemployment rate is currently judged to be close to its normal longer-run level and the “normal” level of the real federal funds rate is currently close to its historical average. But the prescription offered by the Taylor rule changes significantly if one instead assumes, as I do, that appreciable slack still remains in the labor market, and that the economy’s equilibrium real federal funds rate–that is, the real rate consistent with the economy achieving maximum employment and price stability over the medium term–is currently quite low by historical standards. Under assumptions that I consider more realistic under present circumstances, the same rules call for the federal funds rate to be close to zero…

“For example, the Taylor rule is Rt = RR* + πt + 0.5(πt -2) + 0.5Yt, where R denotes the federal funds rate, RR* is the estimated value of the equilibrium real rate, π is the current inflation rate (usually measured using a core consumer price index), and Y is the output gap. The latter can be approximated using Okun’s law, Yt = -2 (Ut – U*), where U is the unemployment rate and U* is the natural rate of unemployment. If RR* is assumed to equal 2 percent (roughly the average historical value of the real federal funds rate) and U* is assumed to equal 5-1/2 percent, then the Taylor rule would call for the nominal funds rate to be set a bit below 3 percent currently, given that core PCE inflation is now running close to 1-1/4 percent and the unemployment rate is 5.5 percent. But if RR* is instead assumed to equal 0 percent currently (as some statistical models suggest) and U* is assumed to equal 5 percent (an estimate in line with many FOMC participants’ SEP projections), then the rule’s current prescription is less than 1/2 percent.”

So the main argument is that if one replaces the equilibrium federal funds rate of 2% in the Taylor rule with 0%, then the recommended setting for the funds rate declines by two percentage points. The additional slack due to a lower natural rate of unemployment is much less important. But little or no rationale is given for slashing the equilibrium interest rate from 2% to 0%. She simply says “some statistical models suggest” it. In my view, there is little evidence supporting it, but this is a huge, controversial issue, deserving a lot of explanation and research, which I hope the Fed is doing or planning to do.
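Yellen’s two cases are easy to reproduce. Here is a minimal Python sketch of the footnote’s arithmetic (the function and variable names are mine, not the Fed’s):

```python
def taylor_rule(inflation, unemployment, rr_star, u_star):
    """Taylor rule as written in Yellen's footnote:
    R = RR* + pi + 0.5*(pi - 2) + 0.5*Y, with Y = -2*(U - U*).
    All arguments and the return value are in percentage points."""
    output_gap = -2.0 * (unemployment - u_star)
    return rr_star + inflation + 0.5 * (inflation - 2.0) + 0.5 * output_gap

# Yellen's baseline: RR* = 2, U* = 5.5, core PCE inflation 1.25, U = 5.5
print(taylor_rule(1.25, 5.5, rr_star=2.0, u_star=5.5))  # 2.875 -- "a bit below 3 percent"

# Her alternative: RR* = 0 and U* = 5
print(taylor_rule(1.25, 5.5, rr_star=0.0, u_star=5.0))  # 0.375 -- "less than 1/2 percent"
```

Note that of the 2.5-point drop between the two prescriptions, lowering RR* alone accounts for 2.0 points; the lower natural rate of unemployment accounts for only the remaining 0.5.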

If one can adjust the intercept term (that is, RR*) in a policy rule in a purely discretionary way, then it is not a rule at all anymore. It’s pure discretion. Sharp changes in the equilibrium interest rate need to be treated very carefully, as Andrew Levin pointed out recently.

Another important issue is that Janet Yellen does not argue here, as she has in previous speeches advocating a lower rate, that the coefficient on the output gap in the Taylor rule should be 1.0 rather than 0.5. The argument is different here, and no reason is given for dropping the old one. Perhaps the reason is that the gap is small now, so the coefficient on the gap does not make much difference. But this gives the appearance of changing the rule, or the inputs to the rule, to get a desired answer. I do not think that reason would go over well in Congressional testimony.
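The arithmetic behind that conjecture is simple: with the footnote’s numbers the output gap is only −1 percentage point, so doubling the gap coefficient moves the prescription far less than the change in RR* does. A quick sketch (the function and parameter names are mine):

```python
def taylor_variant(inflation, unemployment, rr_star, u_star, gap_coef=0.5):
    """Taylor-type rule with an adjustable output-gap coefficient."""
    output_gap = -2.0 * (unemployment - u_star)
    return rr_star + inflation + 0.5 * (inflation - 2.0) + gap_coef * output_gap

# Same inputs as Yellen's alternative case (RR* = 0, U* = 5), so the gap is -1
r_half = taylor_variant(1.25, 5.5, 0.0, 5.0, gap_coef=0.5)  # 0.375
r_one = taylor_variant(1.25, 5.5, 0.0, 5.0, gap_coef=1.0)   # -0.125

# Raising the gap coefficient from 0.5 to 1.0 moves the prescription by
# only 0.5 percentage point (0.5 * |gap|), versus the 2-point move from
# cutting RR* from 2 to 0.
print(r_half - r_one)  # 0.5
```

The difference between the two coefficients shrinks to zero as the gap closes, which is consistent with the coefficient mattering little when slack is small.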


Posted in Monetary Policy

Bernanke Says “The Fed Has a Rule.” But It’s Only Constrained Discretion and It Hasn’t Worked

In response to a question about the policy rules bill at Brookings recently, Ben Bernanke remarked that “The Fed has a rule.” His claim surprised quite a few people, especially given the Fed’s resistance to the policy rules bill, so he went on to explain: “The Fed’s rule is that we will go for a two percent inflation rate. We will go for the natural rate of unemployment.  We will put equal weight on those two things. We will give you information about our projection, our interest rates. That is a rule.”  But the rule that Bernanke has in mind is not a rule for the instruments of the kind that I and many others have been working on for years, or that Janet Yellen referred to in speeches over the years, or that Milton Friedman made famous.

Rather, the concept he has in mind is called “constrained discretion,” a term he coined long ago to distinguish it from the idea of a rule for the instruments, such as Milton Friedman’s, which he sharply criticized. Bernanke first used the term in a 1997 paper with Rick Mishkin and later in a 2003 speech shortly after joining the Fed board. In fact, it is a concept he has favored since before I first presented the Taylor rule.

The idea is that all you really need for effective policy making is a goal, such as an inflation target and an unemployment target. In medicine, it would be the goal of a healthy patient. The rest of policy making is doing whatever you as an expert, or you as an expert with models, think needs to be done with the policy instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005 after Bernanke joined the board, that is fine as long as you can justify it at the moment in terms of the goal.

“Constrained discretion” is an appealing term, and it may constrain discretion in some sense, but it does not induce or encourage a rule, as the language would have you believe. Simply having a specific numerical goal or objective function is not a rule for the instruments of policy; it is not a strategy; it ends up being all tactics, all discretion. Bernanke obviously likes the approach in part because he believes “the presumption that the Taylor rule is the right rule or the right kind of rule I think is no longer state of the art thinking.” There is plenty of evidence that relying solely on constrained discretion has in fact resulted in a huge amount of discretion, and that has not worked for monetary policy. David Papell and his colleagues have shown empirically that it is during periods of rules-based policy, rather than periods of so-called constrained discretion, that economic performance has been good.

Posted in Monetary Policy

Central Banks Without Rules Are Like Doctors Without Checklists

Recent proposals for policy rules legislation have led to a fascinating replay of issues that have long been at the heart of the rules versus discretion debate. Larry Summers raised one in a debate between him and me at the American Economic Association meetings in Philadelphia, and again at a conference at Stanford a week ago. Here is how Larry started in Philadelphia (from the transcript in the Journal of Policy Modeling, Vol. 36, Issue 4, 2014):

“John Taylor and I have, it will not surprise you…a fundamental philosophical difference, and I would put it in this way. I think about my doctor. Which would I prefer: for my doctor’s advice, to be consistently predictable, or for my doctor’s advice to be responsive to the medical condition with which I present? Me, I’d rather have a doctor who most of the time didn’t tell me to take some stuff, and every once in a while said I needed to ingest some stuff into my body in response to the particular problem that I had. That would be a doctor who’s [advice], believe me, would be less predictable.”

Like the proponents of discretion in earlier rules versus discretion debates (Keynes versus Hayek, Heller versus Friedman), Summers argues in favor of relying on the all-knowing expert: a doctor who does not perceive the need for, and does not use, a set of guidelines, but who once in a while, in an unpredictable way, says to ingest some stuff or does something else.

I expressed my concern that Larry’s non-rules-based doctor would recommend the wrong stuff saying: “You know it would be great to have the all-knowing doctor that is there in every particular situation and just continues to do the right thing. But we have a lot of experience with economic policies; it’s not just model simulations, it’s historical. You know when things have worked better, [is] when they’re more predictable, more rules-based. You have the Great Moderation period, very clear in that respect. But I think the main thing here is we have theory, we have facts, and [they suggest]…the dangers of too much discretion.”

Greater doubts about Larry’s analogy with doctors come from direct experience and facts about medical care, especially surgery. Indeed, much of the progress in medical care over the years is due to doctors using rules in the form of simple checklists, as described so well in Atul Gawande’s New Yorker article “The Checklist: If Something So Simple Can Transform Intensive Care, What Else Can It Do?” and in more detail in his book, The Checklist Manifesto. (Here is an interesting Daily Show interview about his book.) Simple checklists have proved invaluable for preventing mistakes and for getting good diagnoses and appropriate treatments. Of course doctors need to exercise judgment in implementing checklists, but if they start winging it or skipping steps the patients usually suffer. (Here’s a surgery demo.)

Practical experience and empirical studies show that checklist-free medical care is fraught with dangers, just as rules-free monetary policy is.

Posted in Monetary Policy, Teaching Economics