Central Banks Going Beyond Their Range

Economist John Eatwell of Cambridge and I published a joint letter in the Financial Times today. We argue that monetary policy is off track and that other policies are sorely needed. I said the same in a CNBC interview from Miami this morning. To be sure, the headline that the FT editors chose for our letter, “Monetarist tools have failed to lift economies,” should not be taken to mean that rules-based monetary policies of the kind that monetarists like Milton Friedman advocated have failed. Recent policies have been anything but rules-based. Here’s the letter.

Sir, You report that Mark Carney, governor of the Bank of England, told MPs that the BoE is prepared to cut interest rates further from their historic low of 0.25 per cent (“Carney leaves open chance of more UK rate cuts”, September 8).

This is unfortunate. In the face of overwhelming evidence that the exclusive reliance on monetary policy, both orthodox and unorthodox, has not only failed to secure a significant recovery of economic activity in the US, the UK or the eurozone, but is producing major distortions in financial markets, Mr Carney is promising yet more of the same.

The distortions created by policy include excessive asset price inflation, severe pressures on pension funds and a weakened banking system. The fundamental error derives from the exclusive role given to monetary policy.

This has led to loosening being pursued far beyond appropriate limits, the folly of negative interest rates being an extreme example. A balanced approach, with fiscal, regulatory, and tax reforms, would secure improved performance of the real economy and permit the return of a rational monetary policy.

Pursuing current policies is likely to result in serious instability as and when the monetary stance is adjusted and the distortions unwind. In addition to the negative impact on the recovery of the real economy, there will be collateral damage to central bankers’ reputations.

They are attempting the exclusive management of the economy with tools not up to the job, and, in consequence, central banks are pushed into being multipurpose institutions beyond their range of effective operation.

John Eatwell, President, Queens’ College, Cambridge, UK

John B Taylor, Former Under Secretary for International Affairs, US Treasury

Posted in Monetary Policy

Kocherlakota on the Fed and the Taylor Rule

The use of policy rules to analyze monetary policy has been a growing area of research for several decades, and the pace has picked up recently. Last month Janet Yellen presented a policy framework for the future centered around a Taylor rule, noting that the Fed has deviated from such a rule in recent years. A week later, her FOMC colleague, Jeff Lacker, also showed that the Fed has deviated from a Taylor rule benchmark, adding that now is the time to get back. Last week, the Mercatus Center and the Cato Institute hosted a conference with the theme that deviations from policy rules—including the Taylor rule discussed in my lunch talk—have caused serious problems in recent years. And this week former FOMC member Narayana Kocherlakota argued that the problem with monetary policy in recent years has not been that it has deviated from a Taylor rule but that it has been too close to a Taylor rule! Debating monetary issues within a policy rule framework is helpful, but Kocherlakota’s amazingly contrarian paper is wrong in a number of ways.

First, the paper ignores many of the advantages of policy rules discovered over the years, and focuses only on time inconsistency and inflation bias. I listed the many other advantages in my comment on the first version of Kocherlakota’s paper with the same title: “Rules Versus Discretion: A Reconsideration.” Research by me and others on policy rules preceded the time inconsistency research, and, contrary to Kocherlakota’s claim, the Taylor rule was derived from monetary theory as embodied in estimated models, not from regressions or curve fitting during particular periods. (Here is a review of the research.)
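For reference, the benchmark at issue throughout this debate is the original 1993 rule, which sets the federal funds rate as

$$ i_t = \pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t + 2, $$

where $i_t$ is the federal funds rate in percent, $\pi_t$ is inflation over the previous four quarters, $y_t$ is the percent deviation of real GDP from potential, and the two constants reflect a 2 percent inflation target and a 2 percent equilibrium real interest rate.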

Second, Kocherlakota ignores much historical and empirical research showing that the Fed deviated from Taylor-type rules in recent years, both before and after the crisis, including the work by Meltzer and by Nikolsko-Rzhevskyy, Papell, and Prodan.

Third, he rests his argument on an informal and judgmental comparison of the Fed staff’s model simulations and a survey of FOMC members’ future interest rate predictions at two points in time (2009 and 2010). He observes that the Fed staff’s model simulations for future years were based on a Taylor rule, and FOMC participants were asked, “Does your view of the appropriate path for monetary policy [or interest rates in 2009] differ materially from that [or the interest rate in 2009] assumed by the staff?” However, a majority (20 out of 35) of the answers were “yes,” which hardly sounds like the Fed was following the Taylor rule. Moreover, these are estimates of future decisions, not actual decisions, and the actual decisions turned out to be much different from the forecasts.

Fourth, he argues that the FOMC’s reluctance to use more forward guidance “seems in no little part due to its unwillingness to commit to a pronounced deviation from the prescriptions of its pre-2007 policy framework – that is, the Taylor Rule.” To the contrary, however, well-known work by Reifschneider and Williams had already shown how forward guidance is perfectly consistent with the use of a Taylor rule with prescribed deviations. I would also note that there is considerable evidence that the Fed significantly deviated from its Taylor rule framework in 2003-2005.

Fifth, Kocherlakota argues that the Taylor rule is based on interest rate smoothing in which weight is put on an “interest rate gap.” He argues that this slows down adjustments to inflation and output. But that is not how the rule was originally derived, and more recent work by Ball and Woodford deriving the Taylor rule in simple models does not have such a weight on interest rate gaps.
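To make the distinction concrete, a generic smoothing specification (my stylized notation, not a formula taken from Kocherlakota’s paper) puts weight on the lagged interest rate:

$$ i_t = \rho\, i_{t-1} + (1-\rho)\left[\, r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\, y_t \,\right], $$

so each period the funds rate closes only a fraction of the gap between where it is and where the rule says it should be. The original 1993 rule is the case $\rho = 0$: no weight on the lagged rate, and hence no built-in slowing of the response to inflation and output.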

The last part of Kocherlakota’s paper delves into the classic rules versus discretion debate. Here he mistakenly assumes that rules-based policy must be based on a mathematical formula, and this leads him to advocate pure discretion and thereby to object to recent policy rules legislation such as the FORM Act, which recently passed the House of Representatives. However, as I explain in my 1993 paper and in critiques of the critiques of the recent legislation, a monetary policy rule need not be mechanical in practice.

Posted in Monetary Policy

Novel Research on Elections, Policymaking, Economic Uncertainty

The Becker Friedman Institute of the University of Chicago and the Hoover Institution of Stanford University teamed up yesterday to put on a Conference on Elections, Policymaking, and Economic Uncertainty. The conference was held at the Hoover Institution offices in Washington, D.C., and Steve Davis, Lars Hansen, and I organized it. The aim was to combine path-breaking research with in-depth discussions of policy, including a panel with Alan Greenspan, Chris DeMuth, and Steve Davis, which I moderated.

This is an interesting, quick-moving field with many new analytical techniques and “big data” developments. The complete conference agenda with links to the papers and commentary can be found on this web site, but here’s a summary of the findings and policy implications:

Mike Bordo started off by showing that there is a large and statistically significant negative impact of economic policy uncertainty—as measured by the Bloom-Baker-Davis EPU index—on the growth of bank credit in the United States.  His research (joint with John Duca and Christoffer Koch) explains much of the slow credit growth in recent years when policy uncertainty has been elevated.  The lead discussant, Charlie Calomiris, provided a simple model of bank lending to explain and interpret the empirical findings.  Bordo’s policy suggestion to reduce uncertainty and thereby increase growth is to strive for more predictable rule-like economic policy.

Moritz Schularick then described a fascinating new historical data set that he created along with his colleagues C. Trebesch and M. Funke on financial crises and subsequent election results over many decades. By assigning numerical codes to each historical event, their paper showed that financial crises, including the recent global financial crisis (GFC), led to gains by political parties on the right. Jesus Fernandez-Villaverde argued, however, with specific reference to developments in Greece, Italy, Ireland, and especially Spain, that the main political gains following the recent crisis have been more balanced and, if anything, have shifted to the left. The discrepancy might be due to coding conventions used in the paper, a point that other commentators also noted in reference to determining what was a financial crisis and what was not.

Next came Hannes Mueller, who focused on the role of politics and democratic institutions in international development. He showed (based on work with Timothy Besley) that there is a clear positive relationship between the degree of constraints (legislative or judicial) on the chief executive in a country and the amount of foreign investment flowing into the country. The discussant, Youngsuk Yook of the Federal Reserve Board, raised identification issues about how this policy measure stands up against other economic policy measures. I too wondered how this measure compares with the 17 different indicators of economic policy used by the US Millennium Challenge Corporation.

Tarek Hassan presented an amazing new data set that he has constructed (with Stephan Hollander, Laurence van Lent, and Ahmed Tahoun). Starting with transcripts of earnings reports from corporations, Hassan showed how they used novel text analysis and processing techniques to extract the political references and concerns raised by the businesses. They then examined whether these concerns translated into economic impacts on firms’ decisions and found, remarkably, that they did. There was considerable discussion of the novel methodology for choosing political bi-grams (two-word combinations) and how, for example, it compared with the methods of Bloom, Baker (who was the discussant), and Davis. Baker also emphasized the importance of new researchers learning Perl and Python, the essential programming languages needed for work in this area.
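To give a flavor of how such a text-based measure can be built (this is only an illustrative sketch, with a made-up word list and sample sentence, not the authors’ actual procedure), counting political bi-grams in a transcript can be as simple as this:

```python
import re
from collections import Counter

# Hypothetical mini vocabulary of "political" words; the actual research builds
# its list of political bi-grams from large training corpora, not a hand-picked set.
POLITICAL_TERMS = {"regulation", "congress", "tariff", "election", "policy"}

def political_bigram_counts(transcript: str) -> Counter:
    """Count two-word combinations in which at least one word looks political."""
    words = re.findall(r"[a-z']+", transcript.lower())
    bigrams = zip(words, words[1:])
    return Counter(bg for bg in bigrams
                   if bg[0] in POLITICAL_TERMS or bg[1] in POLITICAL_TERMS)

# Toy example on a made-up sentence from an earnings transcript.
sample = ("Management noted that the new tariff policy and the coming "
          "election create considerable uncertainty for capital spending.")
print(political_bigram_counts(sample).most_common(5))
```

The research version, as I understand it, scales this idea to thousands of transcripts and weights bi-grams by how strongly they are associated with political rather than non-political text, which is where much of the methodological discussion at the conference centered.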

The final paper of the day was presented by Kaveh Majlesi, who focused on the economic influences on political developments in the United States over the period from 2002 to 2010. He presented results supporting the view that trade-related employment shocks from China imports affected political polarization in the United States: there were correlated moves to the political left and to the right during this period, along with varying impacts of China imports on different parts of the country. Nolan McCarty presented a fascinating alternative explanation. He argued that there has been a much longer trend unrelated to trade, and that the recent movements were part of a shorter swing toward Democrats in 2006 and then Republicans in 2010 and 2014. Economists and political scientists will be trying to sort this one out for a long time.

The concluding panel discussion with me, Alan Greenspan, Chris DeMuth, and Steve Davis focused first on some very disturbing current economic problems and second on possible political solutions. Greenspan started off by stating his grave concerns about the direction of economic policy in the U.S., emphasizing that it is not a new trend and tracing the development way back to the 1896 presidential election. Chris DeMuth gave an alarming discussion of the increased power of executive branch regulatory agencies, and Steve Davis showed how a new index of global economic policy uncertainty has been going the wrong way for a while. The solutions (in a nutshell) were to control the explosion of government entitlement spending (Greenspan), reestablish constraints on government agencies (DeMuth), and form a commission to estimate credibly the costs and benefits of regulatory proposals (Davis).

Few disputed the proposed solutions, but many wondered aloud how such reforms could be accomplished in the current political environment.  All agreed that research of the kind presented at the all-day conference was necessary to achieve progress in practice.

Posted in Financial Crisis, Fiscal Policy and Reforms, Regulatory Policy, Stimulus Impact

Think Again and Again About the Natural Rate of Interest

In a recent Wall Street Journal piece, “Think You Know the Natural Rate of Interest? Think Again,” James Mackintosh warns about the high level of uncertainty in recent estimates of the equilibrium interest rate—commonly called r* or the natural rate—that are being factored into monetary policy decisions by the Fed. See, for example, the discussion by Fed Chair Janet Yellen. Mackintosh’s argument is simple. He takes the confidence intervals (uncertainty ranges) from the recent study by Kathryn Holston, Thomas Laubach, and John Williams, which, as in an earlier paper by Laubach and Williams, finds that the estimate of the equilibrium rate has declined. Here is a chart from his article showing the wide range of uncertainty.

[Chart from Mackintosh’s Wall Street Journal article: the estimated natural rate of interest with its wide confidence interval]

Mackintosh observes that the confidence intervals are very wide, “big enough to drive a truckload of economists through,” and then concludes that they are so large that “it’s entirely plausible that the natural rate of interest hasn’t moved at all.”

This uncertainty should give policy makers pause before they jump on the low r* bandwagon, but there is even more reason to think again: the uncertainty of the r* estimates is actually larger than reported in the Mackintosh article because it does not incorporate uncertainty about the models used.

In a recent paper Volker Wieland and I show that the models used by Holston, Laubach and Williams and others showing a declining r* omit important variables and equations relating to structural policy and monetary policy. We show that there is a perfectly reasonable alternative explanation of the facts if one expands the model to include these omitted factors. Rather than concluding that the real equilibrium interest rate r* has declined, there is an alternative model in which economic policy has shifted, either in the form of reversible regulatory and tax policy or in the form of monetary policy. Moreover, we show that there is empirical evidence in favor of this explanation.
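To see where the model dependence comes in, recall the basic structure of this class of estimates, written here in stylized form rather than the exact specification of any of these papers:

$$ \tilde y_t = a(L)\,\tilde y_{t-1} - b\,\big(r_{t-1} - r^*_{t-1}\big) + \varepsilon_t, \qquad r^*_t = c\,g_t + z_t, $$

where $\tilde y_t$ is the output gap, $r_t$ the real short-term interest rate, $g_t$ trend growth, and $z_t$ an unobserved catch-all component. Anything omitted from the output-gap equation, such as shifts in regulatory, tax, or monetary policy, gets absorbed into the estimate of $z_t$ and hence into $r^*$; that is the sense in which the declining-r* conclusion depends on the model rather than simply on the data.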

Another recent paper, “Reflections on the Natural Rate of Interest, Its Measurement, Monetary Policy and the Zero Bound” (CEPR Discussion Paper), by Alex Cukierman reaches similar conclusions. He identifies specific items that could have shifted the relationship between the interest rate and macro variables, including credit problems that affect investment. He also looks at specific factors that shifted the policy rule, such as central bank concerns with financial stability. His paper is complementary to ours, and he also usefully distinguishes between risky and risk-free rates, a difference ignored in most calculations.

Overall, the Wieland-Taylor and Cukierman papers show that estimates of r* are far too uncertain to incorporate into policy rules in the ways that have been suggested. Nevertheless, it is promising that Chair Yellen and her colleagues are approaching the r* issue through the framework of monetary policy rules. Uncertainty in the equilibrium real rate is not a reason to abandon rules in favor of discretion.

Posted in Monetary Policy

Jackson Hole XXXV

Everyone keeps asking about this year’s Jackson Hole monetary conference and how it compared with the first. Well, I wrote about the first on my way to this conference, and I have to say the thirty-fifth lived up to its billing as “monetary policy frameworks for the future.” One speaker after another proposed new frameworks—some weird, some not so weird—while discussants critiqued, and central bankers and economists debated from the floor and, later, on the hiking trails. Steve Liesman’s scary but unsurprising CNBC pre-conference survey demonstrated the timeliness of the topic: 60% of Fed watchers don’t think the Fed even has a policy framework.

Fed Chair Janet Yellen led off. Most of the media commentary—including TV interviews—focused on whether or not she signaled an increased probability of an interest rate rise. But mostly she talked about the theme of the conference: a monetary policy framework for the future. The framework that she put forward centered on a policy rule for the interest rate—a Taylor rule with an equilibrium rate of 3 percent—and in this sense the framework is rules-based, with well-known advantages over discretion. Research at the Fed indicates that the rule would work pretty well without an inflation target higher than 2% or a mechanism for negative interest rates. But for an extra-deep recession in which the zero lower bound might bind for a long time, Chair Yellen suggested adding two things to the rule.

First, she would add some “forward guidance” under which the Fed would say that the interest rate will stay at zero for a time after the rule recommends a positive rate, following a very deep recession during which the rule would otherwise call for a negative rate. The forward guidance would be time consistent with the actual policy, so in this framework the Fed would not be saying one thing and doing another. You can see this in the upper left panel of this chart, which was part of her presentation.

[Chart from Chair Yellen’s presentation; the upper left panel shows the simulated federal funds rate path]

Note that the federal funds rate with forward guidance is actually quite close to the constrained rule without forward guidance. This framework essentially follows the suggestion of Reifschneider and Williams (1999) to embed the Taylor rule into a “mega rule,” so in this respect, too, the framework is rules-based.
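A stylized version of that mega rule (my notation, intended only to convey the idea rather than the exact Reifschneider-Williams specification) keeps track of how much the zero bound has forced the funds rate above the unconstrained rule and makes it up later:

$$ i_t = \max\Big\{0,\; i^{TR}_t - \alpha \sum_{s<t}\big(i_s - i^{TR}_s\big)\Big\}, $$

where $i^{TR}_t$ is the unconstrained Taylor rule prescription. When the zero bound binds, the accumulated shortfall term grows, which keeps the rate at zero for a time even after $i^{TR}_t$ turns positive; the lower-for-longer path is generated by a rule rather than by pure discretion.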

Second, Chair Yellen would augment the policy rule with a massive ($2 trillion) quantitative easing in an effort to bring long-term interest rates down. Here her chart suggests big effects on long-term interest rates, though the empirical evidence for this is very weak. This is a “QE forever” framework. It would require a large balance sheet going forward, with the funds rate determined administratively by setting interest on excess reserves and with the size of the quantitative easing determined in a discretionary rather than a rule-like fashion. The chart indicates only a small improvement in the unemployment rate, and there is a danger that such a discretionary policy could itself help cause instability and high unemployment, as in the Great Recession. It is good, however, that the discretion is measured relative to a policy rule, with an implicit understanding of a return to the rule.

Many other speakers talked about the size of the central bank’s balance sheet, and views were all over the place. Ulrich Bindseil of the ECB argued for eventually returning to a “lean” balance sheet. This is a good goal because the central bank would then remain a limited-purpose institution, which is appropriate for an independent agency of government. Simon Potter of the New York Fed argued for a large balance sheet so the Fed would have more room for interventions. Jeremy Stein, Robin Greenwood, and Samuel Hanson made a good case for more short-term Treasuries to satisfy a liquidity demand, but a much less convincing case that a large balance sheet at the Fed, rather than additional Treasury issuance, was the way to achieve this. In his lunchtime talk Chris Sims also noted the problem with the Fed having a large footprint extending into fiscal policy, where an independent agency of government is not appropriate. My Stanford colleagues Darrell Duffie and Arvind Krishnamurthy warned about diminished pass-through from policy rates to other interest rates in the current regulatory environment with supplementary liquidity and capital requirements; they did not see the pass-through being any faster or more complete with a lean balance sheet.

Ricardo Reis argued in favor of a balance sheet more bloated than the lean proposal of Bindseil but not as big as the current one in the US, with the Fed following a Taylor rule by changing interest on reserves. Benoît Coeuré of the ECB spoke about the continued QE and growing balance sheet at the ECB; while recognizing international spillovers, he argued that the policy was working. Haruhiko Kuroda of the Bank of Japan made the case for QE as well, though the economic impact in Japan is hard to find. The sole representative of emerging markets on the program, Agustín Carstens, Governor of the Bank of Mexico, made the case for a classic inflation targeting framework, and showed that it was working just fine in Mexico.

Negative interest rates were also a frequent topic. Marvin Goodfriend made the case with a simple neoclassical model and suggestions for dealing with cash, and Miles Kimball intervened several times to support the case. Kuroda showed that the BOJ’s recent sub-zero foray had a large effect on the long-term rate and gave a good explanation why, but he lamented the lack of an effect on the Japanese economy. However, Chair Yellen and other Fed participants showed little interest at this time, which made sense to me.

A Framework that Works

One monetary policy framework that was not on the program for Jackson Hole XXXV (but was emerging at Jackson Hole I) was the framework in operation during the 1980s, 1990s, and until recently—during Volcker’s and much of Greenspan’s time as chair. It was a monetary policy framework that worked, according to historians and econometricians, until the Fed went off it and things did not work so well. That suggests it would be a good candidate for a future framework.

Posted in Monetary Policy

A Less Weird Time at Jackson Hole?

I’m on my way to join the world’s central bankers at Jackson Hole for the 35th annual monetary-policy conference in the Grand Teton Mountains. I attended the first monetary-policy conference there in 1982, and I may be the only person to attend both the 1st and the 35th. I know the Tetons will still be there, but virtually everything else will be different. As the Wall Street Journal front-page headline screamed out on Monday, central bank “Stimulus Efforts Get Weirder.” I’m looking forward to it.

Paul Volcker chaired the Fed in 1982. He went to Jackson Hole, but he was not on the program to give the opening address, and no one was speculating on what he might say. No other Fed governors were there, nor governors of any other central bank. In contrast, this year many central bankers will be there, including from emerging markets. Only four reporters came in 1982 — William Eaton (LA Times), Jonathan Fuerbringer (New York Times), Ken Bacon (Wall Street Journal) and John Berry (Washington Post). This year there will be scores. And there were no television people to interview central bankers in 1982 (with the awesome Grand Teton as backdrop).

It was clear to everyone in 1982 that Volcker had a policy strategy in place, so he didn’t need to use Jackson Hole to announce new interventions or tools. The strategy was to focus on price stability and thereby get inflation down, which would then restore economic growth and reduce unemployment. Some at the meeting, such as Nobel Laureate James Tobin, didn’t like Volcker’s strategy, but others did. I presented a paper  at the 1982 conference which supported the strategy.

The federal funds rate was over 10.1% in August 1982, down from 19.1% the previous summer. Today the policy rate is 0.5% in the U.S. and negative in the Eurozone, Japan, Switzerland, Sweden, and Denmark. There will be a lot of discussion about the impact of these unusual central bank policy rates, as well as the unusual large-scale purchases of corporate bonds and stocks, and of course the possibility of helicopter money and other new tools, some of which greatly expand the scope of central banks.

I hope there is also a discussion of less weird policy, and in particular about the normalization of policy and the benefits of normalization. In fact, with so many central bankers from around the world at Jackson Hole, it will be an opportunity to discuss the global benefits of recent proposals to return to a rules-based international monetary system along the lines that Paul Volcker has argued for.

 

Posted in International Economics, Monetary Policy

CBO’s New Way to Evaluate Fiscal Consolidation Plans

In its recently released budget outlook, the Congressional Budget Office projects that this year’s federal deficit will increase by 35% from last year to $590 billion, and that the debt will rise from $14 trillion to $23 trillion by 2026, or from 77% to 86% of GDP. Clearly it’s time for a fiscal consolidation plan.

Yet we’re not hearing about any such plan on the campaign trail. If anything, candidates are proposing more, not less, federal government spending, because many people think reducing the deficit is bad for the economy, even though modern economic models show this need not be the case. It would help the political debate if CBO would employ such state-of-the-art models, as I recommended here based on research with John Cogan, Volker Wieland and Maik Wolters.

Actually, the CBO might be getting closer to such a recommendation. This year the CBO estimated the impact of a fiscal consolidation plan proposed by the House Budget Committee. The plan, which was used in developing a budget resolution, would reduce the deficit mainly by reducing non-interest spending as a share of GDP compared to the baseline (from 22% to 17% by the year 2040), while increasing revenues as a share of GDP by a much smaller amount (0.2 percentage points). Here is a graph showing the multi-year plan for non-interest spending (labeled Budget Resolution); it is unclear how much is discretionary versus mandatory spending, but based on the 2016 budget resolution (see summary here) it is mainly mandatory.

[Graph: non-interest spending as a share of GDP under the budget resolution versus the baseline]

The CBO estimated the impact of this multi-year fiscal consolidation plan by combining a short-run Keynesian modeling approach with a long-run growth modeling approach. The Keynesian approach captures demand-side effects while the growth model captures supply-side effects due to changes in the capital stock and labor supply. As far as I can tell neither model is forward-looking, but at least CBO’s approach to combining the models is clearly specified: estimated output effects for the 1st, 2nd, 3rd, and 4th years of the consolidation plan have weights of 1.00, 0.75, 0.50, and 0.25, respectively, on the short-run model, with the remaining weight on the long-run model. Estimates for the fifth year and beyond are based entirely on the long-run model. (See the CBO report for more information.) Obviously the weights are rather arbitrary, and I would prefer a single model which combines these effects in a consistent way and takes account of incentive effects. But at least it is a step in the right direction.
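To make the weighting scheme concrete, here is a minimal sketch in Python; the two effect paths are made-up placeholders rather than CBO numbers, and only the weights come from the CBO description above.

```python
# Illustrative sketch of CBO's blending rule for the consolidation plan's output
# effects. The effect paths below are placeholders, not CBO estimates.
short_run_effect = [-0.4, -0.3, -0.2, -0.1, 0.0, 0.0]  # Keynesian (demand-side) model
long_run_effect  = [ 0.1,  0.3,  0.5,  0.7, 0.9, 1.1]  # growth (supply-side) model

short_run_weights = [1.00, 0.75, 0.50, 0.25]  # years 1-4; zero from year 5 onward

blended = []
for year, (sr, lr) in enumerate(zip(short_run_effect, long_run_effect)):
    w = short_run_weights[year] if year < len(short_run_weights) else 0.0
    blended.append(w * sr + (1.0 - w) * lr)

print(blended)  # year-by-year output effect, percent deviation from baseline
```

Note that under this scheme any short-run drag fades out mechanically by the fifth year, which is one reason a single forward-looking model that determines the transition endogenously would be preferable.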

The results in terms of real GNP per person are shown here:

[Graph: projected real GNP per person under the budget resolution and under the baseline]

and the percentage change from the baseline is shown here:

[Graph: percentage change in real GNP per person relative to the baseline]

According to the CBO estimates, there is a negative demand-side effect in the short run, but it is quite small, especially compared with the larger and continuing longer-run supply-side effects. Recall that in the Cogan, Taylor, Wieland, Wolters model, the short-run effects are all positive, due largely to expectation effects. In any case, even the CBO finds the overall impact of fiscal consolidation to be, on balance, very positive for economic growth.

 

Posted in Budget & Debt, Fiscal Policy and Reforms