The Statistical Analysis of Policy Rules

My teacher, colleague, and good friend Ted Anderson died this week at the age of 98.  Ted was my Ph.D. thesis adviser at Stanford in the early 1970s, and later a colleague when I returned to teach at Stanford in the 1980s. He was a most important influence on me and my research. He taught me an enormous amount about time series analysis, and about how to prove formal theorems in mathematics.  I am grateful to him for sharing his wisdom and for endeavoring to instill his rigorous approach to econometric research. His lectures were clear and insightful, but it was from interacting with him in his office or in the Econometrics Seminar that one really learned time series analysis.

The Stanford Econometrics Seminar in the early 1970s was an amazingly fertile place for developing new ideas.  Fortunately for me, the seminar focused on several new books which explored formally the problem of optimal economic policy formulation in statistical or probabilistic settings: Peter Whittle’s Prediction and Regulation, Masanao Aoki’s Optimization of Stochastic Systems, and Box and Jenkins’ Time Series Analysis: Forecasting and Control. We each presented and critiqued chapters from these books and freely discussed possible extensions and implications. It was a terrific way to learn about the latest opportunities for pushing research frontiers.

The title of this post is a variation on the title of Ted Anderson’s 1971 classic book, The Statistical Analysis of Time Series, which was the first textbook I used to study time series analysis. My Ph.D. thesis was on policy rules, and in particular on the “joint estimation and control” problem. The problem was to find an optimal economic policy in a model in which one does not know the parameters and therefore has to estimate and control the system simultaneously.

An unresolved issue was how much experimentation should be incorporated into the policy.  Could one find an optimal way to move the policy instrument around in order to learn more about the model or its parameters? While such experimentation would be costly in the short run, the improved information would pay off in the future with better settings for the instruments.

At that time everyone was approaching this problem with dynamic programming and optimal control, as in Aoki’s book and, in a special case, in Ed Prescott’s Ph.D. thesis at Carnegie Mellon, which appeared in Econometrica in 1972.  This approach was very difficult to apply in any reasonably realistic setting because of the curse of dimensionality of the backward induction method of dynamic programming.

Confronting these difficulties at Stanford in the early 1970s, we looked for a different approach to the problem.  The approach was much closer to the methods of classical mathematical statistics as in Ted’s book.  The idea was to start by proposing a rule for setting the instruments, perhaps using some heuristic method, and then examine how the rule performed using standard statistical criteria adapted to the control rather than estimation problem. The rules were contingency plans that described how the instruments would evolve over time.  The criteria for evaluating the rules stressed consistency and speed of convergence.
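To give a concrete flavor of this approach, here is a minimal, purely illustrative Python sketch (my hypothetical setup, not the actual model from the thesis or the later papers with Ted): the policymaker controls an instrument x in a simple regression with unknown coefficients, re-estimates the coefficients by least squares each period, and sets the instrument by certainty equivalence to push the target variable toward zero. Simulating such a rule is one way to pose the consistency and speed-of-convergence questions described above.

```python
# A minimal, purely illustrative sketch (made-up numbers): least-squares learning
# combined with a certainty-equivalence control rule for y_t = alpha + beta*x_t + e_t.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma = 1.0, -0.5, 0.2   # true parameters, unknown to the policymaker
xs, ys = [], []

def estimate(xs, ys):
    """Ordinary least squares estimates of (alpha, beta) from the data so far."""
    X = np.column_stack([np.ones(len(xs)), xs])
    coef, *_ = np.linalg.lstsq(X, np.array(ys), rcond=None)
    return coef[0], coef[1]

for t in range(200):
    if t < 5:
        x = float(t)                  # a few preset trial settings to start the learning
    else:
        a_hat, b_hat = estimate(xs, ys)
        b_hat = b_hat if abs(b_hat) > 1e-3 else -1e-3   # guard against dividing by ~0
        x = -a_hat / b_hat            # certainty equivalence: set the predicted target to zero
    y = alpha + beta * x + sigma * rng.standard_normal()
    xs.append(x)
    ys.append(y)

a_hat, b_hat = estimate(xs, ys)
print(f"estimates after 200 periods: alpha_hat = {a_hat:.2f}, beta_hat = {b_hat:.2f}")
```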

In my Ph.D. thesis as well as in a series of later papers with Ted, various convergence theorems were stated and proved. Later, statisticians T.L. Lai and Herbert Robbins proved more theorems that established desirable speeds of convergence in more general nonlinear settings.

The main finding from that research was that following a simple rule without special experimentation features was a good approximation. That made future work much simpler because it eliminated a great deal of complexity. This same basic approach has been used to develop recommendations for the actual conduct of monetary policy in the years since then, especially in the 1980s and 1990s.  The interaction between actual policy decisions and the recommendations of such policy rules provides ways to learn about how policy works.  In reality, of course, gaps develop between actual monetary policy and recommended policy rules based on economic and statistical theory. By focusing on these gaps one can learn more about what works and what doesn’t in the spirit of the approach to policy evaluation developed in that research with Ted at Stanford in the early 1970s.

Posted in Teaching Economics

Central Banks Going Beyond Their Range

Economist John Eatwell of Cambridge and I published a joint letter in the Financial Times today. We argue that monetary policy is off track and that other policies are sorely needed. I said the same in a CNBC interview from Miami this morning. To be sure, the headline that the FT editors chose for our letter “Monetarist tools have failed to lift economies” should not be taken to mean that rules-based monetary policies of the kind that monetarists like Milton Friedman advocated have failed. Recent policies have been anything but rules-based. Here’s the letter.

Sir, You report that Mark Carney, governor of the Bank of England, told MPs that the BoE is prepared to cut interest rates further from their historic low of 0.25 per cent (“Carney leaves open chance of more UK rate cuts”, September 8).

This is unfortunate. In the face of overwhelming evidence that the exclusive reliance on monetary policy, both orthodox and unorthodox, has not only failed to secure a significant recovery of economic activity in the US, the UK or the eurozone, but is producing major distortions in financial markets, Mr Carney is promising yet more of the same.

The distortions created by policy include excessive asset price inflation, severe pressures on pension funds and a weakened banking system. The fundamental error derives from the exclusive role given to monetary policy.

This has led to loosening being pursued far beyond appropriate limits, the folly of negative interest rates being an extreme example. A balanced approach, with fiscal, regulatory, and tax reforms, would secure improved performance of the real economy and permit the return of a rational monetary policy.

Pursuing current policies is likely to result in serious instability as and when the monetary stance is adjusted and the distortions unwind. In addition to the negative impact on the recovery of the real economy there will be collateral damage to central bankers’ reputation.

They are attempting the exclusive management of the economy with tools not up to the job, and, in consequence, central banks are pushed into being multipurpose institutions beyond their range of effective operation.

John Eatwell, President, Queens’ College, Cambridge, UK

John B Taylor, Former Under Secretary for International Affairs, US Treasury

Posted in Monetary Policy

Kocherlakota on the Fed and the Taylor Rule

The use of policy rules to analyze monetary policy has been a growing area of research for several decades, and the pace has picked up recently. Last month Janet Yellen presented a policy framework for the future centered on a Taylor rule, noting that the Fed has deviated from such a rule in recent years.  A week later, her FOMC colleague, Jeff Lacker, also showed that the Fed has deviated from a Taylor rule benchmark, adding that now is the time to get back to it.  Last week, the Mercatus Center and the Cato Institute hosted a conference with the theme that deviations from policy rules—including the Taylor rule discussed in my lunch talk—have caused serious problems in recent years.  And this week former FOMC member Narayana Kocherlakota argued that the problem with monetary policy in recent years has not been that it has deviated from a Taylor rule but that it has been too close to a Taylor rule! Debating monetary issues within a policy rule framework is helpful, but Kocherlakota’s amazingly contrarian paper is wrong in a number of ways.

First, the paper ignores many of the advantages of policy rules discovered over the years and focuses only on time inconsistency and inflation bias. I listed the many other advantages in my comment on the first version of Kocherlakota’s paper with the same title: “Rules Versus Discretion: A Reconsideration.” Research by me and others on policy rules preceded the time inconsistency research, and, contrary to Kocherlakota’s claim, the Taylor rule was derived from monetary theory as embodied in estimated models, not from regressions or curve fitting during particular periods. (Here is a review of the research.)
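For reference, the benchmark being debated throughout this post is a simple formula. Here is a minimal sketch of the original 1993 rule; the coefficients and the 2 percent values for the equilibrium real rate and the inflation target are those of the 1993 paper, while the example inputs below are made up.

```python
def taylor_rule_1993(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap), all in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Hypothetical inputs: inflation at 1.5 percent, output 1 percent below potential.
print(taylor_rule_1993(inflation=1.5, output_gap=-1.0))  # prints 2.75
```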

Second, Kocherlakota ignores much historical and empirical research showing that the Fed deviated from Taylor-type rules in recent years both before and after the crisis, including the work by Meltzer and by Nikolsko-Rzhevskyy, Papell, and Prodan.

Third, he rests his argument on an informal and judgmental comparison of the Fed staff’s model simulations and a survey of future interest rate predictions of FOMC members at two points in time (2009 and 2010).   He observes that the Fed staff’s model simulations for future years were based on a Taylor rule, and FOMC participants were asked, “Does your view of the appropriate path for monetary policy [or interest rates in 2009] differ materially from that [or the interest rate in 2009] assumed by the staff?”  However, a majority (20 out of 35) of the answers were “yes,” which hardly sounds like the Fed was following the Taylor rule. Moreover, these are forecasts of future decisions, not actual decisions, and the actual decisions turned out to be much different from the forecasts.

Fourth, he argues that the FOMC’s reluctance to use more forward guidance “seems in no little part due to its unwillingness to commit to a pronounced deviation from the prescriptions of its pre-2007 policy framework – that is, the Taylor Rule.” To the contrary, however, well-known work by Reifschneider and Williams had already shown how forward guidance is perfectly consistent with the use of a Taylor rule with prescribed deviations.  I would also note that there is considerable evidence that the Fed significantly deviated from its Taylor rule framework in 2003-2005.

Fifth, Kocherlakota argues that the Taylor rule is based on interest rate smoothing in which weight is put on an “interest rate gap.” He argues that this slows down adjustments to inflation and output. But that is not how the rule was originally derived, and more recent work by Ball and Woodford deriving the Taylor rule in simple models does not have such a weight on interest rate gaps.
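To see what is at stake in this point, here is the generic way a smoothing term, or weight on the lagged interest rate, is typically added to a rule of this type. The smoothing parameter rho below is purely illustrative; setting rho to zero removes the weight on the lagged rate, which is the form of the original rule.

```python
def inertial_rule(lagged_rate, inflation, output_gap, rho=0.85,
                  r_star=2.0, pi_star=2.0):
    """Generic inertial rule: i_t = rho*i_{t-1} + (1 - rho)*(rule prescription).
    The weight rho on the lagged rate slows the adjustment of the rate;
    rho = 0 gives the non-inertial prescription with no such weight."""
    prescription = r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap
    return rho * lagged_rate + (1.0 - rho) * prescription

# Hypothetical example: lagged rate 0.5 percent, inflation 1.5 percent, output gap -1 percent.
print(inertial_rule(0.5, 1.5, -1.0))           # moves only part way toward 2.75
print(inertial_rule(0.5, 1.5, -1.0, rho=0.0))  # prints 2.75, the non-inertial value
```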

The last part of Kocherlakota’s paper delves into the classic rules versus discretion debate. Here he mistakenly assumes that rules-based policy must be based on a mathematical formula, and this leads him to advocate pure discretion and thereby object to recent policy rules legislation, such as the FORM Act that recently passed the House of Representatives. However, as I explain in my 1993 paper and in critiques of the critiques of the recent legislation, a monetary policy rule need not be mechanical in practice.

Posted in Monetary Policy

Novel Research on Elections, Policymaking, Economic Uncertainty

The Becker Friedman Institute of the University of Chicago and the Hoover Institution of Stanford University teamed up yesterday to put on a Conference on Elections, Policymaking, and Economic Uncertainty. The conference was held at the Hoover Institution Offices in Washington D.C. Steve Davis, Lars Hansen and I organized it. The aim was to combine path-breaking research with in-depth discussions of policy, including a panel with Alan Greenspan, Chris DeMuth and Steve Davis which I moderated.

This is an interesting, quick-moving field with many new analytical techniques and “big data” developments. The complete conference agenda with links to the papers and commentary can be found on this web site, but here’s a summary of the findings and policy implications:

Mike Bordo started off by showing that there is a large and statistically significant negative impact of economic policy uncertainty—as measured by the Bloom-Baker-Davis EPU index—on the growth of bank credit in the United States.  His research (joint with John Duca and Christoffer Koch) explains much of the slow credit growth in recent years when policy uncertainty has been elevated.  The lead discussant, Charlie Calomiris, provided a simple model of bank lending to explain and interpret the empirical findings.  Bordo’s policy suggestion to reduce uncertainty and thereby increase growth is to strive for more predictable rule-like economic policy.

Moritz Schularick then described a fascinating new historical data set that he created along with his colleagues C. Trebesch and M. Funke on financial crises and subsequent election results over many decades. By assigning numerical codes to each historical event, their paper showed that financial crises, including the recent global financial crisis (GFC), led to gains by political parties on the right.  Jesus Fernandez-Villaverde argued, however, with specific reference to developments in Greece, Italy, Ireland and especially Spain, that the main political gains following the recent crisis have been more balanced, and, if anything, have shifted to the left. The discrepancy might be due to the coding conventions used in the paper, a point that other commentators also noted in reference to determining what was a financial crisis and what was not.

Next came Hannes Mueller, who focused on the role of politics and democratic institutions in international development. He showed (based on work with Timothy Besley) that there is a clear positive relationship between the degree of constraints (legislative or judicial) on the chief executive in a country and the amount of foreign investment flowing into the country. The discussant, Youngsuk Yook of the Federal Reserve Board, raised identification issues about how this policy measure stands up against other economic policy measures. I too wondered how this measure compared with the 17 different indicators of economic policy used by the US Millennium Challenge Corporation.

Tarek Hassan presented an amazing new data set that he has constructed (with Stephan Hollander, Laurence van Lent, and Ahmed Tahoun). Starting with transcripts of earnings reports from corporations, Hassan showed how he and his coauthors used novel text analysis and processing techniques to extract the political references and concerns raised by the businesses.  They then examined whether these concerns translated into economic impacts on firms’ decisions and found, remarkably, that they did.  There was considerable discussion of the novel methodology for choosing political bi-grams (two-word combinations) and how, for example, it compared with the methods of Bloom, Baker (who was the discussant) and Davis. Baker also emphasized the importance of new researchers learning Perl and Python, the essential programming languages needed for work in this area.

The final paper of the day was presented by Kaveh Majlesi, who focused on the economic influences on political developments in the United States over the period from 2002 to 2010. He presented results supporting the view that trade-related employment shocks from Chinese imports affected political polarization in the United States: during this period there were correlated moves to both the political left and the political right in the parts of the country most exposed to imports from China.  Nolan McCarty presented a fascinating alternative explanation. He argued that there has been a much longer polarization trend unrelated to trade, and that the recent movements were part of a shorter swing toward Democrats in 2006 and then Republicans in 2010 and 2014. Economists and political scientists will be trying to sort this one out for a long time.

The concluding panel discussion with me, Alan Greenspan, Chris DeMuth, and Steve Davis focused first on some very disturbing current economic problems and second on possible political solutions. Greenspan started off by stating his grave concerns about the direction of economic policy in the U.S., emphasizing that it is not a new trend and tracing the development way back to the 1896 presidential election. Chris DeMuth gave an alarming discussion of the increased power of executive branch regulatory agencies, and Steve Davis showed how a new index of global economic policy uncertainty has been going the wrong way for a while. The solutions (in a nutshell) were to control the explosion of government entitlement spending (Greenspan), reestablish constraints on government agencies (DeMuth), and form a commission to estimate credibly the costs and benefits of regulatory proposals (Davis).

Few disputed the proposed solutions, but many wondered aloud how such reforms could be accomplished in the current political environment.  All agreed that research of the kind presented at the all-day conference was necessary to achieve progress in practice.

Posted in Financial Crisis, Fiscal Policy and Reforms, Regulatory Policy, Stimulus Impact

Think Again and Again About the Natural Rate of Interest

In a recent Wall Street Journal piece, “Think You Know the Natural Rate of Interest? Think Again,” James Mackintosh warns about the high level of uncertainty in recent estimates of the equilibrium interest rate—commonly called r* or the natural rate—that are being factored into monetary policy decisions by the Fed.  See, for example, the discussion by Fed Chair Janet Yellen.  Mackintosh’s argument is simple. He takes the confidence intervals (uncertainty ranges) from the recent study by Kathryn Holston, Thomas Laubach, and John Williams, which, as in an earlier paper by Laubach and Williams, finds that the estimate of the equilibrium rate has declined. Here is a chart from his article showing the wide range of uncertainty.

[Chart from the Mackintosh Wall Street Journal article: uncertainty ranges around estimates of the natural rate of interest]

Mackintosh observes that the confidence intervals are very wide, “big enough to drive a truckload of economists through,” and then concludes that they are so large that “it’s entirely plausible that the natural rate of interest hasn’t moved at all.”

This uncertainty should give policy makers pause before they jump on the low r* bandwagon, but there is even more reason to think again: the uncertainty of the r* estimates is actually larger than the Mackintosh article reports, because the reported ranges do not incorporate uncertainty about the models used.

In a recent paper Volker Wieland and I show that the models used by Holston, Laubach and Williams and others showing a declining r* omit important variables and equations relating to structural policy and monetary policy. We show that there is a perfectly reasonable alternative explanation of the facts if one expands the model to include these omitted factors. Rather than concluding that the real equilibrium interest rate r* has declined, one can fit an alternative model in which economic policy has shifted, either in the form of reversible regulatory and tax policy or in the form of monetary policy. Moreover, we show that there is empirical evidence in favor of this explanation.
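As a stylized illustration of the omitted-variables point (this is my own toy example, not the model in the paper), suppose the output gap responds to the interest rate gap and also to a structural "headwind," such as a regulatory or tax drag, that turns on midway through the sample. If the headwind is left out of the model, the decline shows up instead in the implied equilibrium rate, even though the true r* never moved. All numbers below are invented.

```python
# Stylized illustration (invented numbers) of how an omitted structural "headwind"
# can masquerade as a fall in the equilibrium rate r*.
import numpy as np

rng = np.random.default_rng(1)
T, beta, r_star_true = 200, 0.5, 2.0
r = 2.0 + rng.standard_normal(T)                         # observed real interest rate
headwind = np.where(np.arange(T) >= T // 2, 1.5, 0.0)    # drag that turns on mid-sample
y_gap = -beta * (r - r_star_true) - headwind + 0.3 * rng.standard_normal(T)

# If the headwind is omitted, the implied equilibrium rate is backed out as r + y_gap/beta.
implied_r_star = r + y_gap / beta
print(f"implied r*, first half : {implied_r_star[: T // 2].mean():.2f}")
print(f"implied r*, second half: {implied_r_star[T // 2:].mean():.2f}")
# The second-half average drops by roughly headwind/beta even though true r* never changed.
```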

Another recent paper “Reflections on the Natural Rate of Interest, Its Measurement, Monetary Policy and the Zero Bound,” (CEPR Discussion Paper) by Alex Cukierman reaches similar conclusions. He identifies specific items that could have shifted the relationship between the interest rate and macro variables including credit problems that affect investment. He also looks at specific factors that shifted the policy rule such as central bank concerns with financial stability.  His paper is complementary to ours, and he also usefully distinguishes between risky and risk-free rates, a difference ignored in most calculations.

Overall the Wieland, Taylor and Cukierman papers show that estimates of r* are way too uncertain to incorporate into policy rules in the ways that have been suggested. Nevertheless, it is promising that Chair Yellen and her colleagues are approaching the r* issue through the framework of monetary policy rules. Uncertainty in the equilibrium real rate is not a reason to abandon rules in favor of discretion.

Posted in Monetary Policy

Jackson Hole XXXV

Everyone keeps asking about this year’s Jackson Hole monetary conference and how it compared with the first. Well, I wrote about the first on my way to this conference, and I have to say the thirty-fifth lived up to its billing as “monetary policy frameworks for the future.” One speaker after another proposed new frameworks—some weird, some not so weird—while discussants critiqued and central bankers and economists debated from the floor, and later on the hiking trails. Steve Liesman’s scary but unsurprising CNBC pre-conference survey demonstrated the timeliness of the topic: 60% of Fed watchers don’t think the Fed even has a policy framework.

Fed Chair Janet Yellen led off. Most of the media commentary—including TV interviews—focused on whether or not she signaled an increased probability of an interest rate rise. But mostly she talked about the theme of the conference: a monetary policy framework for the future. The framework that she put forward centered on a policy rule for the interest rate—a Taylor rule with an equilibrium rate of 3 percent—and in this sense the framework is rules-based with well-known advantages over discretion. Research at the Fed indicates that the rule would work pretty well without an inflation target higher than 2% or a mechanism for negative interest rates. But for an extra big recession when a zero lower bound might be binding for a long time, Chair Yellen suggested adding two things to the rule.

First, she would add some “forward guidance” under which the Fed would say that the interest rate will stay at zero for a time after the rule begins recommending a positive rate, following a very deep recession during which the rule would otherwise call for a negative rate.  The forward guidance would be time consistent with the actual policy, so in this framework the Fed would not be saying one thing and doing another. You can see this in the upper left panel of the chart from her presentation.

Note that the federal funds rate with forward guidance is actually quite close to the constrained rule without forward guidance.  This framework essentially follows the suggestion of Reifschneider and Williams (1999) to embed the Taylor rule into a “mega rule,” so in this respect also the framework is rules-based.
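For readers who want the mechanics, here is a minimal sketch in the spirit of Reifschneider and Williams (1999), not a reproduction of the Fed staff's simulations: keep track of the cumulative easing that the zero bound prevented, and hold the rate at zero after the unconstrained rule turns positive until that shortfall has been worked off. The unconstrained prescriptions in the example are invented.

```python
def mega_rule_path(unconstrained):
    """Sketch of a Reifschneider-Williams-style 'lower for longer' adjustment:
    accumulate the easing that the zero bound blocked, then hold the rate at zero
    until the unconstrained rule has paid back that shortfall."""
    path, shortfall = [], 0.0
    for i_star in unconstrained:      # i_star: the unconstrained rule prescription
        if i_star < 0:
            shortfall += -i_star      # easing blocked by the zero bound
            rate = 0.0
        elif shortfall > 0:
            shortfall = max(0.0, shortfall - i_star)  # work off the shortfall at zero
            rate = 0.0
        else:
            rate = i_star
        path.append(rate)
    return path

# Invented prescriptions from an unconstrained rule during and after a deep recession:
print(mega_rule_path([-2.0, -1.0, 0.5, 1.0, 1.5, 2.0]))
# -> [0.0, 0.0, 0.0, 0.0, 0.0, 2.0]; the rate stays at zero past the point where the
#    unconstrained rule turns positive, which is the forward guidance feature.
```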

Second, Chair Yellen would augment the policy rule with a massive ($2 trillion) quantitative easing in an effort to bring long-term interest rates down. Here her chart suggests big effects on long-term interest rates, though empirical evidence for this is very weak. This is a “QE forever” framework. It would require a large balance sheet going forward with the funds rate determined administratively by setting interest on excess reserves, with the size of the quantitative easing determined in a discretionary rather than a rule-like fashion. The chart indicates only a small improvement in the unemployment rate, and there is a danger that such a discretionary policy could itself help cause instability and high unemployment as in the great recession. It is good, however, that the discretion is measured relative to a policy rule with an implicit understanding of a return to the rule.

Many other speakers talked about the size of the central bank’s balance sheet, and views were all over the place.  Ulrich Bindseil of the ECB argued for eventually returning to a “lean” balance sheet. This is a good goal because the central bank would then remain a limited-purpose institution, which is appropriate for an independent agency of government. Simon Potter of the New York Fed argued for a large balance sheet so that the Fed would have more room for interventions. Jeremy Stein, Robin Greenwood, and Samuel Hanson made a good case for more short-term Treasuries to satisfy a liquidity demand, but a much less convincing case that a large balance sheet at the Fed, rather than additional Treasury issuance, was the way to achieve this. In his lunchtime talk Chris Sims also noted the problem with the Fed having a large footprint extending into fiscal policy, where an independent agency of government is not appropriate.  My Stanford colleagues Darrell Duffie and Arvind Krishnamurthy warned about diminished pass-through from policy rates to other interest rates in the current regulatory environment with supplementary liquidity and capital requirements; they did not see the pass-through becoming any faster or more complete with a lean balance sheet.

Ricardo Reis argued in favor of a balance sheet more bloated than the lean proposal of Bindseil but not as big as the current one in the US, with the Fed following a Taylor rule by changing interest on reserves. Benoît Cœuré of the ECB spoke about the continued QE and growing balance sheet at the ECB; while recognizing international spillovers, he argued that the policy was working. Haruhiko Kuroda of the Bank of Japan made the case for QE as well, though the economic impact in Japan is hard to find. The sole representative of emerging markets on the program, Agustín Carstens, Governor of the Bank of Mexico, made the case for a classic inflation-targeting framework, and showed that it was working just fine in Mexico.

Negative interest rates were also a frequent topic. Marvin Goodfriend made the case with a simple neoclassical model and suggestions for dealing with cash, and Miles Kimball intervened several times to support the case.  Kuroda showed that the BOJ’s recent sub-zero foray had a large effect on the long-term rate and gave a good explanation why, but he lamented the lack of an effect on the Japanese economy.  However, Chair Yellen and other Fed participants showed little interest at this time, which made sense to me.

A Framework that Works

One monetary policy framework that was not on the program for Jackson Hole XXXV (but was emerging at Jackson Hole I) was the framework in operation during the 1980s, the 1990s, and until recently—during Volcker’s and much of Greenspan’s time as chair. According to historians and econometricians, it was a monetary policy framework that worked until the Fed went off it, and then things did not work so well. That suggests it would be a good candidate for a future framework.

Posted in Monetary Policy

A Less Weird Time at Jackson Hole?

I’m on my way to join the world’s central bankers at Jackson Hole for the 35th annual monetary-policy conference in the Grand Teton Mountains. I attended the first monetary-policy conference there in 1982, and I may be the only person to attend both the 1st and the 35th.  I know the Tetons will still be there, but virtually everything else will be different. As the Wall Street Journal front-page headline screamed out on Monday, central bank “Stimulus Efforts Get Weirder.” I’m looking forward to it.

Paul Volcker chaired the Fed in 1982. He went to Jackson Hole, but he was not on the program to give the opening address, and no one was speculating on what he might say. No other Fed governors were there, nor were the governors of any other central bank. In contrast, this year many central bankers will be there, including some from emerging markets. Only four reporters came in 1982 — William Eaton (LA Times), Jonathan Fuerbringer (New York Times), Ken Bacon (Wall Street Journal) and John Berry (Washington Post). This year there will be scores. And there were no television crews in 1982 to interview central bankers (with the awesome Grand Teton as backdrop).

It was clear to everyone in 1982 that Volcker had a policy strategy in place, so he didn’t need to use Jackson Hole to announce new interventions or tools. The strategy was to focus on price stability and thereby get inflation down, which would then restore economic growth and reduce unemployment. Some at the meeting, such as Nobel Laureate James Tobin, didn’t like Volcker’s strategy, but others did. I presented a paper  at the 1982 conference which supported the strategy.

The federal funds rate was over 10.1% in August 1982, down from 19.1% the previous summer. Today the policy rate is 0.5% in the U.S. and negative in the Eurozone, Japan, Switzerland, Sweden and Denmark. There will be a lot of discussion about the impact of these unusual central bank policy rates, as well as the unusual large-scale purchases of corporate bonds and stocks, and of course the possibility of helicopter money and other new tools, some of which greatly expand the scope of central banks.

I hope there is also a discussion of less weird policy, and in particular about the normalization of policy and the benefits of normalization. In fact, with so many central bankers from around the world at Jackson Hole, it will be an opportunity to discuss the global benefits of recent proposals to return to a rules-based international monetary system along the lines that Paul Volcker has argued for.

 

Posted in International Economics, Monetary Policy