Take Off the Muzzle and the Economy Will Roar

In his Saturday Wall Street Journal essay “Why the Economy Doesn’t Roar Anymore”—illustrated with a big lion with its mouth shut—Marc Levinson offers the answer that the “U.S. economy isn’t behaving badly. It is just being ordinary.” But there is nothing ordinary (or secular) about the current stagnation of barely 2 percent growth. The economy is not roaring because it’s muzzled by government policy, and if we take off that muzzle—like Lucy and Susan did in “The Lion, the Witch and the Wardrobe”—the economy will indeed roar.

It is of course true, as Levinson states, that “faster productivity growth” is “the key to faster economic growth.” But it is false, as he also states, that it has all been downhill since the “long boom after World War II” and that “there is no going back.” The following chart of productivity growth, drawn from my article in the American Economic Review, shows why Levinson misinterprets recent history. Whether you look at 5-year averages, statistically filtered trends, or simple directional arrows, you can see huge swings in productivity growth in recent years. These movements—the productivity slump of the 1970s, the rebound of the 1980s and 1990s, and the recent slump—are closely related to shifts in economic policy, and economic theory indicates that the relationship is causal, as I explain here and here and in blogs and op-eds. You can also see that the recent terrible performance—negative productivity growth for the past year—is anything but ordinary.
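
To give a concrete sense of the two smoothing approaches mentioned above, here is a minimal sketch in Python. The numbers are made-up placeholders, not the series constructed in the AER article, and the Hodrick-Prescott filter is used only as one example of a statistical filter.

```python
# Minimal sketch: 5-year averages and a statistically filtered (HP) trend
# of productivity growth. The input values below are hypothetical; the
# actual chart uses the series constructed in the AER article.
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Hypothetical annual productivity growth rates (percent), by year.
prod_growth = pd.Series(
    [2.8, 2.5, 1.1, 0.9, 1.4, 2.1, 2.6, 2.9, 2.2, 1.0, 0.6, -0.2],
    index=range(2005, 2017),
)

# Trailing 5-year average growth.
five_year_avg = prod_growth.rolling(window=5).mean()

# HP-filtered trend; lambda = 6.25 is a common choice for annual data.
cycle, trend = hpfilter(prod_growth, lamb=6.25)

print(pd.DataFrame({"growth": prod_growth,
                    "5-yr avg": five_year_avg,
                    "HP trend": trend}).round(2))
```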

Writing about the 1980s and 1990s, Levinson claims that “deregulation, privatization, lower tax rates, balanced budgets and rigid rules for monetary policy—proved no more successful at boosting productivity than the statist policies…” The chart shows the contrary: productivity growth was generally picking up in the 1980s and 1990s. It is the stagnation of the late 1960s, the 1970s, and the last decade that is state-sponsored. To turn the economy around we need to take the muzzle off, and that means regulatory reform, tax reform, budget reform, and monetary reform.

Posted in Fiscal Policy and Reforms, Regulatory Policy, Slow Recovery

Should the Previous Framework for Monetary Policy Be Fundamentally Reconsidered?

“Did the crisis reveal that the previous consensus framework for monetary policy was inadequate and should be fundamentally reconsidered?”  “Did economic relationships fundamentally change after the crisis and if so how?” These important questions set the theme for an excellent conference at the De Nederlandsche Bank (DNB) in Amsterdam this past week. In a talk at the conference I tried to answer the questions. Here’s a brief summary.

Eighty Years Ago

To understand the policy framework that existed before the financial crisis, it’s useful and fitting at this conference to trace the framework’s development back to its beginning exactly eighty years ago. It was in 1936 that Jan Tinbergen built “the first macroeconomic model ever.” It “was developed to answer the question of whether the government should leave the Gold standard and devaluate the Dutch guilder” as described on the DNB web site.

“Tinbergen built his model to give advice on policy,” as Geert Dhaene and Anton Barten explain in When It All Began. “Under certain assumptions about exogenous variables and alternative values for policy instrument he generated a set of time paths for the endogenous variables, one for each policy alternative. These were compared with the no change case and the best one was selected.” In other words, Tinbergen was analyzing policy in what could be called “path-space,” and his model showed that a path of devaluation would benefit the Dutch economy.

Tinbergen presented his paper, “An Economic Policy for 1936,” to the Dutch Economics and Statistics Association on October 24, 1936, but “the paper itself was already available in September,” according to Dhaene and Barten, who point out the amazing historical timing of events: “On 27 September the Netherlands abandoned the gold parity of the guilder, the last country of the gold block to do so. The guilder was effectively devalued by 17 – 20%.” As is often the case in policy evaluation, we do not know whether the paper influenced that policy decision, but the timing at least allows for that possibility.

In any case the idea of doing policy analysis with macro models in “path-space” greatly influenced the subsequent development of a policy framework. Simulating paths for instruments—whether the exchange rate, government spending or the money supply—and examining the resulting path of the target variables demonstrated the importance of correctly distinguishing between instruments and targets, of obtaining structural parameters rather than reduced-form parameters, and of developing econometric methods such as FIML, LIML and TSLS to estimate structural parameters. Indeed, this largely defined the research agenda of the Cowles Commission and Foundation at Chicago and Yale, of Lawrence Klein at Penn, and of many other macroeconomists around the world. Macroeconomic models like MPS and MCM were introduced into the Fed’s policy framework in the 1960s and 1970s.

Forty Years Ago

Starting about forty years ago, this basic framework for policy analysis with macroeconomic models changed dramatically. It moved from “path space” to “rule space.” Policy analysis in “rule space” examines the impact of different rules for the policy instruments rather than different set paths for the instruments. Here I would like to mention two of my teachers—Phil Howrey and Ted Anderson—who encouraged me to work in this direction for a number of reasons, and my 1976 Econometrica paper with Anderson, “Some Experimental Results on the Statistical Properties of Least Squares Estimates in Control Problems.” The influential papers by Robert Lucas (1976), “Econometric Policy Evaluation: A Critique,” and by Finn Kydland and Ed Prescott (1977), “Rules Rather Than Discretion: The Inconsistency of Optimal Plans,” provided two key reasons for a “rule space” approach, and my 1979 Econometrica paper “Estimation and Control of a Macroeconomic Model with Rational Expectations” used the approach to find good monetary policy rules in an estimated econometric model with rational expectations and sticky prices.

Over time more complex empirical models with rational expectations and sticky prices (new Keynesian) provided better underpinnings for this monetary policy framework. Many of the new models were international with highly integrated capital markets and no-arbitrage conditions on the term structure. Soon the new Keynesian FRB/US, FRB/Global and SIGMA models replaced the MPS and MCM models at the Fed. The objective was to find monetary policy rules that improved macroeconomic performance. The Taylor Rule is an example, but, more generally, the monetary policy framework found that rules-based monetary policy led to better economic performance in the national economy as well as in the global economy, where an international Nash equilibrium in “rule space” was nearly optimal.
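
For reference, the rule in its original 1993 form sets the federal funds rate $i_t$ (in percent) as

$$ i_t = \pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t + 2, $$

where $\pi_t$ is the inflation rate over the previous four quarters and $y_t$ is the percent deviation of real GDP from its trend or potential level; the implied equilibrium real rate and inflation target are each 2 percent.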

This was the monetary policy framework that was in place at the time of the financial crisis. Many of the models that underpinned this framework can be found in the invaluable archives of the Macro Model Data Base (MMB) of Volker Wieland where there are many models dated 2008 and earlier including the models of Smets and Wouters, of Christiano, Eichenbaum, and Evans, of De Graeve, and of Taylor. Many models included financial sectors with a variety of interest rates; the De Graeve model included a financial accelerator. The impact of monetary shocks was quite similar in the different models, as shown here and summarized in the following chart of four models, and simple policy rules were robust to different models.

Perhaps most important, the framework worked in practice. There is overwhelming evidence that when central banks moved toward more transparent rules-based policies in the 1980s and 1990s, including through a focus on price stability, there was a dramatic improvement compared with the 1970s when policy was less rule-like and more unpredictable.  Moreover, there is considerable evidence that monetary policy deviated from the framework in recent years by moving away from rule-like policies, especially during the “too low for too long” period of 2003-2005 leading up to the financial crisis, and that this deviation has continued. In other words, deviating from the framework has not worked.

Have the economic relationships and therefore the framework fundamentally changed since the crisis? Of course, as Tom Sargent puts it in his editorial review of the forthcoming Handbook of Macroeconomics edited by Harald Uhlig and me, “both before and after that crisis, working macroeconomists had rolled up their sleeves to study how financial frictions, incentive problems, incomplete markets, interactions among monetary, fiscal, regulatory, and bailout policies, and a host of other issues affect prices and quantities and good economic policies.” But, taking account of this research, the overall basic macro framework has shown a great degree of constancy, as suggested by studies in the new Handbook of Macroeconomics. For example, Jesper Linde, Frank Smets, and Raf Wouters examine some of the major changes—such as the financial accelerator or a better modeling of the zero lower bound. They find that these changes do not much alter the behavior of the models during the financial crisis. They also note that, despite efforts to incorporate the impact of unconventional policy instruments such as quantitative easing and negative interest rates, the framework has changed little. In another paper in the new Handbook, Volker Wieland, Elena Afanasyeva, Meguy Kuete, and Jinhyuk Yoo examine how new models of financial frictions or credit constraints affect policy rules. They find only small changes, including a benefit from including credit growth in the rules.

All this suggests that the crisis did not reveal that the previous consensus framework for monetary policy should be fundamentally reconsidered, or even that it has fundamentally changed. This previous framework was working.  The mistake was deviating from it.  Of course, macroeconomists should keep working and reconsidering, but it’s the deviation from the framework—not the framework itself—that needs to be fundamentally reconsidered at this time. I have argued that there is a need to return to the policy recommendations of such a framework domestically and internationally.

We are not there yet, of course, but it is a good sign that central bankers have inflation goals and are discussing policy rules. Janet Yellen’s policy framework for the future, put forth at Jackson Hole in August, centers on a Taylor rule. Many are reaching the conclusion that unconventional monetary policy may not be very effective. Paul Volcker and Raghu Rajan are making the case for a rules-based international system, and Mario Draghi argued at Sintra in June that “we would all clearly benefit from enhanced understanding among central banks on the relative paths of monetary policy. That comes down, above all, to improving communication over our reaction functions and policy frameworks.”

Posted in Financial Crisis, Monetary Policy, Teaching Economics

The Statistical Analysis of Policy Rules

My teacher, colleague, and good friend Ted Anderson died this week at the age of 98.  Ted was my Ph.D. thesis adviser at Stanford in the early 1970s, and later a colleague when I returned to teach at Stanford in the 1980s. He was a most important influence on me and my research. He taught me an enormous amount about time series analysis, and about how to prove formal theorems in mathematics.  I am grateful to him for sharing his wisdom and for endeavoring to instill his rigorous approach to econometric research. His lectures were clear and insightful, but it was from interacting with him in his office or in the Econometrics Seminar that one really learned time series analysis.

The Stanford Econometrics Seminar in the early 1970s was an amazingly fertile place for developing new ideas. Fortunately for me, the seminar focused on several new books which explored formally the problem of optimal economic policy formulation in statistical or probabilistic settings: Peter Whittle’s Prediction and Regulation, Masanao Aoki’s Optimization of Stochastic Systems, and Box and Jenkins’ Time Series Analysis: Forecasting and Control. We each presented and critiqued chapters from these books and freely discussed possible extensions and implications. It was a terrific way to learn about the latest opportunities for pushing research frontiers.

The title of this post is a variation on the title of Ted Anderson’s 1971 classic book, The Statistical Analysis of Time Series, which was the first textbook I used to study time series analysis. My Ph.D. thesis was on policy rules, and in particular on the “joint estimation and control” problem. The problem was to find an optimal economic policy in a model in which one does not know the parameters and therefore has to estimate and control the system simultaneously.

An unresolved issue was how much experimentation should be incorporated into the policy.  Could one find an optimal way to move the policy instrument around in order to learn more about the model or its parameters? While costly in the short run, the improved information would pay off in the future with better settings for the instruments.

At that time everyone was approaching this problem through a dynamic programming and optimal control approach as in Aoki’s book and, in a special case, in Ed Prescott’s Ph.D. thesis at Carnegie Mellon, which appeared in Econometrica in 1972. This approach was very difficult for any reasonably realistic application because of the curse of dimensionality of the backward induction method of dynamic programming.

Confronting these difficulties at Stanford in the early 1970s, we looked for a different approach to the problem.  The approach was much closer to the methods of classical mathematical statistics as in Ted’s book.  The idea was to start by proposing a rule for setting the instruments, perhaps using some heuristic method, and then examine how the rule performed using standard statistical criteria adapted to the control rather than estimation problem. The rules were contingency plans that described how the instruments would evolve over time.  The criteria for evaluating the rules stressed consistency and speed of convergence.
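
To give a concrete feel for that way of working, here is a minimal simulation sketch in Python. It is not the original thesis calculation: the model, parameter values, and loss measure are invented for illustration. It proposes a simple certainty-equivalence rule (set the instrument as if the current least-squares estimate were the true parameter, with no deliberate experimentation) and then evaluates the rule by the convergence of the estimate and by the average control loss.

```python
# Illustrative sketch (not the original thesis code): simulate a simple
# certainty-equivalence rule with no deliberate experimentation, then
# evaluate it by convergence of the estimate and by the average loss.
import numpy as np

rng = np.random.default_rng(0)

beta_true, y_target, sigma = 0.5, 1.0, 0.2   # assumed model: y = beta*u + e
T, n_reps = 500, 200

losses, final_estimates = [], []
for _ in range(n_reps):
    sxy, sxx, beta_hat = 0.0, 0.0, 1.0       # start from a prior guess of 1
    loss = 0.0
    for t in range(T):
        u = y_target / beta_hat              # certainty-equivalence rule
        y = beta_true * u + sigma * rng.standard_normal()
        loss += (y - y_target) ** 2
        sxy += u * y                         # update least-squares sums
        sxx += u * u
        beta_hat = sxy / sxx                 # recomputed OLS estimate
    losses.append(loss / T)
    final_estimates.append(beta_hat)

print("mean final estimate of beta:", round(float(np.mean(final_estimates)), 3))
print("average squared deviation from target:", round(float(np.mean(losses)), 3))
```

In this simple setup the passive least-squares estimate converges without any added experimentation, which is the flavor of the finding described in the paragraphs below.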

In my Ph.D. thesis as well as in a series of later papers with Ted, various convergence theorems were stated and proved. Later, statisticians T.L. Lai and Herbert Robbins proved more theorems that established desirable speeds of convergence in more general nonlinear settings.

The main finding from that research was that following a simple rule without special experimentation features was a good approximation. That made future work much simpler because it eliminated a great deal of complexity. This same basic approach has been used to develop recommendations for the actual conduct of monetary policy in the years since then, especially in the 1980s and 1990s.  The interaction between actual policy decisions and the recommendations of such policy rules provides ways to learn about how policy works.  In reality, of course, gaps develop between actual monetary policy and recommended policy rules based on economic and statistical theory. By focusing on these gaps one can learn more about what works and what doesn’t in the spirit of the approach to policy evaluation developed in that research with Ted at Stanford in the early 1970s.

Posted in Teaching Economics

Central Banks Going Beyond Their Range

Economist John Eatwell of Cambridge and I published a joint letter in the Financial Times today. We argue that monetary policy is off track and that other policies are sorely needed. I said the same in a CNBC interview from Miami this morning. To be sure, the headline that the FT editors chose for our letter “Monetarist tools have failed to lift economies” should not be taken to mean that rules-based monetary policies of the kind that monetarists like Milton Friedman advocated have failed. Recent policies have been anything but rules-based. Here’s the letter.

Sir, You report that Mark Carney, governor of the Bank of England, told MPs that the BoE is prepared to cut interest rates further from their historic low of 0.25 per cent (“Carney leaves open chance of more UK rate cuts”, September 8).

This is unfortunate. In the face of overwhelming evidence that the exclusive reliance on monetary policy, both orthodox and unorthodox, has not only failed to secure a significant recovery of economic activity in the US, the UK or the eurozone, but is producing major distortions in financial markets, Mr Carney is promising yet more of the same.

The distortions created by policy include excessive asset price inflation, severe pressures on pension funds and a weakened banking system. The fundamental error derives from the exclusive role given to monetary policy.

This has led to loosening being pursued far beyond appropriate limits, the folly of negative interest rates being an extreme example. A balanced approach, with fiscal, regulatory, and tax reforms, would secure improved performance of the real economy and permit the return of a rational monetary policy.

Pursuing current policies is likely to result in serious instability as and when the monetary stance is adjusted and the distortions unwind. In addition to the negative impact on the recovery of the real economy there will be collateral damage to central bankers’ reputation.

They are attempting the exclusive management of the economy with tools not up to the job, and, in consequence, central banks are pushed into being multipurpose institutions beyond their range of effective operation.

John Eatwell, President, Queens’ College, Cambridge, UK

John B Taylor, Former Under Secretary for International Affairs, US Treasury

Posted in Monetary Policy

Kocherlakota on the Fed and the Taylor Rule

The use of policy rules to analyze monetary policy has been a growing area of research for several decades, and the pace has picked up recently. Last month Janet Yellen presented a policy framework for the future centered on a Taylor rule, noting that the Fed has deviated from such a rule in recent years. A week later, her FOMC colleague, Jeff Lacker, also showed that the Fed has deviated from a Taylor rule benchmark, adding that now is the time to get back. Last week, the Mercatus Center and the Cato Institute hosted a conference with the theme that deviations from policy rules—including the Taylor rule discussed in my lunch talk—have caused serious problems in recent years. And this week former FOMC member Narayana Kocherlakota argued that the problem with monetary policy in recent years has not been that it has deviated from a Taylor rule but that it has been too close to a Taylor rule! Debating monetary issues within a policy rule framework is helpful, but Kocherlakota’s amazingly contrarian paper is wrong in a number of ways.

First, the paper ignores many of the advantages of policy rules discovered over the years, and focuses only on time inconsistency and inflation bias. I listed the many other advantages in my comment on the first version of Kocherlakota’s paper, which has the same title: “Rules Versus Discretion: A Reconsideration.” Research by me and others on policy rules preceded the time inconsistency research, and, contrary to Kocherlakota’s claim, the Taylor rule was derived from monetary theory as embodied in estimated models, not from regressions or curve fitting during particular periods. (Here is a review of the research.)

Second, Kocherlakota ignores much historical and empirical research showing that the Fed deviated from Taylor-type rules in recent years, both before and after the crisis, including work by Meltzer and by Nikolsko-Rzhevskyy, Papell, and Prodan.

Third, he rests his argument on an informal and judgmental comparison of the Fed staff’s model simulations and a survey of future interest rate predictions of FOMC members at two points in time (2009 and 2010). He observes that the Fed staff’s model simulations for future years were based on a Taylor rule, and FOMC participants were asked, “Does your view of the appropriate path for monetary policy [or interest rates in 2009] differ materially from that [or the interest rate in 2009] assumed by the staff.” However, a majority (20 out of 35) of the answers were “yes,” which hardly sounds like the Fed was following the Taylor rule. Moreover, these are forecasts of future decisions, not actual decisions, and the actual decisions turned out to be much different from the forecasts.

Fourth, he argues that the FOMC’s reluctance to use more forward guidance “seems in no little part due to its unwillingness to commit to a pronounced deviation from the prescriptions of its pre-2007 policy framework – that is, the Taylor Rule.” To the contrary, however, well-known work by Reifschneider and Williams had already shown how forward guidance is perfectly consistent with the use of a Taylor rule with prescribed deviations. I would also note that there is considerable evidence that the Fed significantly deviated from its Taylor rule framework in 2003-2005.

Fifth, Kocherlakota argues that the Taylor rule is based on interest rate smoothing in which weight is put on an “interest rate gap,” and that this slows down adjustments to inflation and output. But that is not how the rule was originally derived, and more recent work by Ball and Woodford deriving the Taylor rule in simple models has no such weight on interest rate gaps.
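
To fix notation, the smoothing specification at issue is usually written in the partial-adjustment form common in the empirical literature (this is the generic version, not necessarily Kocherlakota’s exact equation):

$$ i_t = \rho\, i_{t-1} + (1-\rho)\big[\,r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,y_t\,\big], \qquad 0 \le \rho < 1, $$

where the weight $\rho$ on the lagged rate is what puts weight on the “interest rate gap.” The original 1993 rule is the special case $\rho = 0$, with no such gap term.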

The last part of Kocherlakota’s paper delves into the classic rules versus discretion debate. Here he mistakenly assumes that rules-based policy must be based on a mathematical formula, and this leads him to advocate pure discretion and thereby object to recent policy rules legislation as in the FORM Act that recently passed the House of Representatives. However, as I explain in my 1993 paper and in critiques of the critiques of the recent legislation, a monetary policy rule need not be mechanical in practice.

Posted in Monetary Policy

Novel Research on Elections, Policymaking, Economic Uncertainty

The Becker Friedman Institute of the University of Chicago and the Hoover Institution of Stanford University teamed up yesterday to put on a Conference on Elections, Policymaking, and Economic Uncertainty. The conference was held at the Hoover Institution Offices in Washington D.C. Steve Davis, Lars Hansen and I organized it. The aim was to combine path-breaking research with in-depth discussions of policy, including a panel with Alan Greenspan, Chris DeMuth and Steve Davis which I moderated.

This is an interesting, quick-moving field with many new analytical techniques and “big data” developments. The complete conference agenda with links to the papers and commentary can be found on this web site, but here’s a summary of the findings and policy implications:

Mike Bordo started off by showing that there is a large and statistically significant negative impact of economic policy uncertainty—as measured by the Bloom-Baker-Davis EPU index—on the growth of bank credit in the United States.  His research (joint with John Duca and Christoffer Koch) explains much of the slow credit growth in recent years when policy uncertainty has been elevated.  The lead discussant, Charlie Calomiris, provided a simple model of bank lending to explain and interpret the empirical findings.  Bordo’s policy suggestion to reduce uncertainty and thereby increase growth is to strive for more predictable rule-like economic policy.
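
For readers who want to see the shape of such an exercise, here is a stripped-down regression sketch in Python. It is only illustrative: the variable names, sample size, and data are synthetic placeholders, and the actual Bordo-Duca-Koch study uses a richer specification and U.S. bank data.

```python
# Illustrative regression in the spirit of the finding described above,
# not the Bordo-Duca-Koch specification: bank credit growth regressed on
# the (log) EPU index. The data are synthetic placeholders generated so
# that higher policy uncertainty lowers credit growth.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120  # placeholder number of quarterly observations
epu = pd.Series(100 + 40 * rng.random(n), name="epu")
credit_growth = pd.Series(3.0 - 0.02 * epu + rng.normal(0, 1, n),
                          name="credit_growth")

X = sm.add_constant(pd.DataFrame({"log_epu": np.log(epu)}))
result = sm.OLS(credit_growth, X).fit()
print(result.summary().tables[1])  # coefficient on log_epu (expected negative)
```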

Moritz Schularick then described a fascinating new historical data set that he created along with his colleagues C. Trebesch and M. Funke on financial crises and subsequent election results over many decades. By assigning numerical codes to each historical event, their paper showed that financial crises, including the recent global financial crisis (GFC), led to gains by political parties on the right. Jesus Fernandez-Villaverde argued, however, with specific reference to developments in Greece, Italy, Ireland and especially Spain, that the main political gains following the recent crisis have been more balanced and, if anything, have shifted to the left. The discrepancy might be due to coding conventions used in the paper, a point that other commentators also noted in reference to determining what was a financial crisis and what was not.

Next came Hannes Mueller, who focused on the role of politics and democratic institutions in international development. He showed (based on work with Timothy Besley) that there is a clear positive relationship between the degree of constraints (legislative or judicial) on the chief executive in a country and the amount of foreign investment flowing into the country. The discussant, Youngsuk Yook of the Federal Reserve Board, raised identification issues about how this policy measure stands up against other economic policy measures. I too wondered how this measure compared with the 17 different indicators of economic policy used by the US Millennium Challenge Corporation.

Tarek Hassan presented an amazing new data set that he has constructed (with Stephan Hollander, Laurence van Lent, and Ahmed Tahoun). Starting with transcripts of earnings reports from corporations, Hassan showed how he and his coauthors used novel text analysis and processing techniques to extract political references and concerns expressed by the businesses. They then examined whether these concerns translated into economic impacts on firms’ decisions and found, remarkably, that they did. There was considerable discussion of the novel methodology for choosing political bi-grams (two-word combinations) and how, for example, it compared with the methods of Bloom, Baker (who was the discussant) and Davis. Baker also emphasized the importance of new researchers learning Perl and Python, the essential programming languages needed for work in this area.
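
To illustrate the flavor of the bi-gram approach, here is a toy sketch in Python. It is not the Hassan et al. pipeline: the list of “political” bi-grams, the sample transcript, and the exposure measure are all invented for illustration.

```python
# Toy sketch of the bi-gram idea: count two-word combinations from a small
# political vocabulary in a snippet of transcript text. The word list,
# transcript, and exposure measure are invented for illustration only.
import re
from collections import Counter

political_bigrams = {
    ("health", "care"), ("tax", "reform"), ("trade", "policy"),
    ("federal", "regulation"), ("interest", "rates"),
}

transcript = ("Uncertainty about tax reform and federal regulation "
              "remains a headwind, although trade policy concerns eased.")

tokens = re.findall(r"[a-z]+", transcript.lower())
bigrams = zip(tokens, tokens[1:])
counts = Counter(bg for bg in bigrams if bg in political_bigrams)

# A crude exposure measure: matched political bi-grams per bi-gram of text.
exposure = sum(counts.values()) / max(len(tokens) - 1, 1)
print(counts)
print(round(exposure, 3))
```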

The final paper of the day was presented by Kaveh Majlesi, who focused on the economic influences on political developments in the United States over the period from 2002 to 2010. He presented results supporting the view that trade-related employment shocks from China imports affected political polarization in the United States: there were correlated moves to the political left and to the right during this period, along with differing impacts of China imports on various parts of the country. Nolan McCarty presented a fascinating alternative explanation. He argued that there has been a much longer trend unrelated to trade, and that the recent movements were part of a shorter swing toward Democrats in 2006 and then Republicans in 2010 and 2014. Economists and political scientists will be trying to sort this one out for a long time.

The concluding panel discussion with me, Alan Greenspan, Chris DeMuth, and Steve Davis focused first on some very disturbing current economic problems and second on possible political solutions. Greenspan started off by stating his grave concerns about the direction of economic policy in the U.S., emphasizing that it is not a new trend and tracing the development way back to the 1896 presidential election. Chris DeMuth gave an alarming discussion of the increased power of executive branch regulatory agencies, and Steve Davis showed how a new index of global economic policy uncertainty has been going the wrong way for a while. The solutions (in a nutshell) were to control the explosion of government entitlement spending (Greenspan), reestablish constraints on government agencies (DeMuth), and form a commission to estimate credibly the costs and benefits of regulatory proposals (Davis).

Few disputed the proposed solutions, but many wondered aloud how such reforms could be accomplished in the current political environment.  All agreed that research of the kind presented at the all-day conference was necessary to achieve progress in practice.

Posted in Financial Crisis, Fiscal Policy and Reforms, Regulatory Policy, Stimulus Impact

Think Again and Again About the Natural Rate of Interest

In a recent Wall Street Journal piece, “Think You Know the Natural Rate of Interest? Think Again,” James Mackintosh warns about the high level of uncertainty in recent estimates of the equilibrium interest rate—commonly called r* or the natural rate—that are being factored into monetary policy decisions by the Fed. See, for example, the discussion by Fed Chair Janet Yellen. Mackintosh’s argument is simple. He takes the confidence intervals (uncertainty ranges) from the recent study by Kathryn Holston, Thomas Laubach, and John Williams, which, as in an earlier paper by Laubach and Williams, finds that the estimate of the equilibrium rate has declined. Here is a chart from his article showing the wide range of uncertainty.


Mackintosh observes that the confidence intervals are very wide, “big enough to drive a truckload of economists through,” and then concludes that they are so large that “it’s entirely plausible that the natural rate of interest hasn’t moved at all.”

This uncertainty should give policy makers pause before they jump on the low r* bandwagon, but there is even more reason to think again: the uncertainty of the r* estimates is actually larger than reported in the Mackintosh article, because the reported ranges do not incorporate uncertainty about the models used.
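
A schematic way to see the point: by the law of total variance, overall uncertainty about the estimated natural rate combines a within-model and an across-model component,

$$ \operatorname{Var}(\hat{r}^*) \;=\; \operatorname{E}\!\big[\operatorname{Var}(\hat{r}^* \mid M)\big] \;+\; \operatorname{Var}\!\big(\operatorname{E}[\hat{r}^* \mid M]\big), $$

where $M$ denotes the model. The confidence bands reproduced in the article capture only the first term, conditional on the Holston-Laubach-Williams model; the second term is the model uncertainty discussed below.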

In a recent paper Volker Wieland and I show that the models used by Holston, Laubach and Williams and others showing a declining r* omit important variables and equations relating to structural policy and monetary policy. We show that there is a perfectly reasonable alternative explanation of the facts if one expands the model to include these omitted factors. Rather than concluding that the real equilibrium interest rate r* has declined, one can point to an alternative model in which economic policy has shifted, either in the form of reversible regulatory and tax policy or in the form of monetary policy. Moreover, we show that there is empirical evidence in favor of this explanation.

Another recent paper “Reflections on the Natural Rate of Interest, Its Measurement, Monetary Policy and the Zero Bound,” (CEPR Discussion Paper) by Alex Cukierman reaches similar conclusions. He identifies specific items that could have shifted the relationship between the interest rate and macro variables including credit problems that affect investment. He also looks at specific factors that shifted the policy rule such as central bank concerns with financial stability.  His paper is complementary to ours, and he also usefully distinguishes between risky and risk-free rates, a difference ignored in most calculations.

Overall, the Wieland-Taylor and Cukierman papers show that estimates of r* are way too uncertain to incorporate into policy rules in the ways that have been suggested. Nevertheless, it is promising that Chair Yellen and her colleagues are approaching the r* issue through the framework of monetary policy rules. Uncertainty in the equilibrium real rate is not a reason to abandon rules in favor of discretion.

Posted in Monetary Policy