A New Opportunity for Monetary Reform

The opportunity for pro-growth tax reform (lower rates with a broader base) and pro-growth regulatory reform (with rigorous cost-benefit tests) is now better than it has been in years, because of similarities between reform ideas put forth by Congress—many in bills that have passed the House—and those put forth by president-elect Trump.

Although less commented on, the opportunity for monetary reform also seems better than it has been in years. And the reason is the same. Goals such as insulating the Fed from political pressures and creating a more predictable, transparent, and accountable policy appear to be common to the Congress and the incoming Administration. To see this, take a look at the often-overlooked monetary passages on pages 44 and 45 of the House economic reform document A Better Way: The Economy. Here’s an excerpt:

“… our economy would be healthier if the Federal Reserve was more predictable in its conduct of monetary policy and more transparent about its decision-making…. Legislation sponsored by Rep. Bill Huizenga and approved by the House – the Fed Oversight Reform and Modernization Act (the FORM Act) does the following:

  • Protects the Fed’s independence to chart whatever monetary policy course it deems appropriate, but requires the Fed to give the American people a greater accounting of its actions.
  •  Requires the Fed to generate a monetary policy strategy of its own choosing in order to provide added transparency about the factors leading to its monetary policy decisions.
  •  Helps consumers and investors make better decisions in the present and form better expectations about the future.

These improvements are important for Americans to enjoy greater economic opportunity. By pursuing this expansion through increased transparency instead of policy mandates, the FORM Act further insulates the Fed from political pressures.”

Here is a detailed statement of support for this monetary reform from economists and practitioners. Of course, there is much to be worked out, including incorporating constructive comments from the Fed, which have not yet been forthcoming. Monetary reform is at least as controversial as tax and regulatory reform, but no less crucial to a prosperous economy. The economics behind pursuing monetary, tax and regulatory reforms together can be found in First Principles: Five Keys to Restoring America’s Prosperity.


Posted in Monetary Policy

Central Bank Models: A Key to Future Monetary Policy

In thinking about the future of monetary policy, it’s important to consider legislative reforms and appointments, but it’s also important to consider the economic models that have come to be a key part of policy making in central banks. The Bank of Canada showed a great deal of vision last week when it invited economists and practitioners to discuss “Central Bank Models: The Next Generation.”

We can learn from history here. In my keynote address for the conference I reviewed the 80-year history of macro policy models, from Jan Tinbergen’s model of 1936 to the present. I showed how policy making with models began in “path-space” (simulating paths for the policy instruments and seeing the impact on the paths of the target variables) but, with a major paradigm shift 40 years ago, moved to “rule-space” (simulating rules for the instruments and seeing the impact on economic stability over time). Central bank models followed these developments, though with a lag, to the benefit of monetary policy and economic performance.
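
To make the distinction concrete, here is a minimal sketch in Python of the two modes of analysis. The toy model, parameter values, and feedback coefficient are illustrative assumptions of mine, not anything drawn from an actual central bank model: path-space analysis traces the target variable under a fixed, pre-set path for the instrument, while rule-space analysis asks how a feedback rule for the instrument affects stability across many stochastic simulations.

```python
# Toy illustration (assumed model, not a central bank's): y_t = rho*y_{t-1} - b*i_t + e_t,
# where y is a target variable (say, an output gap) and i is the policy instrument.
import numpy as np

rng = np.random.default_rng(0)
rho, b, sigma, T, reps = 0.8, 0.5, 1.0, 100, 2000

def simulate(policy, shocks):
    """Simulate the toy economy given a policy function i_t = policy(t, y_{t-1})."""
    y = np.zeros(T)
    for t in range(1, T):
        i = policy(t, y[t - 1])
        y[t] = rho * y[t - 1] - b * i + shocks[t]
    return y

# Path-space: a fixed path for the instrument chosen in advance (here, zero in every period).
fixed_path = lambda t, y_lag: 0.0
# Rule-space: a feedback rule that responds to the state of the economy each period.
feedback_rule = lambda t, y_lag: 1.0 * y_lag

# Evaluate each policy by the variance of the target across many stochastic simulations.
for name, policy in [("fixed path", fixed_path), ("feedback rule", feedback_rule)]:
    variance = np.mean([simulate(policy, rng.normal(0, sigma, T)).var() for _ in range(reps)])
    print(f"{name}: average variance of y = {variance:.2f}")
```

In this toy setting the feedback rule delivers a lower variance of the target than the fixed path, which is the sense in which rule-space analysis evaluates stability over time rather than a single simulated path.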

However, there was a retrogression in parts of the central banking world in the past dozen years, and economic performance has deteriorated with a great recession and a very slow recovery. This history suggests that a pressing problem for central bank research is to get back to a “rule-space” framework. The framework was good while it lasted, and still benefits countries that continue to implement it. The departure from this approach should be fundamentally reconsidered, which means finding out how to get back to the framework, understanding why the departure occurred, and figuring out how to prevent future departures.

But what can researchers do? What kind of research can help?

I offered some ideas for research direction and workflow, giving examples from Chapter 15 by Volker Wieland, Elena Afanasyeva, Meguy Kuete, and Jinhyuk Yoo and Chapter 28 by Jesper Linde, Frank Smets, and Raf Wouters in the new Handbook of Macroeconomics. As explained here, research should focus on

— how changes in the economy and models of the economy affect monetary policy rules. Examples include changes in the technology of financial intermediation, increased integration of the financial and real sides of the economy, behavioral economics factors, new distribution channels of monetary policy, agent-based models, heterogeneous price and wage setting, and the impact of an effective lower bound on the interest rate.

— designing models for the purpose of evaluating policy rules. Having a purpose in mind helps determine the size, scope and type of model.

— robustness through transparent and replicable macro model comparisons such as in the new Macroeconomic Model Comparison Initiative using the Macro Model Data Base.

— the interface between policy models and policy decisions. Here, more transparent reporting on the policy rules used in practice, as in recent legislation that has passed the House, would be useful.

— the connection between monetary policy rules and an international rules-based monetary system. Perhaps there is no more important, and no more difficult, application of “rule-space” analysis. One research idea would be to study an international agreement in which each central bank would report and commit to its monetary rule or strategy, and thereby help build the foundation of the international monetary system.

— distinguishing between instrument rules, “forecast targeting,” and “constrained discretion,” which means delving deeper into the classic rules versus discretion debate.

In my view, these ideas are worth pursuing at central banks and by monetary researchers outside of central banks, even if one has different views of the reasons for poor economic performance during the past decade.

Posted in Monetary Policy, Teaching Economics

A World Cup in the Battle of Ideas

Markus Brunnermeier, Harold James and Jean-Pierre Landau have just published a fascinating book, The Euro and the Battle of Ideas, in which they bring together their respective skills in economic theory, economic history and economic policy to bear on one of the most important macroeconomic problems of our times—the rules versus discretion debate. Anyone who has studied this debate—and that’s just about anyone who has taken a course in economics—would benefit from reading this book.

The book focuses on the role of government in the economy and how conflicting ideas about this role affect economic policy in Europe and beyond, including at the International Monetary Fund. But most of the economic policy ideas revolve around the rules versus discretion debate, and include policy credibility, accountability, time inconsistency, policy flexibility, moral hazard, and fiscal consolidation versus stimulus.  All of these issues have been extensively researched, and views differ. Many economists (including me) are of the view that rules-based policies are more effective, while others (including Larry Summers) argue against rules.

The fascinating hypothesis—and the organizing principle—of the book is the authors’ view that ideas differ by country. In particular, they argue that in Europe the battle of ideas is largely between Germany, on the side of rules, and France, on the side of discretion. They note related differences in other parts of the world: in Italy they see a regional split in which “great liberals…come from the North, whereas the pioneers of practical Keynesianism…are Southerners.” And I can think of other examples where opinions appear to differ by group. In the United States, for example, there have been schools of thought—Cambridge versus Chicago, Samuelson versus Friedman—distinguished by positions in the rules versus discretion debate. Ulrike Malmendier and her colleagues show that the degree to which Fed policy makers are rules-based appears to be related to whether they personally experienced the great inflation or not.

But this book is mainly about France versus Germany, and in portraying the battle of ideas this way the book provides novel explanations of why certain policies are chosen. Rather than a question of good policy versus bad policy, it is a question of who is up or down politically, and the relative power of France versus Germany.

The authors consider key debates within the rules versus discretion framework: quantitative easing, bailouts, fiscal stimulus. Within Europe they argue, for example, that “Starting from the negotiations for the Maastricht Treaty and the SGP, fiscal policy rules following the traditional German rigorist approach clashed with the deeply ingrained Keynesian instincts in France.”

I particularly liked the book’s discussion of the IMF’s decision to break the rules of its exceptional access framework when it lent to Greece. This framework was put in place in 2002, and it brought a halt to the financial crises emanating from emerging market countries that had been raging before then. Unfortunately, the rules were broken in 2010 when the IMF made a large loan to Greece that eventually led to a bailout of private creditors. Many realized it was a mistake, but the U.S. Treasury objected to restoring the rules, arguing that discretion was needed. With IMF management, staff, and other countries in favor of restoration, a shift in the U.S. position was all that was required. When the U.S. Treasury finally agreed to remove its objection, Congress supported the reform, which then passed. I was involved in the restoration effort and saw how the battle of ideas had both a country basis and an intellectual basis, much as the book tells the story.

While many of the chapters conclude with a sensible set of policy recommendations, the book takes more of a “positive” than a “normative” approach to policy. This perspective is useful in helping to identify barriers to better policy, and possible compromises. It is not, of course, a replacement for traditional analytical research, which has the power to change views based on ideas rather than national differences.

In this regard, it is promising that the authors show that national views change over time. Indeed, France and Germany switched sides: early in the book the authors show that the current French-German dichotomy is relatively new. Before World War II, they argue, Germany was more statist and discretionary (they give the example of Friedrich List criticizing free trade), while France was more laissez-faire and rules-based (they give the examples of the classical liberal economists Jean-Baptiste Say and Frederic Bastiat). But the experience of World War II, they argue, caused a counter-reaction and a switch, with Germany now more rules-based and France more discretionary. Maurice Allais, whom they classify as a classical liberal and who was writing in France during and after World War II, is an exception.

Posted in Financial Crisis, International Economics, Monetary Policy, Regulatory Policy

Committing to Economic Freedom at Home and Abroad

Dartmouth’s Doug Irwin has been writing about the General Agreement on Tariffs and Trade (GATT), which was finalized seventy years ago this month. His tweets include a link to President Harry Truman’s statement upon the announcement of the completion of the agreement. It is amazing how different attitudes about trade agreements are now compared to then, and how rapidly attitudes have changed in the ten years since Doug wrote an optimistic op-ed, “GATT Turns 60.” Doug is, of course, hoping that we can learn from how people addressed these problems seventy years ago and develop a strategy to meet our current challenges, even though the world and its problems are different.

Here it is important to point out that the GATT was part of a flurry of economic institution building in the mid-1940s. In 1945 Truman signed the Bretton Woods Agreements Act, officially creating the International Monetary Fund and the World Bank. As Henry Morgenthau, the Secretary of the Treasury, said, the Bretton Woods agreements aimed to “do away with economic evils—competitive currency devaluations and destructive impediments to trade.”

In 1946 Truman signed the Employment Act. It created two more new institutions: The President’s Council of Economic Advisers (CEA) and the Congress’s Joint Economic Committee (JEC). The CEA and the JEC brought professional economics to both international and domestic economic policy.

In 1947, along with the GATT, came the Truman Doctrine, and in 1948 the Marshall Plan. The Truman Doctrine and the Marshall Plan supported freedom and democracy with the principle that they are “essential to economic stability and orderly political processes,” as stated by Truman at a joint session of Congress. Economics was a big part of these new foreign policies.

The reforms required leadership by the U.S. Administration and Congress. Legislation passed in two different Congresses: the 79th, controlled by Democrats in 1945 and 1946, and the 80th, controlled by Republicans in 1947 and 1948. As Chicago economist Jacob Viner put it in 1945: “The United States is in effect, and, as concerns leadership, very nearly singlehanded, trying to reverse the whole trend of policy and practice of the world at large in the field of international economic relations…”.

U.S. leadership made it possible, but what else about the strategy is worth applying today? I outlined reform ideas in a Truman Medal talk a year ago. The essence of the strategy is that the United States should commit to promoting the principles of economic freedom—predictable policies, the rule of law, incentives provided by markets, and a limited role for government—in all aspects of its economic policy. These are the keys to restoring prosperity and security at home and abroad.

Posted in International Economics

Take Off the Muzzle and the Economy Will Roar

In his Saturday Wall Street Journal essay “Why the Economy Doesn’t Roar Anymore”—illustrated with a big lion with its mouth shut—Marc Levinson offers the answer that the “U.S. economy isn’t behaving badly. It is just being ordinary.” But there is nothing ordinary (or secular) about the current stagnation of barely 2 percent growth. The economy is not roaring because it’s muzzled by government policy, and if we take off that muzzle—like Lucy and Susan did in “The Lion, the Witch and the Wardrobe”—the economy will indeed roar.

It is of course true, as Levinson states, that “faster productivity growth” is “the key to faster economic growth.” But it’s false, as he also states, that it has all been downhill since the “long boom after World War II” and that “there is no going back.” The following chart of productivity growth, drawn from my article in the American Economic Review, shows why Levinson misinterprets recent history. Whether you look at 5-year averages, statistically filtered trends, or simple directional arrows, you can see huge swings in productivity growth in recent years. These movements—the productivity slump of the 1970s, the rebound of the 1980s and 1990s, and the recent slump—are closely related to shifts in economic policy, and economic theory indicates that the relationship is causal, as I explain here and here and in blogs and op-eds. You can also see that the recent terrible performance—negative productivity growth for the past year—is anything but ordinary.

[Chart: Productivity growth]

Writing about the 1980s and 1990s, Levinson claims that “deregulation, privatization, lower tax rates, balanced budgets and rigid rules for monetary policy—proved no more successful at boosting productivity than the statist policies…” The chart shows the contrary: productivity growth was generally picking up in the 1980s and 1990s. It is the stagnation of the late 1960s, the 1970s, and the last decade that is state-sponsored. To turn the economy around we need to take the muzzle off, and that means regulatory reform, tax reform, budget reform, and monetary reform.

Posted in Fiscal Policy and Reforms, Regulatory Policy, Slow Recovery

Should the Previous Framework for Monetary Policy Be Fundamentally Reconsidered?

“Did the crisis reveal that the previous consensus framework for monetary policy was inadequate and should be fundamentally reconsidered?”  “Did economic relationships fundamentally change after the crisis and if so how?” These important questions set the theme for an excellent conference at the De Nederlandsche Bank (DNB) in Amsterdam this past week. In a talk at the conference I tried to answer the questions. Here’s a brief summary.

Eighty Years Ago

To understand the policy framework that existed before the financial crisis, it’s useful and fitting at this conference to trace the framework’s development back to its beginning exactly eighty years ago. It was in 1936 that Jan Tinbergen built “the first macroeconomic model ever.” It “was developed to answer the question of whether the government should leave the Gold standard and devaluate the Dutch guilder” as described on the DNB web site.

“Tinbergen built his model to give advice on policy,” as Geert Dhaene and Anton Barten explain in When It All Began. “Under certain assumptions about exogenous variables and alternative values for policy instrument he generated a set of time paths for the endogenous variables, one for each policy alternative. These were compared with the no change case and the best one was selected.” In other words, Tinbergen was analyzing policy in what could be called “path-space,” and his model showed that a path of devaluation would benefit the Dutch economy.

Tinbergen presented his paper, “An Economic Policy for 1936,” to the Dutch Economics and Statistics Association on October 24, 1936, but “the paper itself was already available in September,” according to Dhaene and Barten, who point out the amazing historical timing of events: “On 27 September the Netherlands abandoned the gold parity of the guilder, the last country of the gold block to do so. The guilder was effectively devalued by 17 – 20%.” As is often the case in policy evaluation, we do not know whether the paper influenced that policy decision, but the timing at least allows for that possibility.

In any case the idea of doing policy analysis with macro models in “path-space” greatly influenced the subsequent development of a policy framework. Simulating paths for instruments—whether the exchange rate, government spending or the money supply—and examining the resulting paths of the target variables demonstrated the importance of correctly distinguishing between instruments and targets, of obtaining structural parameters rather than reduced-form parameters, and of developing econometric methods such as full-information maximum likelihood (FIML), limited-information maximum likelihood (LIML), and two-stage least squares (TSLS) to estimate structural parameters. Indeed, this largely defined the research agenda of the Cowles Commission and Foundation at Chicago and Yale, of Lawrence Klein at Penn, and of many other macroeconomists around the world. Macroeconomic models like MPS and MCM were introduced to the Fed’s policy framework in the 1960s and 1970s.

Forty Years Ago

Starting about forty years ago, this basic framework for policy analysis with macroeconomic models changed dramatically. It moved from “path space” to “rule space.” Policy analysis in “rule space” examines the impact of different rules for the policy instruments rather than different set paths for the instruments. Here I would like to mention two of my teachers—Phil Howrey and Ted Anderson—who encouraged me to work in this direction for a number of reasons, and my 1976 Econometrica paper with Anderson, “Some Experimental Results on the Statistical Properties of Least Squares Estimates in Control Problems.” The influential papers by Robert Lucas (1976), “Econometric Policy Evaluation: A Critique,” and by Finn Kydland and Ed Prescott (1977), “Rules Rather Than Discretion: The Inconsistency of Optimal Plans,” provided two key reasons for a “rule-space” approach, and my 1979 Econometrica paper “Estimation and Control of a Macroeconomic Model with Rational Expectations” used the approach to find good monetary policy rules in an estimated econometric model with rational expectations and sticky prices.

Over time, more complex empirical models with rational expectations and sticky prices (new Keynesian models) provided better underpinnings for this monetary policy framework. Many of the new models were international, with highly integrated capital markets and no-arbitrage conditions on the term structure. Soon the new Keynesian FRB/US, FRB/Global and SIGMA models replaced the MPS and MCM models at the Fed. The objective was to find monetary policy rules that improved macroeconomic performance. The Taylor Rule is an example, but, more generally, the monetary policy framework found that rules-based monetary policy led to better economic performance in the national economy as well as in the global economy, where an international Nash equilibrium in “rule space” was nearly optimal.
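
For concreteness, the original 1993 version of that rule can be written out in a couple of lines of Python. This is only an illustration of what an object in “rule space” looks like; the coefficients and the 2 percent values for the equilibrium real rate and the inflation target are those of the 1993 formulation, not a recommendation for today.

```python
def taylor_rule_1993(inflation, output_gap):
    """Taylor (1993) rule, all variables in percent: the federal funds rate equals the
    equilibrium real rate (2) plus inflation, plus half the gap of inflation from its
    2 percent target and half the output gap."""
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 0.5 * output_gap

# Example: 3 percent inflation and a 1 percent output gap imply a 6 percent funds rate.
print(taylor_rule_1993(inflation=3.0, output_gap=1.0))  # 6.0
```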

This was the monetary policy framework that was in place at the time of the financial crisis. Many of the models that underpinned this framework can be found in the invaluable archives of the Macro Model Data Base (MMB) of Volker Wieland, where there are many models dated 2008 and earlier, including the models of Smets and Wouters, of Christiano, Eichenbaum, and Evans, of De Graeve, and of Taylor. Many models included financial sectors with a variety of interest rates; the De Graeve model included a financial accelerator. The impact of monetary shocks was quite similar in the different models, as shown here and summarized in the following chart of four models, and simple policy rules were robust across the different models.

[Chart: The impact of a monetary shock in four pre-crisis models]

Perhaps most important, the framework worked in practice. There is overwhelming evidence that when central banks moved toward more transparent rules-based policies in the 1980s and 1990s, including through a focus on price stability, there was a dramatic improvement compared with the 1970s when policy was less rule-like and more unpredictable.  Moreover, there is considerable evidence that monetary policy deviated from the framework in recent years by moving away from rule-like policies, especially during the “too low for too long” period of 2003-2005 leading up to the financial crisis, and that this deviation has continued. In other words, deviating from the framework has not worked.

Have the economic relationships and therefore the framework fundamentally changed since the crisis? Of course, as Tom Sargent puts it in his editorial review of the forthcoming Handbook of Macroeconomics, edited by Harald Uhlig and me, “both before and after that crisis, working macroeconomists had rolled up their sleeves to study how financial frictions, incentive problems, incomplete markets, interactions among monetary, fiscal, regulatory, and bailout policies, and a host of other issues affect prices and quantities and good economic policies.” But, taking account of this research, the overall basic macro framework has shown a great degree of constancy, as suggested by studies in the new Handbook of Macroeconomics. For example, Jesper Linde, Frank Smets, and Raf Wouters examine some of the major changes—such as the financial accelerator or a better modeling of the zero lower bound. They find that these changes do not alter the behavior of the models during the financial crisis by much. They also note that, despite efforts to do so, there has been little change in the framework to incorporate the impact of unconventional policy instruments such as quantitative easing and negative interest rates. In another paper in the new Handbook, Volker Wieland, Elena Afanasyeva, Meguy Kuete, and Jinhyuk Yoo examine how new models of financial frictions or credit constraints affect policy rules. They find only small changes, including a benefit from including credit growth in the rules.

All this suggests that the crisis did not reveal that the previous consensus framework for monetary policy should be fundamentally reconsidered, or even that it has fundamentally changed. This previous framework was working.  The mistake was deviating from it.  Of course, macroeconomists should keep working and reconsidering, but it’s the deviation from the framework—not the framework itself—that needs to be fundamentally reconsidered at this time. I have argued that there is a need to return to the policy recommendations of such a framework domestically and internationally.

We are not there yet, of course, but it is a good sign that central bankers have inflation goals and are discussing policy rules. Janet Yellen’s policy framework for the future, put forth at Jackson Hole in August, centers around a Taylor rule.  Many are reaching the conclusion that unconventional monetary policy may not be very effective. Paul Volcker and Raghu Rajan are making the case for a rules-based international system, and Mario Draghi argued at Sintra in June that “we would all clearly benefit from enhanced understanding among central banks on the relative paths of monetary policy. That comes down, above all, to improving communication over our reaction functions and policy frameworks.”

Posted in Financial Crisis, Monetary Policy, Teaching Economics

The Statistical Analysis of Policy Rules

My teacher, colleague, and good friend Ted Anderson died this week at the age of 98.  Ted was my Ph.D. thesis adviser at Stanford in the early 1970s, and later a colleague when I returned to teach at Stanford in the 1980s. He was a most important influence on me and my research. He taught me an enormous amount about time series analysis, and about how to prove formal theorems in mathematics.  I am grateful to him for sharing his wisdom and for endeavoring to instill his rigorous approach to econometric research. His lectures were clear and insightful, but it was from interacting with him in his office or in the Econometrics Seminar that one really learned time series analysis.

The Stanford Econometrics Seminar in the early 1970s was an amazingly fertile place for developing new ideas. Fortunately for me, the seminar focused on several new books that formally explored the problem of optimal economic policy formulation in statistical or probabilistic settings. We read, presented, and critiqued chapters from Peter Whittle’s Prediction and Regulation, Masanao Aoki’s Optimization of Stochastic Systems, and Box and Jenkins’ Time Series Analysis: Forecasting and Control, and freely discussed possible extensions and implications. It was a terrific way to learn about the latest opportunities for pushing research frontiers.

The title of this post is a variation on the title of Ted Anderson’s 1971 classic book, The Statistical Analysis of Time Series, which was the first textbook I used to study time series analysis. My Ph.D. thesis was on policy rules, and in particular on the “joint estimation and control” problem. The problem was to find an optimal economic policy in a model in which one does not know the parameters and therefore has to estimate and control the system simultaneously.

An unresolved issue was how much experimentation should be incorporated into the policy.  Could one find an optimal way to move the policy instrument around in order to learn more about the model or its parameters? While costly in the short run, the improved information would pay off in the future with better settings for the instruments.

At that time everyone was approaching this problem through a dynamic programming and optimal control approach, as in Aoki’s book and, in a special case, in Ed Prescott’s Ph.D. thesis at Carnegie Mellon, which appeared in Econometrica in 1972. This approach was very difficult for any reasonably realistic application because of the curse of dimensionality of the backward induction method of dynamic programming.

Confronting these difficulties at Stanford in the early 1970s, we looked for a different approach to the problem, one much closer to the methods of classical mathematical statistics as in Ted’s book. The idea was to start by proposing a rule for setting the instruments, perhaps using some heuristic method, and then to examine how the rule performed using standard statistical criteria adapted from the estimation problem to the control problem. The rules were contingency plans that described how the instruments would evolve over time. The criteria for evaluating the rules stressed consistency and speed of convergence.
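
A rough sketch, under simplified assumptions of my own rather than the models in the thesis, may help convey the approach. Take a one-equation system y_t = beta*u_t + e_t with beta unknown. A proposed rule sets the instrument each period using the running least-squares estimate of beta (certainty equivalence), with an optional probing term that adds the kind of deliberate experimentation discussed above; Monte Carlo simulation then applies the statistical criteria: does the estimate converge, and how quickly does the target settle near its desired value?

```python
# Sketch of "propose a rule, then evaluate it statistically" in a toy system
# y_t = beta*u_t + e_t with beta unknown (illustrative setup, not the original models).
import numpy as np

def run(probe=0.0, beta=2.0, target=1.0, T=200, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    b_hat, sxx, sxy = 1.0, 1e-6, 1e-6               # initial guess and running least-squares sums
    y_path, b_path = [], []
    for _ in range(T):
        u = target / b_hat + probe * rng.normal()   # rule: certainty equivalence plus probing
        y = beta * u + sigma * rng.normal()
        sxx += u * u
        sxy += u * y
        b_hat = max(sxy / sxx, 0.1)                 # updated estimate, bounded away from zero
        y_path.append(y)
        b_path.append(b_hat)
    return np.array(y_path), np.array(b_path)

# Evaluate the rule by simulation: does the estimate converge, and how fast does y settle?
for probe in (0.0, 0.2):
    y, b = run(probe=probe)
    mse = np.mean((y[-100:] - 1.0) ** 2)            # deviation of y from its target late in the run
    print(f"probe={probe}: final estimate of beta = {b[-1]:.2f}, late-sample MSE of y = {mse:.3f}")
```

In this toy case the estimate converges even without any probing, while probing holds the target less tightly in the short run, echoing both the cost of experimentation noted above and the main finding described below.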

In my Ph.D. thesis as well as in a series of later papers with Ted, various convergence theorems were stated and proved. Later, statisticians T.L. Lai and Herbert Robbins proved more theorems that established desirable speeds of convergence in more general nonlinear settings.

The main finding from that research was that following a simple rule without special experimentation features was a good approximation. That made future work much simpler because it eliminated a great deal of complexity. This same basic approach has been used to develop recommendations for the actual conduct of monetary policy in the years since then, especially in the 1980s and 1990s.  The interaction between actual policy decisions and the recommendations of such policy rules provides ways to learn about how policy works.  In reality, of course, gaps develop between actual monetary policy and recommended policy rules based on economic and statistical theory. By focusing on these gaps one can learn more about what works and what doesn’t in the spirit of the approach to policy evaluation developed in that research with Ted at Stanford in the early 1970s.

Posted in Teaching Economics