A Whole New Section on Policy Rules in Fed’s Report

The Federal Reserve Board’s semi-annual Monetary Policy Report issued by Chair Janet Yellen last Friday contains a whole new section called “Monetary Policy Rules and Their Role in the Federal Reserve’s Policy Process.” The section contains new information and is well worth reading. Below is an excerpt which first lists three “key principles of good monetary policy” that the Fed says are incorporated into policy rules; it then lists five policy rules, including the Taylor rule and four variations on that rule that the Fed uses, with helpful references in notes which are also excerpted below.

The three principles sound quite reasonable: on the third–called the “Taylor Principle” by Mike Woodford and others–the Fed is quite specific in that it gives the numerical range for the response of the policy rate–the federal funds rate–to the inflation rate. The policy instrument is not mentioned specifically for the other two principles.

More information, including some algebra, is given in Figure A which is reproduced below. It is good that one of the five policy rules–which the Fed calls the “Taylor (1993) rule, adjusted”–is based on the important 2000 research paper by David Reifschneider and John Williams on the zero lower bound which I have written about here. Note that the Fed describes these rules using the unemployment rate rather than real GDP, relying on an empirical connection between the real GDP/potential GDP gap and the unemployment rate (Okun’s law). Note that what the Fed calls the “balanced-approach rule” is the Taylor rule with a different coefficient on the cyclical variable.
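The structure of these rules can be sketched in a few lines of Python. This is a rough illustration, not the Fed’s exact specification: it uses the textbook form of the Taylor (1993) rule and an Okun’s-law coefficient of 2, and the numerical inputs (a 4.6% natural unemployment rate, 2% neutral real rate) are my illustrative assumptions.

```python
# Illustrative sketch (not the Fed's exact specification) of the Taylor (1993)
# rule and the balanced-approach rule, written with the unemployment gap
# substituted for the output gap via Okun's law: output gap = 2 * (u* - u).

def taylor_rule(inflation, u, u_star=4.6, r_star=2.0, pi_star=2.0, gap_coef=0.5):
    """Prescribed federal funds rate, in percent (all inputs in percent)."""
    output_gap = 2.0 * (u_star - u)  # Okun's law approximation
    return r_star + inflation + 0.5 * (inflation - pi_star) + gap_coef * output_gap

def balanced_approach_rule(inflation, u, **kw):
    # Same rule, but with double the weight on the cyclical variable.
    return taylor_rule(inflation, u, gap_coef=1.0, **kw)

# Hypothetical inputs: inflation at 1.5%, unemployment at 4.4%
print(taylor_rule(1.5, 4.4))             # Taylor (1993) prescription
print(balanced_approach_rule(1.5, 4.4))  # more weight on the cycle
```

With unemployment below the assumed natural rate, the balanced-approach rule prescribes a higher rate than the Taylor rule, which is exactly the “different coefficient on the cyclical variable” at work.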

The Fed’s Report then goes on to compare the FOMC’s settings for the federal funds rate with the rules, as summarized in the next chart. It shows that the interest rate was too low for too long in the 2003-2005 period according to the Taylor rule (it is not clear whether the Fed was looking at the other rules back then), and that according to three of the rules the current fed funds rate should be moving up. (The Fed makes these calculations using its estimate of time variation in the neutral rate of interest.) In reporting on well-known policy rules, the Fed is doing part of what is called for in the legislation that recently passed the House as Title X, Section 1001 of H.R. 10. However, aside from being positive about the three principles, it does not say much about its own policy strategy in the document, as is also called for in the legislation.

In addition, the report focuses extensively on differences, rather than similarities, in the policy rules, and on differences in the inputs to those rules. Differences in measures of inflation, the neutral interest rate, and other variables are part of monetary policy making and always will be. Indeed, they are a reason to use policy rules as a means of translating differences in measurement into differences in policy in a systematic way. Such differences do not imply that policy rules or strategies are impractical, as the Report seems to suggest, at least based on some financial reporting.

Chair Yellen will testify on the Report and other matters at the House Financial Services Committee on Wednesday and at the Senate Banking Committee on Thursday of this week. The testimony and the questions and answers about the Report at the hearings will be well worth following.

Posted in Monetary Policy

Seeing Through the Fog of Federal Budget Forecasting

Every summer since 2010 I’ve charted the latest Congressional Budget Office (CBO) long-term projection of the federal debt, noting the similarity with the Fourth of July fireworks. But during these years, the CBO has changed its procedures several times, fogging up comparisons over time and lessons from experience.

Starting with my first post in 2010 on this topic, CBO reported projections of the debt as a percentage of GDP going out 75 years based on its “alternative fiscal scenario,” which is more realistic than its “baseline scenario” that assumes no change in current law. Here’s what CBO projections made in 2009 and 2010 looked like. You can see the explosions clearly as the debt-to-GDP ratio forecast reached 767% by 2083 in the 2009 estimate, and even higher in the 2010 estimate. I also sketched in an optimistic, but sensible, idea of what could happen the next year, 2011.

Perhaps because the explosions looked so bad, CBO implemented procedural changes to make its projections look less like fireworks. First, starting with the 2011 projections, CBO stopped reporting the debt-to-GDP ratio once it exceeded 200 percent of GDP, which turned out to be in 2031, ending the exercise about 50 years earlier than the previous projection. So, in my blog post about the 2011 projections I had to calculate my own debt projection. It was based entirely on CBO assumptions and is shown in this next chart, which clarified that the explosion was still there, even though CBO stopped publishing it. (As you can see, the 2011 projection was not what I had hoped for.)

The second CBO procedural change was to discontinue the use of the “alternative fiscal scenario” in the long-term projections, which made it impossible to update my plots and make comparisons as before. So, in the 2013 post, I simply superimposed CBO’s longer-term 2009 projection on its shorter-term 2013 projection, and thereby simulated a comparison, as in the next chart. As the chart shows quite clearly, the debt picture had not improved at all.

The third change at CBO was to stop using the alternative fiscal scenario completely, and instead rely only on the “baseline scenario” which unrealistically assumes current law remains fixed.

This change was unfortunate in my view. In fact, it turns out that the “alternative fiscal scenario” has been more accurate than the “baseline scenario.” To show this, I compared the CBO 2010 projection of the debt for the years 2011 to 2016 with the actual debt just reported by CBO in its March 2017 long-term budget outlook. The next chart shows how much closer the alternative fiscal scenario is to what actually transpired compared with the baseline scenario.

In any case, without the alternative fiscal scenario, it is not possible to continue updating and comparing the charts in an apples-to-apples fashion. Fortunately, the Committee for a Responsible Federal Budget (CRFB) has come to the rescue by filling the void left by CBO’s omission of its alternative fiscal scenario. In a recent piece, How High Will Debt Rise If Current Policy Continues?, CRFB created and reported their own alternative fiscal scenario, writing that “as we show in this piece, debt could grow far higher if policymakers continue to act as they have in recent years.”

Here is a chart from the CRFB study which helps to illustrate the differences; it goes out to 2047 and thereby shows part of the explosion. To be sure, this projection, which was done in April, does not include the impacts of already-enacted or administratively-executed regulatory changes, and clearly does not include the tax reform, budget reform, and monetary reform that have been proposed. Without these reforms the explosive story is the same.

But, as I have argued for a long time, with these reforms the budget projections and the economy would be quite different. The CBO assumes that real GDP growth will be only 1.9 percent per year during the period of its long-term budget projection, which CBO divides into 1.6% for productivity growth and 0.3% for worker-hours growth. With the changes in policy, including the fiscal consolidation plan, implicit in the chart above, that recommended budget reform in 2011, growth would be higher–say 3%, as I argued here, with both productivity growth and labor force participation rates rising. In sum, despite the fog that may have been created by changes in CBO’s budget procedures, the history of these forecasts and the failure of policy thus far show that the economy is still like a caged eagle ready to be set free, a fine analogy for the Fourth of July.
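Why the assumed growth rate matters so much follows from standard debt arithmetic: the debt-to-GDP ratio evolves roughly as d(t+1) = d(t)·(1+r)/(1+g) plus the primary deficit. Here is a minimal Python sketch; the 77% starting ratio, 3% interest rate, and 2% primary deficit are my illustrative assumptions, not CBO figures.

```python
# A minimal sketch of standard debt arithmetic, with hypothetical inputs,
# illustrating the sensitivity of long-run debt projections to assumed growth:
# debt/GDP evolves as d' = d * (1 + r) / (1 + g) + primary deficit.

def debt_path(d0, r, g, primary_deficit, years):
    """Debt-to-GDP ratio over `years`; all rates and ratios as decimals."""
    d = d0
    path = [d]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
        path.append(d)
    return path

# Hypothetical: debt at 77% of GDP, 3% interest rate, 2% primary deficit
slow = debt_path(0.77, 0.03, 0.019, 0.02, 30)  # CBO-style 1.9% growth
fast = debt_path(0.77, 0.03, 0.030, 0.02, 30)  # 3% growth
print(f"after 30 years: {slow[-1]:.0%} vs {fast[-1]:.0%}")
```

Even with everything else held fixed, moving growth from 1.9% to 3% noticeably flattens the path, because growth compounds in the denominator of the ratio year after year.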

Posted in Budget & Debt

Reserve Balances and the Fed’s Balance Sheet in the Future

An important part of the Fed’s normalization policy is to reduce its holdings of securities and thereby reserve balances—deposits of banks at the Fed—used to finance these holdings. As I argued when quantitative easing began in 2009, this reduction should be predictable and strategic.  That view was given some empirical support by the “taper tantrum” in 2013, when Ben Bernanke abruptly said in a congressional hearing that the Fed’s purchases of securities would taper in “the next few meetings.” In contrast, when the tapering later became more predictable, markets digested it easily.

The Addendum to the Policy Normalization Principles and Plans recently issued by the Fed conforms to this gradual and predictable approach. The Fed said it intends to reduce its holdings of Treasury and mortgage-backed securities by decreasing reinvestment of principal payments to the extent that they exceed gradually phased-in caps. As stated in the Addendum, the Fed “anticipates that the caps will remain in place once they reach their respective maximums so that the Federal Reserve’s securities holdings will continue to decline in a gradual and predictable manner until the Committee judges that the Federal Reserve is holding no more securities than necessary to implement monetary policy efficiently and effectively. Gradually reducing the Federal Reserve’s securities holdings will result in a declining supply of reserve balances.”

The statement that the supply of reserve balances will decline in a gradual and predictable manner is welcome. But there is still the important question of what the Fed is aiming for. As explained in the Addendum, the “Committee currently anticipates reducing the quantity of reserve balances, over time, to a level appreciably below that seen in recent years but larger than before the financial crisis; the level will reflect the banking system’s demand for reserve balances and the Committee’s decisions about how to implement monetary policy most efficiently and effectively in the future. The Committee expects to learn more about the underlying demand for reserves during the process of balance sheet normalization.”

It is important that the Fed refers to reserve balances so much in this statement. There are two basic approaches to the question of what the Fed should aim for, and the level of reserve balances in the balance sheet is the key difference between them. One approach is for the Fed to aim at an eventual balance sheet and a corresponding level of reserve balances in which the interest rate is determined by the demand and supply of reserves—in other words, by market forces—rather than by an administered rate under interest on excess reserves. (To be sure, during the normalization or transition period with inherited high reserve balances, there is no choice but to use interest on excess reserves.) Conceptually this means the Fed would eventually operate under a framework as it did in the two decades before the crisis. Most likely the level of reserve balances will be greater than the levels of 2007, but that will depend on liquidity regulations. The defining concept of this approach is a market-determined interest rate.

I think the case can be made for such a framework. The assessment of Peter Fisher, who ran the trading desk at the New York Fed for many years, is that such a framework would work. At the recent monetary policy conference at the Hoover Institution, he said “we could get back and manage it with quantities; it’s not impossible. We could just re-engineer the system and go back to the way we were.” I agree, based on the time I spent in the markets for federal funds in those days, watching how they operated and writing up an institutional description and model of how people traded in those markets. If we went back to that framework, there would not be any need for interest on excess reserves. If the Fed wanted to change the short-term interest rate, it would just adjust the supply of reserves. The amount of reserves would be set so that the supply and demand for reserves determine the interest rate.

The Fed could also provide liquidity support if it needed to do so in this framework. Recall the events of 9/11 when the devastating physical damage led the Fed to provide effective lender of last resort loans. So you can have that kind of liquidity support in such a regime.

If it wanted to, the Fed could operate with a corridor system in this framework. There would be a lower interest rate on deposits at the floor of the corridor, a higher interest rate on borrowing at the ceiling of the corridor, and, most important, a market-determined interest rate above the floor and below the ceiling.

This approach creates an important connection between the Fed’s policy interest rate and the amount of reserves or money in the system. The Fed is responsible for reserves and money, and that connection is important to maintain. Without it, you raise the chances of the Fed becoming a multipurpose institution, which leads people to question its independence. The Fed has already been involved in credit allocation with its mortgage-backed securities purchases, and Charles Plosser argues it might do much more.

The second approach is a system where the quantity supplied of reserves remains well above the demand, and the interest rate is administered through interest on excess reserves as recently discussed along with other normalization issues by Fed Governor Powell.  The method is sometimes called a “floor” system, but the federal funds rate moves a bit below the floor, so it is not really a floor. In any case, the interest rate is not market determined.

Those who support the second approach argue that more reserves than the amount needed to determine the interest rate are needed for liquidity purposes. Some (see Todd Keister) argue that the payment system doesn’t function well with a smaller amount of reserves. In the past there were large daylight overdrafts. However, one could limit the size of the overdrafts, perhaps as a percentage of collateral. There also may be some regulatory changes that would reduce the demand for liquidity.

Some argue that with a large balance sheet the Fed could provide depository services to regular people, just like it provides depository services to banks, with advantages described by John Cochrane here in an earlier conference volume. The Treasury could provide that service without interfering with the Fed’s operations, however, or there may be other ways to provide the service without creating a disconnect between the interest rate and reserves.

Others argue that a permanently large balance sheet with large reserve balances would allow quantitative easing to be used regularly.  I don’t think quantitative easing has been that effective, and because there is uncertainty about its impact, it is hard to conduct a rules-based monetary policy with such interventions.  Moreover, the spreading of quantitative easing across the international monetary system adds turbulence to exchange rates and capital flows.

In sum, we should not only be thinking about how to reduce the size of the balance sheet in a predictable, strategic way. We should also be thinking about where reserve balances are going.  I think the first proposal described here makes sense. After the normalization, after the transition is finished, interest rates would again be determined by market forces.

Posted in Financial Crisis, Monetary Policy, Regulatory Policy

R-Star Wars

In a recent speech at the Economic Club of New York, Fed Governor Jay Powell stated that the endpoint of the Fed’s normalization process “will occur when our target reaches the long-run neutral rate of interest. Estimates of that rate are subject to significant uncertainty. The median estimate of its level by FOMC participants in March was 3 percent, more than a full percentage point below pre-crisis estimates.” The neutral rate of interest is commonly designated as R*. (Sometimes R* is stated in real terms, rather than nominal terms. With an inflation target of 2 percent, a real neutral rate would be 1 percent according to the FOMC median of 3 percent nominal.)

Actually the estimated drop in R* is quite recent: the FOMC median nominal R* was 4 percent just 4 years ago, which illustrates the uncertainty. It’s a very important issue: If there has actually been a drop, then, as some argue, the Fed should be ready for another zero lower bound event with more quantitative easing or even a higher inflation target. If there has not been a drop, then the Fed’s normalization endpoint will likely be the type of policy used in the 1980s and 1990s.
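The nominal-to-real conversion used above is just the approximate Fisher relation: subtract the inflation target from the nominal neutral rate. A one-line sketch makes the two FOMC medians comparable.

```python
# The conversion used in the text: real neutral rate = nominal neutral rate
# minus the inflation target (the approximate Fisher relation, adequate at
# low inflation rates). All values in percent.

def real_r_star(nominal_r_star, inflation_target=2.0):
    return nominal_r_star - inflation_target

print(real_r_star(3.0))  # current FOMC median: 3% nominal
print(real_r_star(4.0))  # the median of four years earlier: 4% nominal
```

So the debated decline is from a 2 percent real neutral rate to a 1 percent real neutral rate, a full point, which is why the question matters so much for the normalization endpoint.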

To take a deep dive into the issue, the Hoover Institution and the Stanford Institute for Economic Policy Research held an R-STAR WARS debate between two of the world’s foremost experts on this question, with me as moderator. On one side was Volker Wieland of Goethe University Frankfurt and the German Council of Economic Experts. On the other was John C. Williams of the Federal Reserve Bank of San Francisco and a member of the FOMC. Williams argued that the neutral rate is now much lower than in the past. Wieland argued that there has been no such decline. Each made a good case in my view, but you be the judge. Here is a video of the debate in three parts: Opening remarks, Questions from Taylor, and Questions from the audience. At the least you will see why Powell says there is “significant uncertainty,” and perhaps that is the main takeaway for policymakers.

Posted in Monetary Policy

ECB Watching

Hundreds of financial market participants and news reporters showed up for the 18th annual “ECB and its Watchers” conference in Frankfurt last week. I was one of the speakers as I was at the first conference in 1999. It was a good day for talking about policy with candid questions and answers.  ECB President Mario Draghi led off with a review of current policy; I followed with a talk about asset purchase programs (my assigned topic), and for the rest of the day we listened to presentations and discussion of unconventional policy, structural reform, international coordination, and an enlightening debate between Volker Wieland and John Williams on the “neutral” real interest rate moderated by Sam Fleming of the FT.

The previous time I spoke at this event in 2014 I examined the implications of the Fed’s large-scale asset purchases from 2009 to 2014. I argued then that the purchases did not lower longer term rates except for possible signaling and temporary announcement effects, and I pointed to ten possible negative unintended consequences:

  • Distortion of price discovery in markets
  • Unresponsiveness of long-term rates to key events as in normal times
  • Non-functioning of money markets
  • Risk of discouraging lending and investment
  • Uncertainty about unwinding
  • Too much risk-taking by risk-averse investors
  • Undermining of fiscal discipline
  • Public questioning of the need for central bank independence
  • Large re-distributive impacts
  • International contagion of policy as central banks followed each other

This time I examined the ECB’s asset purchase program, which expanded dramatically in 2014. I mentioned the same ten concerns (later the ECB’s Peter Praet noted that they were monitoring these). But I also noted more of an impact on financial markets, and possibly on inflation, than in the US. The graphs below show the expansion of ECB purchases and the accompanying change in the euro-area inflation rate, though the latter may prove to be temporary.

There are also much more detailed studies of the ECB’s asset purchases, many of which find large exchange rate effects. Empirical work by Demertzis and Wolff in a June 2016 Bruegel Policy Contribution found a big effect on the euro exchange rate as did a January 2017 Bundesbank Monthly Report which went beyond announcement effects with dynamic regression estimates. A September 2016 ECB working paper by Andrade, Breckenfelder, De Fiore, Karadi, Tristani found effects on asset prices, though they focused on announcements; they then plugged these into a model to see the impact on lending, but, in my view, the model simulations still need to be checked for robustness.

The exchange rate effects raise international issues (per the tenth unintended consequence on the above list). To see this, trace the recent history of the asset purchase programs at the Bank of Japan and ECB. During the Fed’s asset purchase program, Prime Minister Shinzō Abe, when he was first running for office, complained about the strong yen, and, when he won, he appointed Haruhiko Kuroda, who then expanded asset purchases in 2013 which was accompanied by a depreciation of the yen as seen in the yen-dollar graph.

Soon after that, Mario Draghi spoke about the strong euro at Jackson Hole in August 2014; the ECB’s asset purchase program began, and there was a large depreciation of the euro, also shown in the euro-dollar graph.

These international developments along with improving conditions in the euro area indicate that it is time to consider a strategy for normalizing ECB policy in a clear, predictable way. Economics and the experience with normalization by the Fed suggest ways to do it gradually and strategically (including the taper tantrum experience of what not to do). The question of sequencing (reducing the size of purchases versus policy rate increases) is best approached by paying careful attention to the negative impacts listed above, as Mario Draghi suggested in his talk, and there are special considerations for a currency zone where some countries are performing differently than others.

It would be best if normalization were toward a rules-based monetary policy which has worked well in the past, as I explained in a presentation on interest rate rules at the 1999 ECB Watchers conference. Moreover, rules-based monetary policy at the ECB, the BOJ, and other central banks would help to create a much-needed rules-based international monetary system.

Posted in Monetary Policy

It’s Time to Pass the Financial Institutions Bankruptcy Act

Today the House Judiciary Subcommittee led by Tom Marino held a hearing on the Financial Institution Bankruptcy Act (FIBA), which lays out in clear legislative language the “Chapter 14 type” reform proposals that Stanford’s Hoover Resolution Project has been working on since the financial crisis. Based on this hearing, which included top legal experts familiar with the bankruptcy code, including Bankruptcy Judge Mary Walrath, I am optimistic that the bill will become law soon. The written testimony of all of the witnesses, including me, can be found here.

As I stated in my opening remarks at the hearing, FIBA, which adds a new Subchapter to Chapter 11, is an essential element of a pro-growth economic program.  The legislation makes failure feasible under clear rules without disruptive spillovers. It would

  • help prevent bailouts
  • diminish excessive risk-taking
  • remove uncertainty associated with an ad hoc bailout process
  • reduce the likelihood and severity of financial crises and thereby lead to stronger economic growth.

Chapter 11 has many benefits, including its basic reliance on the rule of law, but for large complex financial institutions it has shortcomings because it is likely to be too slow and cumbersome to deal with runs on failing financial institutions.

FIBA would also rely on the rule of law and strict priority rules of bankruptcy, but it would operate faster—over a weekend—leaving operating subsidiaries outside of bankruptcy entirely. It would do this by moving the original financial firm’s operations to a new bridge company that is not in bankruptcy.  This bridge company would be recapitalized by leaving behind long-term unsecured debt. It would thus let a failing financial firm go into bankruptcy in a predictable, rules-based manner without spillovers.

It is important to understand how a reformed bankruptcy code would resolve a large financial institution, and in an important contribution Emily Kapur has done just that, examining how it would have worked in the case of Lehman.

FIBA would work better than Title II of the Dodd-Frank Act, under which the FDIC would have to exercise considerable discretion and might wish to hold some creditors harmless in order to prevent spillovers. The perverse incentive effects of such bailouts occur whether the extra payment comes from the Treasury, from a fund financed by financial institutions, or from smaller payments to other creditors.

Moreover, under Title II, a government agency, the FDIC, would make the decisions. In contrast, under bankruptcy reorganization, private parties, motivated and incentivized by profit and loss considerations, make key decisions about the direction of the new firm.

Another advantage of FIBA is that it would facilitate resolution planning under Dodd-Frank. Some of the resolution plans submitted by the large financial firms have been rejected by Fed and FDIC. With FIBA the plans would be feasible.

The issue of liquidity should be considered if FIBA were to replace Title II. The new firm might need lender of last resort support. Section 13(3) of the Federal Reserve Act would be available in such circumstances.

International arrangements should also be considered if FIBA were to replace Title II.  For example, current European resolution authorities contemplate a parallel authority abroad. If Title II were repealed and there was no parallel authority in the U.S., then a way to cooperate internationally would have to be created.

In sum, reform of the bankruptcy law, such as FIBA, is essential for ending government bailouts and for creating a robust financial system supporting economic stability and growth.

Posted in Financial Crisis, Regulatory Policy

A New Hearing and, Possibly, a New Phase in Monetary Policy

Today’s hearing of the House Monetary Policy subcommittee—the first of the new Congress with the new chair Andy Barr from Kentucky—provided a good opportunity to discuss policy in light of new and different decisions by the Fed, new and different speeches by FOMC members, and of course a new Administration. I testified along with John Allison, Marvin Goodfriend, and Joshua Bivens. It was a good, candid hearing, which moved reform efforts forward.

Though it may seem like a long time ago, it is crucial to remember that it was back in 2003-2005 that the Fed departed from the policy of the previous two decades of good economic performance by holding the federal funds rate “too low for too long.”  Along with a breakdown in the regulatory process, this policy decision was a key factor leading to the financial crisis and the terribly high unemployment that followed.  It is telling in this regard that Josh Bivens, in his more positive assessment of recent monetary policy at the hearing, did not even discuss the possible role of policy leading up to the crisis and the large increase in unemployment.

After the panic in the fall of 2008, Fed policy moved sharply in an unconventional direction. The Fed purchased large amounts of U.S. Treasury and mortgage-backed securities, and it held its policy interest rate near zero when indicators used in the 1980s and 1990s suggested that higher rates were in order. Much research shows that these post-panic policies were not effective. Economic growth was consistently below the Fed’s forecasts with the policies, and was much weaker than in earlier U.S. recoveries from deep recessions.

One clear lesson from this historical experience is that the Fed should normalize policy and get back to the kind of policy that worked well in the past.

Here there is some progress in recent months, including in the FOMC decision this week. The Fed has shown more determination to normalize policy, and the whole term structure of interest rates has adjusted. With the federal funds rate still below appropriate levels, a key step is to raise it gradually and strategically going forward. In my view, reserve balances should also be reduced to the size where the interest rate is market determined rather than administered by the Fed’s setting the rate on excess reserves.  (I know there is some disagreement here and I consider the issues in my written testimony.)

A second lesson is that the FOMC should adopt and explain its monetary strategy, put the strategy in its “Statement on Longer Run Goals and Monetary Policy Strategy,” and then compare that strategy with existing monetary policy rules in a transparent way.  John Allison and Marvin Goodfriend supported this view. Marvin testified that “the Fed should include in the ‘Statement’ its intention to improve legislative oversight by presenting the FOMC’s independently chosen monetary policy decisions against a familiar Taylor-type reference rule for monetary policy.”

Here too there is some progress in recent months.  In a speech in January, Fed Chair Janet Yellen compared current monetary policy with the original Taylor rule, with a Taylor rule which is more reactive to the state of the economy, and with a Taylor rule with inertia.  Vice-Chair Stanley Fischer gave two follow-up speeches in February and March which take a similar approach, focusing on decisions made in 2011 and more recently, and comparing those decisions with the same three rules. All these speeches take policy in the direction of the kind of policy transparency that is called for in the Fed Oversight, Reform and Modernization Act (FORM).
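The “Taylor rule with inertia” mentioned above is usually written as partial adjustment toward the Taylor-rule prescription. Here is a sketch under standard textbook assumptions; the 0.85 smoothing coefficient and the inputs are illustrative conventions from the literature, not Yellen’s exact specification.

```python
# A sketch (standard in the literature, not Yellen's exact specification) of
# the inertial variant of the Taylor rule: each period the prescribed rate
# moves only part of the way toward the Taylor (1993) level.

def taylor_1993(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """The original Taylor (1993) prescription, in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

def inertial_rule(prev_rate, inflation, output_gap, rho=0.85):
    # rho = 0.85 per quarter is a common smoothing value in this literature
    return rho * prev_rate + (1 - rho) * taylor_1993(inflation, output_gap)

# Starting from a near-zero rate, the inertial rule closes the gap gradually
# rather than jumping straight to the Taylor-rule level.
rate = 0.25
for _ in range(4):
    rate = inertial_rule(rate, inflation=1.5, output_gap=0.5)
    print(round(rate, 2))
```

The smoothing term is what makes the inertial rule prescribe a gradual path of rate increases from the zero lower bound, which is why it features in these comparisons of recent policy with the rules.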

The hearing was also a chance for members to emphasize that there is nothing in the FORM act that would require the Fed to follow a mechanical formula.  The Fed would simply choose its own strategy, and could change it or deviate from it if circumstances called for a change.

A third lesson involves the international monetary system. Unconventional monetary policies have spread internationally as the Bank of Japan, the European Central Bank, and other central banks adopted policies similar to that of the Fed.  Thus the international monetary system has deviated further from a sound rules-based monetary system, and the results have not been good. I have been arguing that normalization by the Fed would lead other central banks to move away from unconventional policies, and eventually normalize.

Here too there is some progress. As the Fed has shown a more determined effort to normalize, the ECB has changed the pace of its unconventional policies, and the BOJ is increasingly mentioning problems with its own quantitative easing. There is clearly an increased understanding of a change at other central banks, and perhaps a new phase internationally as well.

Posted in Monetary Policy

Milton Friedman on Freedom: A New Book

Milton Friedman on Freedom is a delightful new book of Friedman’s best works on freedom, compiled and edited by Robert Leeson and Charles Palm. It is a delight to have these writings in one lean volume, and the book also highlights a new and much larger online collection of all of Friedman’s published and significant unpublished writings, The Collected Works of Milton Friedman, compiled by Leeson and Palm, together with selected audio-visual and other unpublished materials from the papers of Milton Friedman in the Hoover Institution Archives.

Both the book and online collection are sorely needed now. People often channel Friedman to support their own views, even if they are contrary to his actual views!  So he deserves to be read in the original, and readers will find the book refreshing even if they are already familiar with Friedman.

Leeson and Palm arrange the essays chronologically, starting with one of Friedman’s first articles on freedom from the 1950s where he notes that liberalism in the classical sense “takes freedom of the individual—really, of the family—as its ultimate value.”

Friedman wrote often about the connection between economic freedom and other freedoms, and he believed that “economic freedom, in and of itself, is an extremely important part of total freedom.”  What we sometimes forget, however, is that he thought the loss of total freedom caused by restrictions on economic freedom was as much a concern as the loss of economic prosperity caused by such restrictions.  His essays provide many examples of how reductions in economic freedom through government subsidies or regulations stifle freedom of speech. He refers to President Ford’s WIN (Whip Inflation Now) program, which, though “pretty silly,” no business leader spoke out against.  The reason, Friedman argued, was a concern about such things as the “IRS getting ready to come and audit” or “the Department of Justice standing only too ready to launch an anti-trust suit.”

In particularly interesting essays in the book, Friedman contrasts his views with other champions of freedom. He argues for an empirical approach—in which one is tolerant of views one disagrees with and tests them with data—as a way to help resolve differences. He writes that there is a “utopian strand in libertarianism. You cannot simply describe the utopian solution, and leave it to somebody else how we get from here to there. That’s not only a practical problem. It’s a problem of the responsibilities that we have.”

Friedman warns that periods of freedom are very rare in the long span of history, especially in his essay “The Fragility of Freedom.” Friedman, along with his wife Rose, argues that there is a close connection over time between ideas and practical policy applications, but with a long lag between the two: The “Rise of Laissez-Faire” from 1840 to 1930 followed the “Adam Smith Tide” that began in 1776. The “Rise of the Welfare State” from 1930 to 1980 followed the “Fabian Tide” that began in 1885. The “Resurgence of Free Markets” starting in 1980 followed the “Hayek Tide.”  But, as I write in the introduction to the Leeson-Palm book, this most recent resurgence appears to have been cut short as policy has moved away from the principles of economic freedom in recent years, as shown by this table of United States data on economic freedom from the Fraser Institute.

Perhaps the tide is now turning again. I hope so. But, in any case, this timely book tells us there is never a time for complacency.

Posted in Teaching Economics

Economic Policy Explains Growth Conundrum

“Growth Conundrum” sets the theme for the many fascinating articles in the latest issue of the IMF’s quarterly magazine Finance and Development, which includes an opening essay by Nicholas Crafts and a profile of Kristin Forbes. I was asked to write one of the articles summarizing my research on the recent slow growth, in which I have been critical of the secular stagnation view. In this post I reprint that article, but also add references to articles and papers by me and others providing relevant background and support, which I could not put in the published article due to understandable space constraints.

The article is one part of a two-part “Point-Counterpoint: Secular Stagnation” in which Brad DeLong takes the other side. Brad has already responded to my article on his blog, but he apparently did not have the benefit of these references and he was completely off point in his counterpoint to the magazine’s point-counterpoint.

Here’s my article from F&D with the additional reference notes in italics:

Policy Is the Problem. Secular stagnation has been the subject of much debate ever since 2013, when Lawrence Summers proposed the hypothesis “that the economy as currently structured is not capable of achieving satisfactory growth and stable financial conditions simultaneously.” This quote and the one in the next paragraph are from Chapter 2: Low Equilibrium, Real Rates, Financial Crisis, and Secular Stagnation by Summers in the book Across the Great Divide, edited by Martin Baily and me, which is based on the October 2013 Brookings-Hoover conference where Larry first presented the idea. My chapter in the same conference volume includes an early critique of secular stagnation.

Speaking at a recent conference, Summers posited that for the past decade and a half, the economy had been constrained by a “substantial increase in the propensity to save and a substantial reduction in the propensity to spend and invest,” which were keeping equilibrium interest rates and economic growth low. See page 44, Chapter 2, of the Baily-Taylor volume from the same conference.

Few dispute that the economy has grown slowly in recent years, especially when the financial crisis is taken into account. But secular stagnation as an explanation for this phenomenon raises inconsistencies and doubts.

Low policy interest rates set by monetary authorities, such as the US Federal Reserve, before the financial crisis were associated with a boom characterized by rising inflation and declining unemployment—not by the slack economic conditions and high unemployment of secular stagnation. I summarized the facts in a January 2014 Wall Street Journal article The economic hokum of ‘secular stagnation‘ saying that “There were boom-like conditions, especially in residential investment, as demand for homes skyrocketed and housing price inflation jumped from around 7% per year from 2002-03 to near 14% in 2004-05 before busting in 2006-07. The unemployment rate got as low as 4.4%—well below the normal rate and not a sign of slack. Inflation was rising, not falling. During the years 2003-05, when the Fed’s interest rate was too low, the annual inflation rate for the GDP price index doubled to 3.4% from 1.7%.” The evidence runs contrary to the view that the equilibrium real interest rate—that is, the real rate of return required to keep the economy’s output equal to potential output—was low prior to the crisis. And the fact that central banks have chosen low policy rates since the crisis casts doubt on the notion that the equilibrium real interest rate just happened to be low. Here I refer to the empirical work in my 2016 Business Economics article with Volker Wieland Finding the Equilibrium Real Interest Rate in a Fog of Policy Deviations in which we explain how monetary policy decisions to choose low rates confound methods to determine the equilibrium rate. Indeed, in recent months, long-term interest rates have increased with expectations of normalization of monetary policy. The 10-year Treasury has increased from 1.2% to 2.4% since last July.

For a number of years going back to the financial crisis, I and others have seen a more plausible reason for the poor economic growth—namely, the recent shift in government economic policy. References to research by Lee Ohanian, John Cochrane, Steve Davis, me and others are found in my 2016 AER paper “Can We Restart the Recovery All Over Again.”   Consider the growth in productivity (output per hour worked), which along with employment growth is the driver of economic growth. Productivity growth is depressingly low now—actually negative for the past four quarters. But there is nothing secular about this. Indeed, there have been huge swings in productivity in the past: the slump of the 1970s, the rebound of the 1980s and 1990s, and the current decline.

These shifts are closely related to changes in economic policy—mainly supply-side or structural policies: in other words, those that raise the economy’s productive potential and its ability to produce. During the 1980s and 1990s, tax reform, regulatory reform, monetary reform, and budget reform proved successful at boosting productivity growth in the United States. In contrast, the stagnation of the 1970s and recent years is associated with a departure from tax reform principles, such as low marginal tax rates with a broad base, and with increased regulations, as well as with erratic fiscal and monetary policy. During the past 50 years, structural policy and economic performance have swung back and forth together in a marked policy-performance cycle.  Evidence for these swings in productivity growth and policy is presented in my 2016 AER paper as well as in the paper “Slow Economic Growth as a Phase in a Policy Performance Cycle” published in the 2016 Journal of Policy Modeling.

To see the great potential for a change in policy now, consider the most recent swing in productivity growth: from 2011 to 2015 productivity grew only 0.4 percent a year compared with 3.0 percent from 1996 to 2005.

Why the recent slowdown? Growth accounting points to insufficient investment—amazingly, capital per worker declined at a 0.2 percent a year clip from 2011 to 2015 compared with a 1.2 percent a year increase from 1996 to 2005—and to a decline in the application of new ideas, or total factor productivity, which was only 0.6 percent during 2011–15 compared with 1.8 percent during 1996–2005.
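The growth-accounting arithmetic above can be checked directly: labor-productivity growth decomposes into the contribution of capital deepening plus TFP growth. The sketch below treats the quoted capital-per-worker figures as contributions to productivity growth (a full growth-accounting exercise would weight the capital term by capital's income share); the period labels and rates are the rounded figures quoted in the text.

```python
# Simple growth-accounting check: labor-productivity growth is (approximately)
# the contribution of capital deepening plus total factor productivity growth.
# Figures are the rounded annual percentage rates quoted in the text.

def productivity_growth(capital_contribution: float, tfp_growth: float) -> float:
    """Sum the two contributions, in percent per year."""
    return capital_contribution + tfp_growth

periods = {
    "1996-2005": (1.2, 1.8),   # strong capital deepening, strong TFP growth
    "2011-2015": (-0.2, 0.6),  # declining capital per worker, weak TFP growth
}

for label, (capital, tfp) in periods.items():
    print(f"{label}: {productivity_growth(capital, tfp):.1f}% per year")
```

The two sums reproduce the 3.0 percent and 0.4 percent productivity growth rates cited for the two periods.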

To reverse this trend and reap the benefits of a large boost to growth, the United States needs another dose of structural reform—including regulatory, tax, budget, and monetary—to provide incentives to increase capital investment and bring new ideas into practice. These reforms are described in my 2012 book First Principles.  Such reforms would also help increase labor force participation and thus raise employment, further boosting economic growth.

While the view that policy is the problem stands up to the secular stagnation view, the ongoing debate suggests a need for more empirical work. The recent US election has raised the chances for tax, regulatory, monetary, and perhaps even budget reform, so there is hope for yet another convincing swing in the policy-performance cycle to add to the empirical database.  This will depend on whether the slew of reform proposals in the Congress and the Administration are passed and implemented, as discussed in this Q and A with John Cochrane and me. If they are, then we will have more empirical evidence with which to test the hypothesis that policy has been the problem.

Posted in Regulatory Policy, Slow Recovery

Restoring Prosperity

During the past two days, economists from around the world gathered at the Hoover Institution to focus on the crucial problem of how to restore prosperity. They took stock of lessons from past experiences in the US and Europe, and considered possibilities with a new Administration in Washington. It was a follow-up to a conference and book that Lee Ohanian and I organized 5 years ago with Ian Wright. This year Jesus Fernandez-Villaverde joined with Lee and me in the planning, adding important economic and political perspectives as well as views from Europe.

Needless to say, the need to restore prosperity is still with us, as illustrated by the chart on the cover of the 2012 book (Government Policies and the Delayed Recovery) alongside the updated version today—the employment-to-population ratio is still barely above its level at the end of the recession in 2009. We have a long way to go.

Many useful new facts and ideas were put forth at the conference, so a book is again planned. The conference was also notable for topics that did not come up. No one suggested secular stagnation, as introduced by Larry Summers in a 2013 Hoover-Brookings conference, as a factor in the recent slow growth. Nor did anyone suggest demand-side fiscal stimulus packages as a means of restoring prosperity. Rather, people focused on structural or supply-side economic policy as reasons for the low growth and how to implement such policies.

Slides presented at the conference will soon be available on the conference web page, and there are some really amazing charts to look at. In the meantime, here are some of the highlights from my perspective.

George Shultz led off—as in the previous conference—with a note of optimism and words of encouragement reviewing how changes in policy—tax reform, monetary reform—led to greatly improved economic performance in the 1980s and could do so again.

Kevin Murphy then examined the key role for education and training in raising productivity growth. He showed that returns to education have increased, but that supply has not responded leaving a great deal of growth potential on the table with special harm to those at the bottom.  Rick Hanushek reported on the amazing economic growth gains that could come from simply weeding out the lowest 5% of teachers based on teaching effectiveness. Flavio Cunha examined underlying causes for poor educational performance delving into very early childhood experiences, where researchers can now monitor, for example, how many words children actually hear at home. There is no question that the US is not exceptional in K-12 education.

Bob Hall usefully decomposed empirically the recent slow growth of earnings per “member of the population,” showing that a decline in productivity growth and a drop in the share of labor in total income are the primary culprits.  John Haltiwanger examined the marked decline in young firms (5 years and less) as a share of US output, and examined how that decline in dynamism might be related to the decline in productivity; he found a suggestive association for some, but not all, industries.  Ed Prescott noted how technological change has created a wedge between output and real GDP, citing examples from Bill Nordhaus’s work on “The Economics of New Goods.” However, Bob Hall and others noted that recent work by David Byrne, John Fernald and Marshall Reinsdorf shows that such developments are not new and do not explain changes in trends over time.

A full session concentrated on the role of government economic reforms in restoring prosperity.  Harald Uhlig focused on the role of government in health care, examining the recent ups and downs in the price of pharmaceutical firm stocks in response to statements coming from Washington. I assessed whether the US is having a much needed turning point in economic policy by applying the ideas in my book First Principles.  This chart shows the potential gain in productivity growth if reforms such as regulatory reform and tax reform are implemented; in my view the response to such supply-side reforms can be large.  Steve Davis then discussed recent trends in regulation, intervention, and policy uncertainty, providing new data and making new reform proposals to contain the costs. He provided a chart—which I replicate below—showing some correlation between the swings in policy uncertainty and the historical swings in productivity growth from my chart.

A good part of the conference was about the slow growth in Europe. Much of this fascinating discussion focused on how poor policy has pulled down growth. Fortunately, the discussion will be broadcast by Russ Roberts on his famous EconTalk podcast and includes Nicholas Crafts, Luis Garicano, and Luigi Zingales—so watch for that.  Joel Mokyr talked about his new book, A Culture of Growth, and examined whether the development of a “market for ideas,” which he argues led to the Industrial Revolution, has lessons for the future.  Jesus Fernandez-Villaverde spoke at lunch on European lessons for the US.  To paraphrase, the key lesson, coming from joint work with Lee Ohanian, is: “If the US does not get its policy act together it will surely follow Europe to stagnation.”  He reminded everyone how unemployment rates in Europe used to be lower than in the US, whereas they are now of course much higher.  He was not optimistic about reforms in Europe given the demographic situation there.

Joel Peterson, Chair of JetBlue and a leader in venture capital funding, gave the dinner talk explaining how government policy actually affects business formation and growth with many examples from his own experience. He thereby provided a much needed connection—and I would say confirmation—between the empirical/theoretical work of the economists and what actually goes on in individual firms.  Some of Joel’s examples came from his fascinating new book, The Ten Laws of Trust.

Conferences like this are useful if they help bring ideas into practice. Let’s hope that when we have the next conference on economic growth and prosperity–say in another 5 years–that many of the ideas from this conference will have been applied in practice and that we might be able to title the conference “Prosperity Restored.”

Posted in Regulatory Policy, Slow Recovery