Lessons Learned on the 30th Anniversary of the Plaza Accord

The Plaza Accord, which took place 30 years ago this month, provides two important (and fascinating) economic lessons essential for anyone interested in reforming, or simply teaching, the international monetary system.  First, sterilized exchange market interventions are largely ineffective. A newly published book, Strained Relations, by Mike Bordo, Owen Humpage, and Anna Schwartz makes this crystal clear using data from the period. They studied 129 separate interventions against the yen and mark and found “no support for the view that intervention influences exchange rates in a manner that might force the dollar lower, as under the Plaza Accord, or maintain target zones as under the Louvre Accord.”

That the dollar depreciated across the board—as much against the mark as against the yen—suggests that it was part of a general reversal of the dollar appreciation experienced during 1981-1985 due to the monetary policy strategy the Fed had put in place.  As Alan Greenspan put it in an FOMC meeting in 2000, “There is no evidence, nor does anyone here [in the FOMC] believe that there is any evidence to confirm that sterilized intervention does anything.”

Second, the Plaza Accord had widely different effects on the monetary policies of the five participants: France, Germany, Japan, the US, and the UK.  Compare the US and Japan, for example. For Japan there was a clear effect on its monetary policy strategy. The following chart, which comes directly from a chart published by the IMF, shows the actual policy interest rate in Japan (the call money rate) for the years 1984 to 1992 along with the policy rate recommended by the Taylor rule as calculated by the IMF.  The chart shows how the interest rate was too high relative to the rules-based policy in late 1985 and throughout 1986. It also shows the swing with the policy rate set well below the rule from 1987 through 1990.

[Chart: Japan’s policy rate versus the IMF’s Taylor rule calculation]

The dates of the Plaza and the Louvre meetings are marked in the chart. Observe how the move toward an excessively restrictive policy starts at the time of the Plaza meeting. Indeed, as the chart shows, the Bank of Japan increased its policy rate by a large amount immediately following the Plaza meeting, in the opposite direction to what the macroeconomic fundamentals of inflation and output were indicating. Then, after a year and a half, starting around the time of the Louvre Accord, Japanese monetary policy swung sharply in the other direction—toward excessive expansion.  The chart is remarkably clear about this move. The policy interest rate swung from being up to 2¼ percentage points too high between the Plaza and the Louvre Accord to being up to 3½ percentage points too low during the period from the Louvre Accord to 1990.
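As background, the rules-based benchmark in the chart can be sketched in a few lines. This sketch uses the coefficients and the 2 percent equilibrium real rate and inflation target from the original Taylor (1993) rule; the IMF's calculation for Japan may use different parameters, and the inflation and output gap numbers below are purely illustrative.

```python
def taylor_rule_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap.

    All arguments are in percent. The defaults are the original 2%
    equilibrium real rate and 2% inflation target; the IMF's version
    for Japan may use different parameters.
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation at target and a zero output gap, the rule gives
# the neutral 4% rate.
print(taylor_rule_rate(inflation=2.0, output_gap=0.0))   # 4.0

# Illustrative numbers only: low inflation and a negative output gap
# call for a much lower policy rate, the situation in Japan in 1986.
print(taylor_rule_rate(inflation=0.5, output_gap=-1.0))  # 1.25
```

A rate set well above what such a calculation recommends is what the chart shows for late 1985 and 1986, and well below it for 1987 through 1990.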

The evidence of an effect of the Plaza Accord on Japanese monetary policy goes beyond this simple correlation. The Plaza and Louvre communiques included specific commitments about Japanese monetary policy actions.  In the Plaza Accord Statement, the Government of Japan committed to “flexible management of monetary policy with due attention to the yen rate.”  In the Louvre Accord Statement, “The Bank of Japan announced that it will reduce its discount rate by one half percent on February 23.”  Thus, the policy deviations were clearly due to the way that Japan implemented the Plaza Accord and later the Louvre Accord.

In contrast, for the US, the Fed’s monetary policy strategy was not affected at all, as Chairman Paul Volcker readily admits. The communique simply confirmed that the overall strategy that the Fed was pursuing would continue.

As I will explain in a paper prepared for the 30th anniversary conference next week, these two lessons have important implications for the future. What is past is prologue.  Study the past.

Posted in International Economics

Time Inconsistency Is Only One of Many Reasons to Favor Policy Rules

Advocates of purely discretionary monetary policy frequently list Kydland and Prescott’s time inconsistency argument as the only reason for policy rules, and then they try to shoot that down or say it is outweighed by arguments in favor of discretion.  This is the gist of Narayana Kocherlakota’s recent argument for pure discretion.

But there are a host of reasons why a monetary policy based more on rules and less on discretion is desirable, and time inconsistency is only one. I listed these seven in a Harry Johnson Lecture that I gave back when Fed policy was more rules-based:

(1)  Time inconsistency.  The time inconsistency problem calls for the use of a policy rule in order to reduce the chance that the monetary policymakers will change their policy after people in the private sector have taken their actions.

(2) Clearer explanations. If a policy rule is simple, it can make explaining monetary policy decisions to the public or to students of public policy much easier. It is very difficult to explain why a particular interest rate is being chosen at a particular date without reference to a method or procedure such as would be described by a policy rule. The use of a policy rule can mean a better educated public and a more effective democracy. It can help to take some of the mystique out of monetary policy.

(3) Less short-run political pressure. A policy rule is less subject to political pressure than discretionary policy. If monetary policy appears to be run in an ad hoc rather than a systematic way, then politicians may argue that they can be just as ad hoc and interfere with monetary policy decisions. A monetary policy rule which shows how the instruments of policy must be set in a large number of circumstances is less subject to political pressure every time conditions change.

(4) Reduction in uncertainty.   Policy rules reduce uncertainty by describing future policy actions more clearly. The use of monetary policy rules by financial analysts as an aid in forecasting actual changes in the instruments would reduce uncertainty in the financial markets.

(5) Teaching the art and science of central banking.  Monetary policy rules are a good way to instruct new central bankers in the art and science of monetary policy.  In fact, it is for exactly this reason that new central bankers frequently find such policy rules useful for assessing  their decisions.

(6) Greater accountability. Policy rules for the instrument settings allow for more accountability by policy-makers. Because monetary policy works with a long and variable lag, it is difficult simply to look at inflation and determine if policy-makers are doing a good job. Today’s inflation rate depends on past decisions, but today’s settings for the instruments of policy—the monetary base or the short-term nominal interest rate—depend on today’s decisions.

(7) A useful historical benchmark. Policy rules provide a useful baseline for historical comparisons. For example, if the interest rate was at a certain level at a time in the past with similar macroeconomic conditions to those of today, then that same level would be a good baseline from which to consider today’s policy actions.

This list still applies today and it does not even include a key technical reason (call it number (8)) that I still stress to my Ph.D. students: The Lucas econometric policy evaluation critique implies that in a forward-looking world policy rules are needed simply to evaluate monetary policy.

Posted in Monetary Policy

Jobs, Productivity, and Jeb Bush’s Tax Plan

As a matter of accounting, there are two ways to increase U.S. economic growth and thereby boost incomes of Americans: increase productivity and increase jobs.  As economists put it, the rate of economic growth equals the rate of productivity growth plus the rate of employment growth.  So if you want to evaluate whether a candidate’s tax reform—or any other economic policy reform—increases growth, ask whether—and how—it boosts productivity and jobs.
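This accounting identity is exact in logs and approximately exact in growth rates, as a two-line check with illustrative numbers shows:

```python
# Growth accounting: GDP = productivity x employment, so
# (1 + g_gdp) = (1 + g_productivity) * (1 + g_employment),
# and for small rates g_gdp is roughly g_productivity + g_employment.
g_productivity = 0.010   # 1.0% productivity growth (illustrative)
g_employment = 0.012     # 1.2% employment growth (illustrative)

g_gdp_exact = (1 + g_productivity) * (1 + g_employment) - 1
g_gdp_approx = g_productivity + g_employment

print(round(g_gdp_exact * 100, 3))   # 2.212
print(round(g_gdp_approx * 100, 3))  # 2.2
```

The cross term is tiny at realistic growth rates, which is why the simple sum is the standard shorthand.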

Jeb Bush has now put out the most detailed comprehensive tax reform plan of the 2016 presidential campaign.  (See the Tax Foundation’s list and comparison of current tax plans of candidates.)  His tax plan bodes well for future economic growth because it addresses, in many ways, both the jobs and the productivity problems. And these problems are severe: The employment-to-population ratio has not risen above its level at the bottom of the recession, and productivity growth is less than half its historical norm.

First consider jobs.  Most significantly, the reform reduces income tax rates across the income distribution to 10%, 25%, and 28%, thereby lowering a key distortion that discourages employment, and it goes further by eliminating the implicit marginal tax rates induced by the complex phase-outs of the personal exemption and the limit on itemized deductions. The reform also eliminates the employee portion of the Social Security payroll tax for people over the full retirement age; this part of the reform is based on a proposal by John Shoven and George Shultz, and it reduces a distortion that discourages older people from remaining in the labor force. Jeb Bush’s reform also reduces the marginal tax rate for married couples who both work by allowing the individual with the lower wage and salary income to file a separate tax return using the tax schedule for single filers. By increasing the standard deduction to $11,300 for single filers and to $22,600 for joint filers, the reform lowers marginal tax rates and simplifies the tax code. And by extending the earned income tax credit beyond families with children, the reform further cuts marginal tax rates on earnings from work, in some cases to as little as zero.

Next consider productivity. The reform substantially reduces business tax rates: The federal corporate income tax rate is cut from 35% to 20%, and the top personal income tax rate, cut from 39.6% to 28%, applies to many small businesses.  These business tax cuts would raise the expected profitability of, and thus the incentive for, a firm to expand and invest in new facilities and equipment, which directly increases labor productivity.  Because new technologies are embodied in this new investment, productivity increases further due to information, telecommunications, and other innovations.  Importantly, the reform also allows businesses to expense their capital investments—rather than deduct scheduled depreciation over time—which creates another large incentive for businesses to expand and invest in capital, further boosting labor productivity.  The reform would move the US corporate tax to a territorial system (with a deemed repatriation tax of 8.75% during the transition), which would also increase capital investment and thus productivity. And the reform reduces the risk-inducing tax distortion between debt and equity financing of these investments by eliminating the deduction of interest on loans.

The plan is classic tax reform in the sense that the above tax rate reductions and others are accompanied by significant base-broadening.  The base-broadening proposal is remarkably specific and transparent compared to many tax reform proposals of the past. It comes largely from the elimination of the deduction for state and local taxes and a cap on the tax value of deductions other than charitable contributions at 2 percent of adjusted gross income. That the cap is on the tax value of deductions rather than on actual deductions effectively means that upper-income taxpayers get a smaller deduction as a share of their income than middle- or lower-income taxpayers.

When scored statically, the tax rate reduction combined with the base broadening reduces taxes by $376 billion, or by 1.37% of GDP, in the year 2025, and by roughly similar amounts in the preceding ten years. But for the reasons stated above the plan would increase economic growth, and even a modest increase in growth creates substantial tax revenue feedback. The Tax Foundation estimates about 60% feedback, which means that the reduction in tax revenue would be about $150 billion in 2025.
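The feedback arithmetic in this paragraph, and the revenue share in the next, can be reproduced in a few lines using only the figures quoted in the text:

```python
# Figures from the text: a $376 billion static revenue loss in 2025
# (1.37% of GDP) and the Tax Foundation's estimate of about 60%
# dynamic revenue feedback.
static_cut = 376.0        # $ billions, static score for 2025
feedback_share = 0.60     # Tax Foundation's feedback estimate

dynamic_cut = static_cut * (1 - feedback_share)
print(round(dynamic_cut))           # about 150 ($ billions)

# Implied 2025 baseline GDP: 376 / 0.0137, in $ billions.
gdp_2025 = static_cut / 0.0137
print(round(gdp_2025 / 1000, 1))    # roughly 27.4 ($ trillions)

# Revenue share with no feedback: 18.3% CBO baseline minus 1.37%.
revenue_share = 18.3 - 1.37
print(round(revenue_share, 1))      # 16.9 (% of GDP)
```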

And the plan fits nicely into the current budget outlook with sensible restraint on the growth of outlays.  The CBO’s forecast for baseline revenues in 2025 is 18.3 percent of GDP. Without any feedback—again an unrealistic assumption—the reform would take this to 16.9% of GDP. That is actually very close to the 17.6% federal outlay share of GDP in 2000 and 2001 at the end of the Clinton Administration.  It is also close to the 18.1% of GDP in 2025 called for in the 2016 Budget Resolution, which involves a gradual reduction in the outlay share compared to the CBO baseline and which I have testified would itself be good for economic growth. So combined with sensible spending restraint comparable to that in the current budget resolution, the tax reform fits very well, and this is without giving any credit to the increase in economic growth which the plan is all about.

Posted in Uncategorized

Can We Restart This Recovery All Over Again?

Andy Atkeson, Lee Ohanian and Bill Simon recently published a nicely-reasoned article about why the US economy can achieve 4% growth.  They argue that with the right policies there is “far more room for the economy to rebound today than after previous recessions” because the recovery from the 2007-2009 recession has been so slow (“virtually non-existent”).

A graph of real GDP and alternative paths can help illustrate the situation. Consider the one below, which I published in a special issue of the Journal of Policy Modeling edited by Dominick Salvatore. It is a few years old, but because the same slow (2.2% growth) recovery has continued you can just move the years out: Make 2016 the starting point rather than 2014.

[Chart: real GDP and alternative catch-up paths, from the Journal of Policy Modeling article]

The thick red solid line shows real GDP. The blue line shows a 2.5% trend, the average growth rate from 2000 until the recession began. If the current recovery were like previous recoveries, including the early 1980s, then real GDP would be back at trend GDP.  Instead the gap remains very large. The light red line continues the slow growth rate experienced since the start of the recovery.

The dashed lines illustrate the benefits of a policy reform program. There are two interrelated paths.  First, the recovery speeds up.  If it speeds up to 4 percent, then it will be 2021 before the economy has returned to normal.  Second, the long-run growth path speeds up. The dashed blue line shows trend GDP growth at 3.0%, or 0.5 percentage point higher than the recent 2.5% trend. These lines represent a reasonable goal for economic growth with a policy reform program such as the tax, regulatory, and budget policies that Atkeson, Ohanian and Simon recommend.
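The catch-up arithmetic behind the dashed lines can be sketched in a few lines of code. The 7 percent starting gap below trend is an illustrative assumption on my part, not a number read off the chart; with it, 4 percent growth against a 2.5 percent trend closes the gap in about five years, consistent with a 2016-to-2021 catch-up.

```python
trend_growth = 0.025   # 2.5% trend growth
fast_growth = 0.04     # 4% catch-up growth
gap = 0.07             # assumed initial shortfall below trend (illustrative)

# Actual GDP starts `gap` below the trend level and grows faster;
# count the years until it catches back up to the trend path.
actual = 1.0
trend = 1.0 / (1.0 - gap)
years = 0
while actual < trend:
    actual *= 1 + fast_growth
    trend *= 1 + trend_growth
    years += 1

print(years)   # 5 years: starting in 2016, back to trend around 2021
```

A larger assumed gap or a smaller growth differential stretches the catch-up period accordingly.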

I have argued for some time that a change in policy could transform the not-so-great recovery into a great one of the kind experienced following earlier financial crises.  But many now say it’s too late: if you missed the fast growth of a V-shaped recovery at the start, you’re not going to get it now. In several respects, however, the current position of the economy is like the bottom of a recession. The labor force participation rate is lower than at the bottom of the recession and productivity growth is down.  From this position a change in policy could generate a post-recession-like boom for several years and a higher steady-state growth rate thereafter, just as in the graphs.  At the least it is an issue for debate, and it looks like I’ll have a good one at the upcoming session on this topic at the AEA meetings in January with Blanchard, Feldstein, Fischer and Stiglitz.

Posted in Slow Recovery

The Incredible Shifting Model

Robert Tetlow has published a fascinating research paper in the International Journal of Central Banking on policy robustness with the Fed’s FRB/US model. Perhaps the most important part of the paper is his careful documentation of the enormous shifts in the coefficients and the equations of FRB/US over time. The chart below from the paper illustrates these shifts. The solid black line plots changes in the estimated sacrifice ratio (the unemployment cost of reducing inflation) implied by the model across 64 vintages.

[Chart from Tetlow’s paper: estimated sacrifice ratio across FRB/US model vintages]

Given these large shifts in the model it is not surprising that monetary policy rules calculated with the model shift over time. This is why robustness studies are so important.

A typical robustness study looks at different models. It takes policy rules that work well in one model and tries them out in other models.  If the rules also work well in other models, then the rules are considered robust. Volker Wieland’s macro model data base with 61 different models is ideal for such robustness studies.

Tetlow takes a related but different approach to robustness. Rather than considering different models, he looks at the different vintages of FRB/US that have been used for policy work since the mid-1990s. He then compares a bunch of different policy rules derived from each model-vintage, including a model-specific Taylor rule as a “benchmark for comparison.”

For the case where the policy makers know the model and calculate the rule based on that specific model, he finds that the Taylor rule “renders a very good performance with losses that are lower than nearly all of the alternatives” or “does pretty well on average” depending on the time period examined.

But he then looks at how rules calculated for one model-vintage work in a different model-vintage. Because of the huge shifts in the model, the policy rules change a lot with vintages. This chart from the paper shows how model-specific (optimized) Taylor rule coefficients change as the model-vintage changes.

[Chart from Tetlow’s paper: optimized Taylor rule coefficients across FRB/US model vintages]

Tetlow finds that performance with such a model-specific Taylor rule is about in the middle of the pack in terms of robustness, but he also finds that some rules, like “pure inflation-targeting rules,” are not robust, that “adding an instrument smoothing term…contributes little to the robustness,” and that “notwithstanding problems of mismeasurement of output gaps, it generally pays for policy to feedback on some measure of excess demand.”

Of course, the model-specific rules in Tetlow’s study are not designed to be robust. An important question for future research is whether rules designed with robustness in mind would be more robust than the model-specific rules that Tetlow examines.

I note that the IJCB, which is an important outlet for serious policy-relevant applied research on central banking, is having its 10th anniversary this year. John Williams is now the managing editor and before him were Ben Bernanke, Frank Smets, and me.

Posted in Monetary Policy

Seeking a Way through the Fog of a Currency War

Many have speculated on the nature of, and the reasons for, the exchange rate regime change in China last week. While the central bank has issued press statements and answered questions about its intentions, it is useful to look at the data.

First note how this change in regime compares with the regime change ten years ago when China went off the peg with the dollar. The chart below plots the percentage change in the Chinese yuan per dollar in the two periods.  Note that both regime shifts started out exactly the same way—with a 2 percent change—appreciation in 2005 and depreciation in 2015.  But then you begin to see a real difference.

[Chart: percentage change in the yuan-dollar rate, 2005 versus 2015]

In 2005 there was virtually no further change in the exchange rate as China began a very slow appreciation over a period of months and years with a temporary halt during the financial crisis.  This time, however, they let the rate move again on day 2 and on day 3, and they even let it appreciate a bit on day 4.  Of course the rate is still being managed—and we will learn more in upcoming days—but it is clearly more flexible and will be more flexible than in the years following the move in the summer of 2005.

I recall the first G7 meeting that the Chinese central bank governor attended.  Zhou Xiaochuan was the governor then and, of course, he is still governor now. Regarding the end of the dollar peg in 2005, Governor Zhou was asked at that G7 meeting about when China would start letting its exchange rate move. He answered “That is a difficult question…We don’t have a timetable. There is an old Chinese story about crossing a stream by walking from stone to stone. You can’t set a timeline because you don’t know which stones will be secure enough to step on.  But please believe me when I say that China is going to do this.” That answer pertains today even though he is taking bigger steps.

Why is China taking these new steps? Perhaps the IMF’s recent SDR analysis saying that the yuan needs to be more flexible and market sensitive is a factor, or perhaps it is the slowing Chinese economy. However, the most significant factor, in my view, relates to the recent large exchange rate movements around the world which appear to be largely due to unconventional monetary policy shifts in the United States, Japan and Europe, and the accompanying depreciations that have followed in many emerging market countries.

This developing story probably begins with the impact of quantitative easing in the United States on exchange rates, especially the Japanese yen. Following the financial crisis and into the recovery, the yen significantly appreciated against the dollar as the Fed repeatedly extended its zero interest rate policy and its large-scale asset purchases. Concerned about the adverse economic effects of the currency appreciation, the Abe government urged the Bank of Japan to implement its own massive quantitative easing, and, with a new governor at the Bank of Japan, this is exactly what happened.  As a result of this change in policy the yen fully reversed its course and has depreciated back to its levels from before the panic of 2008.  In this way the policy of one central bank appears to have affected the policy of another central bank.

The moves of the ECB toward quantitative easing in the past year have similar motivations. An appreciating euro was, in the view of the ECB, a cause of the weak economy.  The response was a shift to lower rates in the Eurozone and the initiation of quantitative easing. Indeed, the shift and initiation were followed by a strengthening dollar and a weaker euro. The taper tantrum and the reversal of capital flows also led to dollar strengthening against emerging market currencies.  With all these uncertain developments in the background—call them the fog of a currency war—the recent actions of the central bank of China to let the yuan move with other currencies and away from the dollar are understandable.

Posted in International Economics

On Why The Economist Should Rule Rules In, Not Out

An old, but forever crucial, question for monetary policy is whether it should be rules-based or purely discretionary. The Economist, in a Free Exchange article this week with the title “Rule It Out,” goes all in for pure discretion, abandoning rules-based strategy. It’s a new view compared to previous articles over the years in the magazine and, more importantly, not based on any new facts.

The article’s main mode of argument is to invent and then shoot down straw men.  It argues, for example, that “Algorithms…should not supplant central bankers” even though no proposal out there suggests anything of the kind. It asserts that a rules-based policy is an “unnecessary constraint” on central bank “autonomy,” when experience shows that clear strategies and principles help defend autonomy.  Strangely, it says that the sensible flexibility built into rules-based policy demonstrates its “pitfalls,” which are then never even mentioned.  The article is so infatuated with unlimited discretion that it even argues that if the Congress wants oversight it would be better to designate another purely discretionary body to give opinions about monetary policy than to ask the central bank to report on and be accountable for its own policy rule or strategy.

In trying to justify discretionary policy the article discusses the Taylor rule in detail, adding a Taylor rule chart cutely labeled “Dropping Stitches”; in doing so it repeats arguments that have been refuted or discounted many times over the years.  It says that “interest rates should be lower than the Taylor rule suggests” because “many economists suspect [the long-term real interest] rate is permanently lower,” and it says that “Estimates of slack are themselves the product of qualitative judgment…” Debates over the long-term rate and the degree of slack are fine to have, but the issues create no more difficulty for rules-based policy than for pure discretion. In fact, a rules-based policy is preferable in this regard because it provides a way to consider the implications of, and to resolve disagreements about, such issues without sweeping them under the rug. A policy rule is not “a recipe for disagreement,” as the article claims; it is a reasonable way to discuss and resolve disagreements.

I got a little bit of déjà vu about the discussion of the Taylor rule in the article, and decided to look back at previous articles in the magazine. In fact, The Economist has published quite a few articles and charts about the Taylor rule over the years, but, completely unlike the article this week, those articles used the rule in a constructive way to discuss and take positions about monetary policy.

I recalled a piece published in The Economist almost two decades ago entitled “Monetary Policy, Made to Measure” (August 1996) with a chart labeled (yes, you guessed it) “Well Taylored.”  It asked if there was a rule “which will tell central bankers how to adapt their policies…” and then described the Taylor rule as “one such rule…”  It discussed the uncertainty about the “neutral” interest rate and the “output gap” in the rule and recognized that the rule “includes only part of the information available to central banks,” which meant that such a rule should not be used mechanically without central bankers exercising discretion in its implementation—a view of rules-based policy that I expressed clearly when I introduced the rule a few years earlier and continue to hold. Nevertheless, the 1996 Economist article used the rule—as many others were doing—to make the case that monetary policy was about right in several countries at the time.

Then there was the famous period when The Economist criticized the Fed for deviating from rules-based policy.  The article “Nicely Taylored?” from November 2004 used the rule to suggest that interest rates ought to have been rising faster.  Then, later looking back at that period from the vantage point of August 2007, The Economist argued that “By slashing interest rates (by more than the Taylor rule prescribed), the Fed encouraged a house-price boom.” In “Tangled Reins” (September 2007), The Economist warned that “the Fed’s efforts to exude a cowboy confidence will be undermined by the suspicion that it is dealing with the consequences of its own errors.”

And in “Fast and Loose” (October 2007) it came down very hard on the Fed for deviating from rules-based policy. The article noted that “Of course the Taylor rule is only a rough guide. The neutral rate and the output gap, in particular, cannot be measured precisely.” And it then went on to say, “But the rule can tell you whether policy is roughly right or a long way out. The Fed missed by a mile.” The article illustrated the point with this “Loose Fitting” chart.

[Chart: “Loose Fitting,” The Economist, October 2007]

None of this history of The Economist’s writings on the harm caused by discretionary deviations from rules-based monetary policy is mentioned in this week’s article except to say that Taylor “thinks” such a scenario was possible. My brief review here is not meant, of course, to be a call for the magazine to be consistent over time if facts change, and besides it was fun to recall the clever titles (Monetary Policy, Made to Measure; Well Taylored; Nicely Taylored; Tangled Reins; Fast and Loose; Loose Fitting; Rule It Out; Dropping Stitches).  But what facts have changed to lead to such a change of view?  If anything there is more evidence that departures from rules-based policy are harmful (David Papell et al.) or at best useless (John Hussman).  And I note with interest that Clive Crook, who wrote for and was Deputy Editor of The Economist in those earlier years, recently wrote two Bloomberg View columns, “The Fed Needs Some Guidance” and “Dreaming of Normal Monetary Policy,” making the case for rules-based policy.

Posted in Monetary Policy