Learning from the Great Inflation

Many are worried that the exploding federal debt and the expanded Federal Reserve balance sheet will lead to a large increase in inflation. But when and how fast? A session at the recent American Economic Association meetings on the Great Inflation of the late 1960s and 1970s provides some historical perspective on the question. Andrew Levin of the Federal Reserve Board staff and I presented one of the papers. We looked at the timing of the inflation increase and the monetary policy responses.

As this chart from our paper shows, the increase in inflation was not sudden. The chart plots CPI inflation and a measure of inflation expectations based on the Livingston expectations survey. Inflation was in the 1-1/2 percent range through the early 1960s. Then, starting around 1965, it gradually began to rise, and by the end of the 1970s it was in double digits. There were several boom-bust cycles during this period as the Fed fell behind the curve, attempted to catch up by raising interest rates, and then eased again before inflation had returned to low levels. We found that monetary policy was significantly affected by political factors during the period. It was only after Paul Volcker was appointed Fed Board Chairman in 1979 that inflation was brought back down, but by then significant damage had been done. And even then there was one more pullback from tightening during the 1980 election. So this is one plausible way that inflation might rise again. Of course it does not have to be this way. With increased globalization and the interconnectedness of markets, inflation could rise more quickly. Or with a policy correction we could avoid another great inflation entirely.

Ben Bernanke’s AEA Speech

On New Year's Day I wrote a piece on the surprising increase in the number of references to the Taylor rule in 2009. Little did I know that two days later Federal Reserve Board Chairman Ben Bernanke would start off 2010 with a speech containing 50 more references at the American Economic Association (AEA) meeting in Atlanta. The speech was largely an attempt to refute the now commonly held view that the Fed held interest rates too low for too long in 2002-2005. I was in Atlanta, and though I could not go to the speech I read it afterwards and expressed my disagreement, first in a Bloomberg interview with Steve Matthews and later in a CNBC interview with Larry Kudlow; I will follow up with more details. Note that a working paper prepared by seven Fed Board staff was released just before the speech. Others have raised questions about the speech from a variety of perspectives, including David Beckworth, Caroline Baum, David Leonhardt, Mike Shedlock, and Judy Shelton. There is much for the Financial Crisis Inquiry Commission to digest.


From Woodford to DeLong On Monetary Policy Rules

Surprisingly, the Taylor rule was referred to more frequently than ever in 2009. According to Google Scholar, more articles referred to it in 2009 than in any year since 1993, when John Lipsky, now First Deputy Managing Director at the IMF, first called the rule by that name. Many more pieces appeared in blogs or in the news media.

The increased commentary is surprising because the Fed did not change its interest rate target even once during 2009. Most likely the reasons for the attention are that (1) the rule is cited as evidence that interest rates were “too low for too long” in 2003-2005, thereby helping to cause the financial crisis; (2) the rule can be used to help determine when the Fed should or will increase its interest rate target above zero; and (3) the rule is used by some to determine how much quantitative easing is needed.

Many excellent pieces were written, in my view, including several by Michael Woodford of Columbia and Vasco Curdia of the New York Fed on adjusting policy rules during financial crises. 2009 also saw the release of the 2003 FOMC transcripts with telling references to the Taylor rule by Ben Bernanke. Columns by Michael McKee of Bloomberg and Gene Epstein of Barron’s were clear and insightful.

At the other end of the spectrum was Brad DeLong’s recent Taylor rule blog post, which unfortunately contains serious errors. For starters, he asserts that “John Taylor in the long run wants ‘Taylor Rule’ to mean any statistically-fitted reaction function in which interest rates respond to inflation and the output gap and not to the one rule he fitted over 1987-1992.” In my original paper I did not “statistically fit” a reaction function over 1987-1992, or over any other period for that matter. The coefficients of the rule in that paper were derived from monetary theory and models developed during the 1980s. As I explained in the paper, using a variety of quantitative economic models, I found through stochastic simulations that such a rule worked well in stabilizing inflation and real GDP. Statistically fitting such a rule over a short five-year span would make little sense, and adding earlier years would have made even less sense because Fed policy during the 1970s was terrible. To illustrate how the rule would work in practice, I pointed to episodes when its prescriptions were similar to what the Greenspan Fed was doing and to episodes when they differed. There are no reaction function regressions in that paper.
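For readers who have not seen it written out, here is the rule from the 1993 paper in a few lines of Python. The function name and keyword arguments are mine, but the numbers are those of the original proposal: a coefficient of 0.5 on both the inflation gap and the output gap, a 2 percent equilibrium real rate, and a 2 percent implicit inflation target.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                a_pi=0.5, a_y=0.5):
    """Federal funds rate prescribed by the 1993 rule, in percent.

    inflation  -- inflation over the prior four quarters, percent
    output_gap -- percent deviation of real GDP from potential
    """
    return inflation + r_star + a_pi * (inflation - pi_star) + a_y * output_gap

# With inflation at target and a zero output gap, the rule prescribes
# the 4 percent neutral nominal rate.
print(taylor_rule(2.0, 0.0))  # 4.0
```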

DeLong also tries to make the case that a policy rule which was statistically fit by San Francisco Fed economist Glenn Rudebusch is an improvement over the rule I proposed. That estimated rule has a larger coefficient on the output gap and therefore gives lower interest rate settings now. It implies that the interest rate will remain at zero for a very long period, which is what DeLong advocates. He likes that estimated version, but curve fitting without theory is dangerous. In the case of policy rules, it can perpetuate mistakes: the higher coefficient on the gap may be due to periods when the funds rate was too low for too long. Interpreting the lagged interest rate in such fitted regressions is also very difficult.
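To see why the size of the gap coefficient matters so much at the moment, reuse the function above with the gap coefficient doubled. The inputs are rough illustrative assumptions for early 2010, not data from either paper, and a coefficient of 1.0 is only meant to show the direction of the effect, not to reproduce Rudebusch's estimated rule, whose exact form differs.

```python
# Illustrative assumptions: inflation near 1 percent, output gap near -6.
print(taylor_rule(1.0, -6.0))           # -0.5: close to the zero bound
print(taylor_rule(1.0, -6.0, a_y=1.0))  # -3.5: zero rates for much longer
```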

DeLong provides no demonstration that a higher coefficient on the output gap is an improvement over my original proposal. A recent paper by John Williams, Director of Research at the San Francisco Fed, and me reviews the debate over the size of that coefficient and shows why a higher coefficient is not robust. Moreover, others, such as Bob Hall, argue that the coefficient on the output gap should be lower, not higher, than in the Taylor rule because of uncertainty in measuring the output gap. DeLong does not mention such alternatives.

DeLong is wrong about what he claims “John Taylor in the long run wants.” Don’t we all want good monetary policy? If there is a better policy rule that improves economic performance, then I am all for it, and I don’t much care what you call it. In 2008 I proposed adjusting the Taylor rule with the Libor-OIS spread to deal with the turbulence in the financial markets. The Curdia and Woodford papers analyzed that proposal and improved on it by changing the coefficient on the Libor-OIS spread. Their analysis was not based on statistical curve fitting, but rather on good monetary economics, which is what we need more of right now.
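In the same sketch-style notation as above, the adjustment amounts to lowering the rule's prescription by some fraction of the Libor-OIS spread. The coefficient phi is deliberately left as a free parameter: phi = 1.0 corresponds to a one-for-one subtraction of the (smoothed) spread, while the Curdia-Woodford analysis points to changing that coefficient. Treat the whole function as illustrative rather than as the exact specification of either paper.

```python
def spread_adjusted_taylor(inflation, output_gap, libor_ois, phi=1.0):
    """Taylor rule prescription lowered by phi times the Libor-OIS spread.

    phi is model-dependent: 1.0 is a one-for-one subtraction; the
    Curdia-Woodford analysis suggests a different coefficient.
    """
    return taylor_rule(inflation, output_gap) - phi * libor_ois
```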


Measuring the Impact of the Stimulus Package with Economic Models

It’s been nearly a year since the stimulus package of 2009 was passed. Unfortunately, most attempts to answer the question “What was the size of the impact?” are still based on economic models in which the answer is built in, and was built in well before the stimulus. Frequently the same economic models that said a year ago that the impact would be large are now trotted out to show that the impact is large. In other words, these assessments are not based on the actual experience with the stimulus. I think this has confused public discourse.

An example is a November 21 news story in the New York Times with the headline “New Consensus Sees Stimulus Package as a Worthy Step.” Authors Jackie Calmes and Michael Cooper write that “the accumulation of hard data and real-life experience has allowed more dispassionate analysts to reach a consensus that the stimulus package, messy as it is, is working. The legislation, a variety of economists say, is helping an economy in free fall a year ago to grow again and shed fewer jobs than it otherwise would.”

As evidence the article includes three graphs, which are reproduced on the left of the chart below. Each of the three graphs on the left corresponds to a Keynesian model maintained by the group shown above the graph. All three graphs show that without the stimulus the recovery would be considerably weaker. The difference between the black line and the gray line is their estimated impact of the stimulus. But this difference was built into these models before the stimulus saw the light of day. So there are no new hard data or real-life experiences here.

Now what about the so-called “consensus”? In fact, a number of other economic models predicted that the stimulus would not be very effective, and, using the same approach, those models now say that it is not very effective. To illustrate this I have added two other graphs on the right-hand side of the chart which did not appear in the New York Times article. The first one is based on a popular and well-regarded new Keynesian model estimated by Frank Smets, Director of Research at the European Central Bank, and his colleague Raf Wouters. Focus again on the difference between the black and the gray lines, which is what is predicted by that model, as shown in research by John Cogan, Volker Wieland, Tobias Cwik, and me. Note that the impact is very small. The second additional graph on the right is based on the research of Professor Robert Barro of Harvard University. As he explained last January, “when I attempted to estimate directly the multiplier associated with peacetime government purchases, I got a number insignificantly different from zero.” So according to that research, the difference between the black and the gray lines should be about zero, which is what that graph shows. So there is no consensus.
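The mechanical point is easy to state in a few lines: once a model's multiplier is pinned down, the gap between the black and gray lines follows from the spending path alone, with no post-stimulus data entering the calculation. The multiplier values and quarterly outlays below are illustrative assumptions of mine, not estimates from any of the five models.

```python
def stimulus_impact(multiplier, outlays):
    """GDP gap (with minus without stimulus) implied by a fixed multiplier.

    outlays -- stimulus spending per quarter, in billions
    """
    return [multiplier * g for g in outlays]

outlays = [30, 60, 90, 100]           # hypothetical quarterly path
print(stimulus_impact(1.5, outlays))  # an old-Keynesian-style multiplier
print(stimulus_impact(0.0, outlays))  # a multiplier insignificantly
                                      # different from zero, as in Barro
```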

Menzie Chinn has a post on Econbrowser which mentions the three graphs in the original New York Times article as an illustration of his excellent analysis of the use of counterfactuals (the gray lines in the graphs). The additional two graphs illustrate how important it is to go beyond a few models and establish robustness in policy analysis. Moreover, in my view, the models have had their say. It is now time to look at the direct impacts using hard data and real-life experiences.


Implications of the Crisis for Introductory Economics

People ask how I think introductory economics teaching should change as a result of the financial crisis. It’s an important question. At the upcoming American Economic Association Annual Meetings, my colleague Bob Hall, the next AEA President and Program Director, has included a panel on the topic.

Clearly we need to include more on financial markets, but based on my experience teaching in the two-term introductory course at Stanford, I think the single most important change would be to stop splitting microeconomics and macroeconomics into two separate terms. The split has been common in economics teaching since the first edition of Paul Samuelson’s textbook, which put macro first. Many courses now have micro in the first term and then macro in the second.

But regardless of the order now used, I think a reform that integrates micro and macro throughout is worth considering. There were arguments for doing this before the crisis, including the fact that in research and graduate teaching the tools of micro have now been integrated into macro.

The financial crisis clinches the case for full integration, in my view. The crisis is the biggest economic event in decades, and it can only be understood with a mix of micro and macro. To understand the crisis one must know about supply and demand for housing (micro), interest rates that may have been too low for too long (macro), moral hazard (micro), a stimulus package (macro) aimed at such things as health care (micro), a new type of monetary policy (macro) that focuses on specific sectors (micro), debates about the size of the multiplier (macro), excessive risk taking (micro), a great recession (macro), and so on. If you look at the 22 items that the Financial Crisis Inquiry Commission has been charged by the Congress to examine, you’ll see that it is a mix of micro and macro. Defining the first term as micro and the second term as macro, or vice versa, is no longer the best way to allocate topics.

Moreover, the introductory course can be integrated in a way that makes economics more interesting for students. This year at Stanford we have been experimenting with such an integration in our principles course, and so far it seems to be working well. (The course, Economics 1, is taught this year by me (1A), Marcelo Clerici-Arias (1B), Gavin Wright (1A), and Michael Boskin (1B).) In 1A, which had been mainly micro until this year, I shuffled in macro concepts at various places. When I talked about aggregate investment demand, I said it came right out of the micro demand for capital. Similarly, aggregate employment and unemployment can be explained in the context of micro labor supply and demand. The proof that aggregate production (GDP) equals aggregate income can be stated at the time one defines profits as equal to revenues minus the costs of labor and capital, as sketched below. In the second term we then go into such topics as intertemporal consumption, which is at the heart of both micro and macro, and time inconsistency, which has both macro and micro aspects. The demand for money as a function of the interest rate is easily explained with the opportunity cost concept.
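That proof really is one line once profits are defined. In the simplest notation (with Y the value of a firm's output, wL its labor costs, and rK its capital costs; aggregating over firms puts GDP on the left and aggregate income on the right):

```latex
\[
\Pi \equiv Y - wL - rK
\qquad\Longrightarrow\qquad
Y = \underbrace{wL}_{\text{labor income}}
  + \underbrace{rK}_{\text{capital income}}
  + \underbrace{\Pi}_{\text{profits}}
\]
```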

Such curriculum changes incur some transition costs. For example, the economics textbooks are not quite ready for this. We are using my textbook with Akila Weerapana this year and it has the usual micro/macro split. But it is not too hard to mix and match pages, and many publishers custom design texts.

This approach also has an advantage that the traditional split does not. It lends itself to a system where students (mainly non-econ majors) can take a one-term overview course in 1A and not have to miss all of micro or all of macro. I hope that others can benefit from this approach and have constructive comments about it.


Financial Crisis Inquiry Commission Gets Started

Today the Financial Crisis Inquiry Commission announced its first public hearing, which will start at 9 am on January 13 and continue through January 14. The topic: Causes and Current State of the Financial Crisis.

That public hearings are about to start is excellent news. Without such an investigation, followed by a clear explanation to the American people of what went wrong, the Congress is unlikely to enact financial reforms that actually fix the problem. To repeat a phrase from the Chairman of the Brady Commission on the 1987 crash (their report took only four months to complete): “You cannot fix what you cannot explain.”

Though it is not part of its Congressional mandate, I recommend that the FCIC follow the approach of the Brady Commission and the 9/11 Commission and make some recommendations. It could then even issue a report card on how the recommendations are implemented. The 9/11 Commission issued such a report card, and it proved quite useful.


Estimating the Impact of the Fed’s Mortgage Portfolio

Some of the big questions looming over the Fed’s exit strategy are whether, when, and at what pace the Fed should draw down its huge portfolio of mortgage-backed securities (MBS). At its meeting last week the Federal Open Market Committee announced that it is continuing its MBS purchases at a “gradually slowing pace,” but that will still leave $1,250 billion in MBS on its balance sheet at the end of the first quarter. Another, longer-term question is whether such price-keeping operations—a term used by Peter Fisher, who once ran the trading desk at the New York Fed—should be a regular part of monetary policy in the future. Brian Sack, who now runs the trading desk, concludes in a recent speech that they should be.

Answering these important questions requires an empirical assessment of the impact of the MBS purchase program. Unfortunately, publicly available assessments are sorely lacking. For this reason, Johannes Stroebel and I undertook an econometric study of the impact; the study is part of a larger research project by us and our colleagues on central bank exit strategies.

Such an assessment requires that one carefully consider other influences on the rates on mortgage-backed securities. We focused on two obvious ones: prepayment risk and default risk. If we control for prepayment risk using the swap option-adjusted spread, which is regularly used by MBS traders and investors, and if we control for default risk using spreads on senior or subordinated agency debt, we find that the program has not had an economically or statistically significant effect on mortgage spreads. If we use other measures to control for prepayment and default risk, we can see statistically significant effects, but they are small. Even in these cases it was the announcement or the existence of the program, rather than the size of the portfolio, that mattered for spreads. We find no statistically or economically significant effect of the size of the portfolio, a finding which we show is quite robust. If our estimates hold up to scrutiny, they raise doubts about such price-keeping operations and suggest that the Fed could gradually reduce the size of its portfolio without a significant impact on the mortgage market.
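For readers who want the flavor of the exercise, here is a minimal sketch in Python with statsmodels. The file and column names are hypothetical placeholders of mine, and the two-step form below is a simplification of the logic behind the graph, not the paper's exact specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily dataset; the column names are placeholders:
#   oas            swap option-adjusted MBS spread (prepayment risk removed)
#   agency_spread  spread on senior/subordinated agency debt (default risk)
#   mbs_portfolio  Fed MBS holdings, in billions of dollars
df = pd.read_csv("mbs_daily.csv")

# Predict the option-adjusted spread from the default-risk control ...
fit = sm.OLS(df["oas"], sm.add_constant(df[["agency_spread"]])).fit()
resid = df["oas"] - fit.fittedvalues  # what is left for the Fed to explain

# ... then ask whether the size of the portfolio explains that residual.
test = sm.OLS(resid, sm.add_constant(df[["mbs_portfolio"]])).fit()
print(test.summary())  # the paper's finding: no significant portfolio effect
```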

The graph illustrates our findings. It shows the swap option-adjusted spread (with its prepayment risk adjustment) in red and the predictions of that spread using the agency debt spread (a measure of default risk) in blue. The residual between these two, shown in green at the bottom of the graph, indicates that there is little left for the Fed’s MBS portfolio to explain. Details and other cases are in the paper.


David Wessel’s Doubts About “Whatever It Takes”

Big Think is conducting a series of video interviews with economists, market participants, journalists, policymakers and others on the financial crisis to try to answer the pressing question of “what went wrong.” This is an excellent idea. As former Treasury Secretary Nicholas Brady said last week “you can’t fix what you can’t explain.” The Financial Crisis Inquiry Commission should take note.

The interview with David Wessel of the Wall Street Journal was the first in the series. Among the many good questions put to David, one of the most interesting was “Did Bernanke’s mantra of ‘whatever it takes’ lead us astray?” In David’s 575-word answer, he offers five positive words, that it “got us through this crisis,” but gives no explanation for that and instead spends the remaining 570 words on the problems the approach has caused and is causing, including that “it can justify almost anything.” Among other problems he mentions the Bernanke-Paulson-Geithner “mistake” of “wasting the time after Bear Stearns” and “not coming up with a more articulated game plan for what they would do if they had to cope with a collapse with another financial institution.” In effect, what you see in the video is a cogent argument that the approach may have seriously worsened the crisis even if it eventually got us through it. So it seems the answer to the question is: yes, it led us astray. And since the problem has not been addressed, as David points out in the last few sentences, it is likely to continue to lead us astray.


A Perfect Storm or It’s Not My Fault?

This past week we held a conference on Ending Government Bailouts As We Know Them. One of the biggest surprises coming out of the conference was the growing recognition that the bankruptcy process, perhaps amended with a new Chapter 11F, is quite viable for financial institutions, and that a new FDIC-like resolution process that goes beyond banks may not be needed. In addition, all three keynote speakers, former Treasury Secretaries George Shultz and Nick Brady as well as former Fed Chairman Paul Volcker, spoke in favor of constraining the activities of banks that have access to Fed loans and guaranteed deposits. The biggest consensus item, however, was that Congress and the Administration should wait for a report explaining the causes of the crisis before moving ahead on reform legislation. And Brady shot down a common explanation very effectively: “The least convincing explanation [of the crisis] is one floating around the industry that attributes the events to ‘a perfect storm,’ i.e., it’s not my fault.”


Sin Rumbo

The just-released Spanish translation of my book, Getting Off Track, is titled Sin Rumbo, which translates back to English as “without direction” or “aimlessly.” Although the Spanish title has a somewhat different connotation than the English, it is actually an excellent title because it points to another problem with government policy during the financial crisis, namely that there was no coherent strategy—no direction—for dealing with the crisis once it flared up in August 2007. The Spanish translation of the subtitle is more straightforward: De cómo las acciones e intervenciones públicas causaron, prolongaron y empeoraron la crisis financiera, a direct rendering of the English original, How Government Actions and Interventions Caused, Prolonged, and Worsened the Financial Crisis.
