Ten Amusingly Irreverent Tweets at Conference on Fed

So many members of the financial press were having a good time tweeting at the Fed Centennial conference last week at Stanford that, according to the “TweetReach Report,” about 1 million Twitter accounts were reached and 10 million tweets were delivered about the hashtag #hooverfedconference.  Here is a photo of the post-conference press conference.


Many tweets were serious and formed the basis for longer articles, but almost everyone got a kick out of the funny, sarcastic ones. Here is my list of the 10 with the most amusingly irreverent sense of humor.





Posted in Teaching Economics

An Essay on Gary Becker for the Hoover Digest

I wrote this short essay on Gary Becker for the Hoover Digest where it will appear in a forthcoming issue:

Gary Becker was “the greatest social scientist who has lived and worked in the last half century.” So declared Milton Friedman a decade ago, and when Gary Becker died earlier this month at the age of 83, the outpouring of praise from his friends and colleagues reminded us why: His unique style of economic analysis, firmly rooted in facts, yielded a host of truly amazing ideas and predictions, from the growth effects of investment in human capital to recent changes in the distribution of income and intergenerational mobility. Many of his ideas—including that free competitive markets help combat discrimination and that simple cost-benefit calculations applied to children help determine fertility rates—were originally controversial, but are now widely accepted. I regularly teach them to beginning students in the Economics 1 course at Stanford.

In the rush to describe Gary’s contributions to economics we sometimes forget his deep interest in economic policy. He took economics very seriously, no less so when he applied it to public policy.  For Gary, more than for most economists, economics and economic policy were inseparable.  When he talked to a politician running for office or to a public official already in office, his policy recommendations would be exactly the same as if he were speaking to a student, a colleague, or the readers of his Business Week columns, blogs, and research papers.  There was no difference between his economics and his school of economics.

This close connection between economics and economic policy was most apparent to me during the times of year that he was in residence at the Hoover Institution, which itself has had a focus on policy. For several decades Gary would spend a number of weeks or months of each year at Hoover, and he kept in touch with Hoover policy research projects at other times, from writing op-ed columns with other Hoover fellows to commenting on their ideas. His Hoover office was next to mine, and I will miss him, his advice, and our conversations greatly, as will many of his other good friends.

Gary’s association with the Hoover Institution began in the 1970s when he served on the influential Domestic Policy Advisory Committee along with Milton Friedman, George Stigler, and James Buchanan, all of whom would also become Nobel Prize winners.  He officially became a Hoover Senior Fellow in 1990.

During his stints on the west coast Gary regularly attended the annual Economists Weekend at Villa Cyprus on the Monterey Peninsula hosted by George Shultz.  There he would vigorously debate the hot policy topics of the day with Shultz, Friedman, and other economists, but also with practical business people engaged in economics and finance like Walter Wriston of Citibank and Dick Kovacevich of Wells Fargo.  Breakfasts, lunches, and dinners became serious seminar-like policy conversations with rejuvenating breaks to play tennis or hike along the rocks and surf. Policy topics would change over the years, but the seriousness with which Gary confronted them did not.

Another example of Gary’s focus on economic policy was the 1996 presidential campaign, in which Gary was a key economic adviser to candidate Bob Dole.  He focused mainly on education and training issues, but weighed in on all other economic issues from the budget to tax policy. From my vantage point as another adviser, I can tell you that Gary’s advice could not have been more closely aligned with his economic research, with absolutely no hedging or bending if politics threatened to push out good economics. In a campaign memo he wrote: “The value of education, training, and other human capital is no less than that of machines and other physical capital, and almost certainly it is larger,” adding in another memo, “We have seen income distribution widen in the United States and other countries,” and that reflects “a particular problem with the education and training of those at the lower end of the income distribution.”  He advised that “The aim of policy reforms in this field should be to help stimulate economic growth by encouraging better quality and more effective schooling and training, especially for those at the bottom and middle of the human capital distribution.”  This “will both raise economic growth and also reduce inequality in earnings.”

So, long before its recent popularity in policy and political circles, Gary was diagnosing and looking for solutions to income distribution problems.  Indeed, some of his most recent work at the Hoover Institution was on income distribution. In a paper presented at the Hoover Economic Policy Working Group last year he applied his unique approach to the problem—one could say he Beckerized the problem—and uncovered a natural connection between changes in the cross-sectional distribution of income and changes in intergenerational income mobility.

The recent financial crisis led many to question basic economic principles, but Gary fought back. In a September 2011 Wall Street Journal article headlined “The Great Recession and Government Failure,” he said that “The origins of the financial crisis and the Great Recession are widely attributed to ‘market failure’….government behavior also contributed to and prolonged the crisis. The Federal Reserve kept interest rates artificially low in the years leading up to the crisis….Regulators who could have reined in banks instead became cheerleaders for the banks….’government failure’ added greatly to its length and severity, including its continuation to the present. In the U.S., these government actions include an almost $1 trillion in federal spending that was supposed to stimulate the economy.”

The “blame the markets not the government” mantra was enough to discourage anybody.  I remember going into his office and griping about it. But Gary could see people’s perceptions changing, and he was pleased that the revival of a highly interventionist approach to economic policy had not captured all of the profession. Gary remained optimistic to the end, and that should be an inspiration to us all.

Posted in Teaching Economics

New Conference on Fed and Rules-Based Policy: A Preview

Today starts a two-day conference at Stanford’s Hoover Institution on monetary policy. It’s part of the Fed Centennial. Here is the full agenda, which includes talks and commentary by Esther George, Tom Sargent, Charles Plosser, John Williams, Jeff Lacker, Ed Prescott, Allan Meltzer, Niall Ferguson, Maury Obstfeld, Barry Eichengreen, George Shultz, Monika Piazzesi, Athanasios Orphanides, Otmar Issing, Martin Schneider, and others.

The main purpose of the conference is to put forth and discuss a set of policy recommendations that are consistent with and encourage a more rules-based policy for the Fed, and would thus improve economic performance, especially in comparison with the past decade.  The idea is to base these recommendations as much as possible on economic theory, on data, and especially on the history of the past century. It is natural to do so at the time of the Centennial of the Fed.

The recommendations in the technical papers prepared in advance of the conference have set the stage for the discussion and the panels to come. Here is a quick summary of those recommendations.  Links are on the agenda.

In his paper for the first session, John Cochrane recommends three things: first, that the short-term interest rate should in the future be determined by setting the interest rate on reserves; second, that the rate should be adjusted according to a policy rule; and third, that the resulting large reserve balances at the Fed should not be used for discretionary interventionist policies, such as quantitative easing, which he argues have done little good.  We may hear some discussion about whether political economy considerations render the third point hard to achieve in practice.

David Papell’s paper provides a statistical foundation for the overall theme. He uses formal statistical techniques to determine when in history monetary policy was rule-like, and he finds the rule-like periods coincide remarkably well with periods of good economic performance.  A clear policy recommendation emerges directly from his statistical findings.

Marvin Goodfriend’s historical review of the Fed’s first century leads him to recommend a new “Fed-Treasury Credit Accord” which would have a “Treasuries only” asset acquisition policy with exceptions only in the case of well-specified lender of last resort actions.  This would deal with the recurrent mission creep problem where a limited purpose institution takes on other actions for which it was not granted independence.

Michael Bordo’s key policy recommendation nicely dovetails with Marvin Goodfriend’s. He recommends, again based on an examination of the history of the Fed, that, in order to prevent and deal with crises, the central bank needs to lay out and to announce a systematic rule for its lender of last resort actions, linking his policy recommendation to what has worked and what has not worked in practice.

Lee Ohanian puts monetary policy in the context of big real economic shocks that are caused in large part by other economic policies—a situation which many have argued characterizes the economic situation today.  He finds that discretionary Fed policy responses to these major shocks have, in some cases, negatively impacted the economy. Also timely is his warning that overestimating the risks of deflation can lead monetary policy astray.

Andrew Levin recommends that a good communications strategy for systematic monetary policy should recognize that the reference policy rule may change over time.  He usefully focuses on the possibility of a change in the terminal or equilibrium federal funds rate, such as a decline from the 4 percent now assumed by most FOMC members to perhaps as low as 2 percent, as Richard Clarida has argued in a new paper with his PIMCO colleague Bill Gross.

And we will also hear from Richard Clarida that policy rules work quite well in an international setting and lead to a smoother operating global monetary system with smaller spillovers.  He also argues that a policy rule framework will “re-emerge as the preferred de facto if not de jure construct for conducting, evaluating, and ultimately for communicating monetary policy.”  Here his use of the phrase de jure is important, for it suggests that some legislation may be needed to bring these reforms about and keep them in place.  That is an important and timely issue which will be discussed over the next two days.

Posted in Monetary Policy

Market Failure and Government Failure in Leading Economics Texts

A new review of 23 leading Principles of Economics texts reveals huge differences in the coverage of government failure versus market failure.  Jim Gwartney, who is the author of a leading text with a strong emphasis on public choice, along with his colleague Rosemarie Fike, conducted the review and posted the results here.

Jim and Rosemarie went through each of the 23 introductory texts looking for and tabulating references to various types of market failure and government failure. As they explain in the paper they also categorized the explanations of market failure (say due to externalities, public goods, market power,…) and government failure (say due to special interests, short-sightedness, rent-seeking,…). Different people can have different views about the criteria and about models of public choice and government developed by James Buchanan, George Stigler, and others.  But the paper endeavors to describe the methodology carefully, and I recommend reading it to get an understanding of the results.

The following is Table 3 of the paper. It shows the ratio of page coverage on government failure to page coverage on market failure in each text.  Other metrics reported in the paper give similar results.

Gwartney Fike Table 3

The Paul Krugman-Robin Wells book is tied for the lowest ratio (0.00) with the Robert Hall-Marc Lieberman book.  With more references to government failure, Gwartney, Cowen-Tabarrok, Arnold, and McEachern have much higher ratios.  It is interesting that the ratios in Baumol-Blinder and Mankiw are quite low, especially in comparison with the Samuelson-Nordhaus ratio, which is just a bit below the average. The ratio in my book with Akila Weerapana is a bit above the average.

Reviews like this can affect future texts and revisions as authors and users become more aware of the overall coverage in comparison with the market.  My guess is that future reviews will show an increase in the average ratio and some small reduction in the variance that preserves the overall diversity.


Posted in Teaching Economics

Deleting Vice and Other Revisions in Monetary Lectures

Yesterday I finished my course on Monetary Theory and Policy for this year’s first-year PhD students at Stanford.  I have been teaching in the first-year PhD core for a long time and it gets more interesting each year.  (Technically speaking, I first taught in the first-year graduate course in 1968 as a student at Stanford. Lorie Tarshis, author of the first Keynesian textbook, was the professor, and he asked me to give the lecture on dynamic stochastic models of the business cycle, saying he did not know much about it.)

Of course the lectures have changed enormously over the years, especially during the 1970s and 1980s with the emergence of new Keynesian modeling (rational expectations with sticky prices). But the past few years of crisis and slow recovery have also seen big changes, for example, bringing preferred habitat or affine equations for the term structure into the macro models in order to assess quantitative easing and forward guidance.  But I also teach that the basic macro models are still OK and that it was policy that went off track leading up to the crisis.

For the past two years I have been assigning and discussing some of Janet Yellen’s work, such as this April 2012 talk relating to policy rules, so little updating was required there—simply deleting Vice in Vice Chair, as in the attached slide from lecture one.  I’ll be posting revised versions of all the lectures on my web page soon.

slide 20


Posted in Teaching Economics

Debate Heats Up: Re-Normalize or New-Normalize Policy

Last week’s IMF conference on Monetary Policy in the New Normal revealed a lot of disagreement on the key issue of where policy should be headed in the future. A dispute that broke out between me and Adair Turner is one example.  I led off the first panel making the case that central banks should re-normalize rather than new-normalize monetary policy. At a later panel Turner, who headed the UK Financial Services Authority during the financial crisis, “very strongly” disagreed.

Turner took issue with the view that a departure from predictable rules-based policy has been a big factor in our recent poor economic performance, essentially reversing the move to more predictable rules-based policy that led to good performance during the Great Moderation. I used the following diagram (slide 5 from my presentation), which is an updated version of a diagram Ben Bernanke presented in a paper ten years ago.

slide 5

The diagram shows a policy tradeoff curve (called the Taylor Curve by Bernanke in his paper, following fairly common usage). I argued, as did Bernanke in that paper, that the improved performance from point A to point B was mainly due to monetary policy, not a shift in the curve. In my view, the recent deterioration in performance to the red dot at point C was also due to a departure from rules-based policy, rather than a shift in the curve.

And this is what Adair Turner disputed. Here is a transcription of the relevant 1 minute of his talk (from 25.10 to 26.10 in this video): “I basically end up disagreeing very strongly with something that John Taylor said on his fifth slide. He basically argued for a rules—a fully rules-based approach—to what central banks do.  He argued that one had moved to a better tradeoff—a Bernanke tradeoff on that chart, because of rules, between the variance of output and the variance of inflation. And he suggested that we had then moved to his red dot, which was the post-2006 red dot, because we had moved away from those rules. I disagree. I think we moved to post-2006 and in particular post 2007-08 period precisely because we had those rules.  Because we fooled ourselves that there existed a simple set of rules with one objective a low and stable rate of inflation—and the inflation rate alone, we ignored a buildup of risks in our financial sector that produced the financial crisis of 2008 and the post crisis recession.”

But as I showed in my presentation (23.30-38.00 min) and in the written paper, monetary policy did not stick to those rules. The Fed deviated from its Great Moderation rules by holding interest rates too low for too long in 2003-05 thereby creating that “buildup of risks in our financial sector that produced the financial crisis of 2008” as Turner puts it. In addition, financial regulators and supervisors set aside safety and soundness rules. And in the post-panic period monetary policy has been anything but rule-like and predictable.

Turner is also incorrect to suggest that the simple rules in question, such as the Taylor rule, are so simple that they react only to the rate of inflation. They respond to developments in the real economy too.
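To make the point concrete, here is a minimal sketch in Python of the classic 1993 version of the rule, with the standard illustrative parameter values (a 2 percent inflation target and a 2 percent equilibrium real rate); it shows that the prescribed rate moves with the output gap even when inflation is exactly on target:

```python
def taylor_rule(inflation, output_gap, target_inflation=2.0, equilibrium_real_rate=2.0):
    """Classic Taylor (1993) rule, in percentage points.

    The policy rate responds to BOTH the deviation of inflation from
    its target AND the output gap, each with a coefficient of 0.5.
    """
    return (equilibrium_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Inflation on target, zero output gap: the rule prescribes 4 percent.
print(taylor_rule(inflation=2.0, output_gap=0.0))   # 4.0

# Same on-target inflation, but a 2 percent negative output gap:
# the prescribed rate falls to 3 percent, purely because of the real economy.
print(taylor_rule(inflation=2.0, output_gap=-2.0))  # 3.0
```

The second call illustrates the reply to Turner: with the inflation term held fixed, the prescription still changes as real activity changes.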

If the IMF conference and other events last week are any guide, this debate is heating up.  At one extreme Adam Posen argued at the IMF conference for Quantitative Easing Forever, but Jeremy Stein, Brian Sack, and Paul Tucker were skeptical.  And at her speech in New York last week Janet Yellen referred to the Taylor rule, and some commentators here and here saw signs of laying the ground for a return to more rules-based policies.


Posted in Monetary Policy

A First Meeting of Old and New Keynesian Econometric Models

Lawrence Klein, who died last October at age 93, is most remembered for the “creation of econometric models and the application to the analysis of economic fluctuations and economic policies,” as the Nobel Prize committee put it in the 1980 citation.  But in these days of “macroeconomists at war” it is worth remembering that Klein was also a pioneer in exploring the reasons for differences between macro models and the views of the economists who build and estimate them.  The Model Comparison Seminar that he ran during the 1970s and 1980s brought macroeconomists and their models together—macroeconomists at peace?—to understand why their estimates of the impact of fiscal and monetary policy were different.  In my view there is too little of that today.

I will always be grateful to Lawrence Klein for inviting me to join his Model Comparison Seminar and enter into the mix a new kind of model with rational expectations and sticky prices which we were developing at Stanford in the mid-1980s.  The model was an estimated version of what would come to be called a “new Keynesian” model, and the other models in the comparison would thus logically be called “old Keynesian.” They included such famous workhorse models as the Data Resources Incorporated (DRI) model, the Federal Reserve Board’s model, the Wharton Econometric Forecasting Associates (WEFA) model, and Larry Meyer’s Macro Advisers model.  It was probably the first systematic comparison of old and new Keynesian models and was an invaluable opportunity for someone developing a new and untried model.

The performance comparison results were eventually collected and published in a book, Comparative Performance of U.S. Econometric Models. In the opening chapter Klein reviewed the comparative performance of the models, noting differences and similarities: “The multipliers from John Taylor’s model…are, in some cases, different from the general tendency of other models in the comparison, but not in all cases….Fiscal multipliers in his type of model appear to peak quickly and fade back toward zero. Most models have tended to underestimate the amplitude of induced price changes, while Taylor’s model shows more proneness toward inflationary movement in experiments where there is a stimulus to the economy.”

Klein was thus shedding light on why government purchases multipliers were so different—a controversial policy issue that is still of great interest to economists and policy makers as they evaluate the stimulus packages of 2008 and 2009 and other recent policies, as in the paper “New Keynesian versus Old Keynesian Government Spending Multipliers,” by John Cogan, Tobias Cwik, Volker Wieland, and me.

Posted in Teaching Economics