The Raisin Case: A Breeze not a Wind of Economic Freedom

The famous raisin case officially closed last week. As part of a long-standing government program that intervenes in the raisin market to support the price, the government tried to take raisins away from a California raisin grower, Marvin Horne, and thus off the free market. When Horne and his family refused, the government assessed a huge fine and penalty. But Horne wouldn’t pay, and he went to court. His case eventually went to the Supreme Court. Last week the Court decided that “the Hornes should simply be relieved of the obligation to pay the fine and associated civil penalty they were assessed when they resisted the Government’s effort to take their raisins.”

It took ten years, but it is an important victory for the Hornes, and for my colleague Mike McConnell who represented the Hornes at the Supreme Court. And it is also a victory for economic freedom because it prevents the raisin program from intervening in the market in the way it has for years (the program began with the Agricultural Marketing Agreement Act of 1937). For these reasons, the case has attracted a lot of attention (see Wall Street Journal opinion and news articles, and my blog pieces here with more references). And it is kind of exciting that this is happening in this 800th anniversary year of Magna Carta, which we celebrated at the Hoover Institution last week. Chief Justice John Roberts noted in the majority opinion that “The principle reflected in the [Takings] Clause goes back at least 800 years to Magna Carta, which specifically protected agricultural crops from uncompensated takings. Clause 28 of that charter forbade any ‘constable or other bailiff’ from taking ‘corn or other provisions from any one without immediately tendering money therefor, unless he can have postponement thereof by permission of the seller’….The colonists brought the principles of Magna Carta with them to the New World, including that charter’s protection against uncompensated takings of personal property.”

Unfortunately, however, the decision itself does not mean the end of the government program. As Roberts stated in the majority opinion, “A physical taking of raisins and a regulatory limit on production may have the same economic impact on a grower. The Constitution, however, is concerned with means as well as ends. The Government has broad powers, but the means it uses to achieve its ends must be “consist[ent] with the letter and spirit of the constitution.” McCulloch v. Maryland, 4 Wheat. 316, 421 (1819).”

In other words, according to Roberts, it is not inconsistent with the Constitution to regulate the amount of land that can be used in the production of raisins and thereby try to affect the price. So, as legal scholar Richard Epstein (also a colleague) said at the conference on Magna Carta, regulatory limits are still permitted, and “we will have to wait and see” what other cases might be brought. Indeed, many interventions in agriculture involve regulatory limits or set-asides of land rather than outright takings of crops. So, in this more fundamental way, economic freedom can still be infringed upon, creating inefficiency and deadweight loss, as we show in Economics 1.
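To make that Economics 1 point concrete, here is a minimal numerical sketch of the deadweight loss from a binding quantity restriction, assuming linear demand and supply with made-up numbers (not data on the raisin market):

```python
# Deadweight loss from a binding quantity restriction, with linear
# demand and supply. Illustrative numbers only, not raisin-market data.

def demand_price(q):      # inverse demand: willingness to pay
    return 100 - 2 * q

def supply_price(q):      # inverse supply: marginal cost
    return 20 + 2 * q

# Free-market equilibrium: demand_price(q) = supply_price(q)
q_star = (100 - 20) / (2 + 2)          # = 20
p_star = demand_price(q_star)          # = 60

# Suppose regulation holds quantity down to q_cap < q_star to support the price
q_cap = 15
p_cap = demand_price(q_cap)            # price consumers pay rises to 70

# Deadweight loss = area of the triangle between demand and supply
# over the units no longer traded (from q_cap to q_star)
dwl = 0.5 * (q_star - q_cap) * (demand_price(q_cap) - supply_price(q_cap))

print(f"free-market quantity {q_star}, price {p_star}")
print(f"restricted quantity {q_cap}, supported price {p_cap}")
print(f"deadweight loss: {dwl}")
```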

For many years in Economics 1, I have used the California raisin program to illustrate the impacts of government intervention, dressing up as a California Raisin and dancing to Marvin Gaye’s “I Heard It Through the Grapevine” to show how crazy the policy is. Perhaps I should get into my California Raisin outfit again and try to turn this breeze into a wind, and really end the program.

 

Posted in Regulatory Policy, Teaching Economics

Growth Accounting for a Liberated Recovery

For several years I’ve argued that economic policy is holding the economy back and that a return to the principles of economic freedom would recreate a fast-growing recovery. It’s the subject of my book First Principles, of blog posts, and of a recent Wall Street Journal column, A Recovery Waiting to Be Liberated.

Because the economy has crawled along at such a slow pace for so long during this recovery, it has features of an economy at the bottom of a recession ready for a post-recession acceleration. The resulting gap of unrealized potential creates the possibility of rapid growth for at least 5 catch-up years, if there is a change in policies. And at this stage in the cycle, this means largely supply-side policies.

To see how this would add up, one can use basic growth accounting, noting simply that the growth rate of real GDP is the sum of two components: employment growth and labor productivity growth.
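In symbols, with real GDP Y and employment E, labor productivity is Y/E, and the identity holds to a close approximation because the growth rate of a product is the sum of the growth rates:

```latex
\frac{\Delta Y}{Y} \;\approx\; \frac{\Delta E}{E} \;+\; \frac{\Delta (Y/E)}{Y/E}
```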

Reversing the decline in the labor force participation rate—it fell every year of the so-called recovery, from 66.0% in 2008 to 62.9% in 2014—would cause a 5 percent increase in employment, or 1% annual growth for 5 years. Adding in about 1% for population growth from Census projections gives employment growth of 2% per year. Some argue that the recent decline in labor force participation is simply due to the baby boom generation retiring, but the decline is larger for teenagers and young adults, and participation has actually increased among those of retirement age.
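The arithmetic behind that 5 percent is a back-of-the-envelope calculation, holding the working-age population fixed:

```latex
\frac{66.0 - 62.9}{62.9} \approx 4.9\% \approx 5\%, \qquad \text{or about } 1\% \text{ per year for 5 years.}
```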

Reversing the recent productivity slump—it’s been growing at barely 1% recently—would bring productivity growth of 2.5% per year, the average over the past 20 years. Some argue that faster productivity growth is a thing of the past. But the IT revolution, which has been a key driver of productivity growth during high-investment periods, is not over, as is clear from the innovative changes coming out of the high-tech sector.

If we add these two components together (productivity growth of 2.5% and employment growth of 2%), we get real economic growth of 4.5%, at least for a number of catch-up years, or more than double the average growth during the recovery. Economic policy (and again, it’s mainly supply-side policies now) should focus on these two components.

Posted in Slow Recovery

Stanford’s Economics 1 Course Open and Online

This summer Stanford will be offering an open online version of my on-campus course, Principles of Economics. People can find out more and register for the course, Economics 1, on Stanford’s free open online platform. The course starts on Monday (June 22). The first week’s lecture and study materials are now posted.

The course is based on my lectures in the on-campus Stanford course. Each day, after giving a 50-minute lecture, I recorded the same lecture divided into smaller segments for easier online viewing. Graphs, photos, and other illustrations appear as in the lecture. Moreover, we captioned and indexed the videos and added study material, readings, reviews, quizzes, and discussion groups to the platform to make it a complete, self-contained course.

The course covers all of economics at a basic level. It stresses the key idea that economics is about making purposeful choice with limited resources and about people interacting with other people as they make these choices. Most of those interactions occur in markets, and this course is mainly about markets, including the market for bikes on campus, labor markets, and capital markets. We will show why free competitive markets work well to improve people’s lives and how they have lifted millions of people out of poverty around the world, with many more, we hope, still to come.

People who participate in the open online course and take the short quizzes following each video will be awarded a Statement of Accomplishment, or a Statement of Accomplishment with Distinction.

The course also runs in parallel with a for-credit Stanford Economics 1 course that includes a midterm test, a final exam, problem sets, and homework, all of which are graded and count toward a final grade and Stanford course credit. The for-credit course is offered online to matriculated Stanford students, incoming freshmen, and visiting students in the Stanford Summer School.

As explained in this Wall Street Journal article, Stanford’s experience with Econ 1 online has been very good, and, of course, that’s the reason for offering it this summer.

Posted in Teaching Economics

Too Low For Too Long Or Global Saving Glut? Both!

In an interesting new paper, “The U.S. Housing Price Bubble: Bernanke versus Taylor,” forthcoming in the Journal of Economics and Business, Abrar Fitwi, Scott Hein, and Jeffrey Mercer examine two possible causes of the housing price boom that preceded the financial crisis. One, which I explored in a paper for the 2007 Jackson Hole conference, argues that the Fed’s unusually low interest rate during 2003-05 was a factor. The second, put forth by Ben Bernanke in a speech given at the 2010 American Economic Association meetings to counter that view, argues instead that a global saving glut led to a capital inflow that drove down U.S. interest rates, including mortgage rates.

Fitwi, Hein, and Mercer step back from the debate as impartial referees—noting that there is “no unanimous agreement on either claim” and that it is important to know “more about the factors that led to one of the worst financial crises of modern time”—and proceed to test both hypotheses.

What do FHM find? That both hypotheses are right! Expanding on a statistical regression analysis used by George Kahn, they show that “Taylor rule deviations” are a statistically significant factor in the housing price acceleration, but they also add “capital inflows” from abroad to the regression and show that these are significant as well. So they “find evidence consistent with both factors’ contributing significantly to the recent macro-housing price behavior in the U.S.”
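For readers who want the flavor of the exercise, here is a stylized sketch of such a regression in Python; the data file, column names, and exact specification are my assumptions for illustration, not the authors’ own:

```python
# Stylized regression in the spirit of Fitwi, Hein, and Mercer:
# housing price growth on Taylor-rule deviations and capital inflows.
# The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("housing_quarterly.csv")  # hypothetical quarterly data

# Taylor-rule deviation: actual federal funds rate minus the rule's
# prescription (negative values mean policy was "too low")
df["rule_deviation"] = df["fed_funds"] - df["taylor_rule_rate"]

X = sm.add_constant(df[["rule_deviation", "capital_inflows"]])
y = df["housing_price_growth"]

result = sm.OLS(y, X, missing="drop").fit()
print(result.summary())  # check whether both coefficients are significant
```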

The paper has a good literature review and clearly exposits the methodology, clarifying, for example, that the objective of the original Taylor rule was “normative, to characterize how the Federal Reserve SHOULD adjust interest rates.” As the authors argue and empirically show in this case, it is possible that there were two factors behind the housing boom, and of course there could have been more than two: Peter Wallison points to the role of Fannie and Freddie, and I have pointed to lax regulation.
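For reference, the original rule (Taylor 1993) sets the federal funds rate i as a function of inflation π over the previous four quarters and the percentage gap y between real GDP and trend:

```latex
i = \pi + 0.5\,(\pi - 2) + 0.5\,y + 2
```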

Moreover, the various causes could be interrelated, and this raises a question about the paper that suggests additional future research: could the capital inflows examined by the authors have been caused, at least in part, by the low interest rates, as much as, or even more than, by a global saving glut, in ways that the regression might not pick up?

To see this, consider the following chart, which shows the US current account balance along with the capital inflow series. The capital inflow series in the chart comes directly from the BEA—called incurrence of liabilities—because the St. Louis Fed FRED database has discontinued the series that the authors used. There is definitely a bulge in the capital inflow series during 2003-2006, the time of the acceleration in the housing price boom, which corresponds to the statistical significance in the FHM regressions.

[Chart: U.S. current account balance and capital inflows (incurrence of liabilities)]

But much of that bulge can’t be explained by the current account deficit (the saving glut from abroad, in Bernanke’s terminology), which is on a long downward trek during this period. Claudio Borio and Piti Disyatat have made similar points. As an accounting matter, the capital inflow must also be related to capital outflows—the acquisition of assets—as shown in the next chart by the blue line.

[Chart: U.S. capital inflows and capital outflows (acquisition of assets)]
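The accounting behind this point is the balance-of-payments identity; in simplified form (ignoring the capital account and the statistical discrepancy):

```latex
\underbrace{CA_t}_{\text{current account}} \;\approx\;
\underbrace{\Delta A_t}_{\text{outflows (acquisition of assets)}} \;-\;
\underbrace{\Delta L_t}_{\text{inflows (incurrence of liabilities)}}
```

So a bulge in inflows can match a bulge in outflows even when the current account follows a smooth path.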

There is also a bulge in capital outflows around the time of the low federal funds rate, which could reflect a “search for yield” outside the US. The largely matching changes in capital inflows could reflect efforts of emerging market countries to prevent their exchange rates from appreciating by intervening in the exchange markets and buying dollar-denominated debt, including mortgage-backed securities.

In my view, we need more empirical work of the kind in the Fitwi, Hein, and Mercer paper to resolve these intricate causality issues.

Posted in International Economics, Monetary Policy

A Reawakening of Monetary Policy Research

Last May a group of economists, central bankers, market participants, and financial journalists convened at Stanford’s Hoover Institution “to put forth and discuss a set of policy recommendations that are consistent with and encourage a more rules-based policy for the Federal Reserve and would thus improve economic performance…”  Here’s the agenda, the published volume, and my summary.

Since then much has happened: the House Financial Services Committee passed a policy rules bill out of committee, the Senate Banking Committee proposed a similar bill with other structural reforms (which also passed out of committee), the Bank of England instituted significant communication reforms, a slew of economists and Fed officials weighed in (both pro and con) on proposals to make central bank policy rules more transparent, and Congress held several public hearings.

To analyze these new developments, many of the experts from last year’s conference, along with others, convened last week to present papers and discuss key issues. All the papers are posted here. They were novel, on point, and rigorous, whether using equations, regressions, history, legal analysis, or political theory. The discussion was candid, with new questions raised about the effectiveness of the Fed’s deliberations. In my view it was kind of a reawakening—part of a broader reawakening—of monetary policy research. A written record of the whole conference is planned. In the meantime, here’s a quick summary:

Paul Tucker opened the conference. His paper showed that a systematic strategy for setting the instruments of policy is desirable, but that integrating that strategy with a more discretionary lender-of-last-resort function is difficult and still needs to be worked out. He argued that “the central bank…should publish the Operating Principles” (the rules for the instruments), even stressing that simply doing this “is more important than that any particular set of principles or any particular instrument-rule be entrenched in a law that is justiciable via the courts.”

John Cochrane, the lead discussant of Paul’s paper, had different views about the focus on discretion in emergency lending, saying that “Crisis-response and lender-of-last-resort actions need rules, or ‘regimes,’ even more than monetary policy actions need rules.” He then went on to propose that bailout problems be addressed through reforms in which “all fixed-value demandable assets had to be backed 100% by our abundant supply of short-term Treasuries.”

Next, David Papell presented a paper that used statistical methods to explore how recent policy rules bills would work in practice. Employing a counterfactual analysis going back to the 1950s, he found that no single policy rule would have kept the federal funds rate within a 2 percentage point band of the rule for every year, though some rules can be adjusted to fit certain periods. This suggests that attempts to use rules in seemingly arbitrary ways to justify policy in one period could backfire in later periods. Except to say that such legislation would make policy more predictable, his paper did not draw conclusions about whether the legislation might affect the responsiveness of policy in one direction or another, such as during the early years of the Volcker disinflation or during the 2003-2005 period when the policy deviations were quite large. Nevertheless, his is the first paper to apply formal econometric methods to these legislative questions.
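A minimal sketch of the kind of band test involved, assuming Taylor (1993) coefficients (my simplification for illustration, not Papell’s code; the data file and column names are hypothetical):

```python
# How often does the actual federal funds rate stay within a
# 2-percentage-point band of a Taylor-type rule?
import pandas as pd

df = pd.read_csv("funds_rate_quarterly.csv")  # hypothetical data

# Rule prescription with Taylor (1993) coefficients
df["rule"] = (df["inflation"] + 0.5 * (df["inflation"] - 2.0)
              + 0.5 * df["output_gap"] + 2.0)

df["in_band"] = (df["fed_funds"] - df["rule"]).abs() <= 2.0
print(f"share of quarters within the band: {df['in_band'].mean():.2%}")
```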

In his discussion of David’s paper, Mike Dotsey of the Philadelphia Fed argued that if one considers rules with the lagged interest rate on the right-hand side, then the actual funds rate comes well within a 2 percentage point band, though some noted that using lags this way can simply perpetuate past monetary policy errors. There was also a discussion of the interesting new report (co-authored by Dotsey) at the Philadelphia Fed on using policy rules for benchmarking without being required to do so by Congress.
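Rules with the lagged interest rate on the right-hand side typically take something like the inertial form below, where the smoothing parameter ρ lies between 0 and 1 (a standard specification, not necessarily the exact one Dotsey used):

```latex
i_t = \rho\, i_{t-1} + (1-\rho)\left[\pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t + 2\right]
```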

Carl Walsh’s paper also used macroeconomic research methods in a novel way to assess the recent bills that require Fed reports on instrument rules. He employed both a simple calibrated new Keynesian model and a more complex estimated model to investigate whether such a rules-based requirement could improve on a goals-based requirement in which the central bank is simply required to achieve a goal of 2% inflation. More specifically, he asked how much weight should be placed on each of the two alternative requirements. In the case where the required rule is optimal (for the model used) his conclusion is to put all the weight on the rules-based requirement, but if the required rule is not optimal then the weight depends on whether shocks are demand-side or supply-side. In each case, the gain is the improvement in output stability and employment stability compared with the discretionary solution.
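One stylized way to formalize the weighting question, my own sketch of the trade-off rather than Walsh’s exact formulation: append to a standard quadratic loss a penalty for missing each requirement, with weight φ_R on the rules-based requirement and φ_G on the goals-based requirement, and ask how the optimal weights vary with the shocks:

```latex
L_t = \underbrace{(\pi_t - \pi^*)^2 + \lambda\, y_t^2}_{\text{standard loss}}
      \;+\; \varphi_R\,(i_t - i_t^{\,rule})^2
      \;+\; \varphi_G\,(\pi_t - \pi^*)^2
```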

In discussing Carl’s paper, Andy Levin argued that the gains from using and reporting on the central bank’s strategy for the instruments go well beyond the calculations in Carl’s paper, which he worried paid too little attention to model uncertainty. He argued that the Fed could do much more to clarify its strategy for the instruments, noting as evidence that the Fed’s recent Statement on Longer-Run Goals and Monetary Policy Strategy is all about goals and nothing about strategy.

Kevin Warsh’s paper, and even more so his oral presentation, on the lack of effective deliberations at the FOMC contained some of the most surprising findings of the conference, especially for people who have never attended an FOMC meeting. As Binyamin Appelbaum, attending from the New York Times, tweeted, there were “fascinating reflections from Kevin Warsh on the absence of real debate inside the FOMC.” Binyamin also asked why publication of the transcripts years later would affect what people say inside. Although Warsh, John Williams, and Charles Plosser all indicated that it did not affect them, they all thought that it affected others.

Peter Fisher, former Fed, Treasury, and BlackRock official, now at Dartmouth, began his discussion of Kevin’s presentation by noting its refreshing candor, and then read out a fascinating list of ways to judge whether a policy committee is functioning well. Among other things, he listed the willingness of committee members to change their priors in a Bayesian fashion when presented with new arguments or data. Another highlight of the discussion was a disagreement between Paul Tucker and Kevin Warsh about what went wrong on Lehman weekend. Much more investigative and financial research is needed here.

The final paper of the day was by Michael Bordo, whose broad historical sweep demonstrated the value of the diversity of opinion coming from the district Fed banks and their presidents, and the danger of centering more power in Washington. He argued for a more rules-based monetary policy, as in the House and Senate bills, but against having the district bank presidents appointed by the President of the United States.

In discussing Mike’s paper, Mary Karr, the General Counsel at the St. Louis Fed, reviewed the checks and balances in the current system of appointing district bank presidents, adding that the process is not a source of regulatory capture. George Shultz, in one of his many helpful interventions at the conference, said he was reassured by Mary’s explanations but still worried, especially about the process at the NY Fed. A common answer he got was “well, the NY Fed is different.”

The concluding policy panel featured John Williams, Charles Plosser, and George Shultz. John Williams built on his recent speech on policy rules legislation, adding, among other things, that the Fed minutes were becoming too detailed, thereby detracting from Fed deliberations. Charles Plosser indicated that the FOMC’s deliberations look more constructive when viewed from a multi-meeting perspective than from the conversation at a single meeting. Plosser also emphasized the importance of the Fed being a limited-purpose institution, to which George Shultz responded at the start of his presentation with the word “Amen.” George Shultz went on to argue that the Fed is in dire need of developing and communicating a strategy for the policy instruments, making novel analogies with foreign policy and drawing on specific examples from his experience as Secretary of Labor, State, and Treasury. I expect more to be written about this analogy.

In sum, there was broad agreement on the importance of a rules-based policy or strategy at a general level, but disagreement about how a central bank should deliberate, implement, and communicate such policies, and hence a need for more research. In this regard I noted at the conference a healthy respect for the good economic research that must underlie effective policy, along with a great deal of congeniality, as represented, for example, by John Williams and me exchanging T-shirts at the conference.

[Photo: John Williams and John Taylor exchanging T-shirts at the conference]

Posted in Monetary Policy, Regulatory Policy

The Senate Moves Ahead on a Policy Rules Bill

Today the Chairman of the Senate Banking Committee, Richard Shelby, released a draft bill entitled “The Financial Regulatory Improvement Act of 2015” covering a wide range of reforms. Like the widely discussed House policy rules bill (Section 2 of HR 5018 of last year), this Senate bill (in the first section of Title V) would require that the Fed report on monetary policy rules. In fact, the Senate bill contains important principles regarding policy rules that are also in the House bill and should be ripe for compromise in conference.

Recall that the House bill, as I described in testimony before the Senate Banking Committee in March, “would require that the Fed ‘describe the strategy or rule of the Federal Open Market Committee for the systematic quantitative adjustment’ of its policy instruments. It would be the Fed’s job to choose the strategy and how to describe it. The Fed could change its strategy or deviate from it if circumstances called for a change, but the Fed would have to explain why.”

The Senate bill is quite similar in these essentials.

First, it would require that the Fed report each quarter to Congress “a description of any monetary policy rule or rules used or considered by the Committee that provides or provide the basis for monetary policy decisions, including short-term interest rate targets set by the Committee…” with the stipulation that “such description shall include, at a minimum, for each rule, a mathematical formula that models how monetary policy instruments will be adjusted based on changes in quantitative inputs…”

Second, it would require in each quarterly report “a detailed explanation of any deviation in the rule or rules…from any rule or rules…in the most recent quarterly report.” And to emphasize that rules and strategies have similar meanings, the bill includes a corresponding requirement to report on any monetary policy strategy or strategies. In other words, as in the House bill, the Fed could change strategies or rules, but it would have to explain why.

Neither the House bill nor the Senate bill would require the Fed to follow any particular rule—mechanical or otherwise. There is precedent for both bills in previous legal requirements for the Fed to report on the monetary aggregates. Neither bill would reduce the Fed’s independence; based on my experience in government, they would bolster it.

There are, of course, some differences between the bills. For example, the Senate bill only requires the Fed to report a rule if such a rule provides the basis for policy decisions. However, as is well known from the Fed transcripts, the Fed regularly uses policy rules and discusses deviations from such rules, so the bill would require the Fed to report them. It is mainly a matter of transparency and thus hard to object to.

The House bill, but not the Senate bill, would also require that the Fed compare its reported rule with a so-called reference rule, which turns out to be the Taylor rule in the House bill. However, since the House bill does not require the Fed to follow any rule, including the Taylor rule, this is a small difference in practice. Moreover, since it is common to compare rules or strategies with that rule, others will likely do that comparison anyway.

Another difference is that the Senate bill does not have a role for the GAO in determining whether the Fed is complying with the law. This would be the job of the members of Congress and their staffs, based on the quarterly reports submitted by the Fed. This change makes the legislation closer to what I originally proposed, and taking the GAO out of the bill will likely remove some objections.

In sum, the Senate policy rules bill endeavors to establish transparency and “responsible oversight,” as Senator Shelby puts it, without trying to micromanage the Fed. It maintains the same key reporting principles that are in the House bill in a way that should encourage bipartisan discussion and constructive input from the Fed. It has made the most of an opportunity presented by extensive legislative work and commentary during the past year and moves ahead on needed reform legislation.

Posted in Monetary Policy, Regulatory Policy

Surprising Findings at the Macro Handbook Conferences

In order to further progress on the new Handbook of Macroeconomics, which will be published next year, Harald Uhlig and I, the co-editors of the Handbook, hosted two conferences at Stanford and Chicago in April. Harald and I attended both conferences—three days in each venue—where we heard distinguished macroeconomists present 35 draft chapters and critical commentary on each of those chapters. With many chapters having coauthors, there were about 85 presentations in all, far too many to summarize in a short blog, though Harald and I plan to write such a summary in the volume’s introduction. Many of the preliminary drafts are posted on the conference web sites at the Hoover Institution at Stanford and the Becker Friedman Institute at Chicago. Comments for the authors are welcome, as final drafts will be prepared in the coming months.

The conferences displayed an amazing range of new and different ideas, which is understandable given all that has happened since the first Macro Handbook was published in 1999. The range of topics was surprisingly wide, extending well beyond traditional macro and including such topics as Family Macro, Natural Experiments in Macro, Environmental Macro, and the Macroeconomics of Time Allocation. Of course there were both real business cycle chapters (Prescott, Ohanian, Hansen) and monetary business cycle chapters (Christiano, Eichenbaum, Trabandt), as well as treatments of macro-prudential policy and of fiscal policy at the zero lower bound on interest rates. There were also essential chapters on the latest estimation and solution techniques (in continuous and discrete time), as well as helpful displays of the key facts of economic growth and economic fluctuations, both at the aggregate and the individual level. The representative agent was not the only type of agent represented!

Though the new Handbook is by no means finished, there is already a very noticeable difference from the first Handbook. Perhaps the most important is that many authors included examinations of the role of financial frictions, and the financial sector more generally, in macro models. Of course, since the Global Financial Crisis and the Great Recession, most people view the lack of such frictions as a major gap in macro, but how that gap will most effectively be filled remains to be seen.

The formal models in the chapters of the Handbook can help answer that question in ways that informal policy debates cannot, and I hope that this may be an important accomplishment of the Handbook in the end. Between the two conferences I attended an IMF conference in Washington, Rethinking Macro III, and participated in such debates (one with Ben Bernanke), which, while valuable, could not settle key issues without this kind of formal modeling work, as I think Olivier Blanchard made clear in his summary.

The first Handbook had the famous chapter by Bernanke, Gertler, and Gilchrist on the financial accelerator, but the ideas in that research now appear in many chapters. One surprising finding, clear in the Linde, Smets, and Wouters chapter, is that when you add such financial factors to the mainline macro models used at central banks, they do not help that much in explaining the financial crisis. To paraphrase simply, they can change the financial crisis from something like a 6-sigma event in the models to a 3-sigma event: an improvement, but not enough to be of much help in the next crisis. Look for more surprising and even debate-settling findings in future drafts.
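To put those sigmas in perspective, here is a quick back-of-the-envelope calculation assuming normally distributed shocks:

```python
# One-sided tail probabilities of 6-sigma vs 3-sigma events under normality
from scipy.stats import norm

for k in (6, 3):
    p = norm.sf(k)  # P(X > k standard deviations)
    print(f"{k}-sigma event: probability about {p:.1e}")

# Roughly 1e-9 for 6 sigma versus 1.3e-3 for 3 sigma: the models go from
# treating a crisis as essentially impossible to treating it as merely rare.
```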

Here is a group picture of authors and discussants from the conference at the Becker Friedman Institute in Chicago:

[Group photo of authors and discussants at the Becker Friedman Institute in Chicago]

and here is one from the Hoover Institution at Stanford.

[Group photo of authors and discussants at the Hoover Institution at Stanford]

Both institutions also provided galleries of action shots, here and here.

Posted in Budget & Debt, Financial Crisis, Fiscal Policy and Reforms, International Economics