My teacher, colleague, and good friend Ted Anderson died this week at the age of 98. Ted was my Ph.D. thesis adviser at Stanford in the early 1970s, and later a colleague when I returned to teach at Stanford in the 1980s. He was a most important influence on me and my research. He taught me an enormous amount about time series analysis, and about how to prove formal theorems in mathematics. I am grateful to him for sharing his wisdom and for endeavoring to instill his rigorous approach to econometric research. His lectures were clear and insightful, but it was from interacting with him in his office or in the Econometrics Seminar that one really learned time series analysis.
The Stanford Econometrics Seminar in the early 1970s was an amazingly fertile place for developing new ideas. Fortunately for me, the seminar focused on several new books that formally explored the problem of optimal economic policy formulation in statistical or probabilistic settings: Peter Whittle’s Prediction and Regulation, Masanao Aoki’s Optimization of Stochastic Systems, and Box and Jenkins’ Time Series Analysis: Forecasting and Control. We each presented and critiqued chapters from these books and freely discussed possible extensions and implications. It was a terrific way to learn about the latest opportunities for pushing research frontiers.
The title of this post is a variation on the title of Ted Anderson’s 1971 classic book, The Statistical Analysis of Time Series, which was the first textbook I used to study time series analysis. My Ph.D. thesis was on policy rules, and in particular on the “joint estimation and control” problem. The problem was to find an optimal economic policy in a model in which one does not know the parameters and therefore has to estimate and control the system simultaneously.
An unresolved issue was how much experimentation should be incorporated into the policy. Could one find an optimal way to move the policy instrument around in order to learn more about the model or its parameters? While costly in the short run, the improved information would pay off in the future with better settings for the instruments.
At that time everyone was approaching this problem through a dynamic programming and optimal control approach as in Aoki’s book and, in a special case, in Ed Prescott’s Ph.D. thesis at Carnegie Mellon, which appeared in Econometrica in 1972. This approach was very difficult for any reasonably realistic application because of the curse of dimensionality of the backward induction method of dynamic programming.
Confronting these difficulties at Stanford in the early 1970s, we looked for a different approach to the problem. The approach was much closer to the methods of classical mathematical statistics as in Ted’s book. The idea was to start by proposing a rule for setting the instruments, perhaps using some heuristic method, and then examine how the rule performed using standard statistical criteria adapted to the control rather than estimation problem. The rules were contingency plans that described how the instruments would evolve over time. The criteria for evaluating the rules stressed consistency and speed of convergence.
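To see the flavor of this approach, here is a minimal sketch, entirely my own illustration rather than the actual models in that research. It assumes a simple scalar linear model with one unknown slope parameter: the policymaker proposes a certainty-equivalence rule (set the instrument as if the current least-squares estimate were the truth), applies it period after period, and then checks the statistical criteria mentioned above, namely whether the estimate converges and how the rule performs along the way.

```python
import random

random.seed(0)

# Illustrative model (an assumption for this sketch, not from the post):
#   y_t = beta * u_t + e_t,  with beta unknown and e_t ~ N(0, sigma^2).
# The policymaker wants to steer y_t toward a target y_star while
# estimating beta from the data the policy itself generates.
beta_true, sigma, y_star = 2.0, 0.5, 1.0

# Proposed rule (heuristic, no special experimentation): u_t = y_star / beta_hat,
# where beta_hat is the least-squares estimate of beta through the origin.
beta_hat = 1.0          # initial guess for the unknown slope
sxy = sxx = 0.0         # running least-squares sums
losses = []
for t in range(5000):
    u = y_star / beta_hat               # apply the rule with current estimate
    e = random.gauss(0.0, sigma)
    y = beta_true * u + e               # realized outcome under the rule
    losses.append((y - y_star) ** 2)    # control performance this period
    sxy += u * y                        # update least-squares sums
    sxx += u * u
    beta_hat = sxy / sxx                # re-estimate, then control again

print(f"final estimate of beta: {beta_hat:.3f} (true value {beta_true})")
print(f"average squared loss:   {sum(losses) / len(losses):.3f}")
```

Running the loop, the estimate settles near the true slope even though the instrument is never deliberately perturbed to generate information: the rule's own variation in u supplies the data. Proving when, and how fast, such convergence occurs is exactly the kind of question the convergence theorems addressed.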
In my Ph.D. thesis as well as in a series of later papers with Ted, various convergence theorems were stated and proved. Later, statisticians T.L. Lai and Herbert Robbins proved more theorems that established desirable speeds of convergence in more general nonlinear settings.
The main finding from that research was that following a simple rule, without special experimentation features, was a good approximation to the fully optimal policy. That made future work much simpler because it eliminated a great deal of complexity. This same basic approach has been used to develop recommendations for the actual conduct of monetary policy in the years since then, especially in the 1980s and 1990s. The interaction between actual policy decisions and the recommendations of such policy rules provides ways to learn about how policy works. In reality, of course, gaps develop between actual monetary policy and recommended policy rules based on economic and statistical theory. By focusing on these gaps one can learn more about what works and what doesn’t, in the spirit of the approach to policy evaluation developed in that research with Ted at Stanford in the early 1970s.