Discussion Paper
No. 2007-49 | November 26, 2007
Masanao Aoki and Hiroshi Yoshikawa
Non-Self-Averaging in Macroeconomic Models: A Criticism of Modern Micro-founded Macroeconomics

Abstract

Using a simple stochastic growth model, this paper demonstrates that the coefficient of variation of aggregate output, or GDP, does not necessarily go to zero even as the number of sectors or economic agents goes to infinity. This phenomenon, known as non-self-averaging, implies that dispersion can remain significant even when the number of economic agents is large, and therefore that we cannot legitimately focus on the means of aggregate variables alone. This, in turn, means that the standard microeconomic foundations based on the representative agent are of little value, because their value rests on the presumption that they deliver accurate dynamics of the means of aggregate variables. The paper also shows that non-self-averaging emerges in some representative urn models, which suggests that non-self-averaging is not pathological but quite generic. Thus, contrary to the mainstream view, micro-founded macroeconomics such as the dynamic general equilibrium model does not provide solid micro foundations.
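To illustrate the distinction the abstract draws, a minimal numerical sketch follows. It is not taken from the paper: it assumes a standard two-color Polya urn as a stand-in for the "representative urn models" mentioned above, and compares it with a self-averaging benchmark (a sum of i.i.d. draws). All function names and parameter values are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's model): the coefficient of
# variation (CV = std/mean) of an i.i.d. sum shrinks like 1/sqrt(n), while
# the CV of the red-ball count in a Polya urn does not vanish as n grows.
import numpy as np

rng = np.random.default_rng(0)

def cv_iid_sum(n, trials=2000):
    # Self-averaging benchmark: sum of n i.i.d. Bernoulli(0.5) draws.
    totals = rng.binomial(n, 0.5, size=trials)
    return totals.std() / totals.mean()

def cv_polya_urn(n, trials=2000):
    # Non-self-averaging case: start with one red and one white ball; each
    # drawn ball is returned with one extra ball of the same color.  The red
    # share converges to a random limit, so the CV stays bounded away from 0.
    reds = np.zeros(trials)
    for t in range(trials):
        red, white = 1, 1
        for _ in range(n):
            if rng.random() < red / (red + white):
                red += 1
            else:
                white += 1
        reds[t] = red
    return reds.std() / reds.mean()

for n in (10, 100, 1000):
    print(f"n={n:5d}  CV(iid sum)={cv_iid_sum(n):.3f}  "
          f"CV(Polya urn)={cv_polya_urn(n):.3f}")

Under these assumptions the i.i.d. column falls toward zero as n grows, while the urn column settles near a positive constant, which is the sense in which averaging over many units need not eliminate dispersion in the aggregate.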

JEL Classification:

C02, E10, E32, O40

Cite As

Masanao Aoki and Hiroshi Yoshikawa (2007). Non-Self-Averaging in Macroeconomic Models: A Criticism of Modern Micro-founded Macroeconomics. Economics Discussion Papers, No 2007-49, Kiel Institute for the World Economy. http://www.economics-ejournal.org/economics/discussionpapers/2007-49


Comments and Questions



John Seater - Critique of the paper
December 17, 2007 - 03:59
This paper may have something useful to say, but as it stands it fails to make a convincing argument to that effect. The paper asserts that most modern macro theory is misguided because it is based on misspecified micro foundations. I see two problems with the paper’s argument.

(1) The paper uses a model that has lots of math but little or no economics. Certainly it has much less microeconomic content than most of the macroeconomic theory it criticizes. As a result, its argument is unconvincing at best and merely routine tedium at worst. The fact that it has less micro foundation than the body of work it criticizes, along with a lot of preaching about how superior it is, makes it downright irritating to read. The paper starts by explaining the state of current macro theory. (It does a rather poor job of that, by the way, as I mention below, misstating some important elements of the literature and totally ignoring others.) It then says that if we drop the “crucial assumption” that all the micro agents face the same stochastic process and replace that assumption with another more to the authors’ liking, then we get results that are not favorable to macro theory. First, as a matter of logic, dropping one assumption and replacing it with another does not show a flaw in anything. At most, it shows that the results are sensitive to the assumptions made. Mirabile dictu. Second, and more important, the authors’ preferred stochastic process is given no micro foundation but rather is imposed out of thin air.

The stochastic process they choose supposedly describes innovations. How does that process work? Well, we are told that innovations come in two types, one being essentially a variety-expansion type and the other being essentially a quality-ladder type. Both are exogenous processes, not driven in any way by economic choice and not dependent in any way on anything endogenous. The authors say the model contains an endogenous element, which is that the probability of a new invention depends positively on the number of inventions already made, but that is just a mechanical, exogenous, self-enhancing structure that no more derives from economic decisions than does any other part of the innovation process. It is as arbitrary as the rest of the model. It also directly contradicts the line of argument underlying the semi-endogenous growth models of Jones, in which invention becomes harder as knowledge accumulates. Now, I happen not to be a fan of semi-endogenous growth models, for both theoretical and empirical reasons, but a lot of people don’t agree with me. The authors need to explain why we should abandon a popular line of thought in favor of theirs. Instead, they simply ignore the whole literature.

Calling anything endogenous in this model seems odd, to say the least, when everything important is driven by exogenous forces. Indeed, the entire model seems devoid of any economic content whatsoever. There is no rational choice, no R&D spending decision, no profit maximization, no entry into R&D or production. All we have is a mechanical exogenous stochastic process driving everything. What is endogenous here? Where is the microeconomic foundation for anything? Where is the economics? Endogenous growth theory at least has *some* micro foundation. The authors’ characterization of endogenous growth theory misstates the structure of much of the first-generation models.
Contrary to the authors’ remarkable assertion, in the basic quality ladder model (see Chapter 7 in Barro and Sala-i-Martin), the probability of success is not exogenously given and is not the same for every firm (at least not a priori), but rather depends on how much each firm spends on R&D. The authors seem unaware of the second-generation endogenous growth literature, due mostly to Peretto and Howitt, which has far more micro structure than the first-generation models and in which probabilities still depend on firm decisions. Also, in both the first- and second-generation models, the important actors are not representative agents. There is a representative household in all these models, but the important decisions are made by monopolistically competitive firms, each doing something different from the others. In both the first- and second-generation models, at one point a symmetry assumption is imposed that the monopolistic competitors are all alike. That gives the appearance of introducing representative agents, but it actually doesn’t do that. The reason is that the symmetry assumption is imposed *after* the firms’ choice problem is solved, not before. That makes a profound difference in the model’s behavior. Furthermore, in the second-generation models, there is entry of new firms, which means new agents appear. The increase in the number of agents is central to the dynamics of the model (it eliminates the scale effect, for example), and it cannot happen in a purely representative agent framework. It also means that the second-generation models, even more than the first-generation models, avoid the pitfalls of the representative agent approach. The authors mention none of this and seem unaware of it. Now, everyone knows that the symmetry assumption is a simplification. Does it make any substantive difference? That is an open question that the authors do not address. The next point raises an important issue for providing a useful answer to that question.

(2) There is an important continuity argument that the authors need to address. We know that under some assumptions, standard macro theory is “self-averaging,” to use the authors’ terminology. The authors themselves say so. The authors’ exercise consists of showing that under other assumptions, self-averaging disappears. My immediate reaction is, “Is the difference significant empirically?” My reaction is based on a kind of continuity argument. Unless the economy has some sort of strange discontinuity, a slight deviation from the standard macro assumptions should lead to only a slight deviation of behavior from the standard results. Is the deviation emphasized by the authors slight or large? They don’t explore that issue. Until they do, they simply don’t have much to say.

Everybody knows that macro models rely on simplifications. (So do micro models. Micro people are just less honest about it.) So the models are approximations. Are they good approximations? That’s the $64,000 question. The authors have done nothing to help answer that question, even if all the problems with their argument noted in part (1) above are eliminated. When all is said and done, I find this paper of little value as it now stands. I did not know about the “self-averaging” concept, so that is a contribution. However, it doesn’t seem very valuable because it apparently doesn’t tell us anything at all about the magnitude of the approximation error that macro theory makes.
We all know macro theory makes such an error, for lots of reasons. That’s old news. How big is the error? That’s what we need to know. I rate the paper as a 1. With a lot of work, maybe it could be brought up to a 2. I doubt it could earn more than that from me because I don’t see how it can tell us anything more than what we all have known for decades, which is that macro models make an approximation error. I see no hope for any insight into the magnitude of that error or for any help in reducing it. Perhaps the authors can prove me wrong with a suitable revision that addresses the points above. If so, they will have made an interesting contribution.

Masanao Aoki and Hiroshi Yoshikawa - Rejoinder to Prof. Seater's Comment
January 02, 2008 - 11:07
see attached file

Günther Rehme - Report on DP 2007-49
January 01, 2008 - 20:35
This paper applies the concept of "non-self-averaging" from statistical mechanics to address the question of whether economic models based on the assumption of a "representative agent" yield predictions that are adequately captured by the sample means of realizations of the economic variables of interest.

Masanao Aoki and Hiroshi Yoshikawa - Partial Response
January 04, 2008 - 09:44
see attached file