Should economic models be based just on the choice patterns of individual economic agents? Or can macro-variables or macro-parameters like the inflation rate and the economic growth rate be taken as underived primitives? This is a classic debate in economics and the social sciences, and its importance has not decreased over time. In our BJPS article, we explain why we think that a new and especially compelling way of resolving this debate is through appealing to methodological considerations of complexity.
To understand this, it is best to begin by noting that the most popular attempts to resolve this debate are currently metaphysically driven (see, for example, Hoover; Epstein). At the crux of these attempts is the idea that macro-level social entities or events (like the inflation rate or the Great Recession) cannot be properly metaphysically accounted for by individual-level facts. Unless we appeal to the collectivist or emergentist nature of these social phenomena (in one way or another), their ontology will not be correctly specified. Because of this, it may be thought that economic models that try to reduce macroeconomic facts to facts about individuals should be seen as suspect: we cannot hope to get at the true nature of the inflation rate (say) by reducing it to the choices of individuals.
However, while these sorts of metaphysically driven interventions in the micro-foundations debate are undoubtedly sophisticated and deserve significant further discussion, by themselves they cannot fully resolve the methodological question of whether economic models need to be micro-founded. Given that economic reality is complex, an account of that reality can be metaphysically compelling without also being compelling for social-scientific practice. To be useful in practice, economic models generally need to involve idealizations and abstractions. Therefore, despite the fact that a given economic phenomenon may have some metaphysical feature, an economic model may be more practically compelling if it assumes that the phenomenon does not have this feature (see, for example, Cartwright; Wimsatt; Morgan; Weisberg; Parker).
Because of this, metaphysical or social ontological considerations, no matter how sophisticated, are unlikely to bring the debate on micro-foundations to an end. Instead, we propose a reconceptualization of this debate so that the methodological considerations become central. In particular, our argument turns on the methodological principle that overly complex models are to be avoided unless the increase in complexity is sufficiently compensated by a better fit to the data.
This reconceptualization begins by noting that the question of whether economic models require micro-foundations should really be seen to consist of two graded questions: (a) When do economic models require micro-foundations? (b) How deep do these micro-foundations need to go? Question (a) reflects the fact that there is no reason to think that either all economic models need micro-foundations or none do. Question (b) reflects the fact that it is not obvious what the micro-foundationalist ‘bedrock’ is on which all economic models are meant to rest. Are firms genuine economic agents, or do these need to be reduced further too? What about government entities like central banks? What about households? Again, there need not be one answer that holds for all cases. (One of us has investigated this issue further in the context of the theory of the firm; see, for example, Schulz.)
Making explicit that the micro-foundations debate is gradualist in nature is important, as it paves the way for the next (and central) step in the reconceptualization of this debate. This step concerns the fact that the extent to which an economic model is micro-founded will tend to correlate with the complexity of that model. Adding micro-foundations means endogenizing aspects of the model: we need to add choice-theoretic derivations for those aspects that were not part of the choice-theoretic foundation to begin with. Whereas moving away from micro-foundations generally means summarizing or otherwise aggregating individual agential effects, adding micro-foundations requires spelling out the individual agential effects that are connected with the relevant social phenomena. This is bound to increase the number of parameters and variables the model relies on: if a variable or parameter really aggregates individual agential effects, there have to be several such effects to be aggregated.
This matters because, ceteris paribus, less complex models are methodologically preferable to more complex ones (see, for example, Forster and Sober; Hitchcock and Sober; Rochefort-Maranda). Other things being equal, models with fewer parameters or variables are methodologically superior to ones with more: model simplicity is a methodological virtue. The main reason for this is that more complex models (that is, models with more parameters or variables) have more degrees of freedom in fitting a given set of data. Assuming that the data have been generated by a probabilistic process (a nearly universally plausible assumption), this makes it more likely that the model ‘overfits’ the data. More complex models are in danger of missing the forest for the trees: they fail to filter out the noise in the data-generating process, and thus fail to accurately represent the ‘signal’ of that process.
While an increased chance of mistaking signal for noise is problematic in and of itself, it brings further difficulties in its wake. Most importantly, it implies that the relevant models are also less likely to accurately predict unknown (future) data. In order to make concrete, empirically testable predictions, abstract economic models generally need to be fitted to a given data set. Given that more complex models are more likely to confuse signal and noise in such a data set, more complex models provide less reliable guidance about future data, and are thus more likely to make wrong predictions.
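The overfitting worry can be made concrete with a toy simulation (our hypothetical illustration, not an example from the article): data are generated by a simple linear process plus noise, then fitted once with a two-parameter linear model and once with a ten-parameter polynomial. The complex model fits the observed sample better, but predicts new draws from the same process worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data-generating process: a linear "signal" plus random noise.
def signal(x):
    return 2.0 * x + 1.0

x_train = np.linspace(0.0, 1.0, 10)
x_test = np.linspace(0.0, 1.0, 50)
y_train = signal(x_train) + rng.normal(0.0, 0.3, x_train.size)
y_test = signal(x_test) + rng.normal(0.0, 0.3, x_test.size)

def errors(degree):
    """Fit a polynomial of the given degree to the training data and
    return (in-sample MSE, out-of-sample MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

simple_in, simple_out = errors(1)    # 2 parameters
complex_in, complex_out = errors(9)  # 10 parameters: enough to trace the noise

# The complex model wins on fit to the observed data...
assert complex_in <= simple_in
# ...but loses on predicting unseen data drawn from the same process.
assert complex_out > simple_out
```

With ten parameters and ten data points, the complex model passes (nearly) through every training observation, so its in-sample error is essentially zero; but in chasing the noise it oscillates between the data points and misses the underlying linear signal. This is exactly the forest-for-the-trees failure described above.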
In short, because they risk misrepresenting the core features of the data-generating process and because they tend to be predictively weaker, more complex models are, ceteris paribus, methodologically suspect relative to less complex ones. The ceteris paribus qualification here is important: it is (of course) not the case that a simpler model is always to be preferred to a more complex one. In particular, the model’s goodness of fit to the data also matters: we want our models to fit any given data set as well as possible. The key point is thus not that we should always opt for the simplest available model. Rather, the upshot is that, from a methodological point of view, our task as researchers is to balance a model’s goodness of fit against its complexity. While the nature of this balancing is a point of contention in the literature, and might well depend on the details of the case, what matters here is just that this trade-off between goodness of fit and model complexity is widely acknowledged (see, for example, Levins; Orzack and Sober; Weisberg).
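One standard way of making this balancing explicit (again our illustration; the argument does not depend on any particular criterion) is an information criterion such as AIC, which scores a model by its fit plus a penalty of two per fitted parameter. A minimal sketch on assumed toy data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30
x = np.linspace(0.0, 1.0, n)
y = 3.0 * x + 1.0 + rng.normal(0.0, 0.2, n)  # assumed toy data: linear signal + noise

def fit_scores(degree):
    """Return (RSS, AIC) for a polynomial fit of the given degree.
    AIC for Gaussian errors, up to an additive constant: n*ln(RSS/n) + 2k."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # number of fitted parameters
    return rss, n * np.log(rss / n) + 2 * k

rss_simple, aic_simple = fit_scores(1)
rss_complex, aic_complex = fit_scores(9)

# Raw goodness of fit always favors the bigger model, since the simpler
# polynomial class is nested inside the more complex one...
assert rss_complex <= rss_simple
# ...so it is the complexity penalty (2 per parameter) that lets a simpler
# model win when its fit is only marginally worse.
```

Which model wins the penalized comparison depends on whether the drop in RSS outweighs the extra penalty for the added parameters; that is precisely the fit-versus-complexity balancing described above, and different criteria (AIC, BIC, cross-validation) strike the balance differently, which is one reason the nature of the balancing remains contested.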
This is crucial, as acknowledging it brings the methodological reconceptualization of the micro-foundations debate to its conclusion. When assessing whether a given economic model must be micro-founded, we need to ask whether the extra complexity of the micro-founded model is warranted, given the fit to the data that the model can achieve. If the fit is not significantly better, we have a reason to avoid adding micro-foundations to the model. By contrast, if the fit is much better, then we do have a reason to add these foundations. (One of us has developed a related argument in an evolutionary biological context; see Schulz.)
In short, we want to argue that in order to resolve the micro-foundations debate in economics, the key question to ask is not whether macro-variables or macro-parameters are metaphysically compelling. Rather, the key question is how the complexity of a given model, whether micro-founded or not, relates to its ability to fit a given set of data.