DOES ECONOMICS NEED MICRO-FOUNDATIONS?

Nadia Ruiz and Armin W. Schulz

Should economic models be based solely on the choice patterns of individual economic agents? Or can macro-variables or macro-parameters like the inflation rate and the economic growth rate be taken as underived primitives? This is a classic debate in economics and the social sciences, and its importance has not decreased over time. In our BJPS article, we explain why we think that a new and especially compelling way of resolving this debate is to appeal to methodological considerations of model complexity.

To understand this, it is best to begin by noting that the currently most popular attempts to resolve this debate are metaphysically driven (see, for example, Hoover [2009]; Epstein [2014], [2015]). At the crux of these attempts is the idea that macro-level social entities or events (like the inflation rate or the Great Recession) cannot be properly metaphysically accounted for by individual-level facts: unless we appeal, in one way or another, to the collectivist or emergentist nature of these social phenomena, their ontology will not be correctly specified. Because of this, it may be thought that economic models that try to reduce macroeconomic facts to facts about individuals should be regarded as suspect: we cannot hope to get at the true nature of the inflation rate (say) by reducing it to the choices of individuals.

However, while these sorts of metaphysically driven interventions in the micro-foundations debate are undoubtedly sophisticated and deserve significant further discussion, by themselves they cannot fully resolve the methodological question of whether economic models need to be micro-founded. Given that economic reality is complex, the fact that an account of economic reality is metaphysically compelling need not imply that it is also methodologically compelling for the social sciences. To be useful in practice, economic models generally need to involve idealizations and abstractions. Hence, even if a given economic phenomenon has some metaphysical feature, a model of that phenomenon may be more practically compelling if it abstracts away from this feature (see, for example, Cartwright [1999]; Wimsatt [2007]; Morgan [2012]; Weisberg [2013]; Parker [2020]).

Because of this, metaphysical or social ontological considerations, no matter how sophisticated, are unlikely to bring the debate on micro-foundations to an end. Instead, we propose a reconceptualization of this debate on which methodological considerations become central. In particular, our argument turns on the methodological principle that overly complex models are to be avoided unless the increase in complexity is sufficiently compensated by a better fit to the data.

This reconceptualization begins by noting that the question of whether economic models require micro-foundations should really be seen to consist of two graded questions: (a) When do economic models require micro-foundations? (b) How deep do these micro-foundations need to go? Question (a) reflects the fact that there is no reason to think that either all economic models need micro-foundations or none do. Question (b) reflects the fact that it is not obvious what the micro-foundationalist ‘bedrock’ is on which all economic models are meant to rest. Are firms genuine economic agents, or do they need to be reduced further as well? What about government entities like central banks? What about households? Again, there need not be one answer that holds for all cases. (One of us has investigated this issue further in the context of the theory of the firm; see, for example, Schulz [2020].)

Making explicit that the micro-foundations debate is gradualist in nature is important, as it paves the way for the next—and central—step in the reconceptualization of this debate. This step concerns the fact that the extent to which an economic model is micro-founded will tend to correlate with the complexity of that model. Adding micro-foundations means endogenizing aspects of the model: we need to add choice-theoretic derivations for those aspects that were not part of the choice-theoretic foundation to begin with. Conversely, moving away from micro-foundations generally means summarizing or otherwise aggregating individual agential effects. Spelling out the individual agential effects connected with the relevant social phenomena is thus bound to increase the number of parameters and variables the model relies on: if a variable or parameter really aggregates individual agential effects, there have to be several such effects to be aggregated, and so several parameters or variables for a micro-founded model to spell out.

This matters as, ceteris paribus, less complex models are methodologically preferable to more complex ones (see, for example, Forster and Sober [1994]; Hitchcock and Sober [2004]; Rochefort-Maranda [2016]): other things being equal, models with fewer parameters or variables are methodologically superior to ones with more, so that model simplicity is a methodological virtue. The main reason for this is that more complex models (that is, models with more parameters or variables) have more degrees of freedom when being fitted to a given set of data. Assuming that the data have been generated by a probabilistic process—a nearly universally plausible assumption—this makes it more likely that the model ‘overfits’ the data. More complex models are in danger of missing the forest for the trees: they fail to exclude the noise in the data generating process, and thus fail to arrive at an accurate representation of the ‘signal’ of that process.

While an increased chance of mistaking signal and noise is problematic in and of itself, it brings further difficulties in its wake. Most importantly, it implies that the relevant models are also less likely to accurately predict unknown (future) data. In order to make concrete, empirically testable predictions, abstract economic models generally need to be fitted to a given data set. Given that more complex models are more likely to confuse signal and noise in such a data set, more complex models provide less reliable guidance for the prediction of future data—and are thus more likely to make wrong predictions.
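
To make the overfitting worry concrete, here is a minimal simulation sketch in Python (our illustration, not part of the article; the linear data generating process and the polynomial model family are assumptions chosen purely for vividness). More complex models fit the given data better, but because they absorb noise, they typically predict new data from the same process worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data generating process: a simple linear 'signal' plus noise.
n = 30
x = np.linspace(0, 1, n)
y = 2.0 * x + rng.normal(scale=0.3, size=n)

# A second sample from the same process, standing in for 'future' data.
y_new = 2.0 * x + rng.normal(scale=0.3, size=n)

# Polynomial models of increasing complexity (degree tracks parameter count).
for degree in (1, 4, 10):
    coeffs = np.polyfit(x, y, deg=degree)                     # fit to the data
    fit_err = np.mean((np.polyval(coeffs, x) - y) ** 2)       # in-sample error
    pred_err = np.mean((np.polyval(coeffs, x) - y_new) ** 2)  # out-of-sample error
    print(f"degree {degree:2d}: fit error {fit_err:.3f}, prediction error {pred_err:.3f}")
```

In a typical run, the degree-10 model achieves the lowest fit error but the highest prediction error: it has modelled the noise, not the signal.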

In short, more complex models are methodologically suspect relative to less complex ones—ceteris paribus—both because they risk misrepresenting the core features of the data generating process and because they risk failing to be predictively compelling. The ceteris paribus qualification here is important: it is (of course) not the case that a simpler model is always to be preferred to a more complex one. In particular, the model’s goodness of fit to the data also matters: we want our models to fit any given data set as well as possible. The key point here is thus not that we should always opt for the simplest available model. Rather, the upshot is that, from a methodological point of view, our task as researchers is to balance a model’s goodness of fit with its complexity. While the nature of this balancing is a point of contention in the literature, and might well depend on the details of the case, what matters here is just that this trade-off between goodness of fit and model complexity is widely acknowledged (see, for example, Levins [1966]; Orzack and Sober [1993]; Weisberg [2006]).
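
One familiar way of making this balancing precise is the Akaike Information Criterion (our article does not commit to any particular criterion, so take this merely as one well-known example). The criterion scores a model by its badness of fit plus a penalty of two per parameter; for least-squares models with Gaussian errors, it takes a particularly simple form:

```python
import numpy as np

def aic_gaussian(residuals: np.ndarray, k: int) -> float:
    """Akaike Information Criterion for a least-squares fit with Gaussian
    errors: n * ln(RSS / n) + 2 * k, up to an additive constant shared by
    all models fitted to the same data. k counts the fitted parameters;
    lower scores are better, and 2 * k is the complexity penalty."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * k
```

On this scoring, a more complex model is preferred only if its improvement in fit is large enough to outweigh the penalty for its extra parameters, which is exactly the balancing just described.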

This is crucial, as acknowledging it brings the methodological reconceptualization of the micro-foundations debate to its conclusion. When assessing whether a given economic model must be micro-founded, we need to ask whether the extra complexity of the micro-founded model is warranted, given the fit to the data that the model can achieve. If this fit is not significantly better, we have a reason to avoid adding micro-foundations to the model. By contrast, if the fit is much better, then we do have a reason to add these foundations. (One of us has developed a related argument in an evolutionary biological context; see Schulz [2018].)
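
To see how this test might play out, consider a toy case (a purely hypothetical setup of our own devising, reusing the aic_gaussian helper from the sketch above). Suppose consumption data are generated by two groups of agents that in fact share a single propensity to consume. A micro-founded model with a separate parameter per group then always fits at least as well as the aggregate model, but typically not enough better to justify its extra complexity, so the criterion favours the aggregate model; had the groups’ propensities genuinely differed, the verdict would typically flip in favour of adding the micro-foundations:

```python
import numpy as np

# Hypothetical setup: does splitting an aggregate parameter into
# per-group parameters pay for its extra complexity?
rng = np.random.default_rng(1)
n = 100
income_a = rng.uniform(1, 2, n)  # incomes of agent group A
income_b = rng.uniform(1, 2, n)  # incomes of agent group B

# Both groups share one true propensity to consume (0.8), so the extra
# micro-level parameter buys nothing but noise-fitting capacity.
consumption = 0.8 * (income_a + income_b) + rng.normal(scale=0.2, size=n)

# Aggregate model: one least-squares slope on total income.
total = income_a + income_b
beta = np.sum(total * consumption) / np.sum(total ** 2)
res_macro = consumption - beta * total

# 'Micro-founded' model: a separate slope for each group.
X = np.column_stack([income_a, income_b])
betas, *_ = np.linalg.lstsq(X, consumption, rcond=None)
res_micro = consumption - X @ betas

# Lower AIC wins (aic_gaussian as defined in the previous sketch).
print("aggregate AIC:    ", aic_gaussian(res_macro, k=1))
print("micro-founded AIC:", aic_gaussian(res_micro, k=2))
```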

In short, we argue that in order to resolve the micro-foundations debate in economics, the key question to ask is not whether macro-variables or macro-parameters are metaphysically compelling. Rather, the key question is how the complexity of a given model—whether micro-founded or not—relates to its ability to fit a given set of data.

FULL ARTICLE

Ruiz, N. and Schulz, A. W. [2023]: ‘Microfoundations and Methodology: A Complexity-Based Reconceptualization of the Debate’, British Journal for the Philosophy of Science, 74, doi: 10.1086/714805

Nadia Ruiz
University of Kansas
naruiz@ku.edu

and

Armin W. Schulz
University of Kansas
awschulz@ku.edu

References

Cartwright, N. [1999]: The Dappled World: A Study of the Boundaries of Science, Cambridge: Cambridge University Press.

Epstein, B. [2014]: ‘Why Macroeconomics Does Not Supervene on Microeconomics’, Journal of Economic Methodology, 21, pp. 3–18.

Epstein, B. [2015]: The Ant Trap: Rebuilding the Foundations of the Social Sciences, Oxford: Oxford University Press.

Forster, M. and Sober, E. [1994]: ‘How to Tell When Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions’, British Journal for the Philosophy of Science, 45, pp. 1–35.

Hitchcock, C. and Sober, E. [2004]: ‘Prediction versus Accommodation and the Risk of Overfitting’, British Journal for the Philosophy of Science, 55, pp. 1–34.

Hoover, K. D. [2009]: ‘Microfoundations and the Ontology of Macroeconomics’, in D. Ross and H. Kincaid (eds), The Oxford Handbook of Philosophy of Economics, Oxford: Oxford University Press, pp. 386–409.

Levins, R. [1966]: ‘The Strategy of Model Building in Population Biology’, American Scientist, 54, pp. 421–31.

Morgan, M. [2012]: The World in the Model, Cambridge: Cambridge University Press.

Orzack, S. H. and Sober, E. [1993]: ‘A Critical Assessment of Levins’s “The Strategy of Model Building in Population Biology” (1966)’, The Quarterly Review of Biology, 68, pp. 533–46.

Parker, W. [2020]: ‘Model Evaluation: An Adequacy-for-Purpose View’, Philosophy of Science, 87, pp. 457–77.

Rochefort-Maranda, G. [2016]: ‘Simplicity and Model Selection’, European Journal for Philosophy of Science, 6, pp. 261–79.

Schulz, A. [2018]: ‘By Genes Alone: A Model Selectionist Argument for Genetical Explanations of Cooperation in Non-human Organisms’, Biology and Philosophy, 32, pp. 951–67.

Schulz, A. [2020]: Structure, Evidence, and Heuristic: Evolutionary Biology, Economics, and the Philosophy of Their Relationship, New York: Routledge.

Weisberg, M. [2006]: ‘Forty Years of “The Strategy”: Levins on Model Building and Idealization’, Biology and Philosophy, 21, pp. 623–45.

Weisberg, M. [2013]: Simulation and Similarity: Using Models to Understand the World, Oxford: Oxford University Press.

Wimsatt, W. [2007]: Re-engineering Philosophy for Limited Beings, Cambridge, MA: Harvard University Press.

© The Author (2021)
