Virtual Issue

Philosophy of Psychology and Cognitive Science

Daniel Weiskopf

Psychology emerged within the last two centuries from a long tradition of philosophical speculation about the mind, and it has to a large degree remained entangled with that tradition. Psychological theorizing overlaps with philosophical discourse at many points, and has also produced a host of concepts, methods, and models that shed new light on some of philosophy’s old problems. This combination has made it one of the most fertile sources of material for philosophers of science. The emergence of cognitive science as an organizing conception for the interfield study of the mind is a testament to the reciprocal influence between philosophy and scientific theorizing. As increasing attention has been paid in recent years to the analysis of the practices of particular sciences, the philosophy of psychology, neuroscience, and cognitive science has flourished.

In this virtual issue, we have assembled a set of papers from the Journal that highlight its contributions to the philosophy of psychology and the wider field of cognitive science. These papers are organized under topical headings: within each section, an older article has been paired with one or two more recent contributions. The race to keep up with the most current trends can lead to an unhealthy neglect of fruitful approaches that were explored earlier and then unjustly forgotten. We hope that juxtaposing these historical pieces with their descendants will reintroduce them into the conversation by highlighting the ways in which they are continuous with contemporary work, and also complementary to it.


Cybernetics and Control

It is unsurprising to find that some of the earliest work published in the Journal deals with cybernetic theory. Norbert Wiener’s Cybernetics was published in 1948 and the 1950s saw the growth of research within the broad field of mathematical control theory. A number of papers on cybernetic theory and its application to human intelligence were published in BJPS during its early years. These included not only D. M. MacKay’s ‘Mindlike Behaviour in Artefacts’ ([1951]), which appears in this issue, but also J. O. Wisdom’s ‘The Hypothesis of Cybernetics’ ([1951]) and R. Thomson and W. Sluckin’s ‘Cybernetics and Mental Functioning’ ([1953]). MacKay’s paper is of particular interest because it directly takes up the question of whether an artificial cybernetic creature could display humanlike thought—an analogous question to the one Alan Turing had addressed in his ([1950]) paper in Mind, but one that receives a strikingly different treatment within a control-theoretic framework.

Until its recent renaissance, cybernetic theory remained on the margins of much work in cognitive science. The revived influence of cybernetic models can be seen in work such as Rick Grush’s ‘Emulation Theory of Representation’ ([2004]) and Susan Hurley’s ‘Shared Circuits Model’ ([2008]), as well as some forms of dynamical systems modelling that use ideas of ‘circular causality’ initially developed in a cybernetic context. C. A. Hooker, H. B. Penfield, and R. J. Evans’s paper, ‘Control, Connectionism and Cognition: Towards a New Regulatory Paradigm’ ([1992]) is a somewhat earlier attempt to reframe some basic cybernetic ideas about the control of complex systems using tools borrowed from connectionist theory, which, as the authors note, generally lacks a robust notion of what it means for a system to be self-controlling.


Levels of Explanation

The term ‘level’ has some claim to being one of the most frequently used but least well-theorized terms in metaphysics and philosophy of science. The notion that natural phenomena are organized into levels has perennial appeal, though how to characterize levels themselves has only recently become an actively contested question. In the philosophy of biology and neuroscience, much effort has been spent on unpacking the concept of a level within a framework of mechanistic explanation (by Carl Craver ([2007]) and Craver and William Bechtel ([2007]), for example). There are also attempts to give a more general account of hierarchical levels of organization, notably by William Wimsatt ([1994]) and Herbert Simon ([1962]). James K. Feibleman’s paper ‘Theory of Integrative Levels’ ([1954]) presents an early sketch of such a hierarchical conception, connects the notion of a level with topics such as emergence and the proper scope of explanations, and looks forward optimistically to the development of a ‘meta-empirical’ field of study that elucidates the principles of interlevel interactions.

In cognitive science, the most frequently cited notion of a level is Marr’s, which corresponds not to a type of mereological or mechanistic hierarchy, but rather to a type of analysis: computational, algorithmic/representational, and physical/implementational. Marr’s levels provide something like a default lens through which cognitive modelling is often viewed. However, Bradley Franks argues in ‘On Explanation in the Cognitive Sciences: Competence, Idealization, and the Failure of the Classical Cascade’ ([1995]) that the Marrian model often fails to be satisfied in practice, owing to a tension between the need for levels of description to align correctly with one another and the need to apply idealizations to cognitive capacities in order to render them explanatorily tractable. His paper is not merely a critique of the dominant framework but also makes contact with recent work on idealization in explanation, much of which also points towards the need for a more nuanced conception of interfield or interlevel relations.


Consciousness

The blossoming profusion of philosophical and psychological studies of consciousness in the last forty years at times threatens to obscure its roots; but, peering back through the thickets, we can perceive many early insights that were later independently rediscovered. One example, reprinted here, is George Watson’s ‘Apparent Motion and the Mind-Body Problem’ ([1951]), in which he uses the phi phenomenon of illusory motion to argue against the psychophysical identity theory, on the grounds that there is no obvious way to reconcile the perceptual order of events with the neurophysiological one. It is notable that, forty years later, Daniel Dennett also appealed to the phi phenomenon as evidence for his multiple drafts model of consciousness. Watson’s sceptical conclusions about the existence of any simple mapping between conscious states and neural states are a distant antecedent to Dennett’s own views along these lines.

Kathleen Wilkes is undoubtedly known to many for her groundbreaking work on personal identity. Her Real People ([1988]) shifted the terms of the personal identity debate by bringing carefully organized empirical case studies to bear on often-unconstrained imaginative scenarios. Several of her early papers (for example, Wilkes [1980]; [1981]) were published in the Journal, but here we have reproduced her ([1984]) paper ‘Is Consciousness Important?’ In this piece, she catalogues various senses in which phenomena might be conscious and makes what would now be regarded as an argument that consciousness as such is not a ‘natural kind’, and will not play an interesting role in future psychological theorizing—in her terms, it is ‘a prima facie unpromising phenomenon for systematic investigation’.

Nicholas Shea and Tim Bayne share Wilkes’s worries as applied to the everyday tests that we use to ascertain the presence of consciousness. In ‘The Vegetative State and the Science of Consciousness’ ([2010]), they consider clinical cases involving patients in a persistent vegetative state and argue that, in this frustratingly liminal situation, our commonplace methods of ascribing consciousness are too conservative to be useful. These methods withhold consciousness in cases where neurophysiological studies suggest it may well be present. In the spirit of a scientifically revisionary response to Wilkes’s scepticism, they suggest that our standard taxonomic practices should be applied to refine the concept of consciousness so that it captures something closer to a scientifically well-defined cluster of properties.


Innateness

Notions of innateness have a long history in philosophy, biology, and everyday thought. With the cognitive revolution came a spate of strong nativist hypotheses covering domains such as syntax, concepts, object perception, and theory of mind. Philosophers have wrestled with what it means to call these various phenomena ‘innate’, and with the larger question of whether the notion of innateness is too confused to be useful. Nicholas Rescher’s ‘A New Look at the Problem of Innate Ideas’ ([1966]) places the debate in its historical context and proposes that the core of innateness is the developmental invariance of a trait over different environmental conditions. Interestingly, a similar invariantist proposal was made more formally by J. H. Woodger in his BJPS paper (not included here) ‘What Do We Mean by “Inborn”?’ ([1953]). Invariantism continues to have adherents today, such as Ron Mallon and Jonathan Weinberg ([2006]).

The varying fortunes of nativist doctrines are illustrated by the other two papers in this section. In ‘Mad Dog Nativism’ ([1998]), Fiona Cowie calls into doubt Jerry Fodor’s famous claim that (almost) all (lexical) concepts are innate, where innateness is understood in terms of a structure being ‘triggered’ by the environment, rather than having some more complex developmental history, for example, being learned. Attempts to salvage such nativist doctrines by appealing to naturalistic semantic theories, she suggests, will prove untenable. On the other side of the coin, in his ‘Nature and Nurture in Cognition’ ([2002]), Muhammad Ali Khalidi gives a detailed account of how the triggering notion of innateness might be rehabilitated by using the notion of informational content, along the way providing a critique of Cowie’s own arguments against this possibility.


Modularity

Modularity is a relative newcomer to the psychological scene, though it has extensive affinities with historical notions concerning mental faculties. Here, as in the case of nativism, theoretical work divides into attempts to give a classification of various concepts of modularity, and attempts to determine the extent to which cognitive systems are in fact modular. A founding doctrine of evolutionary psychology is the claim that the mind is massively modular: it is composed entirely of a host of special-purpose computational devices. Richard Samuels, in his ‘Evolutionary Psychology and the Massive Modularity Hypothesis’ ([1998]), argues that all of the evidence presented by evolutionary psychologists is compatible with an alternative picture of cognitive architecture on which the mind contains many specialized databases, but not distinct processing systems. He proposes a metaphorical shift from thinking of cognition as a computing cluster to thinking of it as more akin to a library.

Jack Lyons’s ‘Carving the Mind at Its (Not Necessarily Modular) Joints’ ([2001]) revisits a fundamental question underlying debates over modularity and mental structure, namely, what criteria we ought to use to divide cognition into separate subsystems. Modularity is one form that cognitive systems can take, but there are other possible nonmodular forms as well. Lyons aims to develop a general conception of systems that will both enable us to state the modular hypothesis with more precision and also develop moderate alternatives to modular theories of mind. Both Samuels and Lyons are opening up theoretical room for hypotheses about cognitive structure that go beyond the terms of the modularity debate as it was originally conceived.


Further Directions

In the past decade, the Journal has published an increasing number of papers in philosophy of psychology, neuroscience, and the allied cognitive sciences. This is in line with the growing importance of these fields within philosophy of science and the discipline of philosophy as a whole. Future retrospectives will have an even richer trove of papers to mine in mapping the ways that BJPS authors have shaped the discussion of philosophical issues arising within the sciences of the mind and brain.

Find the virtual issue here.