A long-standing datum in cognitive science is that people make semantic inferences, which draw on the meaning of concepts, as well as purely syntactic inferences, which do not. That contrast is puzzling, since the representational theory of mind (RTM) assumes that all inferences are a matter of causal transitions between representational vehicles in virtue of non-semantic properties. Semantic inferences used to be picked out as those that draw on the internal structure of a concept. However, experimental work on concepts has produced a near-consensus that, for a typical lexical concept, there is no single representational structure that is always involved in thinking with that concept. At the same time, the recent conspicuous success of deep convolutional neural networks in modelling various categorisation tasks suggests that much of the information we draw on when using a concept is not conceptually or explicitly represented at all, but is instead implicit in dispositions to apply the concept on the basis of non-conceptual representations. This new landscape has many attractions; however, the old contrast between syntactic and semantic inferences seems to have been squeezed out. Can we still explain the contrast within the strictures of RTM? This paper argues that we can, not by appealing to conceptual structure, but by making a novel distinction between two types of representational processing in which concepts are involved.
Recorded on 19 July 2019 at the BSPS Annual Conference 2019, Durham University