Jobs for the Theory of Semantic Information
In contrast to Claude Shannon’s (1948) theory of communication, which defines information in terms of the probabilities of receiving particular messages, the job of a theory of semantic information is to provide an account of the conditions under which tokens of information are true, accurate, or verisimilar. A correspondence theory of semantic information should therefore elucidate these satisfaction conditions in terms of a correspondence between information and what that information is about. The troubles begin.
First, suppose we understand correspondence in terms of resemblance. Didn’t Nelson Goodman (1972) show that anything can be similar to anything else to an arbitrarily large degree? If Goodman is right, the correspondence account is doomed. Second, even if we can avoid trivializing resemblance, what is supposed to be similar to what? What are the vehicles of semantic information, and how could they be similar to what they are about?
To solve these problems while remaining as general as possible, I rely on two guiding ideas. The first is that the resemblance in question is structural. This reframes the problem of defining correspondence: one must give an account of the structures that stand in the correspondence relation. The second guiding idea is that theories of information flow are also theories of correspondence.
Building Informational Structures
Finding the building blocks of informational structures is relatively easy. Dennis Gabor (1946), later awarded the Nobel Prize for his work on holography, defined the ‘logon’ as a unit of information. Any physical vehicle has a number of degrees of freedom: it can vary in a certain number of ways. Count those degrees of freedom and you get the vehicle’s logon content. Interestingly, logons are what we are interested in when we buy computer storage media. It is not Shannon information we are after but the number of potential degrees of freedom: we expect new media to be empty, and an empty medium carries no Shannon information at all.
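To make the counting concrete, here is a toy sketch of my own (an illustration, not Gabor’s formalism), in which a medium’s structural capacity is just its number of independently variable cells, regardless of what, if anything, is stored in them:

```python
# Toy illustration (not Gabor's formalism): structural capacity counted as
# degrees of freedom, independent of what the medium currently stores.

def logon_count(cells: int) -> int:
    """Each independently variable cell is one degree of freedom (one logon)."""
    return cells

def potential_states(cells: int, states_per_cell: int) -> int:
    """Number of distinct configurations the vehicle could take."""
    return states_per_cell ** cells

# A blank 1 KiB medium has 8192 binary degrees of freedom, hence 2 ** 8192
# potential configurations, while carrying no Shannon information at all.
print(logon_count(8192))
```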
But how do these building blocks come together to form structures? Here the second guiding idea comes in handy. In their important book Information Flow (1997), Jon Barwise and Jerry Seligman provide a logical theory of information flow in distributed networks. They start from basics, asking when information flows at all. The first step is to understand information vehicles as tokens classified into types.
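A classification, in their sense, is simply a set of tokens, a set of types, and a relation recording which tokens are of which types. Here is a minimal encoding; the example tokens and types are illustrative names of my own choosing:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Classification:
    """A Barwise-Seligman classification: tokens, types, and a relation
    recording which tokens are classified as being of which types."""
    tokens: frozenset
    types: frozenset
    classifies: frozenset  # pairs (token, type)

    def holds(self, token, typ) -> bool:
        return (token, typ) in self.classifies

# Illustrative example: photographs classified by what they depict.
photos = Classification(
    tokens=frozenset({"photo_1", "photo_2"}),
    types=frozenset({"shows_column", "shows_castle"}),
    classifies=frozenset({("photo_1", "shows_column"),
                          ("photo_2", "shows_castle")}),
)
```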
These classifications can be as complex as you wish, which provides a foundation for their analysis of the flow of information in terms of a mapping between classifications. This mapping is dubbed ‘infomorphism’, and as long as it obtains, one can say that information flows from one classification to another. This same mapping can be interpreted as a correspondence that constitutes semantic contents.
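For concreteness, Barwise and Seligman’s defining condition can be rendered as an executable check over the toy Classification above. An infomorphism from A to B is a pair of maps, one sending types of A to types of B and one running the other way, from tokens of B back to tokens of A (this reversal is part of their definition); the dict-based encoding below is my own sketch:

```python
def is_infomorphism(A: Classification, B: Classification,
                    type_map: dict, token_map: dict) -> bool:
    """Barwise-Seligman condition for an infomorphism from A to B:
    type_map  : types of A  -> types of B   (covariant)
    token_map : tokens of B -> tokens of A  (contravariant)
    For every token b of B and every type alpha of A, token_map[b] is of
    type alpha in A exactly when b is of type type_map[alpha] in B.
    """
    return all(
        A.holds(token_map[b], alpha) == B.holds(b, type_map[alpha])
        for b in B.tokens
        for alpha in A.types
    )
```

The check passes only if the biconditional holds for every token and every type, which is what makes the notion strict.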
The problem is that infomorphism is a strict notion, and, unfortunately, Barwise and Seligman do not analyse noisy channels. Compare the Canaletto painting with later photographs: there are surely many changes in how things look, yet these are still photographs that are informative about how the statue of King Sigismund looked. To account for such distortions, I suggest relaxing their definition of infomorphism in two ways. First, assume that tokens can be classified under types in a fuzzy manner; fuzzy set theory offers an intuitive solution here, although correspondences admittedly become more widespread. Second, allow the mappings to be partial, so that at least some tokens and types of one classification correspond to those of another. I dub this kind of mapping ‘infocorrespondence’.
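As a rough sketch of how the two relaxations combine (the [0, 1] degree functions, the tolerance threshold, and all names are my own illustrative assumptions, not a formal definition):

```python
def is_infocorrespondence(degree_A, degree_B,
                          type_map: dict, token_map: dict,
                          tolerance: float = 0.2) -> bool:
    """Sketch of an 'infocorrespondence', relaxing infomorphism twice over:
    1. Fuzzy classification: degree_A(token, type) and degree_B(token, type)
       return membership degrees in [0, 1] rather than crisp facts.
    2. Partial mapping: type_map and token_map need only cover *some* types
       and tokens; the now-approximate biconditional is checked just there,
       up to the given tolerance.
    """
    pairs = [(b, alpha) for b in token_map for alpha in type_map]
    if not pairs:
        return False  # a correspondence must relate at least something
    return all(
        abs(degree_A(token_map[b], alpha) - degree_B(b, type_map[alpha]))
        <= tolerance
        for b, alpha in pairs
    )
```

On this sketch, a distorted photograph can still correspond to the painted scene: the degrees need only agree approximately, and only on the tokens and types that the partial maps actually relate.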
Infocorrespondence is thus a generic notion of correspondence, and many diverse types of correspondence fall under it. For example, I do not require that similarity be symmetrical, nor do I exclude anti-symmetrical types of similarity. Depending on the kind of infocorrespondence, one can distinguish different types of correspondence-based semantic information.