Since its publication in 1964, Bell’s theorem has been the subject of many discussions in physics and philosophy. In physics, the result appears to rule out certain types of theory with a particular property, known as Bell’s locality, which has its roots in special relativity; in philosophy, it has consequences for our notions of causality, explanation, and the nature of the physical world.
So, what did Bell prove? Using some mild assumptions, he showed that, for certain experiments, any theory in which correlations between events can only arise from influences travelling at or below the speed of light makes predictions that differ from those of quantum mechanics. In particular, he showed that the predictions of all such local theories satisfy the so-called Bell inequalities, which quantum mechanics violates. Years later, Alain Aspect performed celebrated experiments, among the first of many to confirm the experimental violation of the inequalities. Since local theories do not make correct predictions for actual experiments, it seems sensible to conclude that our universe features some non-local aspects.
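For concreteness, one widely used form of Bell's inequalities is the CHSH inequality (the notation below is standard, not taken from the article): for measurement settings $a, a'$ and $b, b'$ on the two wings of the experiment, and with $E$ denoting the correlation between outcomes, any local theory satisfying Bell's assumptions predicts

```latex
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad |S| \le 2 ,
```

whereas quantum mechanics allows values of $|S|$ up to $2\sqrt{2} \approx 2.83$ for suitably chosen settings and entangled states, and it is this quantum value that the experiments confirm.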
To some people, this non-locality seems so wrong that they are willing to explore the possibility of circumventing Bell’s theorem, even if the price to pay is high. Premises of the theorem that have otherwise been considered uncontroversial have, in this context, been dissected and challenged. One of these assumptions is settings independence (also known as measurement independence, statistical independence, and the free choice assumption). This is the assumption that the settings of measuring devices and the states of the systems to be measured are statistically independent. In the context of a clinical trial, for example, settings independence would be the equivalent of assuming that there are ways to sample the population under study such that the samplings do not undermine the results of the trial.
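In the usual notation (again, not the article's), if $\lambda$ denotes the complete state of the systems to be measured and $a, b$ the settings of the two measuring devices, settings independence is the requirement that

```latex
\rho(\lambda \mid a, b) \;=\; \rho(\lambda) ,
```

that is, the distribution of the states to be measured carries no information about how the devices happen to be set.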
One way to deny settings independence is superdeterminism: the postulation of a common cause in the (possibly distant) past that correlates the measurement settings with the states of the systems to be measured. This possibility is not new; Bell acknowledged and discussed it early on. In recent years, however, it has gained some notoriety because of Gerard ’t Hooft’s endorsement. At first glance, the idea may not seem unreasonable: after all, the measuring devices and the systems to be measured do share a common past. A serious complication arises, however, when we learn of actual experiments in which the settings of measuring devices were chosen using the digits of pi, bits from popular TV shows and movies, photons from Milky Way stars or distant quasars, or input from around 100,000 volunteers from all over the world. Superdeterminism then implies that the particles we measure must be correlated with the inputs of all of the above experiments (and of any others we might perform in the future).
Hence, superdeterminism suggests that nature somehow ‘conspires’ to make it impossible for us to create an experiment in which the settings of measuring devices and the system in question are independent of one another. As expected, many people have raised objections to this possibility. In particular, it has been argued that even if superdeterministic models could be constructed, such models would be too complex to be manageable, that they would lack all explanatory power, and that a rejection of settings independence would amount to a kind of scepticism that would jeopardize all experimental science.
The lack of a workable superdeterministic model is a problem. If there is no model, then determining what superdeterminism would necessarily imply turns out to be a rather complex enterprise. Moreover, it is unclear whether a complete, manageable superdeterministic model can in fact be built. All this has undoubtedly hindered discussion of the viability of superdeterminism.
In our article, we present a local, superdeterministic model that, by explicitly violating settings independence, can reproduce all the predictions of quantum mechanics. Our model is a modified version of pilot-wave theory. Instead of describing an N-particle system by a single wave function and the positions of the particles, as one does in standard pilot-wave theory, our model places a complete N-particle pilot-wave system at each point in space. That is, each point of physical space carries a set of internal degrees of freedom: a wave function over an internal ‘configuration space’ and a ‘particle configuration’. The complete state of the system is then represented by a wave-function field and a position field. Finally, the dynamics are given by a Schrödinger equation for the wave-function field and a guiding equation for the position field.
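Schematically, and in standard notation rather than the article's, ordinary pilot-wave theory evolves a single pair $(\psi, Q)$ of wave function and particle configuration by

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = H\psi ,
\qquad
\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left.\frac{\nabla_k \psi}{\psi}\right|_{q = Q(t)} ,
```

whereas in the model described above a copy of such a pair is attached to every point $x$ of physical space, with equations of this type governing the internal degrees of freedom at each point.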
The dynamics of our model are ultra-local, meaning that all the information needed to evolve the degrees of freedom at a point is fully contained within that point. In general, the model violates neither settings independence nor Bell’s inequalities. It can be shown, however, that a very special set of initial conditions (a set that appears to be of measure zero for any reasonable measure over the space of initial conditions) exactly reproduces the behaviour of standard, non-local pilot-wave theory, so that the Bell inequalities are violated. Specifically, if the initial conditions are homogeneous (that is, if the internal wave function and particle configuration are exactly the same at every point in physical space), then the behaviour of the system exactly mimics standard pilot-wave theory (provided that an appropriate ontology is defined). In the article, we consider three ways to obtain the homogenization of the internal variables required to reproduce the predictions of standard pilot-wave theory: putting it in by hand, imposing it via a law-like constraint, and achieving it dynamically.
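In illustrative notation (the field labels here are ours, not the article's), writing $\Psi_x$ and $Q_x$ for the internal wave function and particle configuration at the point $x$, the homogeneous initial conditions read

```latex
\Psi_x(q, 0) = \Psi_0(q)
\quad\text{and}\quad
Q_x(0) = Q_0
\qquad \text{for every point } x ,
```

and since the ultra-local dynamics are identical at every point, this homogeneity is preserved in time: every point evolves as one and the same standard pilot-wave system.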
In our view, the most important asset of our model is that it provides the ideal framework to advance the debate over violations of statistical independence via the superdeterministic route. We show that, contrary to widespread expectations, our model is able to break settings independence without an initial state that is too complex to handle, without visibly losing all explanatory power, and without outright nullifying all of experimental science. Still, we do not believe that this vindicates superdeterminism as a reasonable response to Bell’s theorem. Although our model preserves locality, it has other undesirable characteristics.
For instance, it seems to require an absolute rest frame, which casts serious doubt on its compatibility with relativity. Moreover, it might lead to a dangerous disconnect between ontology and epistemology. And it involves an enormous ontological enlargement without any gain in explanatory power. We conclude that the model simply ‘feels unnatural’ and does not seem to possess the theoretical virtues that would make it a serious contender to its non-local counterparts. The price to pay for bypassing the non-locality conclusion via the superdeterministic route is simply too high.
At this point, it could be objected that we have not proven that any superdeterministic model must have the undesired characteristics of our model, so it is still worth pursuing superdeterminism. There is some truth to this, but only to a certain extent. We think that what lies behind many of the sceptics’ objections to superdeterminism is that for superdeterminism to work, there must be some hidden, all-encompassing mechanism that establishes the required correlations in all the intricate ways that Bell-type experiments have been, and could be, performed. Our model accomplishes this by explicitly inserting a copy of the entire universe at each point in physical space—a kind of Monad or Aleph, if you like. Are there other ways of doing it? Perhaps. But given the multiple ways that the settings have been selected, and all the possible ways we might conceive of doing so in the future, it seems that any such model would have to involve a similar approach.
In any case, we hope that our model serves as a framework for further discussion of superdeterminism.