The Limits of the Numerical

Stephen John

There are many good reasons to want social policy to be based, where possible, on numerical evidence and indicators. If the data clearly show that placing babies on their backs reduces the risk of cot death, this information should guide the advice that midwives give to new parents. On the other hand, not everything that matters can be measured, and not everything that can be measured matters. The care a midwife offers may be better or worse in ways that cannot be captured by statistical indicators. Furthermore, even when we are measuring something that matters, numbers require interpretation and explanation before they can be used to guide action. It is important to know whether neonatal mortality rates are rising or falling, but the proper interpretation of this data may require subtle analysis. To make matters worse, many actors aren’t interested in proper interpretation but in using the numbers to achieve some other end: as a stick with which to beat the midwifery profession, say.

Anna Alexandrova and I are co-PIs on a project, funded by the ISRF and based at Cambridge, that tries to think through such issues around the ‘Limits of the Numerical’ in the context of healthcare policy. As the sketch above suggests, there are many different senses in which the ‘numerical’ might be (or should be) limited: it may be impossible to measure some things accurately; it may be possible, but politically, morally, or socially inadvisable, to measure others; some measures may be fine in some contexts but there may be limits to their use in others. Our team therefore includes expertise not only in philosophy of science, but also in political philosophy (Gabriele Badano) and anthropology (Trenholme Junghans). We are also lucky to have sister projects looking at the uses and abuses of numerical indicators in two other domains of social life: climate change policy (based at Chicago) and higher education policy (based at UCSB).

One reason to explore these topics is their obvious practical relevance; another is more theoretical. The use of numerical indicators is often praised as a way of ensuring that policy is ‘objective’. However, there are at least two senses of ‘objectivity’ at play in such claims: we might think that using numerical indicators, as opposed to human judgement, means policy is more likely to be based on an understanding of the world as it really is. Alternatively, we might think that using numerical indicators is more likely to ensure that policy is not swayed by idiosyncratic interests and biases. These two concerns can come apart: rolling a die to make a treatment decision may ensure that the decision is not swayed by a doctor’s interests, but it does not increase our chances of identifying what is, in fact, the ‘best’ treatment. On the other hand, even if relying on trained judgement is more likely to get us to the truth, such reliance may seem to leave us at the mercy of a physician’s whims. Philosophers of science spend a lot of time worrying about whether measurement tools are objective in the sense of mirroring nature; many political debates, however, are more concerned with ensuring that measures are objective in the sense of being fair or impartial. Showing a policy-maker that her shiny new evidence hierarchy is epistemologically flawed may not speak to the reasons she values that tool: that it cannot be ‘gamed’ by big pharma.

To make these general comments a bit more concrete, consider a case study that has fascinated our team: the work of the National Institute for Health and Care Excellence (NICE) in the UK. Very roughly, NICE’s role is to advise NHS Trusts on which drugs to buy. As part of this process, NICE (in)famously calculates the amount of health benefit (measured in the metric of Quality-Adjusted Life Years, or QALYs) that can be expected to result from purchasing a drug. In turn, NICE typically recommends that drugs should not be purchased when they cost more than £30,000/QALY. (Strictly, for the purists, per incremental QALY, but leave that to one side.) This ‘threshold’ is, of course, incredibly controversial, because it means that NICE often recommends against buying drugs that would, undoubtedly, benefit some patients, but only at great cost. Where, then, does the number come from?
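For readers who like to see the arithmetic, here is a purely illustrative sketch (the figures are hypothetical, not drawn from any actual NICE appraisal). The quantity compared to the threshold is the incremental cost-effectiveness ratio (ICER): the extra cost of a new treatment over the existing one, divided by the extra health it delivers:

\[
\mathrm{ICER} = \frac{\Delta C}{\Delta Q} = \frac{C_{\text{new}} - C_{\text{current}}}{Q_{\text{new}} - Q_{\text{current}}}
\]

Suppose a new drug costs £45,000 more per patient than the current treatment and is expected to deliver 1.5 additional QALYs. Its ICER is £45,000 / 1.5 = £30,000 per QALY, sitting exactly at the threshold; a costlier or less effective drug would typically draw a negative recommendation.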

The official justification appeals to the fact that the NHS has only a limited budget and, as such, every decision to purchase drugs has some opportunity cost; in rough terms, if you are spending more than £30,000 to get one QALY, the money could be spent somewhere else in the system to get more benefit. Of course, this form of reasoning raises deep and important questions in moral and political philosophy. Note, however, that these questions only seem interesting if £30,000/QALY does reflect the ‘true’ opportunity cost. Does it? It seems not. Rather, recent research by health economists implies that the ‘true’ threshold should be much lower: around £13,000/QALY. To put it another way, NICE is green-lighting a great many treatments that, given its purported aims, it should not be.
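To see why the gap between the two figures matters, consider a back-of-the-envelope calculation (taking the £13,000 estimate at face value, purely for illustration). If the NHS can generate one QALY elsewhere for roughly £13,000, then every £30,000 spent on a drug at the current threshold buys one QALY while displacing

\[
\frac{30{,}000}{13{,}000} \approx 2.3
\]

QALYs’ worth of care elsewhere in the system: a net loss of more than one QALY for every QALY gained.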

This seems to be a scandal! After all, regardless of the ethical questions around health resource allocation, it seems that if (part of) NICE’s job is to allocate resources efficiently, they should do it properly. However, the official response to the studies has been rather surprising. The chief executive of NICE replied by pointing out that reducing the threshold would have a detrimental effect on the UK’s pharmaceutical industry. There is something fascinating about this response. It makes no sense at all if we think that the function of numbers in public life is to try to measure some fact about the world (in this case, the ‘true’ opportunity cost). Consider, however, the other role that numbers play: they provide a kind of stability, allowing different actors—the pharmaceutical industry, patient advocacy groups, and so on—to plan their strategies and policies. Changing the number would be like changing the rules of football halfway through the game. Would it be unfair to do that? This may seem like an odd question to ask, but it’s not clear that we get very far in thinking about numbers and objectivity in policy without understanding that fairness—or at least, the impression of fairness—is, often, a key concern. Oddly, even if the £30,000/QALY threshold is unmoored from reality, it can play this second role, much as the rules of football can be arbitrary but enable fair competition.

It would be nice if all good things came together, but they don’t. Our research into particular tools in health policy opens up, then, a far larger question for philosophers of science: which forms of objectivity matter?

Stephen John
University of Cambridge
sdj22@cam.ac.uk