by Mariah Kelly, Business Support Assistant (Events)

Last year, I wrote an essay on ‘evidence-based policy’ for my MSc Health Policy programme. The essay focused on the political nature and application of evidence and argued that the phrase doesn’t accurately represent the policymaking process. For several years, SCSN have also been interested in the role of knowledge and evidence in policy and practice. COVID-19 has made this topic especially pertinent for us all, and I’ve been struck by the number of government references to following ‘the science’ and the ‘evidence’.

The UK government’s rhetoric is part of a broader consensus that policies should always be ‘evidence-based’. On the surface, this is perfectly rational and desirable: who doesn’t want objective, effective policy responses? In reality, however, this is neither possible nor necessarily preferable. The UK’s coronavirus response has claimed to follow ‘the science’ throughout, but this obscures the reality that evidence comes in many, often conflicting, forms, and is usually filtered through a political lens.

The first question we should ask about the strategy of following ‘the science’ is: what science? Scientists don’t tend to agree on everything, and there are many different scientific disciplines to choose from. And even if there is a broad consensus on quantitative research evidence, an effective policy response should not necessarily prioritise this above qualitative data or social science. For instance, a pandemic response strategy can follow epidemiological science, but it will not be effective if individuals, communities and societies cannot or will not behave in the ways that policymakers hope. Because people are involved, policymaking needs to be attentive not only to biomedical and quantitative science but also to social science, and to the needs of communities themselves. When these conflict with each other, some types of evidence inevitably get prioritised.

Although government rhetoric has not made this clear, certain kinds of evidence have been prioritised throughout the COVID-19 response. The UK Government used a behavioural science theory to justify the decision not to impose a ‘lockdown’ until 23rd March. It was argued that locking down too early would lead to ‘behavioural fatigue’: the idea that public adherence to quarantine measures might wane over time. Whilst this appears logical, the science followed here was far from uncontested at the time. More than 600 UK behavioural scientists signed an open letter to the government expressing their concern about perceived government inaction. They argued that not enough was known about the concept of ‘behavioural fatigue’ for COVID-19 policy responses to be built around this hypothesis. How can policymakers ever purport to follow ‘the science’, then, when scientific evidence, like all evidence, is not unified, stable, objective or universally agreed upon?

Claiming to follow ‘the science’ can also be a way of deflecting political accountability when policies go wrong. Being ‘led by the science’ obscures the fact that deciding which science to follow is a fundamentally political exercise. Policymakers and politicians sometimes follow (or even commission) the kind of evidence that supports their existing ideological preferences, a process some have referred to as ‘policy-based evidence’[1]. This makes it all the clearer why the type of ‘evidence’ used in policy responses must be scrutinised. To do this, policymakers must be transparent about the precise evidence that is being followed.

Vague references to ‘science’ and ‘evidence-based policy’ belie the complexity of the policymaking process.[2] Policymaking involves the prioritisation of competing types of evidence and opinions, and quantitative scientific research evidence can, and should, constitute only one type among many. Perhaps ‘evidence-informed’ policymaking is a more realistic way of thinking about the process. This captures the complexity of evidence, the decision-making process and the reality that policies are shaped by political (and public) opinions, not just academic research.

Throughout 2020/21, SCSN are considering what ‘evidence’ means for community safety and are planning a bigger piece of work over this year and next. We’d love to hear from you, so please get in touch with us if this is something you’re interested in too.

[1] Hunter, D.J. (2009) ‘Relationship between evidence and policy: a case of evidence-based policy or policy-based evidence?’ Public Health 123(9), pp. 583-586.

[2] See Cameron, A., Salisbury, C., Lart, R., Stewart, K., Peckham, S., Calnan, M., Purdy, S. and Thorp, H. (2011) ‘Policy Makers’ Perceptions on the Use of Evidence from Evaluations.’ Evidence & Policy 7(4), pp. 429-447.