Two of the most common categories of objection to evidence-based policy, or to scientific approaches to policy development, are:
(a) Science is hostile to some of my values, or to my sense of moral community.
(b) Science is only about the objectivity of facts and is value-neutral, so it is, or should be, kept separate from, or balanced and compromised with, the ethical processes of policy.
The general outline of our answers to concerns (a) and (b) may be:
‘Science’ is a set of methodological principles and, by extension, the current best available evidence or consensus of the scientific community. Science’s strength lies in the extent to which it is self-disciplined about not claiming unrealistic levels of certainty, and in being continuously self-critical and reflexive about levels and types of uncertainty in its current conclusions which might not yet have been fully accounted for.
‘Evidence’ is more than just raw data and more than just statistical results. Evidence synthesis requires data and statistical analysis, but it also inevitably requires a theoretical framework or ‘model’ to turn data into conclusions about specific questions. The framing of questions or hypotheses, and the theoretical models used to interpret statistical results into scientific conclusions, introduce types of uncertainty which are often harder to see. Definitions of measures for independent and dependent variables are also often problematic: they depend on the theoretical model used to formulate the test hypothesis in the first place, and then on the interpretation of results, with the risk that circular, faulty logic invalidates both the definitions of the measures and the interpretation of the results drawn from them.
Scientific logic is mainly about testing causal inferences; in other words, about making causal inferences in a slower, more conscious, more logically disciplined way, in the hope that we will come to understand more reliably and precisely. Correlation and contiguity (touching in time) are not in themselves sufficient evidence of causation, so we use contingency tests. Contingency or ‘experimental manipulation’ means, in the simplest possible terms: ‘if I poke this, what happens?’, or better still, ‘if I remove what I think is the cause, does the effect stop (or decrease, if there are multiple causal mechanisms)?’ Showing a correlation or contiguity plus proposing a plausible mechanism for a hypothesised causal connection is stronger evidence than a correlation alone, but in complex systems (e.g. adaptive immunity in mammals) there are often multiple causal mechanisms counterbalancing or backing up one another. In such cases, proposing a plausible mechanism is harder and less convincing by itself, so much more detailed mechanistic investigations and empirical tests are needed.
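The difference between correlation and contingency testing can be made concrete with a small simulation. This is an illustrative sketch only (the variables, numbers and scenario are invented for the example): a hidden common cause Z drives both X and Y, so X and Y correlate strongly in observational data, yet when we ‘poke’ X ourselves, i.e. set it independently of Z, its apparent influence on Y vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational data: a hidden common cause Z drives both X and Y,
# so X and Y correlate even though X does not cause Y.
z = rng.normal(size=n)
x_obs = z + rng.normal(scale=0.5, size=n)
y_obs = z + rng.normal(scale=0.5, size=n)
r_obs = np.corrcoef(x_obs, y_obs)[0, 1]

# 'Experimental manipulation': we set X ourselves, which breaks its
# link to Z. If X really caused Y, Y would still track X; here it does not.
x_do = rng.normal(size=n)
y_do = z + rng.normal(scale=0.5, size=n)
r_do = np.corrcoef(x_do, y_do)[0, 1]

print(f"observational correlation: {r_obs:.2f}")      # strongly positive
print(f"correlation after intervening on X: {r_do:.2f}")  # near zero
```

The observational correlation is large, while the post-intervention correlation is indistinguishable from zero, which is exactly the pattern that distinguishes a confounded association from a genuine causal link.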
These are principles and methods which you can learn to use and benefit from yourselves. You do not have to trust or rely blindly on experts, although giving the benefit of the doubt to specialists, and to those who have invested a lot of careful effort in investigating a subject, is also sensible at times. Scientists have generally chosen to work in research because they are curious, interested, intellectually enthusiastic sorts of people, and are usually more than happy to discuss evidence and the logic of causal inferences with you. Public engagement even earns points in research funding assessments now, so why not try tweeting your favourite (or least favourite) scientists if you have a good question? Public libraries increasingly have paid institutional subscriptions to online scientific journals. There is a large movement, especially among younger scientists, in favour of Open publishing, i.e. no paywall, but it is tricky to balance openness with maintaining quality: ‘Open’ journals usually take money from the authors. That is not necessarily dodgy (e.g. PLoS journals are highly reputable and all Open), but some journals are unscrupulous about quality checking and peer review before publication, and will accept literally even computer-generated random nonsense as ‘research’ if given the money. ‘Impact factor’ indices were created to help non-specialists get a rough indication of the credibility of a journal or author, but these are not a perfectly reliable guide either, because highly controversial articles can have ‘high impact’ for all the wrong reasons.
Research publications come from, and go through, many layers of scientific discipline: undergraduate training; postgraduate training and supervision; mathematical or computational theoretical modelling of a system; test hypothesis formation; preliminary correlational testing; experimental testing; descriptive statistics; statistical modelling and model analysis; rigorously logical interpretation of statistical results, through the best theoretical model(s) available, into conclusions; quality checking by the editors of scientific journals; peer review by anonymous independent scientific reviewers; published critiques of particular papers; reviews of the state of the field every five to ten years and/or quantitative statistical meta-analyses of many previously published results (see The Lancet for excellent examples); and multiple rounds of these processes in multiple independent teams, resulting in many publications, many academic discussions and probing questions at conferences. If, usually after several decades of all this, the scientific community as a whole settles on a consensus view of a question or topic, then we can be as confident as it is possible to be that the conclusion is reasonably certain. That is not to say it is ever absolutely certain: scientific revolutions and sudden total theoretical paradigm switches do happen, as well as gradual incremental improvement of scientific knowledge. But i) there tend to have been signs beforehand that the apparent ‘consensus’ was a bit wobbly and held up partly by anti-scientific appeals to authority (the motto of the Royal Society is “nullius in verba”, roughly ‘take nobody’s word for it’, i.e. accept nothing on scholastic authority alone); and ii) some scientific paradigm revolutions are quite subtle: the postgenomic ‘revolution’ in evolutionary biology was really only ground-shaking for evolutionary and developmental biologists!
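The quantitative meta-analyses mentioned above can be sketched very simply. The following is a minimal, assumption-laden illustration of fixed-effect inverse-variance pooling, one common way to combine results from independent studies; the effect sizes and standard errors are invented for the example, not taken from any real literature.

```python
import math

# Hypothetical effect estimates (e.g. mean differences) and standard
# errors from five independent studies -- illustrative numbers only.
effects = [0.30, 0.15, 0.42, 0.25, 0.10]
std_errs = [0.10, 0.08, 0.20, 0.12, 0.09]

# Fixed-effect inverse-variance pooling: each study is weighted by
# 1/SE^2, so more precise studies count for more in the combined result.
weights = [1.0 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# An approximate 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.3f}  (95% CI {lo:.3f} to {hi:.3f})")
```

Note that the pooled standard error is smaller than any single study’s, which is the statistical sense in which “multiple rounds of these processes in multiple independent teams” genuinely increase confidence; real meta-analyses also test for between-study heterogeneity and publication bias before trusting such a pooled figure.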
Abstract as it sounds, these are the kinds of general principles and disciplined reasoning processes that we are primarily enthused about and want to communicate and promote. We are only as attached to particular conclusions about particular policy areas as we are currently convinced by the empirical and logical strength of the evidence, which is published and open to your scrutiny as well as ours, and which is there to discuss, not to serve as a claim to authority. A core scientific attitude is being genuinely willing to find oneself mistaken and surprised by real data. ‘Look at these data’ is a challenge no scientist will ever refuse or ignore, unless they think they have seen that kind of argument before, checked through it, and found it hollow or hopelessly illogical. Please try not to feel offended when scientists reply with a barrage of references to journal papers and books: that is just the way we have been trained to have a worthwhile discussion. It is not meant to feel, on the receiving end, like the fight sound-effects in a Batman cartoon!
In the past, naive objectivism was more of a fault in the scientific community than it generally is now. Throughout the history of science, famous scientists and philosophers of science, including Einstein, Weber, Durkheim and, definitively, Kuhn, have argued against naive objectivism and the myth of value-neutrality often associated with it. As humans, we naturally operate cognitively on both facts and values all the time, whether or not we are conscious and explicit about it. Pragmatist philosophers of science, such as William James and Hilary Putnam, would prefer that we be scrupulously explicit and upfront about our values and our moral framing narratives and terminology, and then make rigorously logical and careful distinctions and connections between the claimed facts and the values involved in our views and advice, so that our potential framing biases are more open to scrutiny and improvement, and so that some valid data or evidence can still be salvaged from our published works, even by those who disagree with some of our theories or methods.
Why we prefer ‘evidence-based policy’ or ‘scientific approaches to policy making’, and why these ultimately matter, is an ethical question. Relatively objective and rigorously logical approaches to public policy matter because real outcomes matter, not just our social identities or tactical political positions. Real outcomes matter, in the end, because people and living systems ultimately matter in themselves. To care for other people and for our world genuinely and effectively requires being reasonably objective and reflective about claimed facts and about claimed or assumed causal inferences, and being idealistically pragmatic about which strategy, or set of strategies, might most effectively improve real outcomes, rather than merely serving our own self-image or social identification.
Our attitude to why science matters is essentially ethical. Science matters because real people, and real outcomes for others and for the real world, matter in themselves. They are not just pawns in a political or social game, a game which often plays out between those of us who are among the least affected by the issues, and so little affected by whether policies are actually effective or counter-productive that we can afford to play identity politics with policies which genuinely hurt or help others and the real outside world.
Facts and values are worth distinguishing logically for sound pragmatic reasons, but in practice they are never actually separate. We are not arguing for a separation of facts and values, nor for privileging facts over values, nor for a different balance in a compromise between them; those are all real misunderstandings we have actually met so far. Facts and values should always be integrated, but with clarity and precision first about what is a fact and what is a value. That means being very careful to check that what we think is a ‘fact’ has actually been established empirically and logically, and that what we think is a ‘value’ really is precisely a value (a value being why things matter, not what matters), rather than involving unrealistic assumptions about how inputs will connect to outcomes, or illogical assumptions about how to measure the success or failure of our policies. It also means making very clear, explicit, honest and logical connections between our claimed facts and values, in ways that invite scrutiny and high-quality, constructive discussion, rather than trying to ‘win’ a particular argument but lose our principles in the process, or to win on a trivial level but lose the trust and respect of those with whom we should collaborate to actually improve real outcomes in the policy area we are at least implicitly claiming to care about.
What happens when policies are not scientifically informed and disciplined? Policies may be totally irrelevant to the actual problem(s); disproportionately expensive (in financial and/or environmental costs) for their actual benefits; not measured and evaluated in ways which allow the policy implementation structures and systems to be progressively refined and improved (or defended empirically); or, at worst, actually counter-productive relative to their stated aims and values, yet passionately defended nonetheless, because taking such an ostensibly ‘ethical’ stand serves our own social identification needs.