5 simple chemistry facts that everyone should understand before talking about science

The Logic of Science

One of the most ludicrous things about the anti-science movement is the enormous number of arguments that are based on a lack of knowledge about high school level chemistry. These chemistry facts are so elementary and fundamental to science that the anti-scientists’ positions can only be described as willful ignorance, and these arguments once again demonstrate that despite all of the claims of being “informed free-thinkers,” anti-scientists are nothing more than uninformed (or misinformed) science deniers. Therefore, in this post I am going to explain five rudimentary facts about chemistry that you must grasp before you are even remotely qualified to make an informed decision about medicines, vaccines, food, etc.

1) Everything is made of chemicals

This seems like a simple concept, but many people seem to struggle greatly with it, so let’s get this straight: all matter is made of chemicals. You consist entirely of chemicals. All food…



If the Green Party gets elected, will the last biologist to leave Britain turn off the lights?

Houston, we have a problem.

Carbon Counter

Many scientists are more than a little skeptical of the Green Party, but it is difficult not to welcome their announcement that they would raise science spending to 1% of GDP.

So, if elected, the Green Party would greatly raise spending on science. Sadly, they would also destroy vast areas of science at the same time, and essentially force many, if not most, biologists to shut up shop and find jobs in other countries.


What ‘Evidence Based Policy’ is Not

Two of the most common categories of objection to evidence-based policy, or to scientific approaches to policy development, are:
(a) Science is against some of my values, or hostile to my sense of moral community.
(b) Science is only about objectivity of facts and is value-neutral, so it is, or should be, separate from, or balanced and compromised with, the ethical processes of policy.
The general outline of our answers to concerns (a) and (b) is:
‘Science’ is a set of methodological principles, and, by extension, the current best available evidence or consensus of the scientific community. Science’s strength is in the extent to which it is self-disciplined about not claiming unrealistic levels of certainty and in being continuously self-critical and reflexive about possible levels and types of uncertainty in its current conclusions which might not have been fully accounted for yet.

‘Evidence’ is more than just raw data and more than just statistical results. Evidence synthesis requires data and statistical analysis, but it also inevitably requires a theoretical framework or ‘model’ to turn data into conclusions about specific questions. The forming of questions or hypotheses, and the theoretical models used to interpret statistical results into scientific conclusions, often introduce harder-to-see types of uncertainty. Definitions of measures for independent and dependent variables are also often problematic, because they relate both to the theoretical model used to formulate the test hypothesis in the first place and to the interpretation of results, with the risk that circular, faulty logic invalidates both the definitions of the measures and the interpretation of the results derived from them.

Scientific logic is mainly about testing causal inferences; in other words, about making causal inferences in a slower, more conscious, more logically disciplined way, in the hope that we will come to understand more reliably and precisely. Correlation and contiguity (touching in time) are not in themselves sufficient evidence of causation, so we use contingency tests. Contingency, or ‘experimental manipulation’, means, in the simplest possible terms: ‘if I poke this, what happens?’, or even better, ‘if I remove what I think is the cause, does the effect stop (or decrease, if there are multiple causal mechanisms)?’ Showing a correlation or contiguity plus proposing a plausible mechanism for a hypothesised causal connection is stronger evidence than merely showing a correlation, but in complex systems (e.g. adaptive immunity in mammals) there are often multiple causal mechanisms counterbalancing or backing up each other. In such cases, proposing a plausible mechanism is harder and less convincing by itself, so much more detailed mechanistic investigations and empirical tests are needed.
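The simplest contingency test compares how often the effect occurs with and without the supposed cause. As a minimal sketch (all counts below are invented for illustration), a Pearson chi-square statistic on a 2x2 table measures how far the observed counts depart from what independence between cause and effect would predict:

```python
# A 2x2 contingency table: rows = cause present / cause removed,
# columns = effect present / effect absent. If removing the cause
# changes the effect's frequency, the chi-square statistic will be
# large relative to what independence would predict.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    observed = ((a, b), (c, d))
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the null hypothesis of independence.
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: the effect occurs far more often with the cause present.
with_cause = (40, 10)      # (effect present, effect absent)
without_cause = (12, 38)
stat = chi_square_2x2((with_cause, without_cause))
print(round(stat, 2))
```

Compared against the chi-square distribution with one degree of freedom (where values above about 3.84 are conventionally ‘significant’ at the 5% level), a statistic this large would suggest the effect really is contingent on the cause, although a contingency table alone still cannot rule out confounding.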

These are principles and methods which you can learn to use and benefit from yourselves. You do not have to rely blindly on experts, although giving the benefit of the doubt to specialists, and to those who have invested a lot of careful effort investigating a subject, is also sensible sometimes. Scientists have generally chosen to work in research because they are curious, interested, intellectually enthusiastic sorts of people, and are usually more than happy to discuss evidence and the logic of causal inferences with you. Public engagement even earns points in research funding assessments now, so why not try tweeting your favourite or least favourite scientists if you have a good question? More and more public libraries now pay for institutional subscriptions to online scientific journals. There is a big movement, especially among younger scientists, in favour of Open publishing, i.e. no paywall, but it is tricky to balance that with maintaining quality. (‘Open’ journals usually take money from the authors. That is not necessarily dodgy (PLoS journals, for example, are highly reputable and all Open), but some journals are unscrupulous about quality checking and peer review before publication, and will accept literally even computer-generated random nonsense as ‘research’ if given the money.) ‘Impact factor’ indices were created to help non-specialists get a rough indication of the credibility of a journal or author, but these are not a perfectly reliable guide either, because highly controversial articles can have ‘high impact’ for all the wrong reasons.

Research publications come from, and go through, a long series of disciplined processes: undergraduate training; postgraduate training and supervision; mathematical or computational theoretical modelling of a system; test hypothesis formation; preliminary correlational testing; experimental testing; descriptive statistics; statistical modelling and model analysis; rigorously logical interpretation of statistical results, through the best theoretical model(s) available, into conclusions; quality checking by the editors of scientific journals; peer review by anonymous independent scientific reviewers; published critiques of particular papers; reviews of the state of the field every 5-10 years and/or quantitative statistical meta-analyses of many previously published results (see The Lancet for excellent examples); and multiple rounds of all these processes in multiple independent teams, resulting in many publications, many academic discussions and probing questions at conferences. If, usually after several decades of all this, the scientific community as a whole settles on a consensus view of a question or topic, then we are as confident as we can reasonably be. That is not to say it is ever absolutely certain; scientific revolutions and sudden total theoretical paradigm switches happen, as well as gradual incremental improvement of scientific knowledge. But i) there tend to be prior signs that the apparent ‘consensus’ was a bit wobbly and held up partly by anti-scientific appeals to authority (the motto of the Royal Society, “nullius in verba”, roughly ‘take nobody’s word for it’, is directed precisely against scholastic authorities), and ii) some scientific paradigm revolutions are quite subtle: the postgenomic evolutionary biology ‘revolution’ was really only ground-shaking for evolutionary and developmental biologists!

Abstract as it sounds, these are the kinds of general principles and disciplined reasoning processes that we are primarily enthused about and want to communicate and promote. We are only as attached to particular conclusions about particular policy areas as we are currently convinced by the empirical and logical strength of the evidence, which is published and open to your scrutiny as well as ours, and which is there to discuss, not to serve as a claim to authority. A core scientific attitude is being genuinely willing to find oneself mistaken and surprised by real data. ‘Look at these data’ is a challenge no scientist will ever refuse or ignore, unless they think they have seen that kind of argument before, checked through it and found it hollow or hopelessly illogical. Please try not to feel offended when scientists reply with a barrage of references to journal papers and books; that is just the way we have been trained to have a worthwhile discussion, and it is not meant to feel like the fight sound-effects in a Batman cartoon on the receiving end!

In the past, naive objectivism was more of a fault in the scientific community than it generally is now. Famous scientists and philosophers of science, including Einstein, Weber, Durkheim, and most decisively Kuhn, have argued against naive objectivism and the myth of value-neutrality often associated with it. As humans, we naturally operate cognitively on both facts and values all the time, whether we are conscious and explicit about that or not. Pragmatist philosophers of science, such as William James and Hilary Putnam, would prefer that we be scrupulously explicit and upfront about our values and moral framing narratives and terminology, and then make rigorously logical and careful distinctions and connections between the claimed facts and values involved in our views and advice, so that our potential framing biases are more open to scrutiny and improvement, and so that some valid data or evidence can still be salvaged from our published work even by those who disagree with some of our theories or methods.

Why we prefer ‘evidence-based policy’ or ‘scientific approaches to policy making’, and why these ultimately matter, is an ethical question. Relatively objective and rigorously logical approaches to public policy matter because real outcomes matter, not just our social identity or tactical political positions. Real outcomes matter, in the end, because people and living systems ultimately matter in themselves. To care for other people and for our world genuinely and effectively requires being reasonably objective and reflective about claimed facts and about claimed or assumed causal inferences, and being idealistically pragmatic about what strategy, or set of strategies, might be the best and most effective intervention to actually help and improve real outcomes, not just to serve our own self-image or social identification.

Our attitude to why science matters is essentially ethical. Science matters because real people, and real outcomes for others and for the real world, matter in themselves. They are not just pawns in a political or social game, a game often played between those of us who are among the least affected by the issues, and so little affected by whether policies are actually effective or counter-productive that we can afford to play identity politics with policies which actually hurt or help others and the real outside world.

Facts and values are worth distinguishing logically, for sound pragmatic reasons, but they are never actually separate in practice. We are not arguing for a separation of facts and values, nor for privileging facts over values, nor for a different balance in a compromise between them; those are all real misunderstandings we have actually met so far. Facts and values should always be integrated, but with clarity and precision first about which is which. We should check carefully that what we think is a ‘fact’ has actually been established empirically and logically, and that what we think is a ‘value’ really is precisely a value (a value is why things matter, not what matters), rather than something involving unrealistic assumptions about how inputs will connect to outcomes, or illogical assumptions about how to measure the success or failure of our policies. And we should make very clear, explicit, honest and logical connections between our claimed facts and values, in ways that invite scrutiny and high-quality, constructive discussion, rather than trying to ‘win’ a particular argument but lose our principles in the process, or to win on a trivial level but lose the trust and respect of those with whom we should collaborate to actually improve real outcomes in the policy area we are at least implicitly claiming to care about.

When policies are not scientifically informed and disciplined, they may be totally irrelevant to the actual problem(s); disproportionately expensive (in financial and/or environmental costs) for their actual benefits; not measured and evaluated in ways which allow the policy implementation structures and systems to be progressively refined and improved (or defended empirically); or, at worst, actually counter-productive relative to their stated aims and values, yet passionately defended nonetheless, because taking such an ostensibly ‘ethical’ stand serves our own social identification needs.

Kester Ratcliff

Scientific Greens: Aims and Strategies

Promote deeper understanding of scientific general principles and processes in the Green Party.

‘Evidence-based policy’ is already a popular buzzword in the party, but levels of understanding of its meaning and the application of it in practice vary widely. We seek to improve the quality of disagreement first, not just to win superficial support.

We expect the sort of depth and breadth of understanding and application that we are aiming for to take decades to become so embedded that our group can lay down its concern and let its principles carry on in the general membership. We will not be seeking signatures for an ‘evidence-based policy pledge’ or anything like that; rather, if more people disagree with us, but with better, clearer reasons, we would consider that to be real progress.

Maintain a focus on general scientific principles and processes, to counter-balance fixation on particular controversies and misimpressions of what ‘science’ is for.

We will keep a balanced focus on communicating general scientific methodological principles and how they apply to policy development and implementation processes, avoiding the old approach of focussing too much on particular controversies, which we have observed can be obstructive or counter-productive.

Without such a balance, those on the other side of an argument may sometimes feel like the scientific approach is a well-armed attack against their group, rather than an informative, principled and reasonable discussion within their group.

Older members especially may have learnt to distrust terms and ideas related to “evidence based policy” or “what works”, because in the past (the 80s and 90s) these terms were used to disguise rather amoral, nihilistic attitudes to politics and society under claims of scientific ‘objectivity’ and ‘value neutrality’.

‘Policy-led evidence-making’, meaning going out selectively looking for, or even concocting, evidence to back up a policy actually based on ideological prejudices, anecdotal experiences and other grossly unreliable sources of knowledge, is still a major problem in current uses of evidence in policy development, in both national and international politics, and our own party has much to learn to become more immune to it.

We do not defend naive objectivism or the notion of a separation of facts and values associated with it. We acknowledge that both those patterns have happened in previous discussions, but we are not aiming to do either of them. (For more detailed responses to those two concerns, see our separate essay here.)

Highlight and create more good examples of evidence based and experimentally designed policy

We will point towards existing good examples of evidence based policy (e.g. our current Health policy) and create new policy drafts ourselves to exemplify how general scientific principles can be used in policy development and implementation.

As well as using existing evidence better, the scientific approach to policy means designing new policies like experiments: using pilot studies to test assumptions before risking them on a wider population or area than is necessary for a statistically valid test, and with monitoring and evaluation always built in from the beginning. (See The Geek Manifesto for further explanation.)
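The experimental logic of a pilot study can be sketched in a few lines. In this hypothetical example (all names and numbers are invented), outcome scores from a pilot area are compared with a similar control area using a permutation test, which asks how often a difference in group means at least this large would arise by chance if the policy had no effect:

```python
import random

def permutation_p_value(pilot, control, n_permutations=10000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    observed = sum(pilot) / len(pilot) - sum(control) / len(control)
    pooled = pilot + control
    count = 0
    for _ in range(n_permutations):
        # Reassign scores to groups at random, as if the policy did nothing.
        rng.shuffle(pooled)
        new_pilot = pooled[:len(pilot)]
        new_control = pooled[len(pilot):]
        diff = (sum(new_pilot) / len(new_pilot)
                - sum(new_control) / len(new_control))
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_permutations

pilot = [62, 70, 68, 75, 66, 71]    # invented outcome scores, pilot area
control = [58, 61, 55, 63, 60, 59]  # invented outcome scores, control area
p = permutation_p_value(pilot, control)
print(p < 0.05)  # a small p-value suggests the difference is not just noise
```

A small p-value here does not prove the policy works, only that the group difference is unlikely to be pure chance; the choice of comparison areas, the outcome measures and possible confounders still need the kind of scrutiny described above.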

We will advocate well-established methods for linking science and policy processes such as the Logical Framework Approach (as used and developed by all the United Nations’ agencies and UN Monitoring Agency since 1973).

Challenge and support both the Green Party and the wider public to be more courageous in educating themselves and engaging in public policy discussions in depth and detail.

Investigating the facts before deciding between policies is fundamentally important to becoming more responsible and competent electors and is essential to developing and maintaining a genuinely democratic society. Voting should be seen as more like jury duty than like individual consumer choices.

Whilst there is much room for improvement in the quality and equality of public education, it is no excuse to accept the status quo of educational inequality and democratic disempowerment as if it were a normal state, and hence to communicate with the public as if they were naturally stupid rather than just not yet fully informed or trained in critical thinking. That is really elitist. By challenging and educating members of our party and the wider public about the scientific aspects of policy issues, we are seeking to speak to people’s full, as yet unknown, potential.

Details are almost always what makes the most real difference between a good and a bad policy, structure or process. Framing language that merely provokes a hasty, unconsidered reaction from one sector of society, or plays on another faction’s similarly unconsidered prejudices for or against a term, has little or no practical meaning without those details.

To show that a scientific approach to policy is ethically required.

We aim to spread and deepen understanding of how doing politics for the Common Good ethically requires the scientific discipline of investigating the facts first and using evidence logically. To care for other people and the outside world as effectively as possible, we must objectively investigate the actual or most likely causes of problems, and then design and test evidence-based, realistic strategies to help and intervene as cost-effectively as possible.

Cost-effectiveness is ethically important because virtually infinite moral demands, from global and long-term social and environmental needs, depend on limited public resources. Even our policy agendas influence the wider public political discussion and the distribution of public funds, before we are even elected to implement them.

We should be careful not to choose policies, or be misled uncritically into policy agendas, because they serve our own self-image projections or social identification needs, nor because they serve the marketing needs of premium niche sectors of industries competing with mainstream producers by the cheapest marketing tactic: spreading a mysterious sense of fear.

Total circumspection and judicious impartiality are required to truly do politics in the common interest: not acting on unrealistic enemy images which represent outgroups (such as ‘private’ or ‘corporations’) as more homogeneous and bad than they really are, combined with credulous, unexamined support for terms identifying our in-group(s) (e.g. ‘natural’).

Science treats hard data as the ultimate authority, not personal or institutional authority, not strength of emotion, size of majority opinion or a mystical intuition of ‘what is right’. This radical challenge to all authorities, including ourselves, is a good check and balance to have in any public policy process to ensure that we are actually working for the Common Good and not unconsciously slipping into partisan policies, tactically appealing to the fashions or prejudices of a sector of society and thereby disregarding the real common good.

We are all naturally prone to such fast, unconscious, unprincipled and unreasoned judgements. But we should recognise that fact and take responsibility for how we can risk or do real harm, or fail to do as much good as we could and should, to others and to the real world outside our in-group awareness, if we fail to discipline ourselves to look and think more carefully and objectively than comes naturally to us, especially when we are discussing and influencing really important public policy.

written by Kester Ratcliff
co-edited by Gregg Bayes-Brown, Stuart Gallemore and Stuart Bower