I am always in favour of evidence-driven policy making. Who wouldn’t want their country’s decisions made on facts?
However, I often worry whether the public, politicians and policy makers truly understand, or at least accurately interpret, the science behind the evidence.
In the UK, we have seen the recent launch of a report from the ‘What Works Centres’, designed to make the best evidence of what works available to our decision makers.
Top findings from the UK so far include:
- The use of peer tutoring in schools, where young people work together in small groups, has a high positive impact on achievement
- Putting more policemen on the beat does not necessarily reduce crime, unless officers are carefully targeted
- More lives could be saved or improved if people with acute heart failure were routinely treated by specialist heart failure teams
As engineers and researchers, it is our job to ensure that we send a clear message of what our work means, why it matters, where it is applicable and how and when it should be used.
I recently came across a Nature article discussing ‘Twenty tips for interpreting scientific claims’ which made me evaluate the message I am sending to decision makers.
From this, I have put together a list of just ten things I think we need to tell people to ensure they understand and correctly interpret the science behind our chemical engineering message:
1. No measurement is exact
No matter how hard we work, all measurements have errors; this is why we repeat our work. The size of the error depends on what we measure. For example, if you are told that the economy grew by 0.13% last month, there is a chance that it actually shrank. People should feel comfortable asking: how big is the error associated with this measurement?
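To make this concrete, here is a minimal sketch (in Python, with invented numbers, not real economic data) of why a reported 0.13% growth figure might hide a shrinking economy: the standard error on the average of a few noisy measurements can easily be larger than the figure itself.

```python
import random
import statistics

def measure_with_error(true_value, noise_sd, n, seed=0):
    """Simulate n repeated measurements of the same quantity,
    each carrying random error, and return the average plus a
    rough 95% interval (mean +/- 2 standard errors)."""
    rng = random.Random(seed)
    readings = [true_value + rng.gauss(0, noise_sd) for _ in range(n)]
    mean = statistics.mean(readings)
    sem = statistics.stdev(readings) / n ** 0.5  # standard error of the mean
    return mean, (mean - 2 * sem, mean + 2 * sem)

# Hypothetical: "growth" of 0.13%, measured 10 times with 0.5% noise.
mean, (low, high) = measure_with_error(0.13, 0.5, 10)
```

With noise of this size the interval will typically straddle zero, so the honest answer to "did the economy grow?" is "we cannot be sure".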
2. Chemical engineers are only human
We all have an interest in promoting our own work, often to secure status and further research funding, and sometimes for financial gain. This can lead to selective reporting of results. To see the whole picture, multiple sources of research should be consulted. People should question: who is funding this research?
3. Generalising work depends on how it was tested
How applicable a study is depends on how closely the conditions under which it was done resemble the conditions where you want to apply it. A good example is the common practice of chemical engineers scaling up processes to mass-produce goods; it is not easy to translate work from the lab bench to the factory floor. People should ask: how applicable is this work to my problem?
4. Significance is significant
Often I find that policy makers miss the point with significance. Statistical significance is a measure of how likely a result is to occur by chance. A significance of 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly, when in truth there was no effect at all. Think of it this way: if a bridge had a 1-in-100 chance of collapsing, would you still cross it? People should question: am I ignoring a significant result?
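A significance level can be demystified by brute force: simulate a world in which the treatment truly does nothing, and count how often chance alone produces a difference as large as the one observed. This is an illustrative sketch with invented numbers, not a substitute for a proper statistical test.

```python
import random

def p_value_by_simulation(observed_diff, n_per_group, sd,
                          trials=10_000, seed=1):
    """Fraction of no-effect worlds in which pure chance yields a
    group difference at least as large as the observed one."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = sum(rng.gauss(0, sd) for _ in range(n_per_group)) / n_per_group
        b = sum(rng.gauss(0, sd) for _ in range(n_per_group)) / n_per_group
        if abs(a - b) >= observed_diff:
            hits += 1
    return hits / trials

# Hypothetical study: difference of 1.0 between groups of 20, noise sd 2.0.
p = p_value_by_simulation(observed_diff=1.0, n_per_group=20, sd=2.0)
```

If `p` came out near 0.01, chance alone would produce a result this big in roughly 1 study in 100 — which is exactly what a significance of 0.01 claims.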
5. Correlation is not causation
I often see correlations reported in the press as the ‘cause’, and it is easy to assume that one pattern causes another. But a correlation can be coincidental, or both patterns can be driven by a third factor – a “confounding” factor – see here for some amusing examples. People should ask: are these factors really influencing each other?
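A toy simulation (with invented numbers) shows how a confounder manufactures correlation: in this sketch, hot weather drives both ice-cream sales and cooling-water demand, yet neither causes the other.

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

rng = random.Random(42)
# The confounder: daily temperature drives both quantities below.
temperature = [rng.uniform(10, 35) for _ in range(200)]
ice_cream_sales = [3 * t + rng.gauss(0, 5) for t in temperature]
cooling_demand = [7 * t + rng.gauss(0, 12) for t in temperature]

r = correlation(ice_cream_sales, cooling_demand)  # strongly positive
```

The correlation is strong, but banning ice cream will not cool the plant; the arrow of causation runs from temperature to both.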
6. Extrapolating is risky
Research is normally carried out over a limited range of conditions, so we should not assume that our findings apply outside that range. Predicting changes and responses beyond the data is very difficult; climate change is a good example, because the changes we are seeing are unprecedented. People should question: what assumptions have been made?
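The danger can be sketched in a few lines: fit a straight line to a process that is genuinely curved, using data from only a narrow range, and then predict far outside it. All numbers here are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares straight line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def true_response(x):
    return 0.5 * x ** 2  # the real process is quadratic

xs = [i / 10 for i in range(21)]  # data only covers x = 0 to 2
slope, intercept = fit_line(xs, [true_response(x) for x in xs])

# Inside the measured range the line is a decent approximation...
error_inside = abs(slope * 2 + intercept - true_response(2))
# ...far outside it, the same line is wildly wrong.
error_outside = abs(slope * 10 + intercept - true_response(10))
```

Within the data the fit looks fine; five-fold outside the measured range, the error from the same line grows dramatically.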
7. Bigger is better
The average taken from a large number of observations will usually be more informative than the average taken from a smaller number. The more evidence we have, the better our conclusions. People should consider: is the dataset big enough to draw any conclusions?
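The effect of sample size is easy to demonstrate: repeat the same noisy experiment with small and large samples and compare how much the averages wander from run to run. A minimal sketch with invented numbers:

```python
import random

def spread_of_averages(sample_size, repeats=2000, seed=7):
    """Standard deviation of the average of `sample_size` noisy
    observations, estimated over many repeated experiments."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(100, 15) for _ in range(sample_size)) / sample_size
             for _ in range(repeats)]
    centre = sum(means) / repeats
    return (sum((m - centre) ** 2 for m in means) / repeats) ** 0.5

spread_small = spread_of_averages(5)    # averages of 5 observations
spread_large = spread_of_averages(500)  # averages of 500 observations
```

The spread shrinks roughly as one over the square root of the sample size, so the 500-observation average is about ten times more trustworthy than the 5-observation one.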
8. Not significant is different to no effect
The lack of a significant result does not mean that there is no effect; it just means that no effect was detected. A small study may not have the power to detect a real difference. It is just as bad to conclude that there is no relationship where there is one as it is to assume that there is a relationship where there isn’t. People should ask: was this study good enough to answer the question?
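Statistical ‘power’ captures this: even when a real effect exists, a small study often fails to find it. The sketch below (invented effect sizes, a simple z-test at the 5% level) estimates how often studies of two sizes detect the same true effect.

```python
import random

def detection_rate(n_per_group, true_effect=1.0, sd=3.0,
                   trials=500, seed=3):
    """Fraction of simulated studies that declare the (real) effect
    significant at the 5% level (|z| > 1.96)."""
    rng = random.Random(seed)
    se = sd * (2 / n_per_group) ** 0.5  # standard error of the difference
    hits = 0
    for _ in range(trials):
        treated = sum(rng.gauss(true_effect, sd) for _ in range(n_per_group))
        control = sum(rng.gauss(0, sd) for _ in range(n_per_group))
        diff = (treated - control) / n_per_group
        if abs(diff) / se > 1.96:
            hits += 1
    return hits / trials

power_small = detection_rate(10)   # small study: usually misses the effect
power_large = detection_rate(200)  # large study: usually finds it
```

A null result from the small study says almost nothing: most of the time it would miss the effect even though the effect is real.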
9. Public feeling influences risk perception
An individual’s risk perception is influenced by many factors, including how likely an event is, how much control they believe they have, and how bad the outcome would be. As I discuss in my blog ‘When 99.9 per cent just isn’t good enough’, different situations require stricter limits. People need to consider: what are the acceptable risks we can take?
10. Don’t pigeon-hole our expertise
Chemical engineers don’t just work in oil and gas. We are leading the way in revolutionising practices in the health, food, renewables, water and manufacturing industries. People should expand their chemical talent pool and embed questions like ‘how can chemical engineering make this approach more sustainable?’ into business practice.
It is important that we remember that the public, politicians and policy makers are smart people, but very few of them are engineers and scientists. They haven’t been taught to critically examine data and research as we have.
Thus they are depending on us to take the lead and show them the what, why, where, when and how of our work and its use.
But most of all we need to be clear in making sure that the public really understands that chemical engineering matters.
2 thoughts on “The true meaning of our science (Day 208)”
Point 2. It is not merely selective reporting, it is selective publishing, both in refereed journals and, more extremely, in the popular press. Who wants to read a paper saying that something had no discernible effect? Much better to fix on a tiny hope that it might ultimately lead to a cure for cancer, etc. As an editor and reviewer of educational journals, notably Education for Chemical Engineers, I often see articles about supposedly novel techniques (sometimes known to me not to be) with little other than hope as evidence that they are an improvement.
I like the list, and if enough people had it they might gain a better understanding of the information that is released via the media.
It would be even better if the media commentators had this list as well so that more of their comments were meaningful.