Azara Blog: Trying to quantify our uncertainty


Date published: 2010/01/18

The 2010 Darwin College Lectures are about Risk. The first lecture took place on 15 January and was given by David Spiegelhalter, from the university, on "Risk: Trying to quantify our uncertainty".

Spiegelhalter is not only with the MRC Biostatistics Unit, he is also the Winton Professor of the Public Understanding of Risk at Cambridge, and because of the latter role he gets some attention in the media. You would therefore expect him to be a reasonable public speaker, and he did well enough in that regard.

He started by saying that there were a zillion and one definitions of "risk" and he was going to take it as "anything to do with situations where 'bad' or 'good' things might happen".

He said that people often used "gut feelings" to deal with these situations, and pointed out that perhaps this works pretty well, given that humans didn't invent probability until a few hundred years ago.

And he said that "gut feelings" can be unreliable when there is manipulation by others, or the reasoning is complex, or a lot depends on the decision.

Most of the rest of the lecture looked at specific examples, starting with a precisely mathematically defined situation, calculating odds in the lottery, and progressing to situations where there is no precise mathematical analysis, for example the risk of fatal accidents when driving or walking.
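
As an aside, the lottery end of that spectrum is easy enough to write down. Here is a minimal sketch, assuming the standard 6-from-49 draw (the exact game he used is not recorded here):

from math import comb

# Odds of matching all six numbers in a 6-from-49 lottery draw: there are
# C(49, 6) equally likely combinations and only one of them wins.
combinations = comb(49, 6)
print(f"1 in {combinations:,}")   # 1 in 13,983,816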

He said that his doctor had told him that he had a 1/10 chance of having a heart attack within the next 10 years, but that if he took statins this risk would be reduced by 30%. It's usually a good idea to think in terms of quantities rather than percentages. So think of 100 people like Spiegelhalter. Of these, on average 90 would not have a heart attack and 10 would, if none of them took statins. If everybody took statins then instead of 10 having a heart attack, only 7 would, so 3 people would have been spared. But the outcome for 90+7=97 people did not change. So if all 100 took the statins (and obviously up front nobody knows who would be in the subset of 3 who would benefit) then the odds are 97 to 3, i.e. roughly 32 to 1, that for any given individual taking them was a complete waste of time.
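
The arithmetic above can be restated in a few lines of code; this is just the same calculation again (baseline risk of 1/10, relative reduction of 30%), not anything Spiegelhalter actually showed:

# 100 people like Spiegelhalter, a 1-in-10 baseline risk of a heart attack
# over 10 years, and a 30% relative reduction in that risk from statins.
people = 100
baseline_risk = 0.10
relative_reduction = 0.30

attacks_without = round(people * baseline_risk)                    # 10
attacks_with = round(attacks_without * (1 - relative_reduction))   # 7
spared = attacks_without - attacks_with                            # 3
unchanged = people - spared                                        # 97

print(f"spared: {spared}, outcome unchanged: {unchanged}")
print(f"odds of no personal benefit: about {unchanged // spared} to 1")   # about 32 to 1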

It is well known that how you present a risky situation will affect how people judge the risk. And here he said that you could draw a picture with 100 coloured disks laid out in various ways, and how you laid out the disks determined how seriously people judged the risk. Well that is usually deemed to mean that people are just silly and do not know how to judge risk, and that seemed to be his interpretation as well, although he didn't say so explicitly.

But another interpretation is that in this circumstance there is no correct exact quantification of risk, and so a range of values associated with the risk is perfectly acceptable.

And part of the issue here is also that the statement about what statins might do to prevent heart attacks does not mention at all what negative side effects might come from taking statins. So whether a "rational" person should take the statins or not, who knows.

Spiegelhalter was asked by Radio 4 to try to estimate the outcome of various football matches, given data about past results. Well, he had to come up with a model based on the results, and his model seemed to do OK on one specific weekend.

The point here was that because a precise mathematical analysis was not possible, next best was the use of historical data and a model. He quoted the statistician George Box as saying: "Essentially, all models are wrong but some are useful".

So this is relevant to both weather prediction and climate change. On weather prediction, the Met Office has recently come in for some criticism because it had claimed that the summer of 2009 would be a "barbecue summer" (it was not) and that the winter of 2009-10 was likely to be mild (it was not). Part of the problem is that the Met Office often does not quote probabilities, and even when it does the media can easily ignore that subtlety. It seems that the Met Office is going to introduce probabilities into its forecasts somehow, but it is still discussing how to present the information.

The same issue arises with climate change. So, the models are not perfect and the data underlying the models is not perfect, but are the models good enough that they are useful? Spiegelhalter didn't take on that discussion.

Instead he introduced the term "micromort", which equates to a one in a million chance of dying suddenly (so by accident rather than by some disease). There was a recent controversy where David Nutt, the (now ex-)chairman of the Advisory Council on the Misuse of Drugs (ACMD), claimed that taking ecstasy was not much different, in terms of risk, from horse riding.

Spiegelhalter quoted (amongst other micromort stats) that taking one ecstasy tablet is equivalent to 1 micromort, and that horse riding for a year is perhaps equivalent to 1/2 micromort. Well, those numbers are similar, but that is all Spiegelhalter noted. He did not note that it's comparing apples and oranges because of the one tablet versus one year. In any case, the reason there was a hysterical reaction to Nutt's claim was that the ruling elite think that horse riding is "noble" whereas ecstasy taking is for hooligans. And risk taking for "noble" activities is apparently fine and dandy.
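
To make the apples-and-oranges point concrete, the two figures only become comparable once they are put on the same exposure basis. A rough sketch, in which the tablets-per-year frequency is an invented number purely for illustration, not something from the lecture:

# A micromort is a 1-in-a-million chance of sudden death. The per-tablet and
# per-year figures are the ones quoted above; the tablets-per-year frequency
# is an illustrative assumption only.
MICROMORT = 1e-6

ecstasy_risk_per_tablet = 1 * MICROMORT     # quoted: 1 micromort per tablet
riding_risk_per_year = 0.5 * MICROMORT      # quoted: about 1/2 micromort per year

tablets_per_year = 50                       # hypothetical frequency, for illustration only
ecstasy_risk_per_year = ecstasy_risk_per_tablet * tablets_per_year

print(f"ecstasy at {tablets_per_year} tablets/year: "
      f"{ecstasy_risk_per_year / MICROMORT:.1f} micromorts per year")
print(f"horse riding for a year: {riding_risk_per_year / MICROMORT:.1f} micromorts per year")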

His final comments were that we should try to quantify uncertainty, and that scientists need to "honestly communicate" the deeper uncertainties to a sometimes sceptical public.
