
Margins for error—let’s see more of them please!

Margins for error are an immensely important part of data analysis, yet they are frequently ignored or misunderstood. When we make guesses, it’s almost impossible to be completely certain, so a guess is usually a “ballpark figure”. But equally, making a guess isn’t usually an admission that we haven’t got the faintest idea and are plucking a number at random. The question is not only “what’s the ballpark figure?” but also “how big is the ballpark?”.

Errors come in all forms. Possibly the simplest is rounding error. It’s tempting, on seeing a figure of, say, £7.8 million rounded to one decimal place, to think “a-ha! this project cost exactly £7,800,000”, but that isn’t quite the case. To write that the project cost £7.8 million actually means that it cost somewhere between £7,750,000 and £7,849,999.99: a whole range of £100,000 in which the exact cost could lie. Rounding errors often don’t lead to misleading figures by themselves, but add up a lot of them and the margins for error can soon mount up.
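
To make the arithmetic concrete, here is a quick sketch in Python (the figures are just the ones from the example above):

# A figure quoted as "£7.8 million" (one decimal place, in millions)
# only pins the true cost down to a £100,000-wide interval.
quoted = 7.8  # £ millions, rounded to one decimal place

lower = (quoted - 0.05) * 1_000_000  # smallest cost that rounds to 7.8
upper = (quoted + 0.05) * 1_000_000  # costs below this round to 7.8

print(f"true cost lies in [£{lower:,.0f}, £{upper:,.0f})")  # [£7,750,000, £7,850,000)
print(f"width of the interval: £{upper - lower:,.0f}")      # £100,000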

I say they often don’t mislead on their own, but last week there was a headline that did exactly that. “The UK economy shrank by more than previously thought during the last three months of 2010”, reported the BBC, with similar stories in Reuters and other places. It turns out that this was due to a revision in GDP growth figures from -0.5% to -0.6% for the fourth quarter of 2010. But presented like that, we actually have very little idea of the extent of the revision. It could have been from -0.54999% to -0.55001%, and would still have come up as a revision from -0.5% to -0.6%. In other words, there could have been hardly any movement at all. To be fair, by the same token it could also have been a revision of nearly two tenths of a percentage point, but the point still stands: a difference of 0.1 between two figures that are rounded to one decimal place could actually mean (to all intents and purposes) no difference at all.
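
You can see this with nothing more than Python’s built-in rounding; the barely-there revision and the biggest one consistent with the headline both produce exactly the same rounded figures:

# Two hypothetical revisions that both read as "-0.5% revised to -0.6%"
tiny_revision = (-0.54999, -0.55001)   # hardly any movement at all
large_revision = (-0.45001, -0.64999)  # nearly 0.2 percentage points

for before, after in (tiny_revision, large_revision):
    print(f"{before}% -> {after}%  is reported as  "
          f"{round(before, 1)}% -> {round(after, 1)}%")
# Both lines end with: -0.5% -> -0.6%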

Other sorts of error exist. Another one that applies to the GDP revision figures is standard error: given that we’re estimating the value of a figure from a sample, what sort of margin for error do we expect around that estimate? Was a GDP shrinkage of 0.6% within the margin for error of the first estimate? Without any information on this, it’s impossible to tell.
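
For a simple random sample, at least, the standard error is straightforward to calculate; here’s a minimal sketch with made-up numbers (real GDP estimation is far more involved than this):

import math, random

random.seed(1)
# Made-up sample of 400 observations of quarterly growth
sample = [random.gauss(-0.55, 0.8) for _ in range(400)]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)  # standard error of the mean

# A rough 95% margin for error: about 1.96 standard errors either side
print(f"estimate {mean:.2f}%, margin for error +/- {1.96 * se:.2f} points")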

Another example of where knowing about a margin for error would be terribly useful, but is frequently omitted as though it doesn’t matter, is the use of technology in sports officiating. A prime example arose in England’s World Cup cricket match with India earlier this week. England batsman Ian Bell was given not out despite the tracking software predicting that the ball which struck his pad would have gone on to hit the stumps. The reason for this was that the ball struck him more than 2.5 metres away from the stumps, and was therefore deemed “too far away” for the accuracy of the system to be trusted.

This rule, to a non-statistician, looks utterly ridiculous. In fact, to me, as both a statistician and an occasional sports referee, this also looks ridiculous but for different reasons. It’s all to do with margins for error.

The Hawkeye system (and similar technologies) works on a basic statistical principle. I couldn’t comment on the amazing technological wizardry used to collect the data, but essentially, the system records where the ball was at a lot of moments in time, judges when the ball hit an obstruction (in this case, Ian Bell’s leg-pad) and uses that data to predict where the ball would have gone had it not hit that obstruction. There are an awful lot of moments where errors can creep into the analysis—and they will creep in. That part isn’t in question. The question is how big or small the cumulative effect of all those errors actually is. Are we talking micrometres or centimetres?

Firstly, errors can creep in during the data collection process. This process takes place between the point at which the ball bounces on the pitch and the point at which it hits the pad, and the data collected here is what is subsequently used to predict the path of the ball. There will be some sort of margin for error each time the tracking device detects where the ball is; the more times the tracker is able to do this, the more these errors will be smoothed out. In fact, there is a second rule governing when the results from the software may be called into question: if the distance between where the ball hits the pitch and where it collides with the pad is less than 40cm, the prediction is not trusted.
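
The smoothing-out effect is just the familiar root-n phenomenon: average n noisy detections and the margin for error shrinks in proportion to the square root of n. A toy simulation (my own numbers; I have no idea what the system’s actual per-detection accuracy is):

import math, random

random.seed(2)
true_position = 100.0  # mm, some coordinate of the ball
noise_sd = 5.0         # mm, assumed error in each individual detection

for n in (4, 16, 64):
    readings = [random.gauss(true_position, noise_sd) for _ in range(n)]
    estimate = sum(readings) / n
    print(f"{n:3d} detections: estimate {estimate:6.2f} mm, "
          f"margin ~ {noise_sd / math.sqrt(n):.2f} mm")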

Secondly, the software has to know when to stop collecting data and start predicting. In other words, it has to be able to determine accurately when the ball collided with the batsman’s pad.

Finally, the software has to use the data it has collected to churn out more data describing what the flight of the ball would have looked like had someone’s leg not got in the way. This means the software has to apply some kind of function (I’m not a physicist, so I have no idea what that would be) to the data collected in the first stage in order to get the predicted flight of the ball, and decide whether it would have gone on to hit the stumps. Errors may creep in here, as the function used will only be an approximation of what would have happened. Furthermore, any errors that crept in during the first two stages will be exacerbated: if the margin for error was 1mm based on the data alone, then this margin will have crept up to several millimetres by the time the ball’s predicted flight gets to the stumps. That may not seem much, but given that the ball is only about 70mm wide, it’s a reasonable amount of doubt.
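
To see how extrapolation magnifies small errors, here’s a crude Monte Carlo sketch. It fits a straight line to noisy “tracking” points over the 40cm before the pad, extrapolates 2.5m further to the stumps, and compares the spread of the predictions at the two points. The straight-line flight and all of the numbers are my own simplifications, nothing to do with the real system:

import random
import statistics as stats

random.seed(3)

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

xs = [i * 0.04 for i in range(11)]  # 11 tracking points over 0.4m (metres)
pad, stumps = 0.4, 0.4 + 2.5        # pad at the end of the tracked span

at_pad, at_stumps = [], []
for _ in range(2000):
    # True flight is level at 0.3m; each detection carries 2mm of noise
    ys = [0.3 + random.gauss(0, 0.002) for _ in xs]
    slope, intercept = fit_line(xs, ys)
    at_pad.append(slope * pad + intercept)
    at_stumps.append(slope * stumps + intercept)

print(f"spread at pad:    {stats.stdev(at_pad) * 1000:.1f} mm")
print(f"spread at stumps: {stats.stdev(at_stumps) * 1000:.1f} mm")

With these made-up numbers, the spread at the stumps comes out around ten times bigger than at the pad, which rather illustrates the problem.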

So it seems as though the rule-makers have brought these “40cm” and “2.5m” limits in to try to account for margins for error. This is a case of the right idea achieved the wrong way. In Ian Bell’s case, the predicted flight of the ball would have hit almost the dead centre of the wicket. Are we to assume that there is more doubt in this case than if the ball had struck him 2.4m away from the wicket and been predicted to just graze the edge of it?

The trouble is, without actually knowing the extent of the margin for error, there’s very little the rule-makers can sensibly do to account for it.

So anyway, back to journalism. Statistics, particularly when they’re based on estimates, need to have some kind of margin for error attached to them. It doesn’t even need to be that technical: just some indication of whether a single figure is a complete guess, subject to very rough rounding, or actually completely robust. Without that, we as readers are left wondering which is which.

