Updated 23 July 2010.
Last week, the Taxpayers’ Alliance and the Drivers’ Alliance brought out a statistical report claiming that the introduction of speed cameras had failed to reduce the number of road accidents. The report’s press release was accompanied by a number of choice quotes about how speed cameras were nothing more than money-making conspiracies and how this analysis proved that speed was not the main cause of road accidents.
The report was again featured on the BBC One O’clock News this lunchtime on a story about Swindon council’s decision to axe speed cameras in the town, in the context of budget cuts in local government. Quite how this made it into today’s news is anyone’s guess, given that Swindon council made that decision two years ago. [Update - the news item is now up on the BBC website.]
Anyway, what leapt out at me was a shot of a local resident [update: actually, on seeing the report again, it turns out to be the reporter] being shown a graph that claimed to show that the introduction of speed cameras had actually slowed the rate of decrease in the number of traffic accidents in the UK.
After a bit of digging, I found the report and the graph in question. Here it is:
At first glance this looks pretty convincing. There’s a really obvious red dotted line denoting when speed cameras were introduced, the slope of the green line (showing the data) looks different before 1990 to afterwards, and look! there’s a nice blue dotted line showing the pre-1990 line extrapolated beyond 1990 and it’s way lower than the actual data.
Does this prove that speed cameras have reduced the rate at which road casualty rates have been decreasing? Of course it doesn’t.
Helpfully, the report’s authors had cited the source of their data. One Google search later, I found the data upon which this graph was based. They come from the Department for Transport’s statistical yearbook “Transport Statistics Great Britain” (an imaginative title if ever there was one; I’m sure it makes great bedtime reading). Having found the data (and to be fair, the latest edition of the yearbook has the road casualties data for 1956-1982 wiped out due to an editorial booboo; fortunately the 2008 edition doesn’t have that same cock-up) I repeated the same analysis myself (a linear regression, if you must know), and got the same line as in the TPA’s report. So far, so good.
But then I stopped and thought for a moment. What had the author (and now myself, separately) actually done, and was it a fair way of describing the data? They’d looked at the graph, noticed a kink around 1990, and compared the actual data with what would have happened had the earlier trend continued. So what would have happened? To cut a long story short, this trend line is rather optimistic, to say the least. It suggests that there would be no casualties from road accidents at all by the year 2012.
Something else bothered me. I had data from 1952 sitting in front of me in the yearbook. Why had they not included it in their analysis? Here’s the same data, extended back to 1952. [Just noticed an error - the Y-axis title should read "per billion passenger-km", not "per passenger-km". This error also appears in the TPA's graph. It doesn't really change the analysis though.]
This would suggest that 1978 would be a bit of a misleading point to start the graph from – the data point for 1978 (the first red line) is higher than the overall trend based on the 5-or-so years either side of that point. So, any decrease in the rate from that point will be faster than if you’d started your analysis elsewhere.
However, more to the point, the levelling-off from 1990 (the second red line) appears entirely consistent with the rest of the historical trend when taken back to around 1960: from 1960, road casualty rates started to plummet, then over the course of the next few decades, they’ve started to level off. This happened until around 2000, when they started to drop again. Far be it from me to suggest that this might perhaps have something to do with the introduction of speed cameras, which were introduced in the early 1990s, but weren’t widespread until the late 1990s…
Before writing this post, I emailed Jennifer Dunn, the contact person for methodological questions on this report at the Taxpayers’ Alliance, with a couple of my concerns. I was interested in why they’d based their regression on data from 1978 (when data from beforehand were available) to 1990 (which was before speed cameras were in common use), and why they’d extrapolated using a straight line, even though such a technique would “predict” road casualties to be zero by 2012. I got a reply which I would describe as very prompt, reasonably polite, quite firm and very deflective. The gist of it was that they’d plotted the graph, noticed a break in around 1990, performed a statistical test I’d not heard of before (the Chow Test, for what it’s worth), confirmed from the result of the test that it was a break, and added in the post hoc justification that it was all to do with speed cameras, because they’d been introduced at roughly the same time.
I wasn’t convinced, and I’m still not. There are undoubtedly issues with speed cameras, and I would accept that they are probably not the only method for reducing road traffic casualties. But I am pretty sure that, even if they are overused, they are particularly useful in specific areas, and that discarding them outright, as Swindon council is reported to have done, is probably, on balance, foolish. But to evaluate their effectiveness, we need good evidence, properly analysed, taking all variables into account. An unrealistic extrapolation and a big red “speed cameras introduced here” sign don’t help.
I got another polite but firm email from Jennifer Dunn this morning. She writes:
I think the graph confirms that if we had used data from as far back as the early 1960s we would have had similar results because the road casualty rate is declining rapidly year on year. We decided to go from a later period to try and control for dramatic changes in road technology. For example in 1952 there weren’t motorways. As your graph illustrates there were breaks in the pattern earlier in the series, but we have used a period sufficient to establish a trend and see how it changed in the early nineties. This was a report about a specific road safety policy, speed cameras and not a history of road safety. We therefore didn’t take the sample as far back as 1952.
This is partly valid. It’s totally fair not to go back as far as 1952 in the analysis, because, as Jennifer says, there is a break in the pattern in the early years. But that’s still no reason not to graph it. It also doesn’t explain why they didn’t take the sample back to around 1960, which is where the current trend started. It’s partially right to say that road casualty rates are declining rapidly from the early 1960s right through to the present day, but crucially, the rate at which they are declining is already slowing down by the 1970s and 1980s.
To demonstrate this, I performed the same statistical test as in the report, the Chow Test. Helpfully, and here’s where I really have to say “fair play” to the TPA, they do give a detailed description of how to perform one in the report. Following their instructions, I re-performed the test on the two time periods they mention in their study to check that I was doing it right. The exact numbers I got were very slightly different – and by “slightly different”, I mean disagreeing about the height of Everest by a couple of inches. In other words, not enough to alter the overall result of the test. It also transpired that the TPA had compared 1979-1990 (rather than 1978-1990, as they said in the text) with 1991-2007, but this discrepancy is forgivable and makes absolutely no difference.
So, to demonstrate that the rates were already slowing down, I performed the same test, comparing the period 1962-1978 with 1979-1990. And whaddya know? The test concluded that there was a break in the time series, and two straight lines over the sub-periods were better at describing the data than a single one across both.
My attention was also drawn to another article from last month questioning the notion of whether speed cameras were “cash cows” for public finances. You can discuss the merits or otherwise of that article with its authors if you like, but I was particularly interested by the following quote from a spokesperson from the AA:
Spokesperson Andrew Howard explained to us where he felt the ‘cash cow’ claims had come from.
Until 2000, because authorities were unable to keep the revenue for fines, it cost police money to pursue anyone caught by the cameras. It was when this system changed that interpretation of the system changed.
He said: “From 2000 onwards the local authorities could effectively get the cost of running cameras back from fine revenue.
“That was where the cash cow claims started, because people started saying that councils make more money the more people they catch.
“To some extent they did, but that was because it cost more money to catch people, therefore they had to get funding to do that.”
So, although speed cameras were introduced in the early 1990s, local authorities did not keep the revenue from the fines until the year 2000. This would imply that there might be a “shock” in the time series in the year 2000. From the graph, it would certainly appear that from the year 2000, the rate of decline in road casualties picks up again. So I performed another Chow Test, this time comparing 1991-1999 with 2000-2007. And, lo and behold, another “significant” result: two straight lines are better at describing the data than a single one across the whole of 1990-2007. You can have a look at my calculations for this test, the other one, and my re-doing of the TPA’s in the following screenshot.
I could go on, but it would be pointless. You could pretty much break the time series where you like, perform a Chow test, get a significant result, and come up with some post hoc justification for it, just as both the TPA and I have done. It’s not good evidence. Neither my analysis nor theirs takes confounding variables into account, and both consider only the UK rates as a whole, rather than focussing on specific places where speed cameras have been introduced. Better evidence is available from a systematic review from the Cochrane Collaboration. Admittedly, the evidence is still not great. But it is certainly better than performing econometric tests on public health data until the Chows come home.
Much as I’d love to end this post on a bad pun, I can’t end without at least giving some thanks to Jennifer Dunn at the Taxpayers’ Alliance for her willingness to engage with my concerns and for the manner of her email correspondence. I am also impressed (even if I still disagree with her about their appropriateness) about how transparent they have been about their statistical methods, so that we can all have a look, and decide whether or not their conclusions are justified. Thank you.
It also turns out that there’s another blogpost written last week, when the report came out, describing the foolishness of the extrapolation, here.