
Speed cameras don’t cause road casualties. Tell your friends.

Updated 23 July 2010.

Last week, the Taxpayers’ Alliance and the Drivers’ Alliance brought out a statistical report claiming that the introduction of speed cameras had failed to reduce the number of road accidents. The report’s press releases were accompanied by a number of choice quotes about how speed cameras were nothing more than money-making conspiracies and how this analysis proved that speed was not the main cause of road accidents.

The report was again featured on the BBC One O’clock News this lunchtime on a story about Swindon council’s decision to axe speed cameras in the town, in the context of budget cuts in local government. Quite how this made it into today’s news is anyone’s guess, given that Swindon council made that decision two years ago. [Update - the news item is now up on the BBC website.]

The TPA's statistical report on the BBC

Anyway, what leapt out at me was a shot of a local resident [update: actually, on seeing the report again, it turns out to be the reporter] being shown a graph that claimed to show that the introduction of speed cameras had actually slowed the rate of decrease in the number of traffic accidents in the UK.

After a bit of digging, I found the report and the graph in question. Here it is:

Taxpayers' Alliance road casualty graph

At first glance this looks pretty convincing. There’s a really obvious red dotted line denoting when speed cameras were introduced, the slope of the green line (showing the data) looks different before 1990 to afterwards, and look! there’s a nice blue dotted line showing the pre-1990 line extrapolated beyond 1990 and it’s way lower than the actual data.

Does this prove that speed cameras have reduced the rate at which road casualty rates have been decreasing? Of course it doesn’t.

Helpfully, the report’s authors had cited the source of their data. One Google search later, I found the data upon which this graph was based. They come from the Department for Transport’s statistical yearbook “Transport Statistics Great Britain” (an imaginative title if ever there was one; I’m sure it makes great bedtime reading). Having found the data (and to be fair, the latest edition of the yearbook has the road casualties data for 1956-1982 wiped out due to an editorial booboo; fortunately the 2008 edition doesn’t have that same cock-up), I repeated the same analysis myself (a linear regression, if you must know) and got the same line as in the TPA’s report. So far, so good.

But then I stopped and thought for a moment. What had the author (and now myself, separately) actually done, and was it a fair way of describing the data? They’d looked at the graph, noticed a kink around 1990, and compared the actual data with what would have happened had the earlier trend continued. So what would have happened? To cut a long story short, this trend line is rather optimistic, to say the least. It suggests that there would be no casualties from road accidents at all by the year 2012.
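If you want to see for yourself how quickly this kind of fit-and-extrapolate exercise falls apart, here's a minimal sketch in Python. The numbers are invented for illustration (a series falling linearly over 1978-1990), not the DfT's actual figures, but the mechanics are the same: fit a straight line to a falling series and it will inevitably "predict" zero casualties at some point.

```python
# Illustrative only: invented casualty rates falling linearly over
# 1978-1990, standing in for the real DfT series.
years = list(range(1978, 1991))
rates = [100.0 - 4.0 * (y - 1978) for y in years]

# Ordinary least-squares fit of a straight line, done by hand.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(rates) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, rates))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# A line with a negative slope crosses zero eventually; extrapolating
# past that point "predicts" negative casualties.
year_of_zero = -intercept / slope
print(round(year_of_zero))  # -> 2003 for these invented numbers
```

For the real data the zero-crossing lands in 2012, as noted above; the point is that any extrapolated downward line has one, which is why a linear extrapolation this far forward is not a sensible counterfactual.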

Something else bothered me. I had data from 1952 sitting in front of me in the yearbook. Why had they not included it in their analysis? Here’s the same data, extended back to 1952. [Just noticed an error - the Y-axis title should read "per billion passenger-km", not "per passenger-km". This error also appears in the TPA's graph. It doesn't really change the analysis though.]

My graph of road casualty rates, 1952-2007

This would suggest that 1978 would be a bit of a misleading point to start the graph from – the data point for 1978 (the first red line) is higher than the overall trend based on the 5-or-so years either side of that point. So, any decrease in the rate from that point will be faster than if you’d started your analysis elsewhere.

However, more to the point, the levelling-off from 1990 (the second red line) appears entirely consistent with the rest of the historical trend when taken back to around 1960: from 1960, road casualty rates started to plummet, then over the course of the next few decades the rate of decline gradually levelled off. This continued until around 2000, when the rates started to drop again. Far be it from me to suggest that this might perhaps have something to do with the introduction of speed cameras, which appeared in the early 1990s but weren’t widespread until the late 1990s…

Before writing this post, I emailed Jennifer Dunn, the contact person for methodological questions on this report at the Taxpayers’ Alliance, with a couple of my concerns. I was interested in why they’d based their regression on data from 1978 (when data from beforehand were available) to 1990 (which was before speed cameras were in common use), and why they’d extrapolated using a straight line, even though such a technique would “predict” road casualties to be zero by 2012. I got a reply which I would describe as very prompt, reasonably polite, quite firm and very deflective. The gist of it was that they’d plotted the graph, noticed a break in around 1990, performed a statistical test I’d not heard of before (the Chow Test, for what it’s worth), confirmed from the result of the test that it was a break, and added in the post hoc justification that it was all to do with speed cameras, because they’d been introduced at roughly the same time.

I wasn’t convinced, and I’m still not. There are undoubtedly issues with speed cameras, and I would accept that they are probably not the only method for reducing road traffic casualties. But I am pretty sure that, even if they are overused, they are particularly useful in specific areas, and discarding them outright, as Swindon council are reported to have done, is probably, on balance, foolish. But to evaluate their effectiveness, we need good evidence, properly analysed, taking all variables into account. An unrealistic extrapolation and a big red “speed cameras introduced here” sign don’t help.


I got another polite but firm email from Jennifer Dunn this morning. She writes:

I think the graph confirms that if we had used data from as far back as the early 1960s we would have had similar results because the road casualty rate is declining rapidly year on year. We decided to go from a later period to try and control for dramatic changes in road technology. For example in 1952 there weren’t motorways. As your graph illustrates there were breaks in the pattern earlier in the series, but we have used a period sufficient to establish a trend and see how it changed in the early nineties. This was a report about a specific road safety policy, speed cameras and not a history of road safety. We therefore didn’t take the sample as far back as 1952.

This is partly valid. It’s totally fair not to go back as far as 1952 in the analysis, because, as Jennifer says, there is a break in the pattern in the early years. But that’s still no reason not to graph it. It also doesn’t explain why they didn’t take the sample back to around 1960, which is where the current trend started. It’s partially right to say that road casualty rates declined rapidly from the early 1960s right through to the present day, but crucially, the rate at which they were declining was already slowing down by the 1970s and 1980s.

To demonstrate this, I performed the same statistical test as in the report, the Chow Test. Helpfully, and here’s where I really have to say “fair play” to the TPA, they do give a detailed description of how to perform one in the report. Following their instructions, I re-performed the test on the two time periods they mention in their study to check that I was doing it right. The exact numbers I got were very slightly different – and by “slightly different”, I mean disagreeing about the height of Everest by a couple of inches. In other words, not enough to alter the overall result of the test. It also transpired that the TPA had compared 1979-1990 (rather than 1978-1990, as they said in the text) with 1991-2007, but this discrepancy is forgivable and makes absolutely no difference.
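For anyone who wants to follow along at home: the Chow Test boils down to fitting one regression line over the whole period and one over each sub-period, then comparing the residual sums of squares with an F statistic. Here's a rough sketch in Python. The data are invented (a deliberate kink in 1990 plus a little alternating "noise"), not the actual DfT series or the TPA's exact spreadsheet, so treat it as an illustration of the mechanics only.

```python
def rss(xs, ys):
    """Residual sum of squares after fitting a least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

def chow_statistic(x1, y1, x2, y2, k=2):
    """Chow F statistic for a break between two sub-periods, each
    modelled as a straight line (k = 2 fitted parameters)."""
    pooled = rss(x1 + x2, y1 + y2)
    split = rss(x1, y1) + rss(x2, y2)
    n = len(x1) + len(x2)
    return ((pooled - split) / k) / (split / (n - 2 * k))

# Invented series with a kink at 1990: a steep decline, then a shallow
# one, with alternating +/-1 "noise" so neither fit is perfect. The
# statistic is compared against an F(k, n - 2k) critical value
# (roughly 3.5 at the 5% level for these sample sizes).
x1 = list(range(1978, 1990))
y1 = [100.0 - 2.0 * (x - 1978) + (1 if x % 2 else -1) for x in x1]
x2 = list(range(1990, 2002))
y2 = [76.0 - 0.2 * (x - 1990) + (1 if x % 2 else -1) for x in x2]

print(chow_statistic(x1, y1, x2, y2))  # well above the critical value
```

A "significant" result just says that two lines fit better than one; as becomes clear below, it says nothing about *why* the break is there.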

So, to demonstrate that the rates were already slowing down, I performed the same test, comparing the period 1962-1978 with 1979-1990. And whaddya know? The test concluded that there was a break in the time series, and two straight lines over the sub-periods were better at describing the data than a single one across both.

My attention was also drawn to another article from last month questioning the notion of whether speed cameras were “cash cows” for public finances. You can discuss the merits or otherwise of that article with its authors if you like, but I was particularly interested by the following quote from a spokesperson from the AA:

Spokesperson Andrew Howard explained to us where he felt the ‘cash cow’ claims had come from.

Until 2000, because authorities were unable to keep the revenue for fines, it cost police money to pursue anyone caught by the cameras. It was when this system changed that interpretation of the system changed.

He said: “From 2000 onwards the local authorities could effectively get the cost of running cameras back from fine revenue.

“That was where the cash cow claims started, because people started saying that councils make more money the more people they catch.

“To some extent they did, but that was because it cost more money to catch people, therefore they had to get funding to do that.”

So, although speed cameras were introduced in the early 1990s, local authorities did not keep the revenue from the fines until the year 2000. This would imply that there might be a “shock” in the time series in the year 2000. From the graph, it would certainly appear that from the year 2000, the rate of decline in road casualties picks up again. So I performed another Chow Test, this time comparing 1991-1999 with 2000-2007. And, lo and behold, another “significant” result: two straight lines are better at describing the data than a single one across the whole of 1991-2007. You can have a look at my calculations for this test, the other one, and my re-doing of the TPA’s in the following screenshot.

My Chow Test spreadsheet

I could go on, but it would be pointless. You could pretty much break the time series where you like, perform a Chow test, get a significant result, and come up with some post hoc justification for that result, just as both the TPA and I have just done. It’s not good evidence. Neither my analysis nor theirs directly takes confounding variables into account, and both consider only the UK rates as a whole, without focussing on the specific places where speed cameras have been introduced. Better evidence is available from a systematic review from the Cochrane Collaboration. Admittedly, the evidence is still not great. But certainly better than performing econometric tests on public health data until the Chows come home.


Much as I’d love to end this post on a bad pun, I can’t end without at least giving some thanks to Jennifer Dunn at the Taxpayers’ Alliance for her willingness to engage with my concerns and for the manner of her email correspondence. I am also impressed (even if I still disagree with her about their appropriateness) by how transparent they have been about their statistical methods, so that we can all have a look and decide whether or not their conclusions are justified. Thank you.

It also turns out that there’s another blogpost written last week, when the report came out, describing the foolishness of the extrapolation, here.



14 comments to Speed cameras don’t cause road casualties. Tell your friends.

  • Great blog, Michael.

    Their extrapolation is totally foolish, as you have correctly shown. Not only is their choice of methods deeply suspect, as you point out, but it is crazy to try to draw conclusions just from time trends when far better evidence (although sadly still far from perfect) is available.

    Are you familiar with this?


    However, if you were going to look at time trends, an alternative (and probably more reasonable) interpretation of the data is that after introduction of speed cameras, accident rates continued to fall.

  • RichardRambles

    A great report on another example of junk statistics. The average camera installation date in this country is indeed 2000. It could therefore be argued that cameras didn’t have their full impact until most of them had been installed.

    It is incredibly silly to look at all crashes on the roads and then blame them on a single thing (unless it’s something you hate of course). Most (all?) studies at camera sites show significant reductions, even when the random effect is taken into account.

    It always puzzles me why people rant so much about speed cameras. Nobody moans about breathalyzers or speed bumps do they. Err, hang on a bit….

  • David

    This is what the RSS or the UK SA should be doing – subjecting tendentious use of stats to rigorous examination. They may not be found wanting, but the exercise must (we hope) make them more cautious about misuse in future.
    But the whole thing is also an object lesson in the limits of statistical analysis wrested from a social science or behavioural context. Why not combine the analysis of accidents with observation of speed-camera-related behaviour (which includes drivers slowing down)?
    And by the way, the right is dead keen on localism – so why exclude councils from mounting cameras, if they choose, where they choose? The answer, of course, is that the right’s love of localism is limited; it tends to stop when it comes to raising revenue.

  • statsnerd

    Speed cameras are often installed following a spate of accidents. But that spate could be a random set of events. If so, the chances are there will then be a decline in accidents, due to chance (regression to the mean, essentially). Straight Statistics mention this point in their excellent Making Sense of Statistics (p12).

    Do the road casualty figures include pedestrians and cyclists? Cars are increasingly protecting passengers from harm (hence the steady decline in your chart) and (IIRR) causing greater damage to pedestrians etc as a partial consequence.

  • michaelgrayer

    Thank you for your comment. Your first point is absolutely valid, and is a valid concern about existing epidemiological evidence. This point is raised in the systematic review from the BMJ that Adam linked to above. However, that doesn’t detract from the point that this particular piece of analysis overstates the case against speed cameras using an unrealistic regression and a very tenuous post hoc hypothesis. Even “before/after” studies are marginally better than this.

    The data, by the way, do include persons injured/killed regardless of whether they were passengers, cyclists, pedestrians, or whoever.

  • Great work.

    I read that Oxford has just nixed all their speed cameras too. Not sure why.

  • Jim

    Great post, but I think if anything you’re too soft on the TPA, as they seem to have deliberately foregone any semblance of real time series analysis. While speed cameras may have been ‘introduced’ in 1992, the real growth in their numbers came in the first half of the 2000s, exactly when the casualties trend started falling sharply again! See figures here: http://bit.ly/ajZkEf

  • Allan

    The idea that speed cameras could somehow be responsible for the slow-down in the rate of decrease of road casualties is totally absurd. And illogical. Apart from which, and as already pointed out, there were but a few dozen cameras in operation by the mid-nineties, and according to The Times, there were only around 500 in operation by 1999.

    I would suggest that the advent of the boy-racer phenomenon in the mid-nineties and the ever-increasing ownership and use of mobile phones whilst driving since the mid-late nineties had a lot to do with the slowdown, as did the massive rise in migrants from Eastern European countries and Africa since 1997 where driving standards are generally a lot lower, along with the big rise in motorbike ownership (and particularly powerful motorbikes) that happened during the period from 1996 to 2004, and the corresponding number of motorcyclist fatalities and injuries.

    There were also massive cuts in traffic police numbers from around 16,500 in 1987/88 to around 7,500 in 1996/97 (under the Tories) and about an additional 1,000 since then (under New Labour). It’s funny how none of the anti-camera newspapers ever mentioned to their millions of readers – either at the time the cuts were taking place or since – about the more than 50% cuts that took place under the Tories, but they have been ranting on about “the 11% cut in traffic police” since 1997 over and over again for the past six years. The reality is of course that but for speed cameras, we wouldn’t have heard a peep out of them about the 11% cut!

  • [...] Michael points out the statistical check is biased by data selection and… …you could pretty much break [...]

  • David Bernett

    1. The TPA did not claim that the extrapolation would continue indefinitely so the “negative deaths” argument is irrelevant.

    2. The statistics in no way support the idea that speed cameras help.

    3. If you want a mechanism for why the cameras may be causing accidents, just see driver behaviour around known speed camera sites where fixation on speed causes sudden slowing.

    A good driver will judge his speed according to road conditions – not the speedometer. There are deceptive places for which a recommended speed with an immediate feedback sign is helpful.

    But burying your nose in the dashboard for fear of getting a ticket is dangerous.

    P.S. When I am driving, tailgaters scare the crap out of me – much more dangerous than being a little over an arbitrary speed limit.

  • michaelgrayer

    1. “Negative deaths” is not the only reason why the TPA’s linear-extrapolation-after-the-conveniently-placed-red-line model is inappropriate. It’s arguably the funniest, but just saying that “The TPA did not claim that the extrapolation would continue indefinitely” is not enough to validate their model.

    2. Citation needed. The systematic review that Adam linked to above does not support your argument.

    3. Again, citation needed. Has that ever, y’know, actually happened? And is that really more dangerous than driving too fast in a built up area and mowing down pedestrians?

    P.S. When I am driving, people driving at 55mph in a 30mph zone scare the crap out of me — much more dangerous than being an inch or two closer than an arbitrary safe distance. See, I can do hyperbole too!

  • David Barnett

    1. The point is not the extrapolation itself so much as the clear inflexion which the extrapolation line highlights. That inflexion demands explanation. It may be just statistical noise on the downward trend, but the TPA offered serious arguments for its significance.

    2. The meta-analysis linked to by Adam looks like statistical noise. Not very convincing. Re citation, the TPA is using real data. However you mash it, there is no evidence there that the cameras help.

    3. I see the slowing at certain GATSOs when I am driving. Anecdotal, yes, but, for example, it happens almost every time on the Hendon Way just before Brent Cross going north.

    P.S. Tailgating of 2 or 3 car lengths at 70 miles an hour is common enough to be observed routinely. How often do you see drivers going at 55 in a 30 mph residential zone?

  • michaelgrayer

    1. I have been through this before. As demonstrated by the analysis I conducted in the follow-up, the test that the TPA used to detect points of discontinuity in the linear trend, the Chow Test, detects them in that data all over the place. What’s so special about 1990 that distinguishes it from the others? There are also historical inaccuracies in their explanation: while speed cameras were introduced “in the early 1990s” (which is after 1990 by the way), they were not in widespread use until later. I remain unconvinced that the Chow Test shows us anything particularly special in 1990, and even less convinced by the TPA’s highly tenuous post hoc explanation.

    2. Are you seriously suggesting that the TPA’s tenuous post-hoc analysis (complete with revisionist history) trumps a peer-reviewed BMJ-published meta-analysis of all the data collected in peer-reviewed studies on the effects of the introduction of speed cameras on accident rates? Come on.

    3. Yes, I am aware that people slow down for speed cameras, but while it may be a little annoying or look a bit weird, I’m not aware that such behaviour has been the cause of any accidents. Are you?

    People driving at 55mph in a 30mph zone – frequently. I know much of London’s traffic doesn’t get above about 10 during the daytime but crossing the road at night and in rural residential areas, it’s a whole different story. Besides, in all honesty, I totally agree with you that tailgating is dangerous, but that is completely irrelevant to whether speeding is dangerous or not. Perhaps I should be clear: you compared minor speeding with major tailgating. I did the opposite to show you how ridiculous that comparison was.

  • David Barnett

    Michael, just because the meta-analysis is peer-reviewed, does not mean that the evidence found is strong enough to justify the cameras against the serious societal side effects.

    Our civilization depends on respect for the rule of law. Obedience to a law must be seen as right by the vast majority. There has to be a sense of trust that the rule is right and enforced fairly and with human common sense. The cameras violate that trust in several ways and dangerously weaken respect for law in general.

    In terms of usefulness, the cameras provide no feedback until weeks after the event. This is especially true at times when the road is empty of other cars, whose slowing may alert you to your speed. In fact, under road conditions conducive to going at higher speeds, constant monitoring of the speedometer is needed to keep below a limit which is set arbitrarily low.

    One must use common sense when driving. Just because that country road has a sign with an oblique line does not mean it is safe to drive at 60 mph on it. I was recently on such a lane with stretches where 30 mph seemed too fast for safety.

    There is also a cry-wolf effect of blanket limits. As you know, there are stretches of road where normal speed cues are deceptive, and where a low speed limit is a very helpful advisory. The problem is that one does not know what is real and what is arbitrary. So genuine safety signs get ignored.

    Much more useful than cameras are those signs which tell you your speed as you approach. I would like to see them at all deceptive black spots.
