Have you lodged that report with the relevant authorities? Is there a body who can take up your case and get these speed meters re-tested for type approval?

The official bodies that should care have it, and don't. The Home Office Scientific Development Branch clearly didn't test it properly, so they don't want to admit that. The Home Office, under FOI, settled on responses such as "nothing more we can say". ACPO had its flaws demonstrated in front of them (some of that contribution is in the report), but the action they took was to deny all knowledge, including the fact that they have their own AAIU report. The CPS and SCPs have no interest in proving any failings. The Transport Minister doesn't understand his job, puts out spin via flawed statistics and proposes plans for more speed enforcement. And no one holds onto Secretary of State long enough to read any report. Although the Gov't ignores its citizens, it would normally respond to the BBC and the Daily Mail, but that went on in 2007 and still the result was denial.

The offence is exceeding the regulatory speed limit. The degree to which you exceeded it determines the sentence (fines/points). The moral conflict is that DfT figures show half the motorists on the road every day exceed the posted limit, and since the UK is still within the top 10 for world road safety, this is clearly safe driver behaviour. Search for the Sentencing Guidelines and decide whether you want to take the fixed penalty offer or gamble on the higher penalty applicable if you fight the case; I think the outcome might only be lower than the FPN if you convince the court that you were under the speed limit in the first place, ie: no penalty at all.

It was only on going back to the station where I was based with the camera unit, to pick up some papers for court, that I happened upon a hard copy of Paul’s review. I remember at the time it was floating about for months in various vehicles, and I meant to comment then but never actually did.

I thought I’d highlight a few points linked to this quote from higher in the thread;

‘Have you lodged that report with the relevant authorities? Is there a body who can take up your case and get these speed meters re-tested for type approval?’

The most crucial question, which I cannot find answered by the review, is: what software was the speedscope using at the time of the report?

This will mean nothing to most here, but it is apparent from the age of the report that things have moved on dramatically since it was written. The review can be updated (and has been), but unless the actual model tested is at least fairly current, with the most recent software (v3.03 is the latest I think, but could be wrong on that bit), it doesn’t reflect operational kit, because each subsequent model of speedscope has been ‘back’ for testing before being approved for operational use. My unit waited a whole year for the latest Ultralyte to go through type approval, so they don’t just ship ‘em over and knock ‘em out; each subsequent model is different to the previous and the whole type approval process starts again. Subsequent models aren’t automatically approved just because the original model gained approved status. Every/any enhancement resets the counter.

The point here being that, whilst I have no problem with anything that improves road safety, or with the concept of the report, to hold it aloft based upon an elderly speedscope and unknown software (what was it, v1042, earlier?), against greatly enhanced models with more complex metering patterns and multi-point confirmation, just isn’t going to hold the same amount of water.

Things move on so quickly, technologically speaking, that it is only going to be relevant if current models and versions are actually reviewed. There might have been a brief snapshot in time when Paul’s review was relevant to an unknown percentage of models in use, but now? Unquestionably not. If he continually got access to the latest model with the latest software and ran the same tests, then it would have a lot more credence and carry a lot more weight. Unfortunately, other than to stir up the masses who base their ‘expert’ opinion on such works, it is completely outdated and consequently pretty valueless as an example of the state of current operational equipment.

From a software engineering point of view, the 'latest and therefore better' stance does not hold water.

--------------------

Which facts in any situation or problem are “essential” and what makes them “essential”? If the “essential” facts are said to depend on the principles involved, then the whole business, all too obviously, goes right around in a circle. In the light of one principle or set of principles, one bunch of facts will be the “essential” ones; in the light of another principle or set of principles, a different bunch of facts will be “essential.” In order to settle on the right facts you first have to pick your principles, although the whole point of finding the facts was to indicate which principles apply.

Note that I am not legally qualified and any and all statements made are "Reserved". Liability for application lies with the reader.

Quite so Bama. I work in a company that produces systems, both the hardware and the software. Hardware revisions and updates receive far more analysis and testing than software updates do. Why? Because the physical mechanics of a device are more often than not fundamental to the way it works, and the software is simply a means to gather and display the results. I would not expect the software in any speed-gun to be anything like sophisticated enough to overcome the fundamental issues of beam spread, cross-hair alignment and slip. Dealing with those problems would require considerably complicated image analysis by the software, and those are not problems any software house would find a walk in the park.
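The beam-spread point can be put into rough numbers. A minimal sketch, assuming a nominal 3 milliradian divergence and a 50 mm exit aperture; both figures are assumptions for illustration, not manufacturer specifications:

```python
def beam_diameter(range_m: float, divergence_mrad: float = 3.0,
                  exit_diameter_m: float = 0.05) -> float:
    """Approximate laser spot diameter at a given range.

    Small-angle approximation: the beam grows linearly with range at
    the full divergence angle.  Divergence and aperture are assumed
    values, chosen only to illustrate the scale of the problem.
    """
    return exit_diameter_m + range_m * (divergence_mrad / 1000.0)

# At typical enforcement ranges the spot can be wider than a vehicle,
# so it may straddle two targets at once:
for r in (100, 300, 500, 1000):
    print(f"{r:5d} m -> spot ~{beam_diameter(r):.2f} m across")
```

On these assumed figures, the spot is already over 1.5 m wide at 500 m, which is the geometric root of the "overlapping two vehicles" question below.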

How is a computer programmed to recognise that the beam pattern is possibly overlapping two vehicles? How is a computer programmed to recognise that the visual cross hair is incorrectly aligned with the beam? How is a computer programmed to recognise that the beam has slipped against the target surface?

How is a computer powerful enough to do all these things and more going to be crammed into a handheld device?

Nope sorry, irrespective of how good and up-to-date the software might have become, I can't see it negating all of the problems identified in the report.
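For illustration only, the 'slip' question above can be reduced to arithmetic: the instrument measures range-rate, so any drift of the aiming point along the vehicle's bodywork adds directly to the displayed speed. This toy model ignores the beam/motion geometry (the cosine effect) and is not a claim about any real device's algorithm:

```python
MPH_PER_MS = 2.23694  # conversion factor, m/s -> mph

def apparent_speed_ms(true_speed_ms: float, slip_rate_ms: float) -> float:
    """Apparent speed when the measured range changes both because the
    vehicle moves and because the aim point slips along its bodywork.
    The instrument only sees the total range-rate, so slip adds
    directly.  (Toy model for illustration only.)"""
    return true_speed_ms + slip_rate_ms

# A car doing a true 30 mph, with the aim point drifting rearwards
# along its side at 2 m/s during the measurement:
true_ms = 30.0 / MPH_PER_MS
displayed = apparent_speed_ms(true_ms, 2.0) * MPH_PER_MS
print(f"displayed: {displayed:.1f} mph")  # roughly 34.5 mph
```

Even a modest slip rate is enough, on this toy model, to move a legal speed across an enforcement threshold.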

Mortimer even attempts a bit of the old credential flourishing to enhance his endorsement!

I particularly liked these two questions, which assigned the LTI additional responsibilities;

‘How is a computer programmed to recognise that the visual cross hair is incorrectly aligned with the beam?’

It isn’t. The crosshairs are part of the video camera and have nothing whatsoever to do with the speedscope. They are moved by nothing more space-age than mechanical thumbwheels situated on the video camera, so;

‘How is a computer powerful enough to do all these things and more going be crammed into a handheld device?’

It doesn’t assign any processing power to ‘crosshair recognition’, so it can assign everything to the job in hand.

Having achieved type approval status, the manufacturer and/or supplier doesn’t now need to prove anything. So can the review demonstrate that the latest versions/models exhibit the same characteristics claimed about the originals? The answer is clearly and blatantly no.

From the perspective where the review can do any ‘good’, influence operational use and/or convince anyone in a position to make those changes happen, it is unquestionably not current enough. Every week that passes compounds the problem even more. Until that situation changes, my previous comments will stand.

The point of the report is to highlight some fundamental features of the measuring equipment that affect its accuracy under certain circumstances and conditions. The report suggests that the testing used to give these devices type approval is flawed, leading to the belief that the devices give infallible results, and therefore cannot be questioned in court.

I do not wish to reveal what I do, as I prefer to remain anonymous to a degree, but I have significant experience of working in the design, manufacture, programming, testing, integration, regression testing, proving and so on, of critical systems, the sort of things where if we get it wrong lots of people get hurt. So I'm familiar with extremely rigorous testing and proving regimes that would probably put the so-called 'Type Approval' testing of these speed meters to shame.

The point of the report, as I see it, is to plant the seed of knowledge that these tools are not infallible and are subject to errors in use that can result in injustice. Therefore, either the testing needs to be redone, and type approval status revoked if the flaws are confirmed, or serious testing and study carried out to identify changes to the rules and guidance on their use, so that injustice cannot occur.

As to your attempt to understand the questions I raised: don't bother. They were hypothetical. Those are the sort of software improvements that would be required to negate the inherent mechanical flaws in the hardware. Unless the software updates can do those things, then ANY software update released since the report was written stands no chance of resolving the issues raised by the report.

The biggest question resulting from the report still remains:

How did these devices get type approval in the first place, with so many inherent flaws?

I think the point here is that the original Type Approval, carried out by the Home Office scientific branch, was not sufficiently thorough in taking account of all possible operational conditions. This is borne out by the fact that in nearly half of the States of the USA, where these devices are manufactured, it is not recognised as a reliable evidential device, whilst in other States the permissible range is far shorter than in the UK.

Whether the shortcomings still apply to later models / updated software versions is, as you say, impossible to determine without retesting those models.

However, in practical terms, it is type approved for prosecution purposes, and any defendant wishing to challenge the speed reading will have to rely on establishing from the tape that the reading is wrong in their particular case, rather than trying to discredit the LTi per se.

The Type Approval testing forms the "Requirements Baseline" (software techno-talk, sorry; look up the V model if you are interested), which as you say has not changed. So there is no requirement to change the solution at all IF the original solution actually mapped correctly to the requirements (apart from any hardware obsolescence, which of course should have been accounted for in the original design).

Of course a 'correct' solution would mean no nice new models needed to sell to the Bib.


I will have to make this my final word on the matter, as I have other commitments over my leave period. Never have I suggested that each LTI software/hardware change is better than the last – rather, the pertinent phrase would be ‘different’. I am also not championing it as a speed enforcement tool, merely saying that if someone goes to the trouble of producing and publishing any form of ‘expose’, it needs to be current to be taken seriously.

Would a report on the security shortcomings of modern operating systems produced today be relevant if the software used for the research was Windows 95?

Has my existing equipment had an update? In the years I was involved in speed enforcement (I am no longer), my unit had four models of LTI, each with a different software version. Every model was designed to give a speed reading, but each progressive model also brought with it other ‘benefits’. These included less weight due to smaller components, different power options, and digital compatibility, which means far less storage space needed as video tapes are not used, plus reduced viewing time when searching the resulting DVD. There were also software developments which dramatically improved the system’s abilities in poor weather conditions such as rain, something not available on early models/software versions.

A quick word on the LTI in the rain. Its abilities depend on five key things:

* The model of LTI in use.
* Scope to target distance.
* How heavy the rain is.
* Direction of travel of the target vehicle (those travelling away chuck up the equivalent of ‘chaff’ in the form of spray from the tyres, much like aircraft eject counter-measures to confuse incoming guided missiles).
* Speed of the target vehicle (faster vehicles generate more spray).

As someone who used an LTI for the purposes of detecting speeding offences, I have read everything I can find on the supposed problems associated with such equipment. Consequently, I have tried many times to duplicate ‘slip’ type effects, but every time the scope just threw an error back. I tried bouncing the laser off mirrors, signs, the roadway – just an error, no speed reading. Every time it worked exactly as it should: if you asked it to do something it couldn’t deal with, you got an error message. Error messages equate to the system working as it should; the scope is saying ‘You haven’t given me the right conditions to let me do my job’. For ‘error’, don’t read ‘malfunction’.
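That 'error rather than malfunction' behaviour is consistent with a simple self-consistency check: fit a constant speed to the burst of range samples and refuse to display anything if the samples don't sit on a straight line. A sketch under invented assumptions; the sample spacing, tolerance, and the check itself are guesses for illustration, not LTI internals:

```python
def speed_or_error(ranges_m, dt_s=0.003, tol_m=0.1):
    """Fit a constant speed (least squares) to equally spaced range
    samples.  Return the speed in m/s (negative = approaching) if the
    samples are self-consistent, else the string 'ERROR'.

    A crude sketch of the kind of consistency check a lidar gun might
    apply -- thresholds and spacing are invented for illustration.
    """
    n = len(ranges_m)
    ts = [i * dt_s for i in range(n)]
    t_mean = sum(ts) / n
    r_mean = sum(ranges_m) / n
    num = sum((t - t_mean) * (r - r_mean) for t, r in zip(ts, ranges_m))
    den = sum((t - t_mean) ** 2 for t in ts)
    slope = num / den  # m/s
    resid = max(abs(r - (r_mean + slope * (t - t_mean)))
                for t, r in zip(ts, ranges_m))
    return "ERROR" if resid > tol_m else slope

# A clean approach at 13.4 m/s fits; a single corrupted sample
# (e.g. one pulse bouncing off something else) trips the check:
clean = [100 - 13.4 * i * 0.003 for i in range(10)]
dirty = list(clean)
dirty[5] += 0.5
print(speed_or_error(clean), speed_or_error(dirty))
```

On this model, erratic returns produce an error rather than a wrong number; the report's argument is about the cases where the samples are consistent but consistently wrong (slip), which such a check cannot catch.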

Out of many hundreds of thousands of trigger pulls and umpteen detected offences, I never had cause to doubt what was being displayed on the back of the speedscope or being recorded. If the system had given me cause to question its results, I would have aborted the check and reported the symptoms. I never had cause to do that. I have no reason to lie. I have no vested interest. I don’t owe anyone anything and have nothing to gain by fibbing. So do I have any basis for saying that newer equipment would not fail under the same circumstances?

I have never seen it ‘fail’, or even a firsthand demonstration of it being encouraged to give erroneous readings. I will finish by saying this: are the newer models of LTI ‘better’ than the previous models? They definitely return a quicker speed reading, so from an (ex-) user’s perspective I view that as an improvement. I would expect such developments to take place, and for anyone involved in software development to suggest ‘You should have got it right in the first place’ is simply nuts, given the way in which technology advances. Space shuttle ‘O’ rings, the shape of early Comet windows, anyone?

Just comparing the domestic digital camera I bought in 1999 to the one I got this year is a case in point – both ultimately record an image, but the effortless nature of the more recent model, its size, its power requirements, its storage options, its near-instantaneous focusing, makes the other seem virtually prehistoric. However, none of that matters. I am not here to argue or debate the rights or wrongs of any aspect of the equipment in use; I am merely offering my view based upon the reactions I witnessed from people who were exposed to Paul’s report – everyone instantly dismissed it due to the age of the kit being reviewed and the unknown software version.

Treat that information in whichever way you choose, but it will not alter the fact that the review was poo-poo’ed at a very rudimentary level, so no-one at ‘approval’ level will give it a second glance due to what’s on the test bench. No doubt this will generate all the usual ‘What about…yadda-yadda?’, for which you will have to fill in the blanks yourself.

The problem highlighted by Paul's report is a direct result of a fundamental part of how the device was designed to work - it 'locks on' to the first signal received above a pre-set threshold. Unless this has been changed, or the beam divergence substantially reduced, the device would almost certainly produce the same results even with the new super-wizzy software.
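The lock-on behaviour described here is easy to caricature in code. A sketch with the threshold and echo amplitudes invented for illustration: the device ranges the first return above a fixed threshold, so which object it measures can change when two vehicles sit inside the (spread) beam.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_first_return(echoes, threshold):
    """Range implied by the FIRST echo above a fixed amplitude
    threshold -- the 'locks on to the first signal' behaviour.
    echoes: list of (round_trip_time_s, amplitude) pairs.
    (Caricature for illustration, not the actual firmware logic.)"""
    for t, amp in sorted(echoes):  # earliest return first
        if amp >= threshold:
            return C * t / 2.0  # one-way distance
    return None  # no qualifying return -> no reading

# Two vehicles in the beam: a dull car at 80 m and a highly
# reflective van at 120 m.  The weak near echo falls below the
# threshold, so the device silently ranges the van instead:
car = (2 * 80.0 / C, 0.4)
van = (2 * 120.0 / C, 0.9)
print(range_from_first_return([car, van], threshold=0.5))
```

If successive pulses in a burst alternate between such targets, the range-rate (and hence the speed) computed from them belongs to neither vehicle, which is the failure mode the report attributes to the wide beam.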

As Toad Hall, who is apparently in charge of selecting devices for HOTA testing, was at the Elvington tests, one would assume that if they had ironed out this 'undocumented feature', he'd have made some noise about it. However, he and his boss Med "90 in a 60 limit" Hughes seem to believe that the integrity of devices relied on in prosecutions can best be assured by threatening to slap anyone who dares to challenge them with substantial and disproportionate costs.

If you know how we can borrow a current HOTA UK police spec device without having to get a Crown Court judge to order it, I'm sure that we could arrange another test.

--------------------

Andy

If you're going to try to contradict me, please at least try to get your facts straight.

"Would a report on the security shortcomings of modern operating systems produced today be relevant if the software used for the research was Windows 95?"

Of what possible relevance is that? None.

I stand by post #15. See the V model.

"No one knows where the laser beam goes on its trip(s)"


The problem highlighted by Paul's report is a direct result of a fundamental part of how the device was designed to work - it 'locks on' to the first signal received above a pre-set threshold. Unless this has been changed, or the beam divergence substantially reduced, the device would almost certainly produce the same results even with the new super-wizzy software.

I believe there's an old saying that appears rather appropriate:

You can't make a silk purse from a sow's ear.

Quite true. However, you can make one that for all the world looks like silk, at a healthy distance, provided you prevent anyone from looking at it too closely and effectively conceal the fact that, whilst it's silk-like, it still has porcine DNA.

--------------------

“Political correctness is a doctrine, fostered by a delusional, illogical minority, and rabidly promoted by an unscrupulous mainstream media, which holds forth the proposition that it is entirely possible to pick up a tu.rd by the clean end.” - R.J. Wiedemann, Lt. Col. USMC Ret.