Data flow, crash and burn

For some years now, technologists have been developing ‘autonomous vehicles’: cars driven by computers, without direct human intervention.

Google, for example, is a pioneer in the industry. In road trials, the company’s computer-controlled cars have racked up 200,000 miles without any accidents.

Real-time flows of data will be critical to the operation of such vehicles in a real-world environment.

Sensors around the cars will measure proximity to other vehicles; satellites will constantly monitor traffic congestion and optimise speed and position; and traffic lights and road signs will also feed instructions and warnings to the car’s computer brain.
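The loop described above can be sketched in miniature. This is a hypothetical illustration, not any real vehicle’s control code: it assumes a simple `SensorReading` record and picks a speed from the freshest proximity data, stopping outright when every reading is stale.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str        # e.g. "lidar", "satellite", "traffic_light"
    distance_m: float  # distance to nearest obstacle, in metres
    timestamp_s: float # when the reading was taken

def choose_speed(readings, max_speed_kph=50.0, stale_after_s=0.5, now_s=0.0):
    """Pick a safe speed from fresh proximity readings (illustrative only).

    Slows in proportion to the nearest obstacle; commands a stop when
    no reading is recent enough to trust.
    """
    fresh = [r for r in readings if now_s - r.timestamp_s <= stale_after_s]
    if not fresh:
        return 0.0  # no trustworthy data: stop
    nearest = min(r.distance_m for r in fresh)
    if nearest <= 5.0:
        return 0.0  # obstacle too close: stop
    # Linear ramp: zero at 5 m, full speed from 100 m outwards.
    return min(max_speed_kph, max_speed_kph * (nearest - 5.0) / 95.0)
```

Even this toy version raises the questions that follow: how fresh is “fresh”, and who decides the thresholds?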

But, while the dream of a safer, more efficient road transportation system is worthy, it won’t be happening on a large scale any time soon.

Some of the technology needed to make it possible is already in place, and functions like automatic parking give a glimpse of what can be achieved.

But computers aren’t yet intelligent enough to reliably make safe, timely decisions about everyday driving problems, especially when human drivers, lumbered as we are with relatively slow reaction times, will still be on the road.

For example, how would a computer-driven car cope with this: a narrow country road at dusk, an object in the lane ahead. Is that a large rock or a plastic bag? Swerve or not?

Thinking technically, what degree of data quality will be required in digital sensors on these cars? Will this change, depending on the driving environment or the model of car you can afford? Or, by law, will all cars have exactly the same specification?

Will there be a global data standard for traffic lights, signage and other ‘road furniture’? Will each car manufacturer have its own satellite and data network?

Satellites can be hit by space junk or possibly jammed by hackers - what happens if the data flow stops? Will all computer driven cars glide safely to a halt? What warning will human drivers receive, if any?
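One conventional answer to “what happens if the data flow stops?” is a watchdog: if no valid packet arrives within a timeout, the car takes a pre-agreed fail-safe action. The sketch below is a hypothetical illustration of that pattern; the class name, timeout, and manoeuvre strings are all invented for the example.

```python
import time

class DataLinkWatchdog:
    """Hypothetical fail-safe: if no fresh data arrives within
    `timeout_s` seconds, command a controlled stop."""

    def __init__(self, timeout_s=2.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock          # injectable for testing
        self.last_seen = clock()

    def feed(self):
        """Call whenever a valid data packet arrives."""
        self.last_seen = self.clock()

    def action(self):
        """Return the manoeuvre the car should take right now."""
        if self.clock() - self.last_seen > self.timeout_s:
            return "glide_to_halt"  # hazard lights on, pull over
        return "continue"
```

The harder, unanswered part is the human side: how, and how early, nearby drivers are warned that a car has entered this mode.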

There are also many other non-technical questions which will have to be addressed.

In a predictable accident situation, do the lives of the ‘owners’ of the car take priority? Or, in an automated transport system, is a computer’s role to minimise the overall number of human casualties?

Will insurers have a say on the programming of a car’s computers? Will you pay lower premiums if you tick the box marked, "Save others’ lives first"?
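To make the tick-box question concrete, here is a deliberately simplified, hypothetical sketch of how such a policy flag could change a car’s choice. The manoeuvre names and casualty estimates are invented; real systems, if they ever exist, would be vastly more complex and more contested.

```python
def pick_manoeuvre(options, protect_occupants_first):
    """Choose among candidate manoeuvres (illustrative only).

    `options` maps a manoeuvre name to a pair:
    (expected occupant casualties, expected other casualties).
    The flag mirrors the hypothetical "Save others' lives first" tick-box.
    """
    def cost(item):
        name, (occupants, others) = item
        if protect_occupants_first:
            # Occupants' safety dominates; others' is a tie-breaker.
            return (occupants, others)
        # Minimise total casualties, occupants as tie-breaker.
        return (occupants + others, occupants)
    return min(options.items(), key=cost)[0]
```

The point of the sketch is not the code but the fact that a one-line policy choice flips the outcome, and someone has to own that choice.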

Will journeys be private, or will route information be recorded and stored? By whom? And who has access to that information, and for how long? Will your conversations be recorded in a ‘black box’?

Will it be legal for a car to drive without human passengers? If the police think a car is being used for smuggling, do they have the right to stop it, and how do they stop it?

For me, there are some lessons to be learned from thinking about these questions.

Decisions about technology are not always best made by technologists. Sometimes ethical, philosophical or societal implications need to be considered first, even within the business environment.

Systems that take autonomous actions affecting others may need to be tightly regulated; algorithmic trading in the financial industry is a prime example. If the algorithms used by different banks interact badly, the effect on the stock exchange can be profound.

The quality and flow of data must be understood and protected, if the decisions based on it are to be trusted.
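Protecting a data flow means, at minimum, checking that each reading is both authentic and physically plausible before acting on it. A minimal sketch, assuming a shared key and a simple range check (real systems would use proper key management and far richer validation):

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical: never hard-code keys in practice

def sign(message):
    """Compute an HMAC-SHA256 tag over a canonical JSON encoding."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def trustworthy(message, signature):
    """Accept a reading only if it is authentic and plausible."""
    if not hmac.compare_digest(sign(message), signature):
        return False  # forged or tampered with in transit
    # Plausibility: a proximity reading outside 0-500 m is rejected.
    return 0.0 <= message.get("distance_m", -1.0) <= 500.0
```

Authenticity without plausibility checks, or vice versa, leaves the decision downstream resting on data nobody has actually vetted.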

And we all know that at some time, and often at the worst possible time, systems will fail.

We’d better have a good handle on what will happen when such failures occur, or it might not be only our careers that crash and burn.