The US has also recently issued proposed federal guidelines setting out 15 benchmarks car manufacturers will need to meet before their autonomous vehicles can hit the road. Ford is currently developing in-car connectivity, ride-sharing and autonomous technologies through its subsidiary, Ford Smart Mobility (FSM), and plans to launch its self-driving car by 2021. This vehicle will come without a steering wheel, accelerator or brake pedal – an image of a car that was probably unimaginable a decade ago, and one that makes the battle for the dashboard easier to imagine.

For a market that does not yet exist, it's already crowded. Ford's move in the car market is about changing consumer behaviour. As we use more subscription-based services like Spotify and Netflix, the sharing economy is becoming more prevalent in our everyday transactions, and people are growing more accepting of the fact that they no longer need to own a vehicle. People simply want access to something when they need it, and they are increasingly happy to share it. As a result, Ford is changing the way it operates to focus on providing services rather than vehicles. As with any new technology, there is some concern with regard to regulation.

Here are some of the legal issues manufacturers and users could come up against with driverless cars:

1) The law isn't keeping up with tech
Self-driving cars are like many other emerging technologies – the law develops after the technology. This has an impact on how the technology is perceived and dealt with when an accident happens. Legislators are faced with the conundrum of potentially over-regulating an infant market and stifling its growth and adoption, or not sufficiently protecting the consumer, a tricky balance which can lead to suboptimal safety standards or even considerable risk to human life. The indications seem to be that governments are looking for the car manufacturers to attempt to self-regulate at least to some degree and adopt high standards for vehicle safety.

As roads clog up, pollution rises and commutes lengthen, the benefits of allowing cars to drive autonomously and in the most efficient manner are significant. However, the law is unlikely to catch up with this technology for some time. Lawyers could be working with a patchwork of legislation as the tech develops, meaning there are likely to be unanswered questions when cases arise.

2) The definition of 'driver' – who takes responsibility for an accident?
As the recent Google and Tesla road traffic accidents have demonstrated, the risks are significant if an autonomous system malfunctions or makes the wrong decision in an emergency. Who would be responsible if the technology fails and places drivers and passengers at great risk? The law essentially needs to clarify what we mean by "driver" for the new era of self-driving, or indeed entirely autonomous, cars and define where responsibility lies. In theory, self-driving cars should 'de-risk' driving for a person, as the car manufacturer or autonomous system takes control of the vehicle. However, what has become clear from the recent accidents is that there are varying levels of control.

Some arguably present a heightened risk to a driver who is still required to re-take control at short notice: the degree of autonomy, combined with human nature, makes it difficult for many drivers to remain focused on the vehicle and the road ahead.

As the cars become more mainstream, consumer awareness requirements as to the limitations of the system, and carve-outs from liability, will need to be spelled out prior to purchase (such as requirements to upgrade and patch any installed software), as will the level of insurance required. As for the manufacturers providing these systems, who are arguably betting the future of the company on a successful move from car ownership to 'transportation as a service', should there be requirements on the level of capitalisation or underwriting needed to ensure that individuals or the public sector aren't left carrying the risk of a self-driving car failing to operate in accordance with its specifications, particularly in an epidemic-failure scenario?

When a driver buys a car, the law typically obliges the manufacturer to ensure the door or ignition locks work to an appropriate standard, and it will likely apply similar tests to the protections surrounding a computer system in a car. It will not be as simple as saying that any unauthorised hacking generates a liability for the car manufacturer; rather, responsibility will likely arise where the protections put in place to prevent it are not appropriate. That appropriateness will be measured by factors such as the cost of the vehicle, the cost of the solution and the impact of failure.

The law will expect manufacturers to have invested significantly in protections and will likely look to a mix of manufacturers, insurers and drivers to allocate the cost of that liability as the connected car market matures. The government continues to support autonomous vehicle technology, and with leading technology companies pioneering driverless systems, it looks likely that driverless cars could appear on UK roads by 2020. The Modern Transport Bill, detailed in the Queen's speech, is expected to be discussed in the coming months and should answer questions around potential liability for connected vehicles and how they will be regulated in the future. Manufacturers, service providers and consumers need to ensure that they understand the legal developments around this technology in order to be prepared for the legal responsibility these vehicles could present.