If I’m involved in an accident while riding in an autonomous vehicle that I wasn't actively driving, who is at fault? Was I meant to have my hands on the wheel?

Will we ever again be allowed to control our cars directly? What will insurance premiums look like in the future? And what about those nutcase hackers?

Do I need to know anything about how this thing works, or can I just climb in and start watching the latest episode of Beyond 2000?

How are police going to make up all the revenue when speeding fines don’t exist anymore?

This is just a small sample of the questions posed by observers here in the ancient past, regarding our inevitable transition to autonomous commuting. It is inevitable, by the way.

Given that it’s the future we’re discussing, not all of these questions and concerns can be answered or resolved right now - really, there’s an amount of crystal-ball gazing being done here by punters and experts alike.

And, while many seem certain that we’re bound for a life of robot-led slavery and an endless stream of hacked cars careening off into the trees, the reality right now is that most of us don’t truly know what to expect - and those who have a good idea, the engineers and ‘human factors’ teams designing our motoring future, are still nutting out the details.

For some onlookers, confronted each week by experimental driverless prototypes with different levels of autonomy, there’s also an interpretation barrier.

What should people even expect from a driverless car? More to the point, what do they want from one?

Speaking with CarAdvice at this week’s International Driverless Cars Conference in Adelaide, human factors researcher Professor Regan said that, for most users, an assurance of reliability will be the key factor.

“They need to know that when they activate the automation, they can sit back and relax and do whatever they want,” he said.

“If they can’t do that, if they have to spend most of their time monitoring the environment, then they are actually going to be consistently distracted by the very thing the car is designed to free them up from.”

Why would a driverless car user need to “monitor the environment” at all? What does ‘autonomous car’ even mean?

Most carmakers, along with America’s National Highway Traffic Safety Administration and the Society of Automotive Engineers, broadly agree on a scale of driving automation that, beyond Level 0 (‘no automation’), comprises five distinct levels:

Level 1: Systems as relatively simple and familiar as Electronic Stability Control, through to the still quite new Active Cruise Control technology featured in many of today’s cars, are already considered a form of autonomous technology. That’s right: many of us have partially automated cars right now, but we’re expected to maintain complete control.

Level 2: Some very new vehicles can now also offer a combination of autonomous functions, such as basic lane-keeping autonomous steering in combination with active cruise control. But, again, it’s a hands-on-the-wheel affair. No kicking back with an episode of Knight Rider.

Beyond level 2, changes to legislation - such as those proposed recently in South Australia - will be required before private owners can access the more advanced systems being tested by many carmakers today.

Level 3 represents full automation of all systems, without driver control, in certain situations. Driver intervention is still required often, however, and the technology can only be used on freeways.

Level 4 also allows for full automation, again in certain situations, but almost entirely without the need for driver intervention. At this level, the vehicle is intelligent enough to work its way through most or even all potential situations in areas where autonomous driving is allowed.

Level 5, full driverless automation, is the future that carmakers are working towards. At that point, all new infrastructure has been designed with autonomous vehicles in mind, and human control will never be required - and may even be outlawed on public roads, restricted instead to race tracks and special new driving parks.
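The taxonomy above can be captured in a small lookup. This is a purely illustrative sketch - the enum names and the monitoring flags below are this article’s summary in shorthand, not any official SAE or NHTSA coding:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The five levels described above (illustrative shorthand only)."""
    DRIVER_ASSISTANCE = 1  # e.g. stability control, active cruise control
    PARTIAL = 2            # combined steering + cruise, hands on the wheel
    CONDITIONAL = 3        # full automation in limited settings (freeways)
    HIGH = 4               # rare intervention, within approved areas
    FULL = 5               # no human control required

# Whether the human is still expected to monitor the road at each level,
# per the descriptions above.
DRIVER_MUST_MONITOR = {
    AutonomyLevel.DRIVER_ASSISTANCE: True,
    AutonomyLevel.PARTIAL: True,
    AutonomyLevel.CONDITIONAL: True,   # must be ready to take back control
    AutonomyLevel.HIGH: False,         # only rarely, where allowed
    AutonomyLevel.FULL: False,
}
```

The crossover point is plain to see: only at levels four and five does the driver’s attention genuinely become optional.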

People have been told to look forward to level five, that fully autonomous future, but most of the systems revealed recently are of the ‘highly automated’ type - levels three and four - which will still rely on interaction with and regular control by the human operator behind the wheel.

That’s where we arrive at the question of liability. If your vehicle were fully autonomous, there would be no question as to who is responsible in the event of an accident. Or, rather, it would be clear who isn’t responsible: you. The car’s systems may have failed, in which case it’s the manufacturer at fault, or external factors - such as a catastrophic infrastructure failure or the actions of a human driver in another car - may also be the cause. Again, you’d be without blame.

In a partially- or highly-automated vehicle, the lines appear blurred, but the vehicle’s own telematics and the monitoring systems external to the vehicle would in most cases provide clarity. Did you fail to take back control when the vehicle asked you to? Did you ignore visual and audible warnings? Did the vehicle itself fail in a situation that it should have had control over? Did another vehicle collide with you in such a way that neither you nor your vehicle could have avoided the crash?

The telematics and monitoring functions of your car, recording every aspect of your drive, will almost always have the answer. When they don’t, external data - cameras, witnesses, crash scene evidence - will likely complete the picture, just as it does today.
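That chain of questions amounts to a simple triage procedure. As a hypothetical sketch - every field name here is an assumption for illustration, not any real telematics format:

```python
def liable_party(log: dict) -> str:
    """Toy liability triage over a hypothetical telematics log.

    The log fields are invented for illustration; a real investigation
    would draw on far richer recorded and external evidence.
    """
    if log.get("unavoidable_external_collision"):
        return "other party / external factors"
    if log.get("takeover_requested") and not log.get("driver_took_over"):
        return "driver"            # ignored the handover request
    if log.get("warnings_ignored"):
        return "driver"
    if log.get("system_failed_in_scope"):
        return "manufacturer"      # car failed where it should have coped
    return "undetermined - needs external evidence"

print(liable_party({"takeover_requested": True, "driver_took_over": False}))
# driver
```

The point of the sketch is only that each of the questions above maps to recorded evidence, which is why the blurred lines may prove sharper in practice than they first appear.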

As Professor Regan tells it, levels three, four and five are where most potential users want to be, with the latter two hinting at a utopia of cereal-munching and breakfast TV-watching on the commute.

“The sort of things that people are now wanting to do are things that have nothing to do with cars. And they don’t want to be intermittently monitoring what’s going on outside the vehicle, but I think that, sadly, that’s going to be part-and-parcel of life until we get to the stage where these cars are fully automated,” Professor Regan said.

“It’s not all doom and gloom. I think that what’s going to happen is that, as Volvo is doing in Sweden, cars will be developed that can function reliably [autonomously] all of the time, but only under certain operating conditions - on the freeway, let’s say. So I think that people will get used to being able to drive autonomously there, I think they’ll develop trust pretty quickly, and they’ll know that for certain sections of the road that it’s unlikely that they’ll have to intervene.”

The flip side to that, and the aspect that Professor Regan is most focused on, is ‘human factors’: the question of how people use new technology, what they expect from it, and what their interaction with the technology will lead to.

In the context of autonomous vehicles, one major concern is that users may put too much faith in a partially- or highly-automated car, becoming overly reliant on the technology and overestimating what it can do.

So, while much of the talk is devoted to giving potential users reasons to trust autonomous vehicle technology, researchers like Professor Regan are just as concerned with over-trust.

“Over-trust can be a problem if you do have too much trust in the technology, to the extent that if, when we’re going through an interim period [in the technology’s advancement] and there is a requirement to take over control occasionally, you become so complacent that you forget about the need to take over, or this idea that if you don’t take over, if you ignore the warnings, the car will do it all.”

Professor Regan said that, just as a surface-level understanding of your shiny new smartphone can leave you open to trouble, users of modern vehicles - not just the fully autonomous cars of the future - will be best served by knowing what they should and shouldn’t expect from the technology.

“I know a fellow who bought a car that’s startled him three or four times, because he doesn’t know what the bells and whistles are doing when they go off, and they’re distracting him to the point where he’s nearly had two crashes,” he said.

“So we need to make people understand how the systems work, the conditions under which they’re likely to see warnings or actions.”

From a ‘human factors’ perspective, Professor Regan said, the success of autonomous vehicles will depend a great deal on people knowing the limitations of the systems in their car.

“One of the things we do know is that a lot of people have reverse collision warning systems [usually in the form of rear parking or cross-traffic sensors], and they think they’re capable of detecting small children running behind the car, but they’re not - not well enough.” (Bosch Australia has recently demonstrated a prototype version of such a system, including automatic braking.)

Whose job is it to educate users? Professor Regan would like to see manufacturers offer an introductory tour of a vehicle’s features when the owner first uses it, just as happens when you boot up a new phone or install a new mobile app.

“It’s a bit like… when an aircraft company sells an aircraft to an airline, they’re obliged to provide the airline with a training program. And I think that if a manufacturer is selling a car, I think the education has to occur at that level. I’d like to be able to hop into a car and press a button and it brings up in the display a walkthrough of the systems; something of a virtual tour of the systems.”

Despite concerns that the gap between autonomous technology and its optimal usability is yet to be closed, Professor Regan said he “can’t wait to have one”.

“Initially, I’ll be one of the types that only uses them under certain conditions, until I develop a trust - which is why I like the approach that Volvo is taking in Europe, testing the technology on the periphery of Gothenburg, which they know has the lowest crash rate of any road on the system. Virtually no crashes.”

“Although a colleague here this week, Dr Trent Victor at Volvo, was telling me that a horse and cart found its way onto that road recently…”