Transcript

KIM LANDERS: Artificial intelligence is part of our daily lives. Voice recognition systems like Alexa, Siri and Google Home are just a few simple examples.

It's developing at a tremendous pace - but is it possible to trust AI?

Australia's chief scientist wants governments and businesses to develop some sort of regulation or ethical stamp for artificial intelligence.

Dr Alan Finkel is calling it a 'Turing certificate' after the famous English mathematician Alan Turing, whose life was depicted, you might remember, in the movie The Imitation Game.

It's an idea he will expand on during a lunchtime speech in Sydney today for the Committee for Economic Development of Australia.

But ahead of that, he joins me in our studio.

Dr Finkel, good morning.

ALAN FINKEL: Good morning, Kim.

KIM LANDERS: Artificial intelligence is already with us. So how are we now going to try to put some ethical guide rails around it?

ALAN FINKEL: Well, artificial intelligence is not only with us but it's becoming ever more pervasive.

What brought it home very recently was Google demonstrating a new product called Google Duplex, which is a digital voice assistant. And it's quite remarkable in its naturalness.

KIM LANDERS: It sounds like a human?

ALAN FINKEL: It sounds like a human. So instead of being: "Good morning, Dave" from a movie, it calls up and it says: "Hi. I'm calling to see if I can, hmm, maybe make a booking for a restaurant this evening, perhaps at 7pm?"

That sounds pretty natural. You won't know, if you're the person taking that phone call, who you're actually speaking to.

So that raises all the questions about: what are our expectations from the companies who are providing these products?

That's just a very visible illustration.

So what I've been giving a lot of consideration to of late is the spectrum of regulations that will assist us in integrating AI into human society. Sometimes I refer to AI and HI: human intelligence and AI.

Ultimately, what we need is for those two societies to play nice and get along.

KIM LANDERS: But if you've got big tech companies like Google, Facebook and Amazon pursuing AI, the same companies that have been responsible for massive privacy breaches in recent times, how on earth is any government or authority going to be able to issue these certificates or assess whether a company or product is using AI ethically?

ALAN FINKEL: It's a huge challenge but we've got examples that have been developed over decades, working with other products that are pervasive in our society.

Think about your electric kettle. You're quite confident that when you switch on the plug, touch the cord and pick up the kettle, you're not going to get electrocuted.

It's not because you've read a 20-page statement from the manufacturer. It's because on the base of that kettle there's a mark, called a CE mark.

KIM LANDERS: But if you want one of these similar trust marks on AI products, who is going to be the authority that issues that?

ALAN FINKEL: So there needs to be an authority and it could be one of the existing authorities.

So the CE mark is issued because you met the ISO (International Organization for Standardization) 9000 test. And that test is done by companies in Germany like TÜV (Technical Inspection Association) and by other companies in Australia. And they test not just the final product but the whole process: concept, design, manufacturing, the test process; and then the product.

Now, you as a consumer do not have to understand anything about what they're testing. But they go in. They audit the companies. The companies have to work very, very hard to qualify to get the CE mark put onto an electronic product. And you know that you are protected by that stamp.

KIM LANDERS: So if you're using something similar for AI: I mean, isn't the point that AI is really smart; could outwit humans, outwit our attempts to regulate it?

ALAN FINKEL: Well, that's right. That's why you can't just do this certification based on the final product that has been made by the company.

You have to look at the design process to ensure that principles of appropriate - let me call it behaviour - for the AI will be designed into the product, not just as an afterthought.

KIM LANDERS: If we look at the US, for example, it's had decades of public and private investment in this space. China is pouring billions of dollars into AI research and development.

And yet the recent federal budget had just $30 million in it to boost this country's AI capability. Isn't that a bit of a puny contribution?

ALAN FINKEL: Look, it is small. But think of that as a down-payment on perhaps where we are going to go.

We're unlikely to be able to compete to be the world leader in AI, but we need to be a significant player.

Part of that $30 million will enable CSIRO - through Data61, one of its divisions - to develop a bit of a road map on AI. And they've already done a lot of work on that.

The funding around the world is just huge. You mentioned a few countries. France has committed about $2 billion over the next five years to AI; China much, much more than that. But one company, Alibaba in China, claims that they will put $US13 billion into AI development work.

KIM LANDERS: That's what I mean. Australia's contribution is pretty tiny. So are you advocating that the Federal Government should be chipping in more, for example?

ALAN FINKEL: Well, it's a corporate and government expectation.

I think what the Federal Government is going to do with its $30 million is stimulate a number of companies, through what's called the CRC-P (Cooperative Research Centres Projects) program, to start upping their investment in AI. And it will get a road map as to where we need to go.

KIM LANDERS: All right. Dr Alan Finkel, it's very interesting. Thank you very much for speaking with AM this morning.
