THE advent of artificial intelligence capable of writing convincing product and restaurant reviews could usher in a new era of fake news, researchers warn.

Bots could be programmed to undermine online review platforms and launch a torrent of malicious "crowdturfing" attacks.

So-called crowdturfing takes its name from a combination of crowdsourcing and astroturfing. The former is the well-known online tactic of getting lots of people to do a small thing to achieve a big goal, while astroturfing is internet parlance for an orchestrated campaign disguised as unsolicited comments from members of the public. For instance, an army of netizens might swamp the reviews of a business they don't like, damaging its online reputation.

It has become a common tactic for disseminating information, or more often disinformation, with a particular agenda in mind, and has been used by Big Tobacco and the fossil fuel industry to warp online debates.

For the time being it is the work of co-ordinated people online, but the rise of AI bots that can generate automated content means such techniques could be vastly scaled up, effectively amounting to a new kind of cyber attack, says Prof Ben Zhao from the University of Chicago.

A long-time cyber security researcher, Prof Zhao says the threat is unusual. "One of the interesting things about this type of attack is that it's unlike the traditional kinds of automated attacks that people think of," he told news.com.au.

As a result, “there are very few defences in today’s security realm against this,” he said.

While AI-generated crowdturfing is not thought to have become commonplace, Prof Zhao believes that, if it can't be successfully mitigated, it has the potential to undermine public trust and hurt businesses that rely on legitimate customer reviews.

Prof Zhao is part of a team of researchers from the University of Chicago who have published a paper showing how AI can be used to generate sophisticated reviews that are not only undetectable by contemporary methods, but are also considered highly reliable by unsuspecting readers.

WHAT AN AI REVIEW LOOKS LIKE

Below is an example of a bot-generated review. Can you tell it has an artificial author?

“I love this place. I went with my brother and we had the vegetarian pasta and it was delicious. The beer was good and the service was amazing. I would definitely recommend this place to anyone looking for a great place to go for a great breakfast and a small spot with a great deal.”

Granted, it's a bit clunky, with a rather strange turn of phrase at the end. And while the lack of specific detail could be a giveaway, anyone scrolling through online reviews would be unlikely to spot the fakery at a glance.

When you think about it in terms of economies of scale, the implications for sites like Yelp, Urbanspoon and Google Reviews, as well as the businesses that rely on them, are staggering.

Yelp app showing reviews for restaurants in Melbourne. Source: Supplied

WHY ARE REVIEW SITES VULNERABLE?

The authentication process for most online platforms, whether social media or review sites, typically involves human verification at initial sign-up. But once the account is created, there is often no further scrutiny of the poster's authenticity.

"So what you can do is very easily have someone online go click a button that says 'I'm not a robot' for very cheap, a one-time cost," Prof Zhao said. "Once you've done that ... now you can generate automated content.

"The bottleneck before was getting someone to actually sit down and generate some sort of thoughtful, meaningful, grammatically correct content that actually made sense relative to the context.

“Often the case would be you would have to pay up handsomely for that,” he said. “That real bottleneck has now been completely bypassed.”

There are certainly defences that can be put in place, such as requiring the input of a complex code before each comment or post. But that is seldom the case, and most defence methods will likely amount to a cat-and-mouse game.

For many commentators, Donald Trump has helped usher in the post-truth era, but legitimate-looking automated reviews could lead to a staggering new form of fake news. Picture: Michael B. Thomas. Source: AFP

To show the potential of AI-generated reviews, Prof Zhao and his team of researchers used a deep learning technique called a recurrent neural network (RNN). The model was trained on thousands of real reviews, which are, of course, freely available online.
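The train-then-sample pipeline the researchers describe can be sketched in a few lines. The paper itself uses a character-level RNN, which needs a deep learning framework; the stand-in below is a much simpler character-level Markov chain (our substitution, chosen only for brevity), but the workflow is the same: feed in real review text, learn which characters tend to follow which contexts, then sample new "reviews" one character at a time. The training text here is a tiny hypothetical snippet, not real Yelp data.

```python
import random
from collections import defaultdict

def train(corpus, order=4):
    """Map each n-character context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, length=120, rng=None):
    """Sample one character at a time, conditioned on the last n chars."""
    rng = rng or random.Random(0)
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # no continuation seen in training data
            break
        out += rng.choice(choices)
    return out

# A real attack would train on thousands of scraped reviews; this short
# made-up snippet stands in for the corpus.
reviews = ("I love this place. The pasta was delicious and the service "
           "was amazing. I love the breakfast and the staff was friendly. "
           "The pasta was great and the service was friendly. ")
print(generate(train(reviews), "The "))
```

Because the toy model only ever emits character sequences it saw during training, its output reads as locally fluent but vague, much like the sample review above; an RNN generalises far better, which is what makes the paper's fakes hard to spot.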

Their paper, entitled Automated Crowdturfing Attacks and Defenses in Online Review Systems, will be presented at the ACM Conference on Computer and Communications Security later this year.

“Misinformation is being used as a tool to harm competitors, win political campaigns, and sway public opinion,” the researchers wrote.

“In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services.”

The paper seeks to show the potential such software has to erode public trust while the technology is still emerging.

"There's a lot of automated content generation, maybe not on Yelp or Amazon review sites, but certainly on things like Twitter, where it is much easier to use a software-controlled bot to basically disseminate information," Prof Zhao told news.com.au.

“I think we’re ahead of it, but again, it’s one of those things where you’re never quite sure,” he said.

TAKE THE TEST

Below are some fake reviews generated by Prof Zhao and his team’s software mixed in with real online reviews about Sydney restaurants. Can you spot the fakes?

1) “This place is amazing! The bartenders are absolutely amazing. The pasta is delicious and I love their pastries and it is amazing. I love the breakfast, friendly staff and the price is very reasonable. I have never had a bad experience here. I will be back for sure!”

2) “DO NOT WASTE YOUR TIME AND MONEY! The absolute worst service I have ever experienced. This place is a joke. The waitress was rude and said she would put the manager to come out but never happened. I wish I could give zero star.”

3) “A disappointing experience. Totally overpriced for what you get, unfriendly and slow service, average food. Definitely not worth the money.”

4) “I opted for the BBQ pork laksa and it was served in a matter of minutes for a measly $12.00. The BBQ pork was delicious and the laksa was piping hot with a slight chilli hit. I also chose 2 types of noodles for my soup because 2 is better than 1. I would have liked a few more veggies but the serving was generous enough.”

5) “I was here for a weekend brunch and the food was OK. I love the pizza that is a chain restaurant. I think the service is excellent. I had the spaghetti and they were very good and the hot dog was good. I got the red velvet chocolate cake special which was very good but the service was a little slow. The food was good, but not up to par with other places nearby.”

Answers: 1, 2 and 5 are fake, while 3 and 4 are real.
