Fractl sees it as a threat to the integrity of its trade

Although much has been written on how recent strides in text-generation technology could bring a new scale to fake news proliferation, less attention has been paid to its more banal potential to facilitate the mass production of run-of-the-mill spam.

Content marketing agency Fractl is seeking to spark a conversation around this risk with a fake marketing blog generated almost entirely by artificial intelligence. The site is a passable imitation of typical branded “thought-fluencer” fare, complete with AI-generated author headshots and evergreen, SEO-friendly headlines.

Skim the text below the blog titles, and it’s not immediately obvious the posts are the work of a machine. Grover, the Allen Institute research tool that Fractl used to author the text, is designed to turn a prompt into realistic-sounding copy tailored to the style of specific domains or authors. A more thorough reader will probably notice that the tangles of buzzword-laden snippets, taken as a whole, are more or less gibberish.

But that ultimate incoherence doesn’t mean the content can’t have an insidious effect.

Kristin Tynski, co-founder and SVP of creative at Fractl, who spearheaded the project, worries that the measures search engines like Google have in place to prevent SEO abuse might not be attuned to more advanced spamming technology like AI.

“From my point of view, this is sort of a new era of risk for Google and other search engines—and the internet as a whole,” Tynski told Adweek. “It seems like the beginning of like a new arms race between Google and content spammers.”

'Barry Tyree' is one of many fake AI-generated thought leaders featured in Fractl's demo. (Image: Fractl)

While there have yet to be any notable known instances of bad actors using tools like Grover for spamming purposes, this dynamic, context-savvy natural-language-processing technology is still in its early stages. One of its most high-profile breakthroughs came in February, when research group OpenAI unveiled GPT-2, a text-generation tool so advanced, its makers said, that it was too dangerous to release publicly. Grover iterated on that basic framework with settings to tailor text to specific styles.

Despite these advances, AI-generated content remains fairly easy to suss out with the right tools, according to Rowan Zellers, a University of Washington Ph.D. student who co-created Grover. One of those tools is Grover itself, which doubles as a fake news detector with a higher accuracy rate than many of the systems that came before it.

Still, Zellers said in an email to Adweek, one of the hardest kinds of content for Grover to discern is the output of financial content mills like AmericanBankingNews.com, which likely use autofilled post templates to mass-produce copy.
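The template-filling approach Zellers describes can be illustrated with a short sketch. Everything below is hypothetical: the template wording, company names, tickers, and figures are invented for illustration and are not drawn from any real site.

```python
import itertools
from string import Template

# A made-up post template of the kind a financial content mill might
# auto-fill; every name and number here is fabricated for illustration.
TEMPLATE = Template(
    "$bank ($ticker) shares $direction $percent% in $session trading "
    "after analysts at $firm $action their price target."
)

banks = [("Acme Bancorp", "ACME"), ("Globex Financial", "GBX")]
moves = [("rose", "2.1"), ("fell", "1.4")]
analyst_calls = [("Example Capital", "raised"), ("Sample Securities", "cut")]

# Every combination of fields yields a superficially distinct post,
# so 2 x 2 x 2 slots produce eight articles from one template.
posts = [
    TEMPLATE.substitute(
        bank=bank, ticker=ticker, direction=direction,
        percent=percent, session="Monday", firm=firm, action=action,
    )
    for (bank, ticker), (direction, percent), (firm, action)
    in itertools.product(banks, moves, analyst_calls)
]

for post in posts:
    print(post)
```

Because the combinatorics multiply, a handful of extra slots or value lists scales output into the thousands, which is why such copy is repetitive in structure even when no two posts match word for word.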

Text-generation tools themselves aren’t inherently nefarious—news organizations like the Washington Post and Associated Press have used similar technology to automate the creation of simple stories like sports outcomes and earnings reports—but the question is how much value the specific piece of content actually provides beyond boosting the search ranking position of its ostensible authors.

“Where it’s dangerous is when AI is used to create content that’s of questionable value or zero value,” Tynski said. “But is there a future where AI can mass-create content that is of value? I mean, that’s a crazy sort of existential question. When it can do it that well and create web content of any type, what does that say for humanity?”

Patrick Kulp is an emerging tech reporter for Adweek. He covers creative innovation, artificial intelligence and the future of 5G, which culminated in a cover story on AT&T and Verizon's strategic plans for 5G. Patrick holds degrees in economics and political science from UC Santa Barbara. He previously worked as a business reporter for Mashable.