A lot of interesting speakers have graced the stage at TED, a conference where visionaries discuss innovations in technology, entertainment and design. But for all its focus on the future, there's never been a robot speaker at the event.

That could soon change. In March 2014, TED and XPrize announced a contest challenging researchers to develop an artificially intelligent agent capable of delivering a convincing TED Talk. Just over a year later, one programmer (almost) has a submission, though he calls it a "hilariously comedic" one.

Samim Winiger, the CEO of the German video game company 2Beats, compiled more than 1,900 TED Talks (the equivalent of more than 4 million words) to train a bot to write one. The software scans all those talks, learns their general structure and generates its own, letter by letter. Once the talks were "written," Winiger ran them through a text-to-speech synthesizer so the bots could deliver their own presentations.

The result, Winiger revealed last week, was three different "speakers," which he dubbed Jürgen TEDhuber, Ada LoveTED and Isaac TEDimov. Their orations are nowhere near the level of a human TED Talk; for the most part, they're pretty nonsensical. Take, for instance, one bot's thoughts on technology:

The real problem with the death for the universe is the predictions of the size of the other. There’s not a problem. They are manufactured by governments to be a choice. So we should really get money at the summit of a male. The reality is that there is a problem, and the fish have learned. They have been tracked in the world — but we do something to possible change the next genes between a lot of books that we might never read.


The grammar is sloppy in places, and although some sentences make sense, the overall message is illogical. Winiger's bots wouldn't pass for human, but they're a start. You can watch Jürgen, Ada and Isaac deliver their robo-musings on the future of technology, entertainment and design below:

Good afternoon. God bless you. The United States will step up to the cost of a new challenges of the American people that will share the fact that we created the problem. They were attacked and so that they have to say that all the task of the final days of war that I will not be able to get this done.


The point of both projects, the TED bot and a companion bot trained on President Obama's speeches, isn't to win a fancy AI competition, Winiger told me. Both rely on a program called char-rnn, which learns to predict the next character in a sequence after "reading" a large body of text. (He hasn't decided whether he'll submit his program for XPrize consideration.) Instead, he wanted to spark a public debate over what "intent" means in the context of an intelligent program by exploring the interaction between artificial intelligence, art, comedy and politics.
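char-rnn itself is a recurrent neural network written in Torch, but the core idea, predicting the next character from the characters that came before it and sampling one character at a time, can be illustrated with a much simpler stand-in. The sketch below uses plain frequency counts over fixed-length contexts (an n-gram model, not an RNN) purely to show the generate-letter-by-letter loop; the corpus and function names are illustrative, not from Winiger's code.

```python
import random
from collections import defaultdict, Counter

def train_char_model(text, order=4):
    """Count which characters follow each `order`-length context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=200, order=4):
    """Sample one character at a time, weighted by observed frequency."""
    out = seed
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:  # unseen context: stop generating
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

# Toy corpus standing in for the 4 million words of TED transcripts.
corpus = "the future of technology is the future of the world. " * 20
model = train_char_model(corpus)
print(generate(model, "the "))
```

Trained on a real corpus, even this crude model produces locally plausible but globally incoherent text, which is roughly the failure mode visible in the bots' "talks" above; the RNN in char-rnn simply captures much longer-range structure than a four-character window can.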

On Twitter, for instance, people debated whether the program's robo-TED Talks were actually comedic, since, as Alex Champandard points out, comedy requires intent.


"There are these great discussions that come from these experiments…When an AI writes something, whose intent is it? Is it the algorithm's intent? Is it Obama's? The answer is all over the place, which is lovely," Winiger said. "Art—AI art, if you want to call it that—is pretty much the best way to explain these very deep concepts to a more general public."

This, of course, isn't new. Ever since Elon Musk and Stephen Hawking expressed their concerns over killer robots taking out the human race, public discussions over the evolution of AI intent and how to control it have become commonplace. What happens when an AI's intent conflicts with human interests? Some say we could literally pull the plug to evade a robo-apocalypse. Others think AIs could outsmart us before we're able to disconnect them.

Winiger's bots are just toy versions of the more sophisticated technologies out there, but he says they can help people understand AI's potential applications and implications better than even the best news articles on the subject. He hopes that his nonsense-spewing bots will, ironically, make public debates over AI more rational and better informed.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.