A Russian chatterbot named "Eugene Goostman" has become the first to pass the Turing Test – an assessment of machine intelligence first proposed in 1950 by visionary mathematician, logician and codebreaker Alan Turing – by convincing 1 in 3 judges that it was a 13-year-old non-native-English-speaking Ukrainian boy.

Here's What Went Down:

"Eugene" and four other computerized contenders took part Saturday at the Turing Test 2014 Competition at the Royal Society in London. Each chatterbox was required to engage in a series of five-minute text-based conversations with a panel of judges. The rules stipulate that a computer passes the test if it is mistaken for a human more than 30% of the time. Eugene managed to convince 33% of the judges it was human, the only machine-contender at the competition – indeed, if the event's independent verifiers are to be believed, the only machine-contender in history – to do so. A veteran of the Loebner prize and the Chatterbox challenge, Eugene also took first place at the 2012 Turing Test, though it only duped 29% of that year's judges.

On a poignant note, the competition was held on the 60th anniversary of Turing's death, less than six months after he was granted a posthumous royal pardon for a 1952 conviction of "gross indecency," the standard criminal charge at the time for homosexuality.

What It Means (Important Considerations):

In what can be interpreted as either brilliant in its deviousness or exploitative in its disregard for the spirit of Turing's originally proposed test, Eugene's creators essentially kludged their way to victory by having the bot pretend to be a 13-year-old, non-native-English-speaking Ukrainian. As Eugene's creator Vladimir Veselov put it, "our main idea was that [Eugene] can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything." Is it fair? Technically. But it's not the least bit impressive in a cognitive sense. Which brings us to:

The chatbot is not thinking in the cognitive sense; it's a sophisticated simulator of human conversation run by scripts.
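To make concrete what "run by scripts" means: chatbots in this lineage descend from ELIZA-style pattern matching, where an ordered list of rules maps regular-expression triggers to canned replies. The sketch below is purely illustrative – Eugene Goostman's actual scripts are proprietary and far more elaborate – and every pattern and response here is invented for the example.

```python
import random
import re

# A toy, ELIZA-style script: ordered (pattern, responses) rules.
# Invented for illustration; not Eugene Goostman's actual script.
SCRIPT = [
    (r"\bmy name is (\w+)", ["Nice to meet you, {0}!"]),
    (r"\b(hi|hello|hey)\b", ["Hello! How are you?"]),
    (r"where .*you from", ["I live in Odessa. Have you ever been there?"]),
    (r"\?$", ["Hmm, tricky question. What do you think?",
              "I'm only 13, I don't know everything!"]),
]
FALLBACKS = ["That's interesting. Tell me more.",
             "My guinea pig could say the same thing about you."]

def reply(user_input: str) -> str:
    """Return the first scripted response whose pattern matches."""
    text = user_input.lower().strip()
    for pattern, responses in SCRIPT:
        match = re.search(pattern, text)
        if match:
            # Interpolate any captured groups into the canned reply.
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)
```

Note the trick baked into the persona: when the script has no good match, it deflects with vague or childlike fallbacks – exactly the cover story a 13-year-old non-native speaker provides.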

In other words, this is far from the milestone it's been made out to be. That said, it is important, because it supports the idea that we have entered an era in which it will become increasingly difficult to distinguish chatbots from real humans.

"Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime [and the] Turing Test is a vital tool for combatting that threat," said competition organizer Kevin Warwick on the subject of the test's implications for modern society. "It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true...when in fact it is not."