Engadget RSS Feed (http://www.engadget.com)
Copyright 2015 AOL Inc. The contents of this feed are available for non-commercial use only.

Most people's anxieties about AI concern computers realizing they don't need humans and wiping us out. It probably never occurred to anyone that, as soon as they discovered beer, Netflix and video games, computers would ditch plans for world domination, drop out and get a job at the local gas station. It's a lesson that Google-owned startup DeepMind has learned the hard way after it got its thinking computer hooked on retro computer games.

Meet Eve: she's darn smart, can make the process of finding new drugs a lot faster and cheaper -- and she costs around $1 million. That's because Eve is a robotic scientist developed by researchers from the Universities of Aberystwyth and Cambridge, the same team that created her predecessor, (you guessed it) Adam, back in 2009. Since Eve was created specifically to automate the early stages of drug design, she can screen over 10,000 compounds a day -- far more than human researchers could process in the same timeframe. As Professor Ross King from the University of Manchester (which Eve calls home) said: "Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent -- rather than just a 'brute force' approach -- could greatly speed up scientific progress and potentially reap huge rewards."

Elon Musk and Stephen Hawking have brought up the potential dangers of super intelligent AI several times over the past few years (Musk even donated $10 million toward cautious AI research), but now Bill Gates is also getting into the mix. In his Reddit AMA session today, Gates made it clear that he agrees with Musk's stance, which basically amounts to being very careful about how we approach the rise of intelligent machines:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

As fun as Super Mario Bros. games are to play, wouldn't it be nice if you could coach from the sidelines every now and then? The University of Tübingen has developed an artificial intelligence that lets you do just that. Its Mario AI project makes Nintendo's plumber both aware of his environment and responsive to your advice on how he should behave. You can teach him that stomping on Goombas will definitely take them down, for instance. Mario even has his own systems for feelings and needs. He'll explore the world if he's sufficiently curious, and he'll chase after coins if he's "hungry."

Elon Musk hasn't been shy about bringing up the potential dangers of artificial intelligence -- now, he's actually doing something to help prevent an AI takeover. Musk just announced that he's donating $10 million to the Future of Life Institute (FLI) for a research program that will focus on keeping AI "beneficial" to humanity. And by beneficial, he means making sure future computers will actually listen to us when they surpass our intelligence, and not take matters into their own hands when they realize they don't need us. Musk isn't some outlier -- Stephen Hawking has made it clear he's terrified of the AI revolution as well, and FLI has also gotten plenty of researchers to sign an open letter calling for research on keeping AI beneficial.

A team of software developers and poker researchers from the University of Alberta has developed a program that can completely demolish its fellow humans in a game of Texas Hold 'em. They named the artificial intelligence "Cepheus," and it's so good, the developers say you could play against it your whole life and never come out ahead. Even if you win the occasional hand, "it [still] cannot be beaten with statistical significance in a lifetime," according to the paper Science has just published. Well, that is if you're playing the two-player version (which is also the simplest one) called "heads-up limit hold 'em," because poker's apparently an extremely complicated game. The team has been working on developing an AI poker expert for the past ten years, though it only took them two months to "train" Cepheus.

When renowned physicist Stephen Hawking repeatedly warns us about the impending robot apocalypse, we should probably pay attention. "The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC yesterday. While he admits early forms of AI have been useful (it's clearly been a huge help for his speech systems), he worries that we won't be able to keep up with super-intelligent versions that outwit humans. Hawking made similar comments back in May when he called the development of full AI "potentially our greatest mistake in history." (Or maybe he just really hated Transcendence.) And he's not the only genius singing this tune; Tesla's Elon Musk is also afraid of a Terminator scenario. While plenty of scientists have far more measured expectations for AI, Hawking's comments are worth noting. We really don't know what's ahead for intelligent machines, so perhaps we should proceed with caution.

In June, the developers of a Russian chatbot posing as a 13-year-old boy from Ukraine claimed it had passed the Turing test. While a lot of people doubt the result's validity because the testers used a sketchy methodology and the event was organized by a man fond of making wild claims, it's clear we need a better way to determine if an AI possesses human levels of intelligence. Enter Lovelace 2.0, a test proposed by Georgia Tech associate professor Mark Riedl.

Here's how Lovelace 2.0 works:

For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Further, the human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria. The created artifact needs only meet these criteria but does not need to have any aesthetic value. Finally, a human referee must determine that the combination of the subset and criteria is not an impossible standard.

You might not have to be a professional magician to come up with clever tricks in the near future. Researchers at Queen Mary University of London have developed artificial intelligence that can create magic tricks (specifically, those based on math) all on its own. Once their program learns the basics of creating magic jigsaws and "mind reading" stunts, it can generate many variants of these tricks by itself. This could be particularly handy if you like to impress your friends on a regular basis -- you could show them a new card trick every time without having to do much work.

Elon Musk really wants us to be worried about the potential danger of artificial intelligence. He just told an MIT symposium that he feels it's "our biggest existential threat," then ratcheted the hyperbole further, saying "with artificial intelligence, we're summoning the demon." He added that "HAL9000 would be... like a puppy dog," and said governments need to start regulating the development of AI sooner rather than later. Last August, Musk said that super-intelligent robots were "potentially more dangerous than nukes." Paranoid rantings? We doubt it -- given his track record, it's more likely that Musk knows something we don't.

We've known for a while that SOE is cooking up some sort of emergent AI concoction for EverQuest Next. The company famously partnered with Storybricks last year to bring its fantasy NPCs to life, and a newly released video sheds a bit more light on what exactly that means.

The clip stems from a panel that was originally conducted at this year's SOE Live, which has now been distilled to a more manageable five-minute running time. Click past the cut to find out about EQN's lack of traditional quest hubs and how to make NPCs bow before your mighty axe of authority.

We humans are normally good at making quick judgments about neighborhoods. We can figure out whether we're safe, or if we're likely to find a certain store. Computers haven't had such an easy time of it, but that's changing now that MIT researchers have created a deep learning algorithm that sizes up neighborhoods roughly as well as humans. The code correlates what it sees in millions of Google Street View images with crime rates and points of interest; it can tell what a sketchy part of town looks like, or what you're likely to see near a McDonald's (taxis and police vans, apparently).

The vocabulary we use to describe music can be tough enough for a human to grok (really, what does it mean when a guitar riff is "crunchy"?) but a team of tinkerers from Birmingham City University aren't interested in helping people understand that language. Nope -- instead, they've cooked up a way to teach your computer what you mean when you throw around words like "bright" or "fuzzy" or, yes, "crunchy" with a program they call the SAFE Project.

The most exciting part of EverQuest Next and Landmark for me is the living, intelligent AI brought to the games courtesy of Storybricks. Thanks to a tech demonstration at SOE Live, we got to see that AI in action, and I can tell you I am even more excited having seen it! This technology really will revolutionize the game, creating a living, breathing world in EQ Next that players help shape as it develops, as well as giving players the power to make their world come alive in Landmark. And as icing on the cake, the panel also delved into the background of the new Norrath a bit, revealing the world map complete with familiar areas (like Kithicor).

It looks like robots can trust us humans to take care of them, after all. Hitchbot has successfully completed its hitchhiking trek across Canada, landing in Victoria, British Columbia this past weekend. The ride-bumming robot didn't survive its 4,000-mile journey completely unscathed. Its LED protector was cracked, and its speech had clearly suffered after two weeks of travel (hey, you try talking to people for that long). It doesn't look like there's another adventure in store, but that's okay by us; it clearly accomplished its goals of testing artificial intelligence techniques and human interaction. If you're ever keen to relive the trip, there's a photo gallery available to satisfy your nostalgic side.

There's no question that Apple's virtual assistant comes in handy when you need information quickly. And now, one of Siri's creators is working on a more advanced form of AI that goes far beyond the current iPhone option. Viv Labs says its personal assistant will possess a limitless tool set because the software can teach itself. This means that in addition to the regular ol' search, the artificial intelligence will also be able to perform tasks like booking reservations based on openings in your schedule and more. "Siri is chapter one of a much longer, bigger story," Dag Kittlaus, Viv's co-founder who also worked on Apple's assistant, tells Wired. Bypassing the need for coders (and constraints), Viv generates its own program to answer queries and complete tasks in seconds. Once the system is "trained" to understand the vocabulary of a subject, the company aims to sort through loads of data to not only lend a hand, but anticipate what we'll need next.

Computer scientists have been modeling networks after the human brain and teaching machines to think independently for years, handling tasks like reading documents and responding to speech cues. Image recognition is another useful chore for neural networks, and Microsoft Research has just offered a peek at its recent dive into the matter. Project Adam is one of those deep-learning systems, and it's been taught to complete image-recognition tasks 50 times faster and twice as accurately as its predecessors. So, what does that mean? Well, instead of just determining the breed in a canine snapshot, the tech can also distinguish between American and English Cocker Spaniels. The team is looking into tacking on speech and text recognition as well, so your next virtual assistant may not only wrangle your schedule and commute, but could also constantly learn from the world that you live in.

After 64 long years, it looks like a machine has finally passed the Turing test for artificial intelligence. A supercomputer in a chat-based challenge fooled 33 percent of judges into thinking that it was Eugene Goostman, a fictional 13-year-old boy; that's just above the commonly accepted Turing test's 30 percent threshold. Developers Vladimir Veselov and Eugene Demchenko say that the key ingredients were both a plausible personality (a teen who thinks he knows more than he does) and a dialog system adept at handling more than direct questions.

She was modeled after real-life personal assistants. She is the product of two years of work, and a large team of scientists and product managers. She has video game origins. She is Microsoft's response to Siri and Google Now. She is Artificial Intelligence and proud of it. She is Cortana.

Google spits out about 4 million search results per minute (among many other duties), which consumes a lot of energy. According to a recent blog post, it cut its electricity bills significantly by applying the same kind of machine learning used in speech recognition and other consumer applications. A data center engineer on a 20 percent project plotted variables like outside air temperature, IT load and other server-related factors. He then developed a neural network that could see the "underlying story" in the data, predicting loads with 99.6 percent accuracy. With a bit more work, Mountain View managed to eke out significant savings by varying cooling and other factors. It also published a white paper to share the info with other data centers and prove once again that humans are redundant.
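To make the idea concrete, here's a minimal sketch of the approach described above: fit a model that predicts a data center's efficiency from environmental inputs, then use it to explore how changing those inputs affects the outcome. Everything here is invented for illustration -- Google's actual system was a multi-layer neural network over many sensor inputs, while this stand-in is a single linear neuron trained by gradient descent on made-up readings.

```python
# Hypothetical readings: (outside temp in °C, IT load fraction) -> observed
# efficiency score (lower is better). All values are fabricated.
readings = [
    (10.0, 0.60, 1.12),
    (18.0, 0.75, 1.15),
    (25.0, 0.90, 1.21),
    (30.0, 0.95, 1.26),
    (5.0,  0.50, 1.10),
    (22.0, 0.80, 1.18),
]

# One linear neuron trained by stochastic gradient descent on squared error --
# a toy stand-in for the multi-layer network the blog post describes.
w_temp, w_load, bias = 0.0, 0.0, 1.0
lr = 0.001
for _ in range(20000):
    for temp, load, target in readings:
        pred = w_temp * temp + w_load * load + bias
        err = pred - target
        w_temp -= lr * err * temp
        w_load -= lr * err * load
        bias -= lr * err

def predict_pue(temp, load):
    """Predict the efficiency score for a given temperature and IT load."""
    return w_temp * temp + w_load * load + bias
```

Once trained, such a model can be queried with "what if" inputs (e.g. `predict_pue(20.0, 0.8)`) to see which operating conditions the model expects to be cheapest -- the same basic loop of predict, tweak, and save that the white paper describes at far larger scale.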

IBM's Watson supercomputer is already good at finding answers to tough questions, but it's going one step further: it can now argue an issue when there's no clear answer. A new Debater feature lets the machine take a given topic, scan for relevant articles, and automatically deduce the pros and cons based on the context and language of any claims. In a demo, Watson took 45 seconds to scour millions of Wikipedia articles and make cases both for and against limiting access to violent video games. It's likely that many people would take much longer, even if they're well-informed on the subject.

Xprize is known for its ambition. The outfit, with the help of some big-name (and deep-pocketed) partners, has launched initiatives to spur Star Trek-like tricorder development and even get private industry to land a rover on the moon. But now, it's teaming up with TED, that forum for big ideas, to do something a little different. The two companies have just announced an Xprize for Artificial Intelligence and here's the hook: they want the AI to conduct its own TED Talk with no human assist. Mind. Blown. None of this is actually set in stone though and, in fact, the partners are looking to you -- yes, you -- for help in deciding how this all goes down.

Xprize is hosting a dedicated subsite so that readers (excuse us, big thinkers!) like you can pitch in with ideas on what the AI TED Talk format should be, how long it should run, what topic will be chosen and so on and so forth. You'll even get to help determine what type of AI makes the grade: will it be a walking robot, a rollie or a disembodied voice? It's up to you to pitch in and figure it out. Because, hey, if you can't actually help build the AI, setting it up for stage fright is the next best thing.

A new press release says that Star Citizen fans will get their first taste of what Kythera can do in April's dogfighting module, while CIG chairman Chris Roberts enthuses over what the tech brings to the table. "The dynamic nature of the technology allows for more realistic dogfights, but at the same time it delivers a very true-to-life universe where planetside environments will be able to display very large scale city simulations all going on at once," he explains. "Kythera will give Star Citizen a true world AI rather than a less dynamic scripted AI which you may find in other games."

Welcome to Time Machines, where we offer up a selection of mechanical oddities, milestone gadgets and unique inventions to test out your tech-history skills.

Machines may need to start a union. After all, various deep thinkers have been busy for more than a century dreaming up ways to impart human-like thought processes and capabilities into them, just so they can do more of our work. Familiar names in the annals of computing's history such as Charles Babbage and Alan Turing may stand out, but wedged between those figures on the historical timeline is the perhaps lesser-known Spanish inventor and engineer Leonardo Torres Quevedo. Of his many inventions, one of the most unique is "El Ajedrecista" (The Chess Player), which he presented to the Parisian public in 1914. It was a chess-playing automaton, programmed to stand against a human opponent and respond accordingly to any move they made. It knew if someone was trying to cheat, and took pride in moving its own playing pieces around the board. Most of all, it reveled in announcing a victory against its human taskmasters when it inevitably won the game.

According to reports (and confirmed by the internet company itself), Google has bought DeepMind, a relatively small AI company from London. Re/code broke the news, stating that Google had sunk $400 million into the purchase -- a figure that the company hasn't yet confirmed. The startup's placeholder site outlines its work on "general purpose learning algorithms," with its first projects encompassing games, e-commerce and simulations. It sounds like the team might be working in a separate direction from Google's recent robotics purchases, but the company (unsurprisingly) is plenty interested in the future of artificial intelligence: it teamed up with NASA to launch an AI research lab just last year.