Posted by BeauHD on Friday March 18, 2016 @08:26PM
from the the-demise-of-humans dept.

schwit1 writes: In reaction to the recent Go victory by a computer program over a human, the government of South Korea has quickly accelerated its plans to back research into artificial intelligence, with a commitment of $863 million and the establishment of a public/private institute. According to Nature.com, "It is not immediately clear whether the cash represents new funding, or had been previously allocated to AI efforts. But it does include the founding of a high-profile, public-private research center with participation from several Korean conglomerates, including Samsung, LG Electronics and Hyundai Motor, as well as the technology firm Naver, based near Seoul. The timing of the announcement indicates the impact of AlphaGo, which two days earlier wrapped up a 4-1 victory over grandmaster Lee Sedol in an exhibition match in Seoul. The feat was hailed as a milestone for AI research, but it also shocked the Korean public, stoking widespread concern over the capabilities of AI, as well as a spate of newspaper headlines worrying that South Korea was falling behind in a crucial growth industry. South Korean President Park Geun-hye has also announced the formation of a council that will provide recommendations to overhaul the nation's research and development process to enhance productivity. In her March 17 speech, she emphasized that 'artificial intelligence can be a blessing for human society' and called it 'the fourth industrial revolution.' She added, 'Above all, Korean society is ironically lucky, that thanks to the AlphaGo shock, we have learned the importance of AI before it is too late.'" Not surprisingly, some academics are complaining that the money is going to industry rather than the universities. Will this crony-capitalistic approach produce any real development, or will it instead end up being a pork-laden jobs program for South Korean politicians?

Because people are lazy, and people with money would rather throw money at things to meet their desires and go back to whatever they were doing than sift through people and all their associated bullshit trying to determine who is actually qualified. Plus, one person can only do so much; you'd end up getting celebrities instead of coders.

Not surprisingly, some academics are complaining that the money is going to industry rather than the universities. Will this crony-capitalistic approach produce any real development, or will it instead end up being a pork-laden jobs program for South Korean politicians?

Giveaways to giant tech companies may produce short-term results (or not, if the companies spend it on executive bonuses), but they don't necessarily support the longer-term development of AI. It's the universities that do the possibly ground-breaking research with no guarantee of results, and the corporations that monetise it. Corporations have no problem finding investors for short-term projects. We need to support the longer term through funding university research.

Don't kid yourself. When it comes to AI, universities are mostly just centres of incompetence, wasting big baskets of money.

My own project, begun in 1990, has been developing the theory for building a Strong AI ever since: private research, with no money and no external backing. With even a tiny fraction of the kind of money the universities have wasted, my project could have had a working machine by about 2005. The real problem with Strong AI is that it requires a lot of extrapolation and thinking well outside the mainstream.

"...The nail that sticks out the most is the first to be hammered in..."

It's not that they aren't capable of original thought and creativity; it's that the society is very conformist, and no one will risk trying to do things differently. There are European cities like that too: in summer, everyone wears the exact same clothes shown in an H&M catalog.

Too late to cash in on the final blow to the concept of employment and position themselves such that they can continue to create scarcity and become the arbiters of who will be fed and housed for the rest of human civilization.

Perhaps I skimmed the articles too quickly, but who is talking about strong AI? Perhaps the most important takeaway here is what can be accomplished with AI research regardless of how far off strong AI is.

Unemployment rates keep rising; youth unemployment has just reached its highest point in years. She's ruined just about everything, including inter-Korean relations. I can't wait until she gets out of office!

I recall 20 years ago, when Deep Blue won against Kasparov, people said that an AI would never be able to brute-force Go well enough to beat a human master. It may not have used only brute-force techniques, but AlphaGo surely did win. I expect that arrangements are being made for the AI to face off against the #1 world Go champion (Sedol was #3, IIRC), and it may even take some tweaking for it to triumph. However, this raises the question: where do we move the goalposts next? What does AI have to accomplish to change how we fundamentally think of it, and to consider it 'real AI'?

Many people have an AI assistant (ok a text-to-speech shortcut to a semantic search engine) in their pocket, and will soon be entrusting their lives daily to autonomous cars. Anyone else feeling like the singularity is coming?

No, I think they do, though admittedly it's very narrow. In this case it's "construct new rules automatically (about winning Go) after experiencing winning and losing Go." The difference is that the rules about how to play, the single goal (winning), and what winning looks like are all predefined; the objective function for how to go about winning is what is learned. For a different game, insert new game mechanics and end positions and then let the same optimizer run. It might be interesting to see what happens.
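That "rules and goal fixed, policy learned by self-play" split can be seen in miniature with tabular Q-learning on a toy game. The sketch below is nothing like AlphaGo's scale or method (no neural networks, no tree search); the game (a tiny Nim variant), the function names, and the hyperparameters are all made up for illustration. Only the table Q is learned; the legal moves and the win condition are hand-coded.

```python
import random

random.seed(0)

# Toy two-player Nim: N_STONES on the table, each turn removes 1..MAX_TAKE
# stones, and whoever takes the last stone wins.  The rules and the goal are
# fixed by us; the "how to win" table Q is learned purely from self-play.
N_STONES, MAX_TAKE = 10, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

Q = {}  # Q[(stones_left, take)] -> learned value of that move for the mover

def moves(stones):
    return range(1, min(MAX_TAKE, stones) + 1)

def pick(stones, greedy=False):
    # Epsilon-greedy during training; purely greedy when playing for real.
    if not greedy and random.random() < EPS:
        return random.choice(list(moves(stones)))
    return max(moves(stones), key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=20000):
    for _ in range(episodes):
        stones = N_STONES
        while stones > 0:
            a = pick(stones)
            nxt = stones - a
            if nxt == 0:
                target = 1.0  # took the last stone: the mover wins
            else:
                # Negamax backup for an alternating game: this position is
                # worth the negation of the opponent's best continuation.
                target = -GAMMA * max(Q.get((nxt, b), 0.0) for b in moves(nxt))
            old = Q.get((stones, a), 0.0)
            Q[(stones, a)] = old + ALPHA * (target - old)
            stones = nxt

train()
print({s: pick(s, greedy=True) for s in range(1, N_STONES + 1)})
```

After training, the greedy policy rediscovers the classic Nim strategy of leaving the opponent a multiple of 4 stones, without that strategy ever being written down.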

I recall 20 years ago when Deep Blue won against Kasparov, people said that an AI would never be able to brute-force Go well enough to beat a human master. It may not have used only brute-force techniques, but AlphaGo surely did win. I expect that arrangements are being made for the AI to face off against the #1 world Go champion (Sedol was #3 IIRC) and it may even take some tweaking for it to triumph. However this raises the question: where do we move the goalposts to next? What does AI have to accomplish to change how we fundamentally think of it, and consider it as 'real AI'?

How many people can write songs, stories, or poems which meet human standards for quality and originality? Machines can now do all three, poorly, which makes them just as good at those tasks as the majority of humanity. To be honest, I plagiarized this answer. In the movie "I, Robot" (which the Slashdotariat hates, but which was not bad), the robot lead was confronted with just that question by a human. The exchange is: "Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you. You are just a machine, an imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece? / Sonny: Can you?"

Sonny the Robot could tell jokes, and hit the timing for a punch line. If a robot can do that, I'll consider it my equal.

It's not the big stuff I'm looking for. It's the common, everyday stuff: telling jokes, folding laundry, telling a picture from a person... all at the same time, rather than one algorithm specialized to each task. Like people do. I really don't know how far we are from that; it feels like it's 20 years off, same as always. But the AlphaGo thing (using a neural network which might just possibly be a step in that direction) makes me wonder.

Interacting with the real world seems to be the next big frontier. Some robots are already getting quite good at it. See how far robot vacuum cleaners and autonomous cars have come, for example. They have got a lot better at navigating and mapping their environment. Even so, making a cup of coffee is still rather difficult for robots.

We now have game AI that's really good at tactics but not so good at strategy (chess AIs), and game AI that's really good at strategy but not so good at tactics (AlphaGo, with its failure to spot tesuji). The next step would be to make game AI that's good at both. See e.g. On Adversarial Search Spaces and Sampling-Based Planning [aaai.org]. The step after that? I'd say incorporating the kind of strategic capability AlphaGo shows into AIs for very large incomplete-information games.
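The crudest member of the sampling-based planning family that paper title refers to is flat Monte Carlo: score each legal move by the win rate of random playouts from the resulting position, and pick the best. The sketch below applies it to a toy Nim game (remove 1-3 stones, taking the last stone wins); it is a simplification for illustration, not the MCTS/UCT machinery AlphaGo actually uses, and all names are made up.

```python
import random

random.seed(1)

MAX_TAKE = 3  # toy Nim: remove 1..3 stones per turn; taking the last stone wins

def random_playout(stones, my_turn):
    # Play the position out with uniformly random moves for both sides;
    # return True if "I" (the player who is about to benefit) take the last stone.
    while stones > 0:
        stones -= random.randint(1, min(MAX_TAKE, stones))
        if stones == 0:
            return my_turn
        my_turn = not my_turn
    return False

def flat_monte_carlo(stones, playouts=2000):
    # Sampling-based planning at its simplest: estimate each move's value by
    # the win rate of random playouts, then choose the highest-scoring move.
    best_move, best_rate = None, -1.0
    for take in range(1, min(MAX_TAKE, stones) + 1):
        if take == stones:
            rate = 1.0  # taking the last stone wins outright
        else:
            wins = sum(random_playout(stones - take, my_turn=False)
                       for _ in range(playouts))
            rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move

print(flat_monte_carlo(5))
```

Real MCTS improves on this by growing a tree and balancing exploration against exploitation at every node (UCT), which is roughly where tactics and strategy start to meet.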

They came out with ideas like TRON: smart appliances that could interact with each other. Turn the cooker on, and the extractor fan goes on as well. Turn the stereo on, and the windows close (to stop the neighbors hearing loud music). If your alarm clock goes off, the lights in the house go on.

There was considerable research into expert systems back then. They thought everything could be solved using binary decision trees. But then they realized that things weren't yes/no but more definitely/possibly/no effing way.
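That "definitely/possibly/no" shift is essentially the move from Boolean rules to graded (fuzzy) membership. A minimal sketch, with made-up thresholds and function names, contrasting a hard expert-system rule with a graded one:

```python
def crisp_hot(temp_c):
    # Classic expert-system style rule: a hard yes/no threshold.
    return temp_c >= 30.0

def fuzzy_hot(temp_c, lo=20.0, hi=30.0):
    # Graded truth value in [0, 1]: 0 below lo, 1 above hi,
    # and a linear ramp in between (a simple membership function).
    if temp_c <= lo:
        return 0.0
    if temp_c >= hi:
        return 1.0
    return (temp_c - lo) / (hi - lo)

print(crisp_hot(29.0), fuzzy_hot(29.0))
```

At 29 degrees the crisp rule says a flat "no" while the fuzzy one says "0.9 hot", which is the kind of "possibly" a binary decision tree can't express.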

They must have a bit flipped somewhere, because every kid riding down the road in his ricer has the stereo full blast and the windows wide open. The worse the "music" the louder it's played. Now get off my lawn.

Back in 2006, I was asked on Slashdot what my advice would be to students interested in a career in AI. I told them to get their PhD under Hutter. Hutter's first students went on to found Google DeepMind, whence AlphaGo.

I'm now, as then, advising investment in compression prizes [slashdot.org] for the same reason*. (And thanks to Matt Mahoney for pointing me to Hutter's AIXI theory way back then.)