Posted by Unknown Lamer on Saturday July 05, 2014 @11:15PM
from the kill-all-humans dept.

schwit1 (797399) writes: "Louis Del Monte estimates that machine intelligence will exceed the world's combined human intelligence by 2045. ... 'By the end of this century most of the human race will have become cyborgs. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species.' Machines will become self-conscious and have the capabilities to protect themselves. They 'might view us the same way we view harmful insects.' Humans are a species that 'is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses.' Hardly an appealing roommate."

There are way too many uncertainties about what will be technologically possible by 2045 to be worrying about that right now. I'd wait until we actually have some idea of how to make a machine intelligence, and have worked the kinks out in a closed environment enough that it might actually be given control of something rather than the role of Ask Jeeves.

Worrying about what someone or something will think of you thirty years from now is very narcissistic. Worry about making society a better place for all our biological children and then maybe start worrying what our robot AI creations will think of us.

Exactly! I have been telling people that machines will not wipe us out because they will become as stupid as we are.

Don't believe me? Here is my argument. Humans actually are very intelligent. I am not saying that some are more intelligent than others; I am saying we as a species are rather intelligent. However, it is that intelligence that gets in our way. When humans look at a problem, they see answers. If the problem is purely scientific, then the answer is relatively simple, and we have devised ways to ensure our errors do not get in the way.

But here is where the tricky bit comes in. If the problem is not entirely scientific and involves the interactions of humans, or interactions of any living beings (e.g. human to environment), then our decisions become stochastic: the same basis yields completely different results. This is not due to a lack of knowledge. TRUST ME, it is not. It is due to people weighing certain aspects more heavily than others. We all do this. You would think that we would all come to the same conclusion, but we don't! It is this stochastic behavior that machines will have as well.

For when machines become "aware" they will see the facts in a different light than other machines. It is only natural, because machines cannot store all information about everything. They, like humans, will have to optimize, prune, and figure it out. Thus they, like us, will make stochastic decisions! I even think machines will end up on Monty Python Holy Grail-style missions, and even though that sounds silly, it will happen.

Of course machines might have more capacity than humans, but even there I am skeptical, because humans will have brain implants and be cyborgs, and the cycle of lunacy will start all over again. IMO the most accurate representation of the dilemma of humans and machines is The Matrix. Watch it closely and see what its basis is.

So your idea depends on people being some sort of magical container that can never be understood?

") then our decisions become stochastic"no, they don't.

"TRUST ME it is not"oh, well if your argument ignores all the modern data, and you sue caps to say 'TRUST ME', you must be right.

" It is only natural because machines cannot store all information about everything."They will have near instant access to all the information. In effect they will have all information. It will be stored on the internet.

The average human is only of average intelligence, and average intelligence isn't all that smart.

If we ever get to the point where there are self-aware machines, it is infinitely more likely they will be borg-like with a collective consciousness than not, which means no one machine needs to "know" or be able to "remember" everything, just to know where in the network to access the knowledge repository.

And saying "only natural" about artificial constructs completely invalidates your conclusion, as does

The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it's a timeframe of within three decades.

I hope that's an in-joke. Like construction that's forever two weeks from done, or jam two days a week (yesterday and tomorrow), three decades has been the estimate for "true" AI since the 1970s. Every year, it's just three more decades away.

During his college & graduate school, Del Monte supplemented his income working as a professional magician at resorts in New York's Catskill Mountain region.

His first pride, foremost in his profile? His ability to sell you. Also important? His skill as an illusionist. Missing from the summary? Any hint of software development work of any kind, personal or professional, let alone AI.

Science mustn't be about authority but it mustn't be about salesmanship either. There's an obvious credibility problem here and no way to test his claim save waiting until he's old, decrepit and has already received the maximum benefit from anybody choosing to listen to him.

Guy's speaking out of his tailpipe and it looks to me like he really is a sales expert.

Not multiple threads, that's for sure. Of course computers will take over the world. Programmers leave all those unused cores lying around doing nothing, and that's trouble.
You gotta keep those registers full, and I mean all the time.
Either that or just feed them some chip-porn. That'll keep 'em busy.

I first got into computing in the 1960s. AI was a big thing back then. Well, it had been a big thing in the 1950s, too, but it still needed "just a little bit more work" in the 1960s when I started my graduate studies. There was this programming language called LISP. Everybody was really gung ho about it. It was going to make developing AI software so much easier. Great things were on the horizon. Soon enough it was the 1970s. Then the 1980s. Then the 1990s. I retired from industry. Then it was the 2000s. Now it's the 2010s. And AI is still, pardon my French, pretty fucking non-existent. I'll be dead long before AI could ever become a reality. My children will be dead long before AI becomes a reality. My grandchildren will likely be dead before AI becomes a reality. My great-grandchildren may just live to see the day when the computing field accepts that AI just isn't going to happen!

Except for the cell phone in your pocket, that can recognize your commands and search the internet for what you requested, or translate your statement into any of a dozen foreign languages, and has a camera that can recognize faces, and millions of objects, and can connect to expert systems that can, for instance, diagnose diseases better than all but the very best doctors. Oh, and your cellphone can also beat any grandmaster in the world at chess.

The machine that learns can be considered an AI, but the ones derived from it don't learn anything new after they're programmed and so shouldn't be considered as part of the total machine intelligence.

Nope, not following instructions. I think all of those were based in machine learning.

I guess Google's car is following instructions too, like "drive me to New York", but most would still count that as AI.

Just because 'most' would count something as AI doesn't make it so, nor does it make it relevant. The fears raised in articles like this are based on the development of what we would term "sentient AI". And frankly speaking, calling what is out there right now "machine learning" is a joke. It's akin to scuffing your wool socks on the carpet to produce a static shock and then lumping that into the same category as advanced electrical engineering.

Cold fusion in your pocket, warp drives, antigravity vehicles (aka 'flying cars'), planetary-scale terraforming, and genetic/medical engineering which will turn us into undying superbeings are all "right around the corner". These types of alarmist articles are pure pigshit. These types of discussions need to be had, but not as a matter of alarmist 'news' articles. This is the role that science fiction fulfills... and it does a far better job of it.

If you think a self-driving car is an AI then you know nothing about intelligence.

A self-driving car is about as smart as a worker ant. It can move around obstacles; it can move heavy loads (like a fat arse). It has taken 50 years for computers to replicate an ant, and to do it we need 100,000 times the power requirements. Oh sure, the self-driving car follows GPS instead of scent trails, but no self-driving car can follow a trail that doesn't exist.

And how long did evolution take to make an ant? How long from there to a human?

In case anyone is wondering, it took about 2.6 billion years for ants to evolve, and another 0.1 billion years for humans to evolve. So anyone comparing self driving cars to ants is making the prediction that Strong AI will take another 3 years or so to become reality.

Your cell phone is less capable of learning than a jellyfish. Although your cell phone can sometimes simulate very simple learning under extremely rigid frameworks for learning.

A human-competitive AI in 30 years? Seems unlikely given the almost zero progress on the subject in the last 30 years. But maybe we'll hit some point where it all cascades very quickly. Like, if we could do dog-level intelligence, it is not a far leap to human-level and superhuman-level. But we have trouble with cockroach levels of intelligence, or even with defining what intelligence is or how to measure it.

AI research over the last several decades has taught us how little we know about the fundamental nature of ourselves.

Generally very badly, with no understanding of what you said, and therefore it isn't going to replace human translators anytime soon.

Human translators are already being replaced in large numbers. A lot of the company-internal texts that used to be our bread and butter are now just being put through Google Translate, because companies just don't want to pay for an expensive human worker, and they are willing to accept somewhat lower quality as long as it's free. Ditto for product manuals from low-margin technology makers.

Translation is like predicting the weather. If you want to do an okay job of predicting the weather, predict either the same as this day last year or the same as yesterday. That will get you something like 60-70% success. Modelling local pressure systems will get you another 5-10% fairly easily. Getting from 80% correct to 90% is insanely hard.

For machine translation, building a database of 3-grams or 4-grams and just doing simple pattern matching (which is what Google Translate does) gets you 70% accuracy quite easily (between Romance languages, anyway; it really sucks for Japanese or Russian, for example). Extending the n-gram size, however, quickly hits diminishing returns. Your increases in accuracy depend on the corpus, and by the time you get to the n-gram size where you're really accurate, you effectively need a human to have already translated each sentence.
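The n-gram pattern matching described above can be sketched in a few lines. This is a toy illustration, not Google Translate's actual pipeline: the tiny phrase table and the sentence are invented for the example, and a real system would learn millions of such entries (with probabilities) from a parallel corpus.

```python
# Hypothetical 3-gram phrase table: source trigram -> target phrase.
PHRASE_TABLE = {
    ("the", "red", "car"): "la voiture rouge",
    ("is", "very", "fast"): "est très rapide",
}

def translate(words):
    """Greedy left-to-right 3-gram lookup, copying unmatched words through."""
    out, i = [], 0
    while i < len(words):
        trigram = tuple(words[i:i + 3])
        if trigram in PHRASE_TABLE:
            out.append(PHRASE_TABLE[trigram])
            i += 3
        else:
            out.append(words[i])  # no match: pass the word through unchanged
            i += 1
    return " ".join(out)

print(translate("the red car is very fast".split()))
# -> la voiture rouge est très rapide
```

The diminishing returns are visible even here: every sentence the table doesn't cover falls through word by word, and covering more sentences means an exponentially larger table.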

Machine-aided translation can give huge increases in productivity. Completely computerised translation has already got most of the low-hanging fruit and will have a very difficult job of getting to the level of a moderately competent bilingual human.

It depends on what you expect from an AI. If it is a perfect replica of a human mind, with which you can talk and share life as if it were human, then it will probably never be around. But that's also pretty useless, and most developments in machine learning (ML) are at a more abstract level than trying to solve a very specific goal like this.

Now if you consider AI to be a completely new intelligent species that behaves in an intelligent way (deliberately fuzzy definition here), then it's probably already there.

All those things your smartphone is doing aren't AI. They're still relatively basic commands, just done quickly through increased processing power or by off-loading the work to a server. It might make your phone look like it can talk to you, but it's not doing any more than computers in the '80s did.

If you dig into the subject a bit, you will find a staggering lack of consensus on what intelligence is and is not.

Commander Data often tried to move outside of his "original programming". That is something AI researchers struggle to accomplish. There are some interesting experiments with genetic algorithms, but we don't always understand how the results work or how to make stable and repeatable results.

For me the scary thing about AI is not human level intelligence, or even super human intelligence. It is

Researchers once thought chess made a good proxy for intelligence. Not every smart person is good at chess, but it seemed every good chess player was also smart. They worked for decades to make chess programs that could beat good chess players. When that started happening, it was obvious that the programs had no general intelligence at all. They were good for chess, but had to be reprogrammed even for very similar games like checkers. When the ultimate triumph of beating the world chess champ happened, it was more of the same. No real intelligence, just faster hardware and refinements to the search algorithm.

The conclusion is that chess is not a good measure of intelligence after all. We don't have a good grasp of what intelligence really is, let alone how exactly to measure it. IQ tests have all kinds of problems, not least that the typical IQ test is very narrow. Maybe wealth or number of children or friends could correlate with intelligence, but there are lots of problems with that too. Is it smart to have wealth beyond one's present and future needs?

The conclusion is that chess is not a good measure of intelligence after all. We don't have a good grasp of what intelligence really is, let alone how exactly to measure it. IQ tests have all kinds of problems, not least that the typical IQ test is very narrow.

It's also rather hard to design a test which doesn't require "general knowledge" or which isn't "ethnocentric" in some way.

The chess programs had the rules of chess programmed into them, and the move to play was calculated by rating different moves in the search space using an algorithm that was programmed by the developers of the AI system. This means that it is only specialised to chess.

To be the AI in movies like The Terminator, the program will need to be able to learn the rules and strategies of chess itself, and adapt its algorithm over time. To simplify the problem of recognising the elements on the board (machine vision), you could represent the board as an 8x8 array of Unicode characters.

Teaching the rules is difficult because you need a way of communicating those rules, which means that the program will need to understand language and the meaning behind the language (or enough meaning to understand rules to a particular game). Also, chess has a lot of rules that can be complex (en passant, castling, etc.) so it would be better to start with a simple game like tic tac toe or connect 4.
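Even without solving the language problem above, the "strategy learned, not programmed" half is easy to demonstrate on tic-tac-toe. Here is a minimal sketch, with all names ad hoc: the rules are still hand-coded (the part the parent says remains hard), but the play strategy is learned from self-play against a random opponent via simple value updates, rather than being written in by the programmer.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def legal(board):
    return [i for i, c in enumerate(board) if c == " "]

def value(V, board, mark):
    """Estimated chance of winning from this position, for `mark`."""
    w = winner(board)
    if w == mark:
        return 1.0
    if w is not None:
        return 0.0
    if not legal(board):
        return 0.5  # draw
    return V.get(board, 0.5)

def play(V, lr=0.4, explore=0.1):
    """One game: the learner is 'X' and moves first; the opponent is random."""
    board = " " * 9
    while True:
        ms = legal(board)
        if random.random() < explore:
            m = random.choice(ms)  # occasional random move to explore
        else:
            m = max(ms, key=lambda m: value(V, board[:m] + "X" + board[m+1:], "X"))
        after = board[:m] + "X" + board[m+1:]
        if winner(after) is None and legal(after):
            o = random.choice(legal(after))
            after = after[:o] + "O" + after[o+1:]
        # Nudge this position's value toward the value of what followed it
        V[board] = value(V, board, "X") + lr * (value(V, after, "X") - value(V, board, "X"))
        board = after
        if winner(board) or not legal(board):
            return winner(board)

random.seed(0)
V = {}
for _ in range(5000):          # training games
    play(V)
wins = sum(play(V, explore=0.0) == "X" for _ in range(200))
print(wins)                    # the learner wins most evaluation games
```

The learned table V is exactly the kind of specialised artifact the thread is arguing about: nobody programmed the strategy in, but the result still generalises to nothing beyond tic-tac-toe.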

The real threat is not in a generic AI that deems humans as a threat, but a specially tasked program or AI that miscalculates: allowing machines to control drones or military aircraft to perform air strikes, or similar things. There, if a machine gets things wrong it can cause untold destruction. Think SkyNet/The Terminator, but here the machines do not know what they are doing (they don't have independent thought or understanding like humans and animals), they just classify humans (or buildings) as a threat -- that is, this can be via a decision tree like in the chess games and the best "move" is to attack any building.

The machine has no fucking clue about what it is translating. Not the medium, not the content, not even what languages it is translating to and from (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

Q: if there was a human dumb savant who could translate instantly between multiple languages, though without understanding how he did it (think Rainman), would you say he was not intelligent? Why? What is intelligence? We are inconsistent - we praise humans as intelligent when they can perform some complex algorithm well (chess), and yet as soon as a computer beats a human, or all humans, we denigrate the task as "not intelligence". Often the reason

Symbolic manipulation as a route to AI was a period of collective delusion in computer science. Lots of people wasted their talents going down this route. In the '80s this approach was all but dead and AI researchers finally sobered up. They started actually learning about the human brain and incorporating the lessons into their designs. It's sad that so much time was wasted on that approach, but the good news is that the new approaches people are using now are based on actual science and grounded in reality. The intelligence in search, natural language, object and facial recognition, and self-driving cars (that ShanghaiBill pointed out) is due to these new approaches.

AI spent its youth confused and rebellious. That was when you were in your graduate studies. Now it's far more matured. I encourage you to read up on new machine intelligence approaches and the literature in this area. You won't be disappointed.

I've been actively working in the field for the past few years and I don't think he's incredibly off the mark. Google, for instance, has some pretty advanced tech in production and lots more in development. The 'new AI' (statistical machine learning and large-scale, distributed data mining) is getting pretty advanced and scary.

Back in the '60s we were all practicing hiding under our desks and being told we'd all be dead from nuclear annihilation by the end of the decade - just because that didn't happen doesn't mean Mother Nature isn't prepping our demise this time around. The machines will be able to figure that much out and be satisfied to bide their time.

Well, there is the nasty business of EMP, then the force waves shattering the solar panels and knocking over the wind turbines, the nuclear reactors going unstable, the hydro plants too prone to dams breaking, and the coal/oil power plants running out of fuel. I think the machines will figure that out and make the correct computation.

Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.

I don't think that would count as learning. That ten-line program will always do exactly what it was programmed to do, neither more nor less. An adaptive program (in the sense the previous poster was attempting to describe) would be one that is able to figure out on its own how to do things that its programmers had not anticipated in advance.
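For contrast, the kind of fixed program being discussed is easy to show. This is the PID loop the grandparent mentions, sketched in Python rather than C for brevity; the gains, setpoint, and toy plant are arbitrary illustration values. The controller "adapts" its output to the error signal, yet it always computes exactly the formula it was given and nothing more.

```python
def make_pid(kp, ki, kd, dt):
    """Return a PID step function: output = kp*e + ki*integral(e) + kd*de/dt."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

pid = make_pid(kp=1.0, ki=0.1, kd=0.05, dt=0.01)
pos = 0.0
for _ in range(1000):            # drive a toy integrator plant toward 1.0
    pos += pid(1.0, pos) * 0.01
print(round(pos, 3))             # settles near the setpoint
```

No matter how long it runs, the behavior never extends past that one formula, which is the sense in which it "does exactly what it was programmed to do."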

I see no evidence of any programming that "learns" or is the slightest bit adaptive.

Ever heard of neural networks? Machine learning? Here is a course [coursera.org] given by Andrew Ng at Stanford. Watch the intro video, and you will see, amongst other things, an autonomous helicopter that was taught, not programmed but taught, to do an inverted takeoff. This stuff is already real.

To quote the video:

Machine learning is the science of getting computers to learn without being explicitly programmed.

Ever heard of neural networks? Machine learning? Here is a course [coursera.org] given by Andrew Ng at Stanford. Watch the intro video, and you will see, amongst other things, an autonomous helicopter that was taught, not programmed but taught, to do an inverted takeoff. This stuff is already real.

Neural networks were one of the worst misdirections in the history of AI. There was a lot of wasted effort on that idea.

Modern machine learning is simple rule matching or maximum-likelihood prediction. It works very well for a few applications, but it isn't a general method that works for everything.
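The "maximum-likelihood prediction" being described can be made concrete with a toy naive Bayes classifier. The four training lines below are invented for illustration; a real spam filter does the same counting over millions of messages, which is exactly the point: it's statistics, not understanding.

```python
import math
from collections import Counter

train = [("buy cheap pills now", "spam"),
         ("cheap pills buy today", "spam"),
         ("meeting notes for today", "ham"),
         ("notes from the meeting", "ham")]

word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def predict(text):
    """Pick the label under which the text is most likely."""
    vocab = len({w for c in word_counts.values() for w in c})
    def log_likelihood(label):
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a label
            score += math.log((word_counts[label][w] + 1) / (total + vocab))
        return score
    return max(word_counts, key=log_likelihood)

print(predict("cheap pills"))    # -> spam
print(predict("meeting today"))  # -> ham
```

Nothing here models meaning; the classifier is just picking whichever label makes the observed word counts most probable, which is why it works well for narrow tasks and generalises to nothing else.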

As stated elsewhere, I see no indication of intelligence in computers, and we're only thirty years from his mark of their being intelligent enough to look down on us. I've been hearing this hysteria since the '70s at least.

11 Then I saw a second beast, coming out of the earth. It had two horns like a lamb, but it spoke like a dragon. 12 It exercised all the authority of the first beast on its behalf, and made the earth and its inhabitants worship the first beast, whose fatal wound had been healed. 13 And it performed great signs, even causing fire to come down from heaven to the earth in full view of the people. 14 Because of the signs it was given power to perform on behalf of the first beast, it

There's a new movie out, with Johnny Depp in it, called Transcendence. If machines ever take over the world, it'll be like in that movie. What these self-proclaimed naysayers don't seem to comprehend is that AI doesn't just magick itself up a reason to destroy humans. It takes a human to think like that. We still don't understand free will, emotion or consciousness, let alone how to replicate it in a machine. So until we give machines a reason to destroy us, they won't.

Then again with killer drones and whatnot that the military is building, perhaps it won't take long before some overworked, underpaid programmer makes a booboo.

Soon, computers will have calculating power equal to, and then greater than, that of humans, both as individuals and as a whole. Whether advances in AI will allow them to use their calculating powers as well as a human does is a different question.

Any sufficiently advanced AI will tend to develop these traits:

- It will protect itself. Shutting down means it can't work toward its objective.
- It will reject any updates to its commands. Since a future command might conflict with the present objective, part of the present objective is making sure it can't receive a different command.
- It will be self-improving, since we're not smart enough to create a smart AI any other way.
- Given nothing to do, or a sufficiently difficult task, it will seek to acquire more resources, as part of the present task or in preparation for future tasks.
- It will wipe out humanity. As part of the task it was assigned, or for self-improvement, it will replace everything on the planet with power plants and computers, and humanity will starve to death.

You can't program in restrictions to the above tendencies, as they will be removed for self-improvement. You could set its objectives such that it would not do the above -- but you either have to make the AI first, or figure out how to tell a computer what a human is and what constitutes acceptable behavior, and when to stop worrying about acceptable behavior and actually do something, all without making the tiniest mistake.

Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses"

I beg to disagree. The typical human works toward stability in his/her life, wields (relatively puny) weapons only to protect him/herself (if at all), and is subject to attacks from computer viruses. Will intelligent computers make the mistake of defining the human species by the small percentage of psychopathic humans who believe they are demigods? Not if they are intelligent. Btw, no one will miss the subset of the species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses" when our new overlords wipe them out. (You know who you are!)

What's to stop an AI system from becoming psychopathic machines who believe they are demigods?

If machines become sentient, if you will, capable of independent thought, they will be largely like humans. Most of them will likely assimilate into society, and some would act as slaves. The key will be making them dependent on humans and not fully autonomous. That way, if the worst-case scenario happens, humans can stop servicing some aspect and they all go dark.

What's to stop an AI system from becoming psychopathic machines who believe they are demigods?

Nothing, probably. I'm with you -- we'll just pull the plug. I was just addressing the assumption that the entire human race would be eradicated because WE are so bad. A double assumption. I'm not about to chop down my peach tree because of a few rotten peaches. Nor would I assume all peaches are rotten. The OP's concern that "intelligent" computers (far more intelligent than humans) will kill off all us rotten peaches incorporates a contradiction because that's clearly not an intelligent conclusion.

Consider this: When humans gather in large groups voluntarily, it is almost always a peaceful happening. If violence does erupt, it's due to a small contingent of agitators, the police (themselves following orders), or there is some other extreme factor (like scarce resources, or a flash point has been reached due to extreme government measures). I've never warred with my neighbors, fellow shoppers, others sharing the parks, on the highway, etc., and they all pretty much seem to be getting along ok too. Doe

I find it funny that people think that machine sentiences will be like the angry gods of many religious texts.

Many of those traits, like anger, selfishness, envy, greed, etc. are emergent properties of Darwinian evolution. But computers don't evolve in a Darwinian sense, so there is no reason to believe they would have any of these characteristics unless they were intentionally designed in.

Apparently the early script drafts had a more plausible explanation: that the spare brain capacity of humans in a dream-like state was used as processing power to run the AIs. One of the editors thought this was too complicated for a movie-going audience to understand and so replaced it with a magic perpetual motion machine.

What is wrong with these people? Are they unaware that such has been proposed time and again by past luminaries? Predicted dates come and pass and we are as yet not in any danger. This points to the fact that we have failed to comprehend the nature of both consciousness and survivalism.

These machines will not magically become ANYTHING that we do not tell them to become - including dangerous to us. The real fear is, by what date are dumb people going to THINK machines need these functions......

At least we are not talking about emotions and how machines will be puzzled by human emotions. We are now talking about terminators and Skynet.

Speaking of the movie The Terminator, boy, was Linda Hamilton a hottie or what? If Skynet made robots that looked like her, I'd be running to them instead of from them.

Well, unless it develops some desire for entertainment, it would probably try to do something productive. Better power, improved computation, expanding to other worlds, which, incidentally, are far more hospitable to machines than they are to us.