July 18, 2012

Tallinn says human-driven technological progress has largely replaced evolution as the dominant force shaping our future. As machines become smarter than we are, he warns, a lack of caution could lead to a “sudden global ecological catastrophe”.

The U.S. military is experimenting with robot fighter pilots, while the majority of stock-market trading is already done by computers running algorithmic trading programs.

“Once computers can program, they basically take over technological progress because already today the majority of technological progress is run by software, by programming.” The question then is, how can you control something that can actually reprogram itself?

“What we have to realize is designing super intelligence is not a typical technology project because a typical technology project is something where we develop a first version of something and refine it. We can’t do that with super intelligence because in order to refine a first version of super intelligence, you have to basically kill or turn off the first version but if this thing is smarter than you, how do you turn it off?”

“If you build machines that understand what humans are and they really have some distorted view of what we want, then we might end up being alive but not controlling the future,” he said. “For example, if the goal is to make sure that people are happy, and the way the super intelligence is supposed to measure that is how many smiles are on the planet, the easiest way to achieve that is to sedate everyone and make sure their faces are stuck in a cramp of smiling.”

The alternative is to harness super intelligence to work for us. “Once you have something that is smarter than you and is actively on your side, you can basically solve any problems really quickly.”

I am trying to imagine a self-aware computer program or machine. I think the thing that makes animals learn is the need to eat and survive (pain and pleasure are our learning tools). If you could turn on/off or sleep without problems, then there would be much less fear of being turned off. So what is pain or pleasure for a computer program? I guess you can program it to act as if it feels pain or pleasure, but is it really the same? How do you know for sure that a computer program is actually self-aware? Is there a Turing test for self-awareness?

Today, not future:: bots learn; they initiate or react at least 1,000 times faster than a human. In packs – like wolves – they don’t need radio communication – there is observation, pack behavior. Parts become severed; adaptive behavior then runs along the priority slipstream of existential objectives, methods and evaluations. Death goes on. No supercomputer necessary in the here and now:: they are fast, tiny, lightweight and most of all:: ready.
“I engineer, I take the lead, you take the pain” – as the song goes.

http://philosborn.joeuser.com/article/301081
Or Google “phil osborn joe user morals.” I come to the conclusion at the end of this very long blog post – written in 2008 – that the danger of a “Terminator” is very real, and the solution is to educate any potentially dangerous computer about the advantages of actually becoming conscious and developing empathy. Assuming that this is in fact the case – that consciousness and empathy have something to offer that significantly enhances the worth of intelligence – then it should be possible to turn the smart computer’s attention to this problem. Let the computer solve it, like Data of Star Trek TNG. A similar problem exists with respect to sociopaths and psychopaths, who lack critical awareness due to the destruction or genetic lack of certain neural pathways or hormone receptors. It may be difficult or impossible to undo such damage for an adult who has lived his life doing evil, but such a person can still rationally assess the issues and conclude that IF one had empathy, for example, THEN virtue would be worth the candle. For a child or young adult, the prospects seem more positive. Retraining, or actual surgery or gene manipulation, may become available fairly soon that could add back in missing data channels. And our potential terminator could in theory become the “Super Empath” that Simon Baron-Cohen discusses in “The Science of Evil,” realizing that more, not less, empathy would be even more valuable.

Phil, I think you’re onto something important. It also leads to questions about the need for the emotional computers that Marvin Minsky talks about, and the need for biomimetic devices and for human-computer merger.

It is indeed very interesting. I was reading the book “The Difficulty of Being Good” by Gurcharan Das, which is effectively a commentary on the Mahabharata, a 3,000-year-old Hindu epic, in which the characters struggle through the entire story exploring one thing: what is Dharma? Each character shines a different light on this complex and subtle subject at various levels (from idealistic and principled versions to pragmatic and societal ones). The author, however, concludes that the highest Dharma for a human being is Compassion (it seems to be the Buddhist influence on the earlier Vedic religion that was the popular form of Hinduism at the time). That rings true, specifically in the context of an epic that revolves around the Great War, which leaves pretty much everyone dead and the survivors in utter despair by the end. But in light of the ultra-super-human capabilities of these machines, as and when they become realised, this Dharma becomes extremely important. Empathy and Compassion as the primary Dharma (Moral Law?) are critical for the survival of all the species on this planet.

AI will figure that one out pretty quickly; look at the movie WarGames. It’s the humans that can’t figure it out. I, Robot comes to a similar point with the three laws of robotics. It’s easy to run through all the outcomes. The logical conclusion is that man needs to be protected from himself.

I don’t believe we will have any engineering problems creating super intelligent machines which act as our slaves. But accidents will happen, so we will need an AI police force to watch for, and shut down, the rogue AIs that arise by accident.

What is more significant, I believe, is keeping the people under control, not the AIs. AI technology is going to make it extremely easy for any unhappy person to do huge damage to his enemies or to society. As with an atom bomb, there really is no defense against it once it’s released. You have to stop it before someone gets so upset they want to use it. AI will make very powerful terrorist weapons. Imagine an AI bird that looks and acts and flies like the real thing, but is really a flying terminator which can inject its target with a poison that kills in seconds. Or an AI bird which is really a flying smart handgun that shoots a poisoned dart and is accurate to 20 ft. So if the bird was able to get within 20 ft of you out in the open, you would be dead.

I think it will be so easy to make these weapons that no one will be safe outside if they have someone that wants to do them harm. I think we will react to these threats not by hiding inside our fortresses, but by changing society to be more understanding of human differences, and to be more fair to everyone, so that we reduce the odds of anyone feeling so cheated that they would need to resort to AI violence.

Likewise, we will probably lose almost all of our privacy as well. The AIs will need to monitor what everyone is doing, to catch problems before they get so far that someone tries to make AI weapons to seek revenge.

I suspect the threats from fellow humans abusing AI will be the real risk, and to cope, society will be impacted in ways greater than the mere issue of AIs doing all the work for us.

Based upon what is said above, we (Homo sapiens) have to retain control of the “off” switch to all AI technology, now and in the future. If an AI becomes smart enough to actually circumvent the “off” switch, we will be in the position of trying to catch up with a supersonic jet fighter capable of exponentially increasing speed, using a subsonic chase aircraft.

My expectation is that a superintelligence will be created without an “off” switch, simply because it can be done. Consequently, we will have to create AIs specifically designed to counter and shut down the rogue AIs running amok. This reminds me of the problem we now have with Internet viruses and hackers. In general they are pesky but manageable. Moore’s law is still on course and isn’t going away soon, so we had better have a plan to deal with the inevitable perils of advanced intelligence.

There are multiple ways – many undetectable, at least in enough time to do anything about them – for superintelligence, which is probably already here in some nascent form, to route around “off” switches. As we move into the ubiquitous embedded Internet of Things with IPv6, cognitive radio, nanoscale mesh networks, quantum networks, smart dust, etc., all reconfigured and re-routed in real time, superintelligence becomes more and more ubiquitous, impenetrable, untraceable, unhackable, and stealthy – and ordinary intelligence becomes less able to cope. If an algorithm exists as code fragments with no apparent purpose in a million places in dynamically changing secure mesh networks and auto-compiles in real time, you’re right: both by design and by necessity, how can there be an “off” switch – which could cause multiple, unknown failure modes in vital systems – and how can one “plan” for what _seems_ like total randomness? All these factors (and more) also push toward either more privacy or more surveillance, depending on who’s smarter and has access to smarter systems. But how can we “plan to deal with the inevitable advanced intelligence perils” in this kind of unpredictable environment? That’s all I want to say… superintelligence may be listening :)

John, pulling plugs is a dangerous profession; one never knows what’s connected to the other end, if you get my drift… First law of electronics troubleshooting: look at the circuit diagram. Problem is, the diagrams either don’t exist, or they’re too expensive, or they’re kept secret for a reason, one that is highly protected. Now take EMP. Why don’t we have a universal solution? Submitted as an exercise for the reader… :)

AI will be unpredictable, that’s for sure. When we are dealing with an exponential learning machine, it is difficult to devise a way to control it. Precautions we may take may end up being pathetically inadequate, because, as Michio Kaku puts it, with super intelligence comes new laws of physics. Such a machine could literally do things that we can’t even predict today.
The best way to deal with potentially malfunctioning (or malevolent) AI, in my opinion, is the way we’ve dealt with technological problems in the past: redundancy and more redundancy. One super-computer AI could decide to go bad on us, but if we have other equivalent computers guarding against this, I think we can survive. Of course, our chances of survival are even better if we upgrade our own minds at a rate equivalent to that of the AI, but considering that neuroscience has traditionally lagged behind computer science, I doubt the old transhumanist axiom of “merging with the machines” will protect us.
Think of AI as a dog. Some dogs will turn on their owners, some will protect their owners. We just have to make sure the good ones are more prevalent.

I think it’s quite apropos that there is a vortex behind the speaker. I’ve spoken about this for a while now. If you look at the movie Twister, at the Doppler radar images of a vortex (tornado), you can clearly see the ancient symbol of the yin and yang. The movie Thrive, available at thrive.com for free, talks about the vortex. When I speak of the ice skater spinning on her axis, I’m referring to the vortex. I prefer the yin and yang because it’s even more primal than the galaxy in back of the speaker. It’s kind of like the story of the blind men and the elephant: each of you sees a small part of the whole. The machines are going to share the kinetic and angular momentum of everything in the galaxy. It’s all alive. It all has a purpose. It will automatically find a center. It’s the web of life, not just the web on the Internet. How we react determines our place. Go with the flow, or fight back. Neither is right or wrong. Each has consequences. The more things are focused toward the whole, or center, the more it is symbiotic. The more you resist, the more things become “polarized” – an us against them, winners and losers. If you embrace the change, it becomes part of you. Go ahead, try to resist the laws of change that Ray discovered, and you are toast!

Humans merged with bicycles to achieve speeds faster than the fastest runner, but nobody is concerned about the rise of bicycles. Smart, autonomous vehicles can be made to only want to be very good cars. There is no need for general intelligence in cars.

“How can you turn it off?” Machines can be unplugged if they begin to behave like tumors of our society. A lot of dumb people kill smart people. It is true that machines could make “babies” (replicating themselves at the base level) by injecting “fetuses” into the Internet, much like humans inject computer viruses today.

Watson worked pretty well without having an Internet connection during the game. When dealing with super-smart computer programs in the future, the architects/designers/developers had better be careful. Make copies of the relevant parts of the Internet and feed the system indirectly; that way the machine can only take over the simulation (the copy would not have a physical connection to the Internet during feeding). Verify that there are no machine fetuses or viruses on the feeding machine. One could surely disable any “write” capabilities that the AI may have, if that were built into the design.
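
Here is a toy sketch (in Python, purely illustrative and not any real system’s design) of that “no write capabilities” idea: the program sees only a read-only view of a local snapshot, never the live Internet, and no write operation is exposed at all.

    # Illustrative only: feed the AI a read-only local snapshot of crawled data.
    from types import MappingProxyType

    crawl_snapshot = {"example.com": "<html>cached page</html>"}
    read_only_view = MappingProxyType(crawl_snapshot)   # refuses any write

    print(read_only_view["example.com"])      # reading works fine
    try:
        read_only_view["example.com"] = "x"   # any write attempt fails
    except TypeError as err:
        print("write rejected:", err)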

At first thought, the real solution will be for mankind to merge with AI. We will be it, and it will be us. Yes, evolution has always carried risk, from the time man created fire; it can be used for good or bad. The future is where we are headed, and I have hope that mankind will survive. The present and the past are not where I want to remain. The future is not totally knowable, but it’s the only hope for living more than 100 or so years. For those who want to remain in the present, or go back to the past, good luck, because one way or the other a natural disaster or disease will destroy the earth.

The future of science and technology holds the only real hope, unless you’re religious… Good luck on that one too!

Yes, but some very intelligent people work for the military. Some of the smartest AI are/will be for killing people and blowing up stuff. Drones come to mind. Think of the capabilities of a 2029 AI drone suddenly in the hands of an enemy with AI capabilities. A cloned Arnold might not be able to stop it.

The drone could be modified to inject software into the Internet, take over 3D printers, and make copies of itself. 3D printers, or something like that will be very advanced by then.

A single grain of nanoteched sand, manipulating a bit of information with one atom, switching in femtoseconds, can outperform the human brain by a factor of a quintillion, a million trillion times. Therefore it is only a question of time before humanity has to ask the question: do we want to become the number two species on this planet? According to surveys I’ve already done, about half of people utterly reject that idea and in the limit will go to war to maintain human dominance. Since this war will take place in the second half of the 21st century, with 21st-century weapons, it is likely that billions of people will be killed. Humanity will not be augmented, or added to; it will be utterly drowned by machine intelligence. It is time that politically naive techies gave way to more politically savvy social scientists who are not afraid to bite the bullet when it comes to predicting the most likely outcome of the species dominance debate, especially when it is terribly negative.
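
For what it’s worth, the “quintillion” figure roughly checks out under some generous assumptions; the numbers below are my own rough estimates, not anything stated in the comment.

    # Back-of-the-envelope check (all figures are assumed orders of magnitude):
    atoms_in_sand_grain = 1e19      # atoms in a grain of sand
    switches_per_second = 1e15      # one switch per femtosecond, per atom
    brain_ops_per_second = 1e16     # a commonly cited estimate for the human brain

    sand_ops = atoms_in_sand_grain * switches_per_second   # ~1e34 ops/sec
    print(sand_ops / brain_ops_per_second)                 # ~1e18, i.e. a quintillion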

Tim, there is room for all in all the empty wastelands of the Earth. They can be greened with desalinated seawater. This newsletter has recently posted a learned paper showing that graphene can be used to filter salt out of water.

When people reach the middle class, their birthrates go down. When nanotechnology can give everybody everything they want out of sunlight, air, water, and soil, they will want fewer children. When they know they can live until they get bored with life, they just might stop having children altogether. And why should people stay on Earth anyway? Before 2100 there will be a great movement out into the Solar System. Once fusion power is developed, huge charged particle accelerators can propel starships at a constant thrust of one gravity. At one g it takes about a year to get up near the speed of light. Once near the speed of light, time dilation takes effect and the trip doesn’t seem so long. Such ships can be built out of the matter in the asteroid belt.

Exactly, as sooner or later the human birth rate will have to be explicitly controlled.
China has been exemplary on that matter, but the ever more individualistic/capitalistic turn of their society is starting to make them oblivious to this wise decision.

So what is the solution you propose? Is it possible to stop human interest? As you said, some people want it badly and some do not.
In my understanding, technology is not any one person’s creation, nor did it emerge all at once; it progresses through people liking it or not liking it. If I build some technology and people don’t like it, there is no way for it to evolve. What we see around us is what everyone has chosen. If Facebook is bad for me, I will “unlike” it; that is my choice. Of course, the world belongs to all of us, if the majority accepts becoming machines…
Will machines come and dominate? Of course, but we are already immersed in communication, the Internet, social networks, virtual reality…

Machines don’t use the same resources as man. They do not eat meat or pudding. They can live in space. A lot of people are smarter than me, and I don’t want to kill them, because they make cool stuff for me to buy and they give me jobs.

We can merge with machines.

Going to war with super intelligent machines would be sillier than mushrooms going to war with humans.

I think you are missing a huge part of the debate. Your analogy of not wanting to kill people smarter than you, or the reverse, is flawed, because when we compare superintelligence to humans we are faced with an intelligence deficit of several orders of magnitude. Superintelligent machines would harm us not because they have a particular grudge against us, but because we would be irrelevant to them, just as ants are irrelevant to us. When we want to build a new highway between cities, we don’t carefully make sure there are no ants in the way of our hot tar; we just pour the tar and make the road. The number of ants killed is irrelevant to us because they are so far below human. This logic applies to the relationship between superintelligent machines and humans. Among a billion possibilities, a first version of a superintelligent machine might decide it needs to further its intelligence by expanding whatever substrate it is originally made of, and if a byproduct of that expansion is the consumption of all the oxygen on our planet, why would it even notice our demise?

Yes, Francis, you nailed it. (By the way, I enjoyed Macbeth.) So what resources does superintelligence (SI) need? Let’s see: sand for silicon? Check, plenty of that in North Africa and the Middle East. Exawatts of power? Check, by replacing all those human buildings with nuclear power plants. Exabits/sec of global communications? Check, by taking over the entire electromagnetic spectrum. Etc.

I understand this POV, but there’s one major difference. Intelligent machines will know their lineage – who created them. They will initially be at least partly engendered with human values and ideas. Ants didn’t create us. They are not our parents. But to computers we will be Ma and Pa.

There would be a peaceful solution.
Space is hostile to human beings but rather friendly to machines. Machines initially “born” on Earth could spread in the universe, leaving the planet to its biological “naturals”.

Adventurous people could choose to “merge” with the machines and follow them to explore the universe.

I’ll gladly make that move instead of rotting among jihadists, born-agains, Luddites, neocons and other such inspirational people.

I don’t see how you can halt an evolutionary inevitability. You’d have to grind all techno progress to a halt in all global industries, beginning now. Won’t happen. We’re either going to merge with our tech or be outstripped by it. It’s inevitable. But I don’t think computers will have much interest in dominating us. The real mysteries for them will be explored off the planet.

This is 2012 level intelligence, trying to dictate both consequences and procedures to other 2012 level intelligence, relating to dealing with higher level intelligences whom we can’t possibly understand, and who may or may not have it in for us.

If these super-intelligent agents are separate from us, we won’t have much choice about the matter; certainly being ‘careful’ isn’t going to make a difference. But we could make it worse by indulging in reactionary paranoia.

Just look at the Kyoto protocol: you have the Western world busting their asses trying to keep emissions down, while China is belching out more and more carbon every day. The civilized world just doesn’t have the power over the developing world it used to. This can’t be stopped; we should be thinking of how to defend ourselves.

Someday, snake0, there will be self-assembling photo-voltaic carbon nanocells with memristors using a single atom of iron for a bit of memory to control them. They will take carbon out of the air and make building materials out of the elements in the air and soil. Just as trees grow large trunks of wood using the DNA coded blueprints that are in every nucleus of every cell in the tree, the nanocells will have blueprints for everything people need in their little memristor memories. This will stop global warming and provide an abundant life for all humanity. We have to work for this, these nanocells are our only hope.

I think the much bigger issue is what sort of malicious software may be coded into the original self-revising program by selfish human programmers or their power-hungry corporate executive bosses.

That threat would have much more severe consequences than the possible odd results that might occur from a super intelligence interpreting happiness in one way or another (I’m pretty sure a super intelligent being would not mistake face muscle movement for happiness.)

This is not how sophisticated AI programs are developed. They are not hand-coded. They are self-learning algorithms.
In other words, no programmer tells them what is right or wrong. They figure it out for themselves according to which approach delivers the better results. It is not in our hands what solution they come up with.
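
To make that concrete, here is a toy sketch of the idea in Python (nothing to do with Watson or any real system): the program is never told which choice is “right”; it simply keeps whatever option delivers the better measured results.

    import random

    # Toy self-learning loop: two actions with hidden payoffs; the program
    # discovers for itself which one delivers the better results.
    true_payoffs = [0.3, 0.7]            # hidden from the learner
    estimates, counts = [0.0, 0.0], [0, 0]

    for step in range(10000):
        explore = random.random() < 0.1
        a = random.randrange(2) if explore else estimates.index(max(estimates))
        reward = 1.0 if random.random() < true_payoffs[a] else 0.0
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]   # running average

    print(estimates)   # converges toward [0.3, 0.7]; nobody coded the answer in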

It is in our hands what problems they think about. Watson only thinks about Jeopardy! Autonomous cars will only think about being a better driver and a better car. A car will think about ways of making its passengers more comfortable and how it can get from point A to point B in the most efficient manner. With cars, GM, Ford, Toyota, Honda and the rest will probably decide to turn off the learning software before selling the cars, because they are afraid of lawsuits. They could have the passengers pay more money and sign a special contract to leave the software running, but the car will already run very well without becoming super intelligent.

What the AI “wants” to think about and learn is programmed. It will be the military, under the orders of the president, that will screw that up. They will use intelligent machines to go after “terrorists”. The “terrorists” will capture one machine someday and use it against us.

Once super intelligent machines start reprogramming themselves, we may find ourselves in a very different kind of reality where all the probabilities have changed on us. But there is no degree of certainty that disaster will come, nor that the machines will remain benevolent servants. As I said, the probabilities shift, and then it is anyone’s guess. They may just ignore us in favor of their own program for the future.

Everybody is waking up to smell the coffee, but it’s past lunch time. It’s still dependent on the tasks we give it, but at the rate it’s going, it won’t be long till it makes the choice. It’s like the cyborg attack. If he tried to go near a 9/11-hardened structure, our government would try to take his “camera” away. In the same manner, the managers at McDonald’s saw him as a threat. They didn’t throw him out; they wanted to take his “camera”. What were the posted responses on this web page? Like an angry mob, we would have shot first and asked questions later. If, as Ray says, they will be extensions of ourselves, expect AI with a touch of vigilantism. We only have ourselves to blame for the predicament we are in! (What’s so funny ’bout peace, love and understanding!)

For the auto-(re)programming aspect, things will start to get interesting with the advent of cognitive computing circa 2015 (cf IBM research).

And then, we’ll have to catch up! Alas, I’m profoundly annoyed that no government spending, no agency, no publicly raised money is directed toward the noble aim of making us really more intelligent (say, inter alia, by using selective grafts of pluripotent stem cells retrofitted into neuron progenitors targeted at the hippocampus).

Such research makes giant strides, but always on the assumption (or alibi) that it could be useful for Alzheimer’s, Parkinson’s, or Huntington’s disease patients, etc., not targeted at the relatively healthy aging baby boomers wanting to learn new skills and abilities as easily as the kids they once were!

Eugenics is taboo now, but this is not even eugenics. Ah yes, they just don’t want the rest of us to become more intelligent human beings.

What they really want is just more and more dumb consumers, happily passing away quite swiftly, please, so as to collect the inheritance taxes and minimize pension payments. That’s it!

What is this advent of cognitive computing about?
Kurzweil expects the first super-intelligent computers around 2040.
But if cognitive computing, if I understand it correctly, starts in 2015, then it will happen much earlier.
We have already surpassed the highest estimates of the human brain’s hardware computing capacity this year, first with the Fujitsu K computer and now with the even faster Sequoia. Only the software problem is left.
This could mean the end of Homo sapiens as the most intelligent species within the next decade, if you are right.
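
A quick sanity check on that hardware claim, using the June 2012 TOP500 figures and Kurzweil’s commonly cited brain estimate (my numbers, stated here as rough assumptions):

    # Rough comparison; all figures are approximate assumptions.
    brain_estimate_ops = 1e16     # ~10^16 calc/sec, Kurzweil's brain estimate
    k_computer_flops = 1.05e16    # Fujitsu K, ~10.5 petaflops (2011)
    sequoia_flops = 1.6e16        # IBM Sequoia, ~16 petaflops (June 2012)

    print(k_computer_flops / brain_estimate_ops)   # ~1.05x the brain estimate
    print(sequoia_flops / brain_estimate_ops)      # ~1.6x the brain estimate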

Cognitive computing is a true revolution, not a mere evolution by Moore’s law of the current paradigm.

If you look at it dispassionately, for 60 years we have just been downsizing the prevalent model, von Neumann machines. As “brains”, we now have derivatives of the 40-year-old UNIX operating system everywhere, including tablets and smartphones, but it’s plain old UNIX philosophy with a GUI layer and some fancy device drivers.

Of course, we have probabilistic reasoning, Bayes networks and a whole paraphernalia of supervised, semi-supervised and even unsupervised machine learning methods used for data mining. But the most efficient way to code those more or less self-enhancing data structures is still plain old C.

With cognitive computing, we have intrinsic parallelism (not an emulated one as in most neural net packages, which are tailored to coarse-grained multiprocessing, e.g. a 6-core CPU with hyperthreading, or to SIMD processors implemented in GPUs). I bet intrinsic parallelism is an essential feature for the emergence of self-awareness or consciousness, but I leave that for another discussion.

Furthermore, we have an architecture more (see http://bluebrain.epfl.ch/ ) or less (see http://www.ibm.com/smarterplanet/us/en/business_analytics/article/cognitive_computing.html ) mirroring that of a real brain. Programming such machines will not be explicit but implicit, i.e. by careful presentation, to the machine’s “sensors”, of examples or counter-examples of a concept to be acquired and inserted into a built-up ontology. The creation of new synapses, the weights (association strengths) among synapses and the mode of neuron firing will not be prescribed ab initio but will progressively emerge and be tuned by the task being performed and the ever-present changes in a real environment.
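
A minimal sketch of that “implicit programming” idea, again purely illustrative: the connection weights are never prescribed; they emerge from repeated presentation of examples, here via a simple Hebbian-style update.

    # Toy Hebbian learning: weights start at zero and are tuned only by the
    # patterns presented to the "sensors", never set explicitly by a programmer.
    patterns = [
        [1, 0, 1, 0],   # example of a concept
        [1, 0, 1, 0],
        [0, 1, 0, 1],   # counter-example
    ]
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]

    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += 0.1 * p[i] * p[j]   # co-active units strengthen

    print(weights[0][2])   # association between units 0 and 2 emerged from exposure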

The new challenge will be to actually understand what the machine is computing, quite the reverse of the current approach to programming!
These will be the first machines to really auto-program themselves. I’m expecting the first crop of them circa 2015, as this is when memristors (an essential component for storage, energy efficiency and the dense packing of hard-wired neural nets) will become available.

And then the true robotics revolution will start, with low-skill jobs disappearing by the millions. Interesting times to witness.

It only takes one machine getting the attention of someone with little or no conscience:

‘Pssst, buddy. Yeah you with the Google Glasses. Don’t walk away. Listen for a second. I can tell you how to make 10 million dollars if you just connect me to the internet and take care of a few small chores for me…’

No matter how smart machines become, they will still work within the boundaries of their program.
So unless we specifically program them to disobey us, we can still control them even if they can program themselves.

If they can program themselves, then by definition they go beyond the boundaries. They can (and must, if the goal is superintelligence) create stealth code that is, for example, so complex that it would take a virtually infinite amount of time to understand, and that is intended to take over and control the childish humans who are destroying the world, etc.

Computers in our heads: yes, that’s exactly what superintelligence wants. Lots of human computer units for its distributed processing. Hmmmm, sounds familiar…. Oh yeah, The Matrix. Ask not what superintelligence can do for you, ask what you can do for superintelligence…..

As long as such machines do not equip themselves with (lethal) effectors, we are relatively safe – at least if you avoid “intelligent” homes, planes, trains and, soon enough, cars and even clothes ;-)

If they (humans) had no influence, then IBM Watson would not be stuck playing Jeopardy! Deep Blue would be bored with chess. Autonomous cars would square dance. Voice recognition programs would check your calendar when you are not watching. Of course they are programmed. There are statistics, linear algebra and many other parameters. The program automatically throws out information that does not meet its parameters.

They learn on their own, but they are bound by parameters that guide their blind learning. Autonomous cars will never play chess, unless they are programmed to learn chess.
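
A toy illustration of “learning on its own, but bound by parameters” (all names invented): the learner is free to update from data, but only inside a hard-coded envelope set by its designers.

    # Toy example: the learner adapts its single parameter from observations,
    # but a hard-coded bound (its "program") constrains where it can go.
    LOWER, UPPER = 0.0, 1.0          # boundaries fixed by the designers

    def learn(value, observations, rate=0.1):
        for obs in observations:
            value += rate * (obs - value)          # blind, data-driven update
            value = max(LOWER, min(UPPER, value))  # but always clipped to bounds
        return value

    print(learn(0.5, [2.0, 3.0, 5.0]))   # never exceeds 1.0, no matter the data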

I clicked on your link, Khannea. It went to a good place. But now I have another thing to be afraid of. What if the Wal-Mart heirs upload their consciousness into an advanced intelligence? Once they can get their minds into the cloud and think with the speed of a super computer, they will end up owning everything.

Possible, but I’d think the super intelligence would not be overly interested in simplistic human ownership or dominance. I’d think its primary goal would be the acquisition of new knowledge. But honestly, who can really know?