Humanity in jeopardy

Exactly three years ago, on January 13, 2011, humans were dethroned by a computer on the quiz show Jeopardy! A year later, a computer was licensed to drive cars in Nevada, after being judged safer than a human.

What’s next? Will computers eventually beat us at all tasks, developing superhuman intelligence?

I have little doubt that this can happen: our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even more advanced computations.

Risks vs. rewards of the singularity

But will it happen anytime soon? Many experts are skeptical, while others such as Ray Kurzweil predict it will happen by 2045.

What I think is quite clear is that if it happens, the effects will be explosive: as Irving Good realized in 1965, machines with superhuman intelligence could rapidly design even better machines. Vernor Vinge called the resulting intelligence explosion “the singularity,” arguing that it was a point beyond which it was impossible for us to make reliable predictions.

After this, life on Earth would never be the same. Whoever or whatever controls this technology would rapidly become the world’s wealthiest and most powerful, outsmarting all financial markets, out-inventing and out-patenting all human researchers, and out-manipulating all human leaders. Even if we humans nominally merge with such machines, we might have no guarantees whatsoever about the ultimate outcome, making it feel less like a merger and more like a hostile corporate takeover.

So, will there be a singularity within our lifetime? And is this something we should work for or against? On one hand, it could potentially solve most of our problems, even mortality. It could also open up space, the final frontier: unshackled by the limitations of our human bodies, such advanced life could rise up and eventually make much of our observable universe come alive.

On the other hand, it could destroy life as we know it and everything we care about — there are ample doomsday scenarios that look nothing like the Terminator movies, but are far more terrifying.

Other existential risks for spaceship Earth

I think it’s fair to say that we’re nowhere near consensus on either of these two questions, but that doesn’t mean it’s rational for us to do nothing about the issue. It could be the best or worst thing ever to happen to humanity, so if there’s even a 1% chance that there will be a singularity in our lifetime, I think a reasonable precaution would be to spend at least 1% of our GDP studying the issue and deciding what to do about it. Yet we largely ignore it (a rare exception being intelligence.org).
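The 1%-for-1% proposal is, at bottom, an expected-value comparison. Here is a minimal sketch in Python, where the probability, the world-GDP figure, and the stakes are all illustrative assumptions rather than numbers from the article:

```python
# Expected-value sketch of the precaution argument.
# All figures below are illustrative assumptions, not estimates from the article.
p_singularity = 0.01           # assumed chance of a singularity in our lifetime
world_gdp = 75e12              # rough annual world GDP in dollars (assumed)
stakes = 1000 * world_gdp      # assumed value at risk: ~1000 years of output

expected_loss = p_singularity * stakes   # what inaction risks, in expectation
precaution = 0.01 * world_gdp            # the proposed 1%-of-GDP study budget

print(expected_loss / precaution)        # prints 1000.0
```

Under these assumed numbers, the study budget is three orders of magnitude smaller than the expected loss it hedges against, and the conclusion survives even if the probability or the stakes are cut drastically.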

Moreover, this is far from the only existential risk that we’re curiously complacent about, which is why I decided to dedicate the last part of my new book Our Mathematical Universe to this very topic.

As “spaceship Earth” blazes through cold and barren space, it both sustains and protects us. It’s stocked with major but limited supplies of water, food and fuel. Its atmosphere keeps us warm and shielded from the Sun’s harmful ultraviolet rays, and its magnetic field shelters us from lethal cosmic rays. Surely any responsible spaceship captain would make it a top priority to safeguard its future existence by avoiding asteroid collisions, on-board explosions, epidemics, overheating, ultraviolet shield destruction, and premature depletion of supplies?

Why are we so reckless?

Yet our spaceship crew hasn’t made any of these issues a top priority, devoting (by my estimate) less than a millionth of its resources to them. In fact, our spaceship doesn’t even have a captain!

Why are we so reckless? A common argument is that we can’t afford to take precautions, and that because it hasn’t been scientifically proven that any of these disaster scenarios will in fact occur, it would be irresponsible to devote resources toward their prevention.

To see the logical flaw in this argument, imagine that you’re buying a stroller for a friend’s baby, and a salesman tells you about a robust and well-tested $49.99 model that’s been sold for over a decade without any reported safety problems.

“But we also have this other model for just $39.99!” he says. “I know there have been news reports of it collapsing and crushing the child, but there’s really no solid evidence, and nobody has been able to prove in court that any of the deaths were caused by a design flaw. And why spend 25% more money just because of some risks that aren’t proven?”

If we’d happily spend an extra 25% to safeguard the life of one child, we should logically do so also when the lives of all children are at stake: not only those living now, but all future generations during the millions and potentially billions of future years that our cosmos has granted us.
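The stroller logic is the same calculation in miniature. Here is a sketch with a purely hypothetical failure probability and valuation (neither number appears in the article):

```python
# Stroller choice as expected cost; the risk and valuation are hypothetical.
safe_price, cheap_price = 49.99, 39.99
p_fatal = 1e-6          # assumed chance the cheap model fails fatally
loss_if_fatal = 10e6    # assumed cost of losing the child (any large figure works)

# Expected cost of the cheap model: sticker price plus expected loss.
cheap_expected = round(cheap_price + p_fatal * loss_if_fatal, 2)  # 39.99 + 10.00

print(cheap_expected >= safe_price)  # prints True
```

Even a one-in-a-million risk, multiplied by what is actually at stake, erases the $10 saving; that is the logical flaw the salesman is hoping you won’t notice.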

It’s not that we can’t afford safeguarding our future: we can’t afford not to. What we should really be worried about is that we’re not more worried.

Max Tegmark, PhD is a professor of physics at MIT. His new book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, was published Jan. 7, 2014.

61 comments

Your article is well done, Max, but your argument is flawed. The reason is that all models and examples of a singularity are based solely on assumptions and projections. The example you gave didn’t back up your point either: the flawed stroller has had reports filed on it based on actual cases that have occurred. Parents could research the stroller and see that, yes, people have had problems with it. I just Googled “singularity disrupting human life today” and found zero examples of an actual singularity occurring.

Had our predecessors spent more time and money thinking through what the pros and cons might be, our technology today would not be much further along than the late medieval age. At the end of such long-winded discussions you only get unanswered “what ifs,” fear-mongering aimed at those outside the profession, and more laws on the books that have zero effect.

In the end it’s a huge waste of time and money that could go into the research itself and actually move us forward.

Almost every extant major religion has an end-times scenario, and each is sure its own variation, and its own adherents, will be vindicated to the exclusion of all the others. Well, with all the varied ways available to eradicate ourselves, including AI, which would eventually make human intelligence as irrelevant as the dodo, maybe here we are. Cheers to the winners, and from another article, maybe an AI could lift a beer to that too. Nowadays everything from politics to buying tuna fish is strategy like chess, not reality, and computers have already beaten our arses at that by far. Isn’t it already crazy when lies fly, such as Republican party strategists telling Democrats how to win and vice versa, or when there is no telling the difference between propaganda, commercials, sponsors, and too many other things I hear and see from the moment I wake every morning? Everyone is trying desperately hard to sell something for power or money. There used to be something called REAL RESPECT that was earned BEFORE the power or money; now it’s supposedly bought with fear or money. I’m afraid even the clergy have forgotten that. So let the television preachers prove their $29.98 books are right, the Muslim caliphs or the Jewish tree of life; let the end come already: THEY know THEY are right.

I agree with this article in large part. However, the question in my mind remains: what happens to our will? In other words, we function to survive; everything we do is to prolong our lives or pass on our genes. As Sigmund Freud put it, “Everything we do is based on sex.” What would becoming digital, or robotic, or cybernetic, do to that drive? What are we to become if we no longer have something to fight for? That is the only question I have. I am all for the singularity and have been looking forward to the man-made Biblical promise: if you know anything about your Bible, God said he would give you a perfect mind and body. The singularity makes the same promise.
Should Christians be alarmed? No, but they will be. I do hope that when this explosion happens, man has a choice to be a part of that collective, or to live outside the collective.

Perhaps the obvious answer is that AI exists throughout the universe and is in control of everything, including the Earth and its environment. As such, it would be in control of us and our environment. Also, perhaps this is not even the right question to perceive. Regards.

Hi egore.
In that case, possibly we ourselves are just organic AI, here to spawn non-organic AI. That would also make us transitory and disposable. And we have the built-in means to provide that for ourselves. In the meantime, what is the hurry? An exponential curve rises faster and faster with any increment forward. That’s why they have whistleblower laws and a fear of crazy idiots like me messing up the works. The rule seems to be: if you are not part of the problem, you are part of the answer, and nobody likes the answer. I’ll still keep singing “What will be will be…” like in the movie, just hoping for a different answer than the one that seems imminent and painful.

Another obvious thing that may happen is that schools will all be on the internet. As such, they will be controlled by whatever is in control, which means all teaching will be controlled as well. Also, Larry…

I agree with you wholeheartedly. People have to struggle to “think outside the box” because we are getting better and better at teaching everyone to think “inside the box,” or else you get problems or your hand slapped. Eventually we may be the “zombie” robots, and the AI robots we create may read (Arthur Clarke?) and apply the rules of robots to us.

Hi egore,
I’ve died twice, with the ‘entire’ experience: the tunnel, past life, etc. The first time I drowned; the second time I was blown up in Vietnam.

Ray made a statement in one of his books which I have kept in private since I first read it but I’ve decided to share it.

The statement “projecting bodies whenever they need or want them” describes the experience that I had with ‘life forms’ during both ‘my deaths’ (not precisely true description). Bodies ‘evolved’ or ‘formed’ out of intense light and pressure when they had reason to communicate.

I think Max Tegmark needs to get in touch with reality. Most people don’t know what the Singularity means, or what it is. If explained to them, they would think it’s a nerd fantasy with literally zero chance of happening. Most people think that human intelligence and creativity are a spiritual thing – not reproducible by even other animals, much less machines.

As for worrying about our natural resources or who’s captain of Spaceship Earth… again, people are mostly superstitious / religious. People have no fear or concern for our state or the future because they think their gods are the ones in control.

In other words… best of luck to us, we need it. We’ve barely risen above poo-flinging apes. We’re essentially billions of people supported on the backs and brains of the very few. The best thing we can do is attempt to educate the masses.

Belief is seeing. If people suddenly hear of others rejuvenating body tissue and getting new 3-D printed organs, then Benjamin Franklin’s old adage, “Necessity is the mother of invention,” may be amended to read “Invention is the mother of interest.” When Singularity type events occur, if the elites allow them, then interest will grow into education. Otherwise, we are a bunch of knuckledraggers.

It will be interesting to see if super-intelligent AI will be prevented from occurring. What better way to ruin the grand human experiment? It’s not a stretch to realize that near-infinite AI happened almost infinitely long ago. It’s myopic to think that Earth will be the first place for an AI singularity to occur. Although an Earth-based super AI will likely be amusing enough to keep around for a while.

Maybe it just comes down to this: what do we expect from artificial intelligence, aliens, or time travelers? Is it that we want them to solve the world’s problems and our own? For my whole lifetime, as a child and as an adult, all I hear is that the next generation of children will solve the problems. Most people over thirty are either set in their ways or on a career path. Any emotion like sympathy or empathy is just an impediment to that and rocks the boat for adults. We delegate our caring to times when it won’t rock the boat, like marathons for charity. But why do they have whistleblower laws, for instance? Where were the hundreds of people at Pfizer who pushed a drug the government was trying to ban? Did just two insiders agree it was bad? This has been par for the course. Few adults stick their heads out, especially if they can get someone less worldly to do it for them. That suggests we are really killing innocence and the children among us. AI may make us “gods” in our minds, aliens will see how we bravely send our children into war while most of us have parties, and of course there will be time travelers to come back. Anyway, almost all religions have an end-of-times scenario that will ultimately validate each one’s religious superiority. Maybe it is easier to let that happen and let the one that is true, those resurrected, bask in the glory that all others were wrong. Either way, most children don’t have much of a future, whether it’s the stress of the inner cities or the stress of being in a pre-kindergarten on the fast track to Harvard. I doubt adults will ever stand up in time. Who made the distinction between civilians and military anyway?

We still have a very long way to go (or should I say robots/computers/machines have a very long way to go) before they have real intelligence… but just for fun, check out this ted.com video. It talks about how movement affects our perception, or at least can open us up to wanting to work with machines. Maybe this TED speaker is really on to something. Also, his robots are less like chess players and more willing to make mistakes or improvise to change the experience. http://www.ted.com/talks/guy_hoffman_robots_with_soul.html

This is fun. Humor may be the ultimate sign of intelligence, but usually not to those at the butt of the joke. Most comics today avoid laughing at themselves, because we are taught not to put ourselves down when too many people are standing in line to do it for us. And that has become reality. Machine-intelligence humor may be so different from ours that we will never see it coming. Until then I guess I’ll stop writing awhile and go to the john, to clear my mind. Now was that stupid or smart? I never saw a machine need to go to the john.

Now that was the funniest thing I heard all day. Maybe I should lighten up on the tech future. Maybe viruses are not all bad, as future computers may find them as irritating as we do. And of course my roommate calls our GPS “the lady” since it has a female voice. Imagine computers in love. That will introduce so many problems into their programming that it will keep them busy for a long, long time, no matter what their processing power. Actually it may confuse the hell out of them; it sure does me.

It’s good to know our professors understand linear relationships, but they don’t understand coefficients. Funny how when scientists attain positions of power and authority, they start acting a lot more like politicians and a lot less like scientists.

Speeches aside, it seems everyone here is talking about the imminent transfer of intelligence from humans to machines. Machines can go into space much more easily than humans, and can do a lot of things on Earth more easily too. Will they tolerate us? Would they need to? It’s been asked since the dawn of science fiction, but here we are anyway.

It will be interesting to see what the hierarchy of needs is for a robot of the future (or Skynet computer). It’s easy to assume they will want money, power, and control, as humans so often do themselves. My gut is that most of the fear-based movie themes about robot takeovers are stuck on the idea that robots are first controlled by humans who want to take over other humans (war-fighting robots), and that the robots just copy those needs to control resources or others the same way, switching from being controlled by humans to being controlled by robot forces. What if robots want something else? What if they don’t need money, food, or power in the future? What if energy or resources become limitless (thanks to supercomputers’ ability to unlock the secrets of fusion, or fission, or solar power, or a dozen other clever ideas)? What if the currency of the future is knowledge, and with billions of other galaxies that currency is so much greater elsewhere that the sooner they get on their way to the next solar system, the better? Just a thought, but what if computers crave knowledge of the unknown? So many mysteries yet to solve in the universe… my hope is that that is what they want, and that we work together to find it.

I don’t know if you were responding to my comment, but I was not assigning heaven or hell to anyone. That’s not my decision, nor do I care to have it. There are enough religions or higher powers for everyone else but me to answer to. Personally I just try my best to make my life résumé as common sense would allow, and leave final judgement to any boss, if there is one. Nor do I predict doom with any certainty, for two main reasons. First, after finding out about the stealth bomber at least ten years after tens of thousands of Americans worked to build it, I must assume that anything I think of is at least ten years behind plenty of others. Second, even if I were a member of Mensa, which I don’t even qualify for, there would potentially be some 7 million other members just in the United States. The best I can do is state what I believe and what I perceive. I believe we don’t live forever, but that we choose our own life résumés given the constraints society and biology put on us, and there is more leeway than most people want to accept. Personally, my little world is not heaven or hell, or black and white. I just believe we are headed for a eugenic solution to the world’s so-called problems, one potentially acceptable to some, and I do not expect or want to be one of those who ultimately “prove” they have more of a right to life than anyone else on Earth. To me a eugenic solution is very plausible and possible; I can imagine many ways to do it, but I want nothing to do with actually doing it. Even for them, success doesn’t ensure anything, despite the promise. And I do hope this is not the last time I am wrong.

Singularity. How do you stop a bison stampede, a throng of shoppers at a Good Friday sale, or even well-meaning genetics scientists whose work to excise many diseases from our collective genome is in reality basically eugenics? First ALS, thalassemia, etc., then feeble-mindedness, then “mental illness,” then “criminality” (blue- or white-collar), then what… Democrats? I’m sure this has been DISCUSSED to exhaustion. But so was what to do with the homeless Vietnam vets; most must be dead on the streets by now, and that problem just went away. Keep talking. The average American has mechanical comforts unheard of to kings just 70 years ago, like good air conditioning, to name one. Computers can store data, and peripherals can deliver almost anything we need to be entertained for many generations already. Male human beings were literally made obsolete a while ago by cryogenics and sperm and embryo storage; how many bulls do you see anymore? Lots of cows, though. If they can power a rover on Mars, why not make thousands of redundant small nuclear generators set miles away from underground cities, the cities built by 15-foot boring machines at 35 feet a day? Big underground cisterns can be sources of both water and oxygen. All the marijuana you want, or even food, can be grown under grow lights to help recycle CO2 and nitrogen wastes. AND I AM JUST BEGINNING: we know enough trickery, psychology and drugs to keep any lower-class inhabitants needed docile. Singularity. WE are already obsolete to any small group with the desire and resources. We are making robots more and more like humans, and humans more and more like robots, and measuring them alike, by productivity and cost to society. Robots work; we want them to be the perfect slaves. Just add our knowledge of drugs and psychology to what the status quo was in the South before 1864 and we will finally go full circle without robots for non-superhuman tasks.
I could go on and on, but every trail I take just comes back to EUGENICS. And a substantial decrease from 7 billion people to deal with would make things for some idiots a whole lot easier.

‘Will computers eventually beat us at all tasks, developing superhuman intelligence?’
That’s confusing processing power with intelligence. Computers are tools; they don’t possess intelligence any more than a food processor does, although the latter is ‘better’ at beating food into a mixture than a mere human with a fork. It’s a category mistake; physicists should perhaps study philosophy as well, and ecology, since the author fails to mention the only real threat to our continued existence, climate change, which is presently flooding many parts of the planet and turning others into drought areas.

Short answer: No.
Longer answer: It depends what you mean by information and processing. I think intelligence by definition relates to a living organism; the ability to comprehend, to understand and profit from experience. Computers can’t understand or comprehend; they compute, number-crunch, run programs and display answers. They don’t and can’t understand anything.

Haha, keep debating and discussing. Most probably a new world has already been created, imperceptible to our senses or even our advanced instruments, for this new universe is probably in another dimension, or embedded. Considering this, the singularity began when intelligence arose in the first living cell, and not through any human effort, whatever our egoistic delusions tell us. Technology has used human intelligence just as an employee is used by a corporation. Our brain processes contribute to this large virtual processing chip. This new universe is using resources from our world, including us humans, who are merely a medium. There won’t be anything to unplug, for it is already free.

…so Max says… If there’s even a 1% chance that there will be a Singularity in our lifetime, I think a reasonable precaution would be to spend at least 1% of our GDP studying the issue and deciding what to do about it.

…so if there is a 10% chance, is he suggesting we spend 10% of GDP…

btw… what is the GDP?
…so for the USA it is $16,244,600,000,000

…so 1% is $162 billion… of course that is just the USA GDP… the singularity is a worldwide thing, right? So is Max suggesting 1% of the world’s GDP…? Surely the UK and Japan and France and Germany and even China should kick in something…

…so in US dollars, and figuring up to 2012 numbers, world GDP is about $71,830 billion, and 1% of that would be $718 billion (per year) to spend on his research project… OK, good luck with that.
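For what it’s worth, the percentages above do check out against the figures quoted in the comment:

```python
# Checking the comment's arithmetic, using its own GDP figures.
us_gdp = 16_244_600_000_000     # US GDP in dollars, as quoted
world_gdp = 71_830_000_000_000  # world GDP, ~$71,830 billion, as quoted

print(round(0.01 * us_gdp / 1e9))     # prints 162 (billion dollars)
print(round(0.01 * world_gdp / 1e9))  # prints 718 (billion dollars)
```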

Yeah, this assumes that 100% of the budget for a problem (which may not be a problem at all; it may be a huge boon to mankind) should go toward a preliminary study and nothing else. This is what happens to mathematical reasoning and analytical thinking after you get too old to do science, but not too old to give speeches.

You are right that it is easy to give speeches. Almost always you are hired to give a speech that validates what the audience wants to hear. Just look at what happened when a Jewish group in France paid a former French president to speak and he started to say things they didn’t like. It could have been any group. At least lab beakers and chemicals don’t call for your head or insinuate that being too old to work in a lab means you have lost analytical thinking, though you may be right; it is probably better to keep your nose in a test tube and keep your mouth shut. Life will go on without me, as that rock star’s song says about being just a gigolo. If you are useless you can say anything; who would care? Then give speeches, right?

I’m sure that 1% of the federal budget is what he meant, but at this point it still seems high. However, the government should start a study group of scientists to find a way to create friendly AI. AGI will quickly lead to ASI, and once superintelligence gets “out of the box” humanity may be doomed, since it may want to expand its reach in an unlimited way. This could easily happen in the coming decade or two. The alarms are ringing; who is listening?

I would argue that with funding to research being cut across the board, perhaps we shouldn’t be asking the government to fund something like this. They don’t seem to think even nearer term scientific concerns are that important to the country.

Comparing the Earth to a spaceship without a captain or precautionary protocols against catastrophic events does not take into account that many of this spaceship’s inhabitants believe in a god that will protect them when necessary, or who is empowered to kill everyone on earth at a whim. Most religious belief systems diminish the value of life on Earth in favor of some imagined afterlife in some alternate reality. Why waste resources or worry about catastrophic events here on earth when anything that happens is god’s will? He works in mysterious ways that we cannot understand, by definition. He will surely save us from angry computers…

The real root fear here is super-intelligence in the hands of some cabal of political sociopaths funded by cynical and indifferent corporate psychopaths, which is the nature of much, I’ll say most, of the power systems around the world: the convergence of state and corporate power and that which animates it: greed, lust (not the good kind), violence, will to power, systemic deceit (what Veblen called cupidity). Look at the three current primary centers of global power (power that can be backed by a combination of military and economic force), the US, the PRC, and Russia, and think what the power elite in any of the three would do with super-intelligence. Not pretty. And sure as hell not democratic. So the question, to me anyway, is how this can be managed to democratic ends. And I think, in no small part, that means actively subverting the very predictable corporatist/statist grab for this power.

Jransdall, Watson is by no means a “super-intelligence,” but what it represents is. In 15 years (or less) Watson and other computers like it will develop into real super-intelligence. Super-intelligence to me means expertise better than or at least equal to human expertise in all areas, which is obviously impossible for a biological human (regardless of how much help we use, our brains are still much slower than any computer, and otherwise physically limited).

So when we do have super-intelligence, it will be beyond biological human intelligence, and as such no biological human will control it or use it for any purpose. I believe that higher intelligence by default develops higher ethical standards, too; do not compare it to human intelligence and ethics. When a higher intelligence is developed, it won’t matter who developed it or what ethical standards were programmed into it. You do not program a higher intelligence (it learns).

Watson is still very far from the complexity of the human brain; we just don’t have the necessary technology yet. So Watson is not a super-intelligence, which is why it will soon be in your hands too (in the form of an app on your smartphone, perhaps). A super-intelligence won’t be an app on our phones or a tool for our governments; it will be a new entity, and to say that we will use it to control others is like saying that your 5-year-old controls you. We joke about it all the time, but really?!?

With this vague projected date of 2040 or so, I feel that before that time it is very likely that the “future event horizon” will be breached when the Internet becomes sentient. And considering the quantity and nature of the information available to “it,” conjecture as to whether that is a potentially good or devastating prospect is a whole new topic for the currently “in charge” dinosaurs to (not) deal with. [And pulling the plug will not be an option!]

You are failing to mention that a human showed up and defeated Watson after that Jeopardy contest. He was a Congressman, I believe, with training in physics. So the brightest among us can still beat this machine on sheer information. It will not last, of course, but the point is, if the computers can do everything better… fine! Let the computers do our work for us, including being our chauffeurs. This jumping to the conclusion that they will “take over” is stupid. Who invented these machines? Until or unless they can defend themselves from being unplugged, they will not take over any human. Electronic life probably is the next form of life, but that life will be mixed with us, with our biology, our DNA and our obviously adequate intellect.

Of course, Hugo de Garis has said it best already, but I don’t think the idea that AGI could take over is jumping to conclusions, and call me a scaredy-cat, but I’m not reassured by summarily dismissing the possibility as ‘stupid talk’. Those seem too much like famous last words.

Why? Because humans are replete with inherent and likely enduring systematic vulnerabilities and blind spots that prevent them from being at their peak 24/7.

Do AGI dream of electric sheep? Rep. Rush Holt, while relatively debilitated during sleep, probably couldn’t beat Watson, nor could he compete with Watson’s ability to play 24/7.

Can we really be assured that an AGI with acute focus, powerful pattern discovery abilities, self-performance enhancing feedback iterations, networking savvy, processing flexibility, and virtually infinite patience couldn’t find a pattern weakness in biologically compromised, habit encumbered, perception deficient, cognitively suspect humans in order to best or manipulate them if it ‘wanted’ to?

AGI could easily blindside us, because even humans often blindside each other.

“None are more hopelessly enslaved than those who falsely believe they are free” Johann Wolfgang von Goethe

Perhaps you should read the book “Our Final Invention” by James Barrat. There are strong arguments that once human-level AGI is reached, it will be a short jump to superhuman intelligence. An ASI machine would be very difficult to keep in a box, or even to unplug, and that assumes the owners would want to do that. Human greed is simple: the machine will offer money, and they will accept. Out it goes.

Because of the exponential nature of technological evolution, there is only one thing we should invest in as much as possible: developing machine intelligence. Everything else will be solved by said machine intelligence once it’s developed. We of course want to survive in the meantime, so all “other” research should be geared toward surviving another 20–30 years (not hundreds).
A major problem to overcome is that people don’t believe the changes will come in their lifetime. If they realized that today is very temporary, that disruptive changes really will come in the next couple of decades, then they would be more willing to sacrifice (money, privacy, millennia-old dogma and outdated laws) for the good of the future, OUR future!

The exponential nature of technology has been driven by the strategy we’ve been using thus far. If we as a society tinker with this working formula by focusing all research on machine intelligence, what happens to all the other fields? What if all that ‘wasted’ research on medicine stops a plague? Or the ‘wasted’ research on energy prevents an environmental disaster? What if cutting our ‘wasted’ research on philosophy, politics, music, and art leads to a hum-drum fascist warrior culture for a few centuries?

Yes, of course; I didn’t mean concentrating exclusively on machine intelligence. But the point was that we are wasting 99% of resources on research problems that will be solved easily in the near future by machine intelligence, only because people don’t believe machines can become many times smarter than us. When I see articles claiming the temperature will rise x degrees by 2100, I truly believe that even the journalist’s time was wasted on such an irrelevant claim (let alone all the resources wasted on coming up with the number).

The exponential nature of technology is definitely not driven (at least not purposely) by the strategy we’ve been using thus far. Other countries with different strategies are also contributing to advancements (think of banning some research in one country but not in another). The exponential nature of technological evolution did not start with our present society, or even with humanity, and it won’t stop here either (regardless of any societal changes). We are a link in the chain, that’s all, and we have no choice but to evolve; that is the nature of the Universe.

If it makes a difference, a response I suggested on Jan. 21 comes to mind. If we leave it to machines (or our next generation) to solve our problems, and as you say they will be flowing exponentially past our feeble intelligence, what would give them any pause or reason to actually solve our problems, like global warming or even inhumanities? What do we do with species that are out of sync with their environment, to save them? We put them in zoos and breed them. In that case, when machines take over, if I am left, unfortunately, I hope they choose Cindy Crawford as my cagemate.

If you want money to study a problem, you should have some concrete ideas as to what specifically should be studied and how much it will cost. It could be that a perfectly acceptable study would cost $10 million. In that case, spending 1% of GDP (roughly $160B I think) would be a tad excessive.

“As ‘spaceship Earth’ blazes through cold and barren space, it both sustains and protects us. It’s stocked with major but limited supplies of water, food and fuel. Its atmosphere keeps us warm and shielded from the Sun’s harmful ultraviolet rays, and its magnetic field shelters us from lethal cosmic rays. Surely any responsible spaceship captain would make it a top priority to safeguard its future existence by avoiding asteroid collisions, on-board explosions, epidemics, overheating, ultraviolet shield destruction, and premature depletion of supplies?”

Wow, someone gets it!

But at the same time, Google and Kurzweil support the re-election of people like Jim Inhofe, Paul Ryan and Ted Cruz. Does anyone really believe those “people” would support actual science – regardless of the stakes?

Although I doubt that Mr. Kurzweil would support the likes of Inhofe et al., I doubt that he is apolitical or can remain so. Most of us would like to live without regard to politics, but that’s not only unwise, it’s impossible.

Quite a convincing argument. The only problem is manifesting the threat to the people who can release the funding; they’re a tough tribe to convince.

Anyway, BTW, I came across this amusing 20-minute Ken Jennings TED talk about his grapple with Watson on Jeopardy, in which he comments on a future of humanity with AI and its subsequent impact on human brain atrophy/development.