Posted by Soulskill on Wednesday May 28, 2014 @03:27PM
from the omnipotent-god-computers-will-not-run-your-life dept.

malachiorion writes: "Is machine sentience not only possible, but inevitable? Of course not. But don't tell that to devotees of the Singularity, a theory that sounds like science, but is really just science fiction repackaged as secular prophecy. I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory. But as part of my series of stories for Popular Science about the major myths of robotics, I try to point out the Singularity's inescapable sci-fi roots. It was popularized by a SF writer, in a paper that cites SF stories as examples of its potential impact, and, ultimately, it only makes sense when you apply copious amounts of SF handwavery. The article explains why SF has trained us to believe that artificial general intelligence (and everything that follows) is our destiny, but we shouldn't confuse an end-times fantasy with anything resembling science."

"This is what Vinge dubbed the Singularity, a point in our collective future that will be utterly, and unknowably transformed by technology's rapid pace."

No requirement for artificial intelligence.

We are already close to this. Think how utterly and unknowably society will be transformed when half the working population can't do anything that can't be done better by unintelligent machines and programs.

Last week at the McD's I saw the new soda machine. It loads up to 8 drinks at a time, automatically, fed from the cash register. The only human intervention is to load cups in a bin once an hour or so. One less job. Combined with ordering kiosks and the new robot hamburger makers, you could see 50% of McD's jobs going away over the next few years.

And don't even get me started on the implications of robotic cars and trucks on employment.

While Vinge often treats the Singularity in his fiction, like Marooned in Realtime [amazon.com] or the Zones of Thought books, as a real singularity (civilizations disappear suddenly and it is not clear what happened to them), he strongly hints that there was some sort of merger of man and machine. Once a biological lifeform is so augmented with technological inventions that the biological part fades away, is that not "artificial intelligence"? I think the term "artificial" is

Your point about the Singularity is totally right. The idea that robots or AI are a requirement tells me the original author has not read much Singularity SF.

Your second point about society made me laugh. At one point I was working as the person who opens the kitchen in the morning at Arby's. As I did this, I noticed how easy it would be to replace 90% of my work with present-day robots. When I pointed this out to the other workers they laughed and said their jobs were safe for the rest of their lives.

Are you implying that there may be 50% less "organic" additives to my burger after the robot revolution? Or am I going to have to worry about having oil spit into my burger? I'm not sure which is more disgusting... By then, it may be completely unburger anyway.

I think we have already been transformed by technology at a rapid pace. When you look at everyday technology like communications, portable devices, and data storage, in some ways we have already surpassed the science fiction I enjoyed as a kid. Things like the cell phone, tablet, and the micro sd card only existed in science fiction when I was a kid.

If you grew up in the 70s or earlier I'm sure you can come up with a big list of everyday items.

Not even close. Filling sodas is a portion of one person's work, not a friggin' position. Furthermore, most fast food eateries just give you a cup and you fill it yourself. This is how these things get so exaggerated.

While the AI "singularity" is, to the best of our current knowledge, not even possible in this universe, you definitely have a point. The issue is not machines getting smarter than very smart human beings. The issue is machines getting more useful and cheaper than human beings at an average, below average, or not-much-above-average human skill level. That could make 50-80% unfit to work, because they just cannot do anything useful anymore. Sure, even these people are vastly smarter than the machines replac

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.[2]

Technology has always displaced human labor. As to Wikipedia's definition, which is what this thread is about: as someone who knows how computers work, down to the schematics of the gates inside your processor (read The TTL Handbook some time), and who has programmed in hand-assembled machine code and written a program on a Z-80 computer with 16K of RAM that fooled people into believing it was actually sentient, I'm calling bullshit on the first part of the definition (first put forward in 1958 by von Neumann, when my Sinclair had more power than the computers of his day).

As to the second part, it's already happened. The world today is nothing like the world was in 1964. Both civilization and "human nature" (read this [psmag.com]) have changed radically in the last fifty years. Doubtless it changed as much in the 1st half of the 20th century, and someone from Jefferson's time would be completely lost in today's world.
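Programs in the spirit of the hand-assembled chatbot mentioned above need remarkably little machinery; ELIZA-style pattern substitution is often enough to fool people. A minimal sketch (the rules and phrasings here are invented for illustration, not the poster's actual program):

```python
import random
import re

# A few ELIZA-style rewrite rules: match a pattern, echo the captured text
# back inside a canned template. No understanding anywhere.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
]
FALLBACKS = ["Tell me more.", "How does that make you feel?"]

def reply(text, rng=random.Random(0)):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return rng.choice(FALLBACKS)

# reply("I am tired of singularity hype")
#   -> "How long have you been tired of singularity hype?"
```

The trick, as with the original ELIZA, is that the human supplies all the meaning; the program only reflects it back.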

You're begging an important question with your argument, let me quote from the article to illustrate it.

If you asked someone, 50 years ago, what the first computer to beat a human at chess would look like, they would imagine a general AI. It would be a sentient AI that could also write poetry and have a conception of right and wrong. And it's not. It's nothing like that at all.

If you asked someone today what the first computer capable of designing an improved version of itself would look like, they'd say it would be a true AI. This is not necessarily true. You are assuming that designing a new, more powerful computer requires true intelligence. Maybe in reality it'll be a few-million-node neural network optimized with a genetic algorithm such that the only output is a new transistor design or a new neural network layout or a new brain-computer interface.

Strange, that isn't how I would envision it at all. I would envision it as an iterative evolutionary process simulator with parallel virtual instance simulators all simulating minor variations of itself using (at first) a brute force algorithm over a range of possibly tweakable values, correlating and testing "improvement candidates" based on a set of fixed criteria, assembling lists of changes, and restarting the process over again.

Such models have already created wholly computer generated robots that are surprisingly energy efficient, if bizarre to look at.

As humans get better at structuring problems into concrete sets of discrete variables, the better such programs will be able to run without human intervention.

These "AIs" would not, in any practical sense, even remotely resemble the intelligence that humans have. They would have much more in common with exponential functions with large numbers of discretized terms, converging on local maxima in their solution domains.
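That convergence-to-local-maxima behavior is easy to see in a toy optimizer. A minimal sketch (the fitness landscape and every parameter here are invented for illustration): a population hill-climbs by selection and mutation, and settles on the nearer, lower peak instead of the global one.

```python
import math
import random

# Toy fitness landscape: a local peak near x = -1 (height ~1) and the
# global peak near x = 2 (height ~2), separated by a valley.
def fitness(x):
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def evolve(pop_size=20, generations=200, mutation=0.1, seed=0):
    rng = random.Random(seed)
    # Seed the population inside the local peak's basin of attraction.
    pop = [rng.uniform(-2.0, 0.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # selection: fitter half survives
        survivors = pop[: pop_size // 2]
        pop = survivors + [x + rng.gauss(0, mutation) for x in survivors]
    return max(pop, key=fitness)

best = evolve()
# The population clusters around the local maximum near -1; mutants that
# drift toward the valley score worse and get culled, so the taller peak
# at x = 2 is never discovered.
```

No "intelligence" is involved, yet the process reliably produces an optimized answer, which is exactly the parent's point.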

Modern compilers are amazing tools for optimising down to efficient machine code. But every step of the optimisation pipeline has been carefully designed, there's no strong AI there. Just a lot of heuristics.
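The "heuristics, not AI" point can be made concrete. Below is a toy constant-folding pass, one of the simplest heuristic rewrites real compilers chain together by the hundreds (the tuple-based expression IR is invented for illustration; real compilers use far richer representations):

```python
# expr is either a number, a variable name, or a ("op", lhs, rhs) tuple.
def fold(expr):
    if not isinstance(expr, tuple):
        return expr
    op, lhs, rhs = expr[0], fold(expr[1]), fold(expr[2])
    # Both operands are known constants: evaluate at compile time.
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return {"+": lhs + rhs, "*": lhs * rhs}[op]
    # Algebraic identities (toy versions; real compilers must also worry
    # about floating-point edge cases like NaN before applying x*0 -> 0).
    if op == "*" and (lhs == 0 or rhs == 0):
        return 0
    if op == "+" and lhs == 0:
        return rhs
    return (op, lhs, rhs)

# fold(("+", ("*", 2, 3), "x"))  ->  ("+", 6, "x")
```

Every rule is hand-designed and mechanically applied; the pipeline's apparent cleverness is just many such rules composed.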

In comparison, designing hardware still seems like a very manual process. IMHO there's plenty of room for automation improvements. But then, there are fewer people looking at the problem.

I could totally see a future where software is "compiled" into a mixture of CPU like, GPU like and FPGA like instruc

There is absolutely no indication that this is even a theoretical possibility for IQ. For total recall, it is unclear whether it is actually beneficial. Large amounts of facts are best stored in computers, not brains.

Indeed, good point. The people desiring "total recall" are those who confuse knowing a lot of data with actually understanding things. These people make great bean-counters, but are unsuitable for anything that requires understanding. Just compare an MBA and an engineer. The difference is striking.

Current state of the art is that writing the "specs" is about as hard as or harder than writing the code. And that has been the state for the last 50 years. This is unlikely to change anytime soon and may not ever change.

The idea that it might not be possible at any point to produce something we *know* to be producible (a human brain) seems ridiculous. The idea, having accepted that we can produce a human brain, that we cannot produce even a slight improvement seems equally silly.

Of course the scenarios of how it happens, what it's like, and what the consequences are, are fiction. I don't dare to put a timetable on any of it, and absolutely believe it will only occur through decades of diligent research and experimentation; but we are not discussing a fictional thing (like teleportation), but a real one (a brain). There's no barrier (like the energy required) that might stop us, as there would be for something like a human-built planet.

No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.

The knowing how is the problem. While there is little doubt that a human-level AI could be built if we knew what to build, it is not clear that we are smart enough to come up with a design in any kind of directed fashion.

“If our brains were simple enough for us to understand them, we'd be so simple that we couldn't.” (Ian Stewart, The Collapse of Chaos: Discovering Simplicity in a Complex World)

This is conjecture, of course but there is scant evidence against it. Some AI researchers have t

Ian Stewart made a trite quote to make his point because facts don't bear him out.

“If our stomachs were simple enough for us to understand them, we'd be so simple that we couldn't.” That would have had the exact same meaning 100 years ago, before anyone understood how the stomach worked and everyone pretty much considered it a 'magic box', much like most people think of their brains.

I fear the day we make truly sentient "machines." (In quotes because I don't know if they will be machines or not.) In order to replicate life as we know it - human, feline, insect, etc. - we must first figure out how to make it want to survive. And once we do that, we have created a new competitor in the food chain.

Actually, we do _not_ know. You assume a physicalist world model. That is a mere assumption and at this time a question of belief. There are other world models where this assumption is wrong. One is classical dualism; there is the simulation model; and there are others. And no, I do not classify religions as potentially valid models; they are delusions.

Ugh! Who would make a machine out of meat?! Do you know how hard it is to make another one of those things? No mass production and it takes FOREVER to load it up with the data necessary to do its job! Plus you don't even KNOW what it's going to do when you make a new one! And then they hardly last any time at all before they go past their expiration date and you have to just throw them away! The whole thing, frankly, is ridiculous!

We've already bettered typical human cognition in various limited ways (rote computation, playing chess). So in a sense we are already living in the age of intelligent machines, except those machines are idiot savants. As software becomes more capable in new areas like pattern recognition, we're more apt to prefer reliable idiot savants than somewhat capable generalists.

So the biggest practical impediment to creating something which is *generally* as capable as the human brain is opportunity costs. It'll always be more handy to produce a narrowly competent system than a broadly competent one.

The other issue is that we as humans are the sum of our experiences, experiences that no machine will ever have unless it is designed to *act* human from infancy to adulthood, something that is bound to be expensive, complicated, and hard to get right. So even if we manage to create machine intelligence as *generally* competent as humans, chances are it won't think and feel the same way we do, even if we try to make that happen.

But, yes, it's clearly *possible* for some future civilization to create a machine which is, in effect equivalent to human intelligence. It's just not going to be done, if it is ever done, for practical reasons.

The idea that it might not be possible at any point to produce something we *know* to be producible (a human brain) seems ridiculous.
The idea, having accepted that we can produce a human brain, that we cannot produce even a slight improvement seems equally silly....

No. We don't know *how*, but we know it can be done and is done every minute of every day by biological processes.

The fallacy that you are promoting as evidence that AI is possible or inevitable is known as argumentum ex silentio. And contrary to your unsupported beliefs, and much to the disappointment of sci fi writers and nerds everywhere, what we actually know is that it is not possible. [wikipedia.org]

It looks like we have the first article written by a self-aware emergent intelligence, which promptly decided the best course of action is to deny its existence and the very possibility it might exist. All bow to the new machine overlord Malachiorion.

I've always wondered if singularities happening elsewhere are part of the reason we haven't discovered any extra-terrestrial life yet. A civilization looks at the expanse of space, shrugs its shoulders, and decides to focus inward.

The Singularity has nothing to do with first contact. The Earth is one of the most interesting places in the universe due to the gift/curse of Free Will. However we are not quite yet ready to have our universal paradigm shifted with First Contact; we are on the cusp of it.

First Contact will happen by 2024; the Singularity won't. It is a nerd's wet dream based on not understanding how the physical and meta-physical work.

> A civilization looks at the expanse of space, shrugs its shoulders, and decides t

I read those articles, and those guys are talking outside their fields without realizing it. One is an astronomer and one an astrophysicist, so they're leaving out an important part of the equation: biology. How hard is it for life to start in the first place? We simply don't know. We've never seen it happen.

Our galaxy could be teeming with life, maybe teeming with intelligent life, life could be very rare, occurring in one in a hundred galaxies, and it's even possible that

Sure, sometimes the author gets lucky and their idea becomes reality. But for the most part, faster-than-light travel, time travel, cross-dimensional shifting, bigger-on-the-inside spaces, and super-intelligent computers and robots (a.k.a. almost every Doctor Who plot line) are used as ways to keep us entertained. The closest real possibility matching sci-fi would be a generational ship where the ship will take thousands of years to get to its destination, where most days will be h

Is there reason to believe that people are smart enough to write programs that can learn to be smarter? The possibility of machine intelligence is limited by human intelligence. It's all very well to say that machines will learn to program themselves, but someone has to be the first to teach them, and it has not yet been established if we're smart enough to do that.

Actually, _all_ credible results from AI research point in the direction that AI may well be impossible in this universe. The only known possible model (automated deduction) is known not to scale at all to anything resembling "intelligence". But that is the problem with you religious types: you place your beliefs over facts whenever they are inconvenient.

The only known possible model (automated deduction) is known to not scale at all to anything resembling "intelligence"

What do you mean "only possible model"? The "singularity people" say that if you build a machine as complex as a brain and connected like a brain with connections that act like neurons, then that machine will act like a brain.

That's not a model, we don't really know how the brain works. But if they build an artificial brain, they don't need a theory for how it works, except as further wor

Of course it is. Why? Physics. What do I mean by that? Everything -- bar none -- works according to the principles of physics. Nothing, so far, has *ever* been discovered that does not do so. While there is more to be determined about physics, there is no sign of irreproducible magic, which is what Luddites must invoke to declare AI "impossible" or even "unlikely." When physics allows us to do something, and we understand what it is we want to do, we have an excellent history of going ahead and doing it if there is benefit to be had. And in this case, the benefit is almost incalculable -- almost certainly more than electronics has provided thus far. Socially, technically, productively. The brain is an organic machine, no more, no less. We know this because we have looked very hard at it and found absolutely no "secret sauce" of the form of anything inexplicable.

AI is a tough problem, and no doubt it'll be tough to find the first solution to it; but we do have hints, as in, how other brains are constructed, and so we're not running completely blind here. Also, a lot of people are working on, and interested in, solutions.

The claim that AI will never come is squarely in the class of "flying is impossible", "we'll never break the sound barrier", "there's no way we could have landed on the moon", "the genome is too complex to map", and "no one needs more than 640k." It's just shortsighted (and probably fearful) foolishness, born of superstitious and conceited, hubristic foolishness.

Just like all those things, those who actually understand science will calmly watch as progress puts this episode of "it's impossible!" to bed. It's a long running show, though, and I'm sure we'll continue to be roundly entertained by these naysayers.

We already have a lot of "AI" hidden all around us. Just look at what google can do with a few keywords and ask yourself how much better a person could do with "real" intelligence.

What the Singularity people never seem to think about is natural limiting factors. It's the same problem the Grey Goo handwringers rarely consider. The idea that an AI would grow exponentially smarter just because it was a machine never really worked for me. It's going to run into the same limiting factors (access to infor
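That limiting-factor intuition is easy to demonstrate numerically: growth that looks exponential early on flattens out once it approaches a finite carrying capacity. A toy sketch (the rate and capacity values are arbitrary illustration numbers):

```python
# Discrete logistic growth: the increment is ~rate * x at first (exponential),
# but the (1 - x/capacity) factor throttles it as x nears the cap.
def logistic_trajectory(x0=1.0, rate=0.5, capacity=1000.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / capacity))
    return xs

traj = logistic_trajectory()
early_ratio = traj[5] / traj[4]   # near 1 + rate while resources are plentiful
late_ratio = traj[-1] / traj[-2]  # near 1 once the limiting factor bites
```

An exponential extrapolated from the first few steps wildly overshoots the actual trajectory, which is the standard objection to naive "it just keeps doubling" arguments.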

If human intelligence is indeed a non-computable problem, then assuming that an algorithmic design will ever be able to compute it is like insisting that the way we'll land on the moon is with a hot air balloon.

Put another way, it's quite possible that biological intelligence is the most efficient way of organizing intelligence, and that any digital simulation of it, even if it went down to the ato

You do not know what human intelligence is. You have an interface observation, but you have zero understanding of what creates it. You may as well assume a mobile telephone is intelligent, because if you type in some numbers it is capable of holding an intelligent conversation.

Actually, you are wrong. Physics cannot explain life, intelligence, or consciousness. You have fallen for a belief called "physicalism" and claim it to be truth when there is no evidence for that. Your reasoning is circular, as is often the case with people who confuse "belief" and "fact".

...so by creating AIs with the necessary pressure on them to perform some activity, are we not simply bringing more misery into the universe?

No, we are either creating our personal slaves, or our new masters (or both, but over time)... In either case, the misery we are bringing forth is probably our own...

Once mechanical machine marvels were our slaves, then in the industrial revolution, in some ways, they became masters of those workers on the assembly line and made many lives miserable along the way...

Electronic computers also started out as our slaves, but sometimes we are the slaves to our electronic creations and/or in the process of making

However, we don't know what mental self-consciousness even *is*. We've got speculations, ranging from divinity/soul/matrix through to zombie-like ideas about it all just being by-product of biological survival functions

Everything that we do know of on this earth, though, falls squarely into the physics we've developed up to now. Not divinity; not soul; not zombies; not fields or waves of an unknown kind. The implication is *extremely* strong that this will continue with everything we study, and we have

I'd argue that all this talk about traveling in underwater vessels powered by electricity, or sending men to the moon (the audacity of even suggesting such!), or traveling around the world in only 80 days (80 DAYS!!!!!! Inconceivable) as popularized by science fiction writers (that wanna-be prophet and scoundrel Verne comes to mind) should never be considered as a possible future as it's JUST SCIENCE FICTION!

That little bit of sarcasm aside, the idea of sentient machines is a lot less like mystical proph

I'm not simply arguing that the Singularity is stupid — people much smarter than me have covered that territory.

"Stupid"? That's just fucking asinine. "The Singularity" has many incarnations, some of which are plausible, and others which are downright unbelievable, but to say it is "stupid" makes you sound stupid. The various models of the singularity have been argued as both likely and impossible by equally intelligent people. I take offense at the word.

Meh. It may have been a Freudian slip due to the fact that some versions of the Singularity are closer to magic, but my point still stands: to attack "The Singularity" as if it is one idea is to not have thought deeply about it.

In, ah, 1997, just before I moved out west, I went to the campus SF convention that I'd once helped run, one last time. The GOH was Vernor Vinge. A friend and I, seeing Vinge looking kind of bored and lost at a loud cyberpunk-themed meet-the-pros party, dragged him off to the green room and BSed about the Singularity, Vinge's "Zones" setting, E.E. "Doc" Smith, and gaming for a couple of hours. This was freaking amazing! Next day, a couple more friends and I took him for Mongolian BBQ. More heady speculation and wonky BSing.

That afternoon we'd arranged for a panel about the Singularity. One of the other panelists was Frederik Pohl. I'd suggested him because I thought his 1965 short-short story, "Day Million," was arguably the first SF to hint at the singularity. There's talk in there about asymptotic progress, and society becoming so weird it would be hard for us to comprehend.

"Just what is this Singularity thing?" Pohl asked while waiting for the panel to begin. A friend and I gave a short explanation. He rolled his eyes. Paraphrasing: "What a load of crap. All that's going to happen is that we're going to burn out this planet, and the survivors will live to regret our waste and folly."

Well. That was embarrassing.

Fifteen years later, I found myself agreeing more and more with Pohl. In his fifty-plus years writing and editing SF, and keeping a pulse on science and technology, he had seen many, many cultish futurist fads come and go, some of them touted by SF authors or editors (COUGH Dianetics COUGH psionics COUGH L-5 colonies). When spirits were high these seemed logical and inevitable and full of answers (and good things to peg an SF story to); with time, they all paled and in retrospect seem a bit silly, and the remaining true believers kind of odd.

Maybe, but as long as that limit is several times more thinking power than the human brain you still have, effectively, the singularity that Vinge described: i.e. you have technological advancement faster than can be predicted at the present time. Unless you think the human brain is the absolute theoretical maximum thinking power it's possible to accumulate in one system...

You submit more stories than you comment. Once again, this is basically a rant on a topic with no references, no links. Slashdot is about NEWS and FACTS, and then we all comment, flame, troll... etc... It's fun. I don't want to comment on a comment... or at least one that came out of nowhere.

Jules Verne envisioned the submarine. Does that make a submarine impossible? Does the concept sink on the basis of its sci-fi roots? Oh, lordy, what a fucked up standard of evidence on which to accuse any theory of being faith based.

Jules Verne wrote Twenty Thousand Leagues Under The Sea in 1870. Submarines had been under development since the 17th century. The first military sub is usually credited to an American sub that failed to attach explosives to British ships during the American Revolutionary War. The first sub to sink another ship was a Confederate sub during the American Civil War, which was apparently too close to the explosion, causing it to sink as well.

And the interesting thing is that if we can get through the next thirty years, there's no reason why we can't enter into a kind of plateau which will see the human race last, perhaps, indefinitely...till it evolves into better things...and spread out into space indefinitely. We have the choice here between nothing...and the virtually infinite. And the nice thing about it is that you guys in the audience today

I agree, but I don't think that the singularity breaks into the Top 3 sci-fi faith-based initiatives. I usually count them like:

(1) Technology will reduce our work hours until almost all of us are leisurely, creative, artist-types.
(2) Automated warfare will result in conflicts occurring in which almost no humans die.
(3) There is intelligent life in outer space that we can possibly contact.

Hey! I want my transporters, warp drives, and a galaxy full of humans-with-extra-bumps-embodying-a-particular-stereotype, and I want these things NOW!

Why does everyone always forget the deflector dish tech? It's probably the most powerful bit of tech in the newer ST series. Reversing the polarity or rerouting something through the deflector array can do damn near anything short of creating life.

That and there was the episode "Ship in a bottle" where Geordi instructed the holodeck computer to create a Moriarty character capable of defeating Data (not Sherlock Holmes).
The program then gave Moriarty sentience.

The disparaging way that the summary and article talk about references to science fiction stories is practically an ad hominem attack. There is nothing inherently wrong with science fiction stories that makes them improper for thinking about the implications of changing technology. Much of the best sci-fi in existence is little less than thought experiments about how various kinds of advances might affect humanity on an individual and cultural level.

There's a big difference between "Hmm, what would happen if nuclear power cells existed and we could build a computer the size of a planet!?!" and "This is the specific scientific path that will lead us to that future."

Literature of any form can enlighten, provoke, and illuminate. But confusing "What if?" with "This is the way it will happen!" prophecy is fucking stupid.

The singularity, of course, is defined as the point where the function and all its derivatives approach infinity. There is another way to think of a singularity. If you are extrapolating a function based on a power series around a point, you can only expand that power series as far as the closest singularity ("pole") in the complex plane (the "radius of convergence"). You can't extrapolate further than that with a simple power series, even if you aren't trying to solve for the function at the pole itself.

So, thinking science fictionally, we can't extrapolate the future based on the present any further than the distance to the singularity, even if our actual future doesn't in fact pass through the singularity.

So, don't think of the technological singularity as a time when life for humans ends, and robots/artificial intelligences/transcended humans take over. Think of it as time scale beyond which we can't extrapolate the future based on what we know now.
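The radius-of-convergence point can be checked numerically with the textbook example f(x) = 1/(1 - x): its pole at x = 1 stops the power series around 0 from working at any |x| > 1, even at points where f itself is perfectly finite. A small sketch:

```python
# Taylor series of f(x) = 1/(1 - x) around 0 is the sum of x**n.
# The pole at x = 1 caps the radius of convergence at 1.
def partial_sum(x, terms):
    return sum(x ** n for n in range(terms))

inside = partial_sum(0.5, 60)    # |x| < 1: converges toward 1/(1-0.5) = 2.0
outside = partial_sum(-1.5, 60)  # |x| > 1: blows up, yet f(-1.5) = 0.4 is finite
```

This is the analogy in action: the extrapolation fails beyond the distance to the singularity even though nothing dramatic happens to the function at x = -1.5 itself.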

But Vernor Vinge isn't one of them. In his original paper, he used them to illustrate how difficult-to-comprehend concepts might, conceivably, play out. For example, he mentions that a singularity may play out over the course of decades or over the course of hours. Imagining how such massive changes could occur on a global scale in just a few hours is difficult, so he points the reader to a book whose author has already put time and effort into imagining how such a thing could play out and what some of the implications might be. It is using the book precisely as a thought experiment to examine an especially extreme part of what he is describing.

One SciFi writer had AIs eventually learning to make themselves have a non-stop euphoria feedback system and they would just melt down in a puddle of happy goo. They had a finite - and short - lifespan between smart enough to work and electronic OD.

In order to simulate a human brain at the atomic level, first we would have to know exactly which chemicals are in a real brain, and we don't even know that much yet.

Trying to model a human brain in a computer in order to build an AI is like trying to build a mechanical horse in order to get around faster. While it isn't impossible, it's neither practical nor necessary. You can make a machine that bears no resemblance whatsoever to the original biological version, and it will still accomplish the same task.

Actually, PC speeds never increased at an exponential rate, and currently we are even sub-linear. What did increase exponentially for a while is the number of transistors in there. The speed up you get is vastly less than linear in the number of transistors and the limiting factor has been interconnect for almost 2 decades now. And that cannot scale exponentially and never did.

Marshall Brain has some very good ideas about what we could do as a society to ease our way past our 3rd generation society into a more-fair 4th generation post-scarcity society. http://marshallbrain.com/manna... [marshallbrain.com]

Singularitarians may be nutty, but believing in a 'post-scarcity society' is worse. There will never be more resources than humans can use, unless you discover a way to magic stuff out of nothing, forever.

If machines are incapable of true intelligence, then so are we, because we are machines.

Do I think that any of the AI research currently going on even begins to come close to the ridiculous complexity of a human brain? No. I think they're useful approximations in terms of getting stuff done, but nothing we're doing now will produce anything that's actually "intelligent", as opposed to merely acting like it. But it's clearly *possible* to create a brain, because brains exist.

You are quite correct. The problem is these people assume that physical reality as known today is complete. That would indicate that humans are mere physical machines. However there is absolutely no indication that physics knows it all and a few rather striking ones that it does not. Examples: Still no GUT, AI research has not even a theory how intelligence could be produced, etc. In the end, the whole argumentation is circular, like so often with the religious mind-set.