
View Poll Results: I think the singularity ...will probably happen in our lifetime, or at least within the 21st century. (5 voters)

The Singularity: Will it happen in our lifetime? Or at all?

From wikipedia (and this is only an excerpt, so especially if you're unfamiliar with the topic, I'd encourage you to read the whole thing, and also follow up on some other sources such as this):

Technological singularity

The technological singularity is the theoretical emergence of greater-than-human superintelligence through technological means.[1] Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the occurrence of a technological singularity is seen as an intellectual event horizon, beyond which events cannot be predicted.

Proponents of the singularity typically state that an "intelligence explosion",[2][3] where superintelligences design successive generations of increasingly powerful minds, might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass that of any human.

The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. The specific term "singularity" as a description for a phenomenon of technological acceleration causing an eventual unpredictable outcome in society was coined by mathematician John von Neumann, who in the mid 1950s spoke of "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." The concept has also been popularized by futurists such as Ray Kurzweil, who cited von Neumann's use of the term in a foreword to von Neumann's classic "The Computer and the Brain."

Some analysts expect the singularity to occur some time in the 21st century, although their estimates vary.

Basic Concepts

Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and argue that it is difficult or impossible for present-day humans to predict what a post-singularity world would be like, due to the difficulty of imagining the intentions and capabilities of superintelligent entities.[4][5][6] The term "technological singularity" was originally coined by Vinge, who made an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole.[6]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[7][8][9] although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[8][10]

A technological singularity includes the concept of an intelligence explosion, a term coined in 1965 by I. J. Good.[11] Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[12] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humanity.[13] If a superhuman intelligence were invented, either through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than current humans are capable of. It could then design a yet more capable machine, or re-write its source code to become more intelligent. This more capable machine could then go on to design a machine of even greater capability. These iterations could accelerate, leading to recursive self-improvement, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[14][15][16]
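The feedback loop in that paragraph can be caricatured in a few lines of code. This is only a toy model with made-up numbers (a starting capability of 1, a hypothetical gain factor k, and an arbitrary physical cap), not a claim about how real self-improvement would scale:

```python
# Toy model of an "intelligence explosion": each generation designs the
# next, and the size of the improvement scales with current capability.
# All numbers here are arbitrary illustrative assumptions.

def intelligence_explosion(start=1.0, k=0.5, physical_cap=1000.0):
    """Return successive capability levels until the cap is reached."""
    level = start
    levels = [level]
    while level < physical_cap:
        # Gain is proportional to current level (the feedback loop),
        # clipped at some assumed physical/theoretical limit.
        level = min(level * (1 + k), physical_cap)
        levels.append(level)
    return levels

levels = intelligence_explosion()
```

Because the gain compounds, the cap is reached in a handful of generations, whereas a fixed linear gain of 0.5 per generation would take about two thousand steps. That compounding is the whole content of the "explosion" claim.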

The exponential growth in computing technology suggested by Moore's Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's Law. Computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit. Futurist Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[17]) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[18] Like other authors, though, he reserves the term "singularity" for a rapid increase in intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[19] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[20]
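The extrapolation argument itself is just arithmetic, which is part of its appeal. Here is a hedged back-of-the-envelope sketch: assume some capability proxy doubles on a fixed schedule (the two-year doubling time and the millionfold target below are illustrative assumptions, not Kurzweil's actual figures) and ask how long until it grows by a given factor:

```python
import math

def years_until(target_ratio, doubling_time_years=2.0):
    """Years for a quantity doubling every `doubling_time_years`
    to grow by a factor of `target_ratio`."""
    return doubling_time_years * math.log2(target_ratio)

# A millionfold increase at a 2-year doubling time takes about 40 years,
# which is the flavor of calculation behind mid-century predictions.
years = years_until(1_000_000)
```

The fragility of the argument sits entirely in the assumption that the doubling time stays constant; the arithmetic after that assumption is trivial.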

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how such a new world would operate.[21][22] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[23][24] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Singularity Institute for Artificial Intelligence and the Future of Humanity Institute.[21]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited in support of the concept.[25][26]

I voted eventually, but probably far in the future. There's still a fundamental difference in the way computers "think" and the way the human mind "thinks," and until that difference is erased I don't think this will really happen.


To play devil's advocate: why? If we were to develop "super intelligence," wouldn't that be more likely if the thinking was not like a human's? Is the idea simply that if you could combine the computing power of computers with the flexibility of human thought, it would be this new thing?

Originally Posted by MrPoon

man with hair like fire can destroy souls with a twitch of his thighs.

It's because computer intelligence has limited parameters. Its scope is limited to what we can define, and I don't know how we get around that. But here are my thoughts about it. Eventually we will be able to match the mental ability of a human, in processing power I guess. And given enough time without some sort of ELE, it's only a matter of time before someone attacks this from a POV we aren't even considering now and comes up with something that can at least duplicate the way we think, in a way that is almost indistinguishable from our own to the layman.


Just as one example that they learned when preparing Watson to play Jeopardy, context is still really hard for a computer to understand. That's why, in what turned into a bit of a joke during Watson's games on Jeopardy, he answered that Toronto was a US city. The context obviously threw him off, and made every human watching laugh, because they instantly knew he was wrong.

Another is that these computers have yet to do any actual thinking. They're not formulating answers based on any logical reasoning process, or coming up with new concepts. They can still only spit out whatever is put into them. I watch a pretty fair amount of Jeopardy, and at least a few times an episode I'm able to get a couple answers based solely on context clues, even if I don't otherwise know the answer. At this point, we don't have computers that could do that. Watson basically used a brute-force method, using a huge amount of data and almost running a Google search on keywords in every question. Obviously, you know that's not how the human mind works.
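That "keyword search" description can be made concrete with a toy retrieval sketch. To be clear, this is not how Watson actually worked (its DeepQA pipeline combined many scoring components); the mini knowledge base and clue below are hypothetical, and the code only shows the naive keyword-overlap baseline the post is gesturing at:

```python
def keyword_score(clue, candidate_text):
    """Score a candidate answer by how many clue words its text shares."""
    clue_words = set(clue.lower().split())
    text_words = set(candidate_text.lower().split())
    return len(clue_words & text_words)

# Hypothetical mini "knowledge base" mapping answers to text snippets:
knowledge = {
    "Toronto": "toronto is the largest city in canada",
    "Chicago": "chicago is a major us city on lake michigan",
}

def best_answer(clue):
    """Pick the answer whose snippet overlaps the clue the most."""
    return max(knowledge, key=lambda ans: keyword_score(clue, knowledge[ans]))

answer = best_answer("its largest airport is named for a world war ii hero us city")
```

Note that a system like this has no idea what "US city" means; it only counts shared words, which is exactly why pure keyword matching can produce confidently wrong answers when the surface overlap points the wrong way.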

Eventually, though, these problems will probably be overcome. Just.. not yet.


But will that make them superintelligent... or will they be just as bad at it as we are?


I would say that until they have the ability to reason and reach their own conclusions, without needing the information already programmed in, human intelligence is basically a ceiling they can't break through.

It's sort of like those woo-woo machines people advertise that are supposed to output more energy than you put into them, which is obviously impossible. I feel like right now, computers are stuck at being limited to only what can be put into them and aren't able to go beyond it.


I would say until they have the ability to reach their own conclusions human intelligence is a floor they can't break through.

Again, intelligence is a difficult thing to define. Answering Jeopardy questions or, say, playing chess is a very simple form of intelligence... these can be done by calculating all the possibilities and picking the one with the highest probability of success. Their advantage here is RAM; our RAM (working memory) is very limited.

But with ill-defined problems, with an infinite number of possible solutions, or in a social situation where you must reason about how a certain action will make you and the person opposite you feel, a computer can't even begin to do anything but examine base rates.


Playing chess, while impressive, is still basically just a complex math problem. There are literally only so many moves that can be done, even fewer that are likely to be done, and so it just becomes calculating those and going with the best of them, basically.
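That "calculate the moves and go with the best" idea is essentially minimax search over a game tree. Here is a minimal sketch on an abstract tree (the tree and leaf scores below are made-up toy values; a real chess engine adds alpha-beta pruning, a heuristic evaluation function, and move ordering):

```python
def minimax(node, maximizing):
    """Return the best achievable score from `node`.
    A node is either a number (a leaf's static score)
    or a list of child nodes (positions reachable in one move)."""
    if isinstance(node, (int, float)):  # leaf: just return its score
        return node
    child_scores = [minimax(child, not maximizing) for child in node]
    # We pick the max on our turn; the opponent picks the min on theirs.
    return max(child_scores) if maximizing else min(child_scores)

# A tiny two-ply game tree: we move (max), then the opponent replies (min).
tree = [
    [3, 5],   # after our first option, the opponent will pick min -> 3
    [2, 9],   # after our second option, the opponent will pick min -> 2
]
best = minimax(tree, maximizing=True)
```

We choose the first branch, worth 3: the second branch's 9 is irrelevant because a rational opponent would steer us to the 2. This "assume the opponent plays best" reasoning is the whole trick, and it works precisely because chess has a finite, well-defined move set.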

But computers also don't infer anything. If you tell a computer something like "this cake is really rich," it isn't really going to go beyond that fact. You, on the other hand, could infer that I ate the cake, that if someone ate too much of the cake they'd likely get sick, etc. There is no way, at least that I'm aware of, to program a computer to reason like that.


That's my point... Without this, artificial systems are not going to come close to the floor of human intelligence.


There is no way it would happen in our lifetime. The idea that a computer could take in completely random information and process it like a human is mind-blowing. Can a human create an AI that is equally smart or smarter than its creator? Like it says here:

Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend

It's really, maybe impossibly, hard to imagine how this technology would be created. At that point, could human brains be recreated and placed in an actual human?

The technological singularity is a fallacious religious statement best summed up as an orgy of inappropriate and incorrect drawings of exponentials linked to dodgy predictions backed up by science fiction assertions.

Playing chess, while impressive, is still basically just a complex math problem. There are literally only so many moves that can be done

only 140.1 x 10^6 logical positions after move 1

Originally Posted by natepro

even fewer that are likely to be done, and so it just becomes calculating those and going with the best of them, basically.

How do you determine what is "best"? How do you quantify it to something that a computer can "understand" and process?
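One common partial answer to the "how do you quantify best" question is a static evaluation function. The simplest version just sums material using the textbook piece values; everything beyond that (king safety, pawn structure, mobility) is where real engines differ, and the position below is a made-up example:

```python
# A toy static evaluation: sum material, positive favors White.
# These are the standard textbook piece values; real engines add
# many positional terms on top of raw material.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white_pieces, black_pieces):
    """Material balance from White's point of view."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# Hypothetical position: White has an extra knight, Black an extra pawn.
score = evaluate(["Q", "R", "N", "P", "P"], ["Q", "R", "P", "P", "P"])
```

A number like this is what the search maximizes at the leaves, which is precisely the sense in which the computer "understands" what best means: it doesn't, it just compares scores someone told it how to compute.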

Without their game databases (which are literally the only advantage a computer has over a Grandmaster), computers are actually pretty terrible at chess. In fact, just take away their opening book and any grandmaster can beat a computer with relative ease.

Just about 20 years ago (when opening books and endgame tablebases had not yet been created), every computer engine, and I mean every one, would blunder in many simple positions that even your normal club player would laugh at.