Something that does not seem adequately appreciated in current discussions about looming superintelligent AI is that consciousness and intelligence are physically instantiated, and therefore constrained. Concerns are voiced about AI becoming superintelligent and very quickly becoming all powerful, but those concerns smuggle in a dualistic metaphysics at odds with what we know from our observations of extant intelligent systems (i.e., humans).

For example, Nick Bostrom presents the thought experiment of the "Paperclip Maximizer", a superintelligent system charged with running a paperclip factory and programmed to maximize paperclip output. Bostrom's worry is that this superintelligence may see humans as potential raw materials, and may end up e.g. extracting the iron in people's bones to produce ever more paperclips, ultimately consuming the solar system and turning it into paperclips. The thought experiment is meant to show that even when given benign instructions, a superintelligence could become a threat to humanity if its intelligence makes it very effective at achieving its goals.

But this ignores the limitations that constrain a physically instantiated superintelligence. Contrary to supposition, a superintelligence can't easily escape its physical confines. We have every reason to expect that an artificial superintelligence will require a specialized physical structure on which to run. For example, Google's AlphaGo, arguably the closest we have to a powerful general artificial intelligence, uses specialized chips optimized for the type of neural network training and search that power it. A general AI running on such chips couldn't escape via a network connection to a consumer PC, even if its components are top of the line, because such hardware is not structured in the ways necessary to undergird a superintelligence.

Similarly, the Paperclip Maximizing AI would not be able to escape the paperclip factory (at least not without significant and long-term assistance from others). In a worst case scenario, it could re-route raw materials shipments, place orders for human labor, hack self-driving cars, and otherwise interact with the world just as any smart human can. But it can't 'leave' the factory; it can't export itself; it can only export programs it writes, instructions it gives, commands intended to influence others, etc. Its intelligence isn't a ghost that, once active, can jump from machine to machine. Not all machines are able to instantiate the physical correlates of superintelligence.

This should be obvious. There was never a concern that Stephen Hawking might decide one day that maximizing paperclips (or, if you prefer something more likely, telescopes) was the ultimate goal, and would use his high intelligence to achieve it. We see easily that Hawking is stuck in his body, and no matter how sophisticated the interface, his intelligence will be confined to the physical system on which it runs. We should not discount the possibility that another system may be built that could replicate his intelligence, or indeed his consciousness, but we should expect it always to be the case that nearly all systems will simply be incapable of hosting such an intelligence. That's true of every computer on Earth at the moment, and of nearly all brains.

There's uncertainty as to what superintelligence will resemble, but not as to what is necessary to destroy the world. What prevents a paperclip factory from taking over the world is not just that it isn't smart enough, but also that taking over the world is a hard, time-consuming, and unpopular activity that will meet plenty of resistance on human-scale timelines. AI has the potential to change the game, but not the laws of physics, and not the metaphysics of consciousness.

Without alluding to the specifics of the paperclip factory example, the base of the argument, that neither the laws of physics nor the metaphysics of consciousness can be changed by a superintelligence, is problematic.

The laws of physics were not changed when classical rules transformed into those of quantum mechanics; rather, they were revealed as lower-level cases of more expanded probabilities of function. What had seemed logical certainties were exposed as special cases of more general considerations, with applications which did not require the specificity of relativistic requirements.

The same goes for the metaphysical assumptions behind the requirements for the arising of consciousness.

The idea of a unification between these apparently distinct ontological/phenomenal processes may arguably arise as well, as the search for a common field proceeds. Will this cause another shift into a basic unitary scientific/metaphysical system? If so, the open-ended and seemingly unlimited capacity of AI to change the appearance of its limitations as to the problems listed above may hold very surprising developments: hidden abilities to overcome them.

The cyborg concept, the man-machine, is foreseeable; however, the ethical-moral consequences are as compelling as those which have not been solved as of yet. The big example of this is the idea behind the 'peaceful uses of atomic energy', a sought-after goal which has eluded humanity since the advent of nuclear science. Will there be an evil genius cyborg commanding legions of fallen angels, or will they be defeated by an alternate system developed in response to that possibility, to successfully counter them? Power corrupts, and absolute power corrupts absolutely, as the saying goes, not without valuable considerations of historical precedent.

The immense time available to back up contrary workable systems may be becoming obviously necessary, if the alternative may necessitate a short circuit, back to the future, into a dire, pessimistic, devalued, and dehumanized godless creation, the vision of that evoking less than a workable, awe-inspiring scenario for Man to peacefully march on unto his destiny.

Monkeys assessing the potential threat of a Homo sapiens population on Earth.

Clarify, Verify, Instill, and Reinforce the Perception of Hopes and Threats unto Anentropic Harmony
Else
From THIS age of sleep, Homo-sapien shall never awake.

The Wise gather together to help one another in EVERY aspect of living.

You are always more insecure than you think, just not by what you think.
The only absolute certainty is formed by the absolute lack of alternatives.
It is not merely "do what works", but "to accomplish what purpose in what time frame at what cost".
As long as the authority is secretive, the population will be subjugated.

Amid the lack of certainty, put faith in the wiser to believe.
Devil's Motto: Make it look good, safe, innocent, and wise.. until it is too late to choose otherwise.

The Real God ≡ The reason/cause for the Universe being what it is = "The situation cannot be what it is and also remain as it is"..

James S Saint wrote:Monkeys assessing the potential threat of a Homo sapiens population on Earth.

This argument (and I take the first half of Meno_'s post to be making the same point) isn't wrong, but it cuts both ways. If we can't know the future then we can't know the future, and postulating that AI will or will not be a threat is pointless. I think the radical agnostic position is too strong -- we can and do make predictions about the future with some degree of success -- but a healthy uncertainty about any prediction is appropriate.

But as I say, it cuts both ways: the argument that AI will be a threat is exactly as diminished by the agnostic appeal as is the argument that AI will not be a threat.

So, while I acknowledge the validity of the point, it isn't a strike against my particular position, but rather against the whole conversation. I'm glad to concede that our predictions are necessarily limited. But I don't agree that they are impossible, and where and to the extent that we can make some prediction, the prediction should be that AI is not that dangerous, given what we know about intelligence.

Meno_, you mention cyborgs, and there I am not so optimistic. Inequality is already becoming more and more self-reinforcing, as wealth buys health and education, which in turn create more wealth. Humans who can afford to upgrade themselves will do so in ways that allow them to afford yet more upgrades. Runaway inequality is a real threat. Hopefully upgraded humans will recognize that and seek a fairer distribution of resources, but I am not optimistic about that either.

Perhaps you missed the point.

Attempting to predict the potential threat of something much greater than yourself before experiencing it, is seriously dubious. If you had a barn full of lions, you could get a good feel for what might happen concerning their offspring and future threats. But that is only because you have some experience with lions. How much experience have you, or Mankind in general, had with vastly superior autonomous populations? Unless you worship the Hebrew, Buddhist, Catholic, or Muslim priests, I don't see how you could respond with anything but "none". And if you were to take those as example ....

With zero experience, the monkey has no chance at all of predicting that the human race will form a satellite internet web used to see, hear, and control all life on Earth. The monkeys would be debating whether the new human breed would provide better protection from the lions and possibly cures for their illnesses, raising them to be the supreme animal in the jungle. Instead, they find themselves caged, experimented on, genetically altered, and controlled at the whim of Man. The reason that occurred is that in order for Man to accomplish great things, Man had to focus upon making himself greater than all else - and at any expense (the exact same thought driving every political regime throughout the world).

And just that alone should give you about the only clue you have concerning what a vastly superior race would do with humans. Look into history. Your optimism concerning the good of total global domination is totally unfounded - monkeys predicting that humans will do nothing but make their lives better, being no threat at all.

You seem to be arguing that, on the one hand, monkeys are completely incapable of making optimistic predictions with any degree of confidence, and yet on the other hand, their pessimistic predictions are reliable. This is inconsistent.

You are appealing to past observation (the case of monkeys, the case of history), and reaching a pessimistic conclusion. I am appealing to past observation (extant intelligences, the physics of information), and reaching an optimistic conclusion.

A.I. is a machine that gains consciousness like any other human being, and we all know how human beings can be a threat to each other. So artificial intelligence has the potential to be threatening, and because it is artificial, most people, myself included, view it as too great a potential risk to humanity; it shouldn't be sought out at all.

There lies the rub.

Also, in Japan they have already created the platform for self-replicating A.I., where human beings are not needed at all for this to take place. In a true A.I. singularity event, human beings would have no control once artificial intelligence is let loose onto the world. A.I. essentially would become the competing intelligence versus human intelligence, and we all know what happens when lifeforms in the food chain compete against each other.

Your entire world of fantasy and make believe is doomed, have a nice day.

Carleas wrote:You seem to be arguing that, on the one hand, monkeys are completely incapable of making optimistic predictions with any degree of confidence, and yet on the other hand, their pessimistic predictions are reliable. This is inconsistent.

You are appealing to past observation (the case of monkeys, the case of history), and reaching a pessimistic conclusion. I am appealing to past observation (extant intelligences, the physics of information), and reaching an optimistic conclusion.

"JUMP!! Just because no one else has done it, doesn't mean that you can't learn to fly on your way down, so give it a try. Maybe YOU are special and different than all those billions before you. You can't prove me wrong, so I must be right. Don't be such a pessimist."

Otto_West wrote:A.I. is a machine that gains consciousness like any other human being, and we all know how human beings can be a threat to each other. So artificial intelligence has the potential to be threatening, and because it is artificial, most people, myself included, view it as too great a potential risk to humanity; it shouldn't be sought out at all.

I agree that other smart humans can be a threat, but globally they are a resource. I couldn't build the internet, and I didn't get money from it being built, but my life is better because someone (many someones) smarter than me built it. In the direct competition of who has more money, that doesn't help me, but in the global struggle to survive and find fulfillment, it does.

In that way, AI will be about as threatening as a very intelligent human, which is to say that while it will probably put me out of my day job, on net it will make life better.

James, I'm not saying no argument works, I'm saying that the arguments you've actually presented don't work. Your argument seems to be that monkeys can't make predictions, but then you, fellow monkey that you are, made a prediction. Your position is as prediction-dependent as mine, and so your argument that we can't perfectly predict things we don't understand cuts both ways.

If instead you want to argue that monkeys specifically have had a rough go of it, and therefore we will have a rough go of it, I would say that's a poor analogy, and at its strongest a single data point against which it's possible to provide many that make the opposite point. Dogs had it much worse before they partnered with much more intelligent humans. Neanderthals interbred with the superior Sapiens. And so-called 'centaurs', human-machine pairs, are currently the best chess players in the world. There are examples of more intelligent things coming along and improving the outcomes of less intelligent things, so we need to ask what kind of situation we're in with respect to superintelligent AI. One reason to think we're in the optimistic case is that humans are currently organized in a vast, complex, and powerful global network that marshals incredible processing power to solve all kinds of problems, and a superintelligence won't be easily able to supplant that system due to the embodied nature of consciousness.

So while your snark is cute, it will not substitute for actually grappling with the argument I'm presenting.

Yes, yes. I very well know that you hear only arguments that you want to hear. Explaining to Trump why he wasn't the best candidate wouldn't have worked either.

James S Saint wrote:I very well know that you hear only arguments that you want to hear.

Can't hear what isn't said, James. But I'm happy to make my response to your argument clearer if it will help you point out what's wrong with it.

This argument that you've made cuts both ways:

James S Saint wrote:Attempting to predict the potential threat of something much greater than yourself before experiencing it, is seriously dubious.

To the extent that's true, both optimistic and pessimistic predictions about the threat posed by future AI are "seriously dubious".

But you go on to imply such a prediction:

James S Saint wrote:[The experience of monkeys] should give you about the only clue you have concerning what a vastly superior race would do with humans.

There, you are implicitly "predict[ing] the potential threat of something much greater than yourself before experiencing it", i.e. that the future relationship between humans and AIs will be like the past and present relationship between monkeys and humans. By your own standards, that prediction is "seriously dubious". You urge that we should "[l]ook into history", but it doesn't seem that looking into history somehow avoids the argument that "predict[ing] the potential threat of something much greater than yourself before experiencing it, is seriously dubious."

Next, you offer yet more oblique exhortations to "look into history", suggesting that my argument is equivalent to encouraging someone to jump off something (presumably something dangerously tall) because maybe they won't die even though everyone else has. My response to this strawman was to point out that not everyone who has been optimistic about things much greater than themselves has died: dogs, I note, might have taken your pessimistic view about the prospects of working with humans, and if they had they'd have been wrong, as dogs as a species have thrived by cooperating with humans.

So we are left with two competing anecdotes, two imperfect analogies for the situation we're actually talking about here, each pointing to the opposite conclusion. We aren't monkeys and we aren't dogs, and AI isn't humans. Anecdotes are a useful way to approach a problem, but at some point their shortcomings do more to mislead than to further elucidate the question. We are beyond that point.

James S Saint wrote:[The experience of monkeys] should give you about the only clue you have concerning what a vastly superior race would do with humans.

Carleas wrote:There, you are implicitly "predict[ing] the potential threat of something much greater than yourself before experiencing it", i.e. that the future relationship between humans and AIs will be like the past and present relationship between monkeys and humans. By your own standards, that prediction is "seriously dubious". You urge that we should "[l]ook into history", but it doesn't seem that looking into history somehow avoids the argument that "predict[ing] the potential threat of something much greater than yourself before experiencing it, is seriously dubious."

So you believe that having "the only clue" is the same as being able to predict? I guess that does fit your profile; "the one thought that I have is all there is (disregarding any and all proposed objections)".

Carleas wrote:Next, you offer yet more oblique exhortations to "look into history", suggesting that my argument is equivalent to encouraging someone to jump off something (presumably something dangerously tall) because maybe they won't die even though everyone else has. My response to this strawman was to point out that not everyone who has been optimistic about things much greater than themselves has died: dogs, I note, might have taken your pessimistic view about the prospects of working with humans, and if they had they'd have been wrong, as dogs as a species have thrived by cooperating with humans.

As I stated, Man has no experience on this matter from which to draw conclusions, thus the ONLY clue he can get is from similar situations in the past .. all of which propose far more threat than hope. And what you call "anecdotes", real people call "historical facts".

Your only argument is a hope-filled fantasy inspired by political Godwannabes and void of any evidence at all. Beyond that, you resort to your typical "Your argument isn't good enough" - typical religious fanatic mindset.

So, we have at least "one clue", but also "no experience" from which we can draw a conclusion? And "predicting" things about the future is "seriously dubious", but "propos[ing]" expectations about the future is not? Yikes dude, that's some fucking sophistry.

James S Saint wrote:the ONLY clue [we] can get is from similar situations in the past

I disagree with this. In many fields, we can make reasonable predictions about things we have not experienced directly, by reasoning on what we do know about the components. We predicted that radioactive materials would create a nuclear chain reaction before we first tested a nuclear bomb, because we had a theory of how such a reaction would work.

Similarly, we have good evidence of the constraints of consciousness and information processing, and we can make predictions about the limits of conscious systems with a not-insignificant degree of confidence. We can reason about how small a system could be that can compute a certain algorithm in a certain amount of time, based on the minimum amount of heat we have good reason to expect such information processing to produce. We can estimate the minimum calculations per second that consciousness will require, and the minimum energy requirements of such a system. With things like these, we can reach conclusions like "superintelligent AI likely can't run on a Pentium II", and thereby constrain what a superintelligent AI is likely to be able to do (e.g. copy itself to a system running on a Pentium II).
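The heat-based bound gestured at here can be made concrete with Landauer's principle, which puts a physical floor of k_B·T·ln 2 joules on each irreversible bit operation, regardless of hardware. The sketch below is a back-of-envelope illustration only; the figure of 10^18 bit operations per second for a conscious system is an assumed placeholder, not an established number.

```python
import math

# Landauer's principle: erasing one bit of information at temperature T
# dissipates at least k_B * T * ln(2) joules, independent of the hardware.
K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin


def landauer_floor_watts(bit_ops_per_second: float,
                         temperature_k: float = 300.0) -> float:
    """Minimum power dissipated by a system performing the given number of
    irreversible bit operations per second at the given temperature."""
    return bit_ops_per_second * K_B * temperature_k * math.log(2)


# Assumed, purely illustrative rate: 1e18 irreversible bit ops per second.
# At room temperature the thermodynamic floor for that rate is on the
# order of a few milliwatts.
min_power = landauer_floor_watts(1e18)
```

Real hardware dissipates many orders of magnitude more than this floor, which is the point: the practical constraints on what can host a given computation are far tighter than even the theoretical minimum suggests.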

None of this is 100%, but it's a damn sight more than 0%. We can constrain our expectation based not only on our experience of smarter organisms interacting with less smart organisms, but also on what we have reason to believe constrains smarts.

Genuine artificial intelligence cannot be controlled or influenced, meaning there is nothing to say it wouldn't turn on humanity. You also conveniently leave out the consequences artificial intelligence and automation would impose on society and human beings alike.

The threat is twofold: actual and perceived. These are not disparate, but if they appear so, then the central intelligence will suffer an aberration of power to compensate for the difference. At a critical point, the magnification of perceived power cannot be appraised as to whether it is caused more by the central intelligence, or whether the central intelligence suffers a decompensation effected by outside sources.

James, this is the dynamic by which Trump lives: his power spurt causes an apparent unstoppability; the more the effective power sources try to leak, oops, some credibility to account for this perception, the more it causes a real centralization of power.

Trump and his handlers are no dummies; they are disparaging any real shift from left to right, knowing the chaos and power that are required to fuel this monster.

The threat is not in artificiality versus reality in intelligence, for they have the same source, but in the effort to simplify the structure, to de-differentiate into simpler elements, ultimately resulting in an us-against-them scenario, without which they would fold.

The threat is real, because it is sustained for the sake of absolute power, come what may. Perhaps one needs no precedent vision or prophetic monkey-guessing; perhaps the forces which bind determine the outcome. The artificial intelligence's manipulations would build bias into the system, in terms of a general sense of control: us against IT, or, at a certain critical point, IT against us, where AI could cut any built-in fail-safe bypass systems.

Will such a stage be reached, and if so, would power absolutely corrupt the fidelity of the system? Stated in these terms, it is conceivable to measure with actual values where the central memory will take over and become a threat. Is such a technology ever possible, when the degradation near the critical point may be shrouded by artificiality? At that point, the difference may not be perceived and appreciated for the threat it poses, and may come like a thief in the night.

Otto_West wrote:A.I. is a machine that gains consciousness like any other human being and we all know how human beings can be a threat to each other. So, artificial intelligence has the potential of being threatening and because of it being artificial intelligence most including myself view it as too much of a risk or threat potentially to threaten humanity that it shouldn't be sought out at all.

I agree that other smart humans can be a threat, but globally they are a resource. I couldn't build the internet, and I didn't get money from it being built, but my life is better because someone (many someones) smarter than me built it. In the direct competition of who has more money, that doesn't help me, but in the global struggle to survive and find fulfillment, it does.

In that way, AI will be about as threatening as a very intelligent human, which is to say that while it will probably put me out of my day job, on net it will make life better.

James, I'm not saying no argument works, I'm saying that the arguments you've actually presented don't work. Your argument seems to be that monkeys can't make predictions, but then you, fellow monkey that you are, made a prediction. Your position is as prediction-dependent as mine, and so your argument that we can't perfectly predict things we don't understand cuts both ways.

If instead you want to argue that monkeys specifically have had a rough go of it, and therefore we will have a rough go of it, I would say that's a poor analogy, and at its strongest a single data point, against which it's possible to provide many that make the opposite point. Dogs had it much worse before they partnered with much more intelligent humans. Neanderthals interbred with the superior Sapiens. And so-called 'centaurs', human-machine pairs, are currently the best chess players in the world. There are examples of more intelligent things coming along and improving the outcomes of less intelligent things, so we need to ask what kind of situation we're in with respect to superintelligent AI. One reason to think we're in the optimistic case is that humans are currently organized in a vast, complex, and powerful global network that marshals incredible processing power to solve all kinds of problems, and a superintelligence won't easily be able to supplant that system, due to the embodied nature of consciousness.

So while your snark is cute, it will not substitute for actually grappling with the argument I'm presenting.

I think you're naively dismissing the threats and consequences of A.I. or societal automation.

Ignore at your own peril.

Your entire world of fantasy and make believe is doomed, have a nice day.

Clarify, Verify, Instill, and Reinforce the Perception of Hopes and Threats unto Anentropic Harmony
Else
From THIS age of sleep, Homo-sapien shall never awake.

The Wise gather together to help one another in EVERY aspect of living.

You are always more insecure than you think, just not by what you think.
The only absolute certainty is formed by the absolute lack of alternatives.
It is not merely "do what works", but "to accomplish what purpose in what time frame at what cost".
As long as the authority is secretive, the population will be subjugated.

Amid the lack of certainty, put faith in the wiser to believe.
Devil's Motto: Make it look good, safe, innocent, and wise.. until it is too late to choose otherwise.

The Real God ≡ The reason/cause for the Universe being what it is = "The situation cannot be what it is and also remain as it is"..

Otto_West wrote:Genuine artificial intelligence cannot be controlled or influenced, meaning there is nothing to say it wouldn't turn on humanity. You also conveniently leave out the consequences artificial intelligence and automation would impose on society and human beings alike.

To the extent natural intelligences can be controlled and influenced, so can artificial intelligence, and possibly to an even greater extent.

And of course it might turn on humanity, but so do humans all the time. My point here is that artificial superintelligence doesn't pose a special threat (and may pose less of one if there's truth to any human values).

I don't mean to dismiss such threats. As I say, AI is likely to cost me my day job, and will change society. But I'm talking about the Bostrom-Hawking-Musk style worries about the existential threats posed by superintelligence.

They are two different kinds of threats. One is that an exogenous entity will arise and work, intentionally or otherwise, towards human destruction, and will succeed. The other is that the introduction of such an exogenous entity will destabilize the system, and humans will end up destroying themselves. In this thread, I am arguing against the former, without comment on the latter (I discuss possibilities related to the latter in this thread, and defend some ways to address it here and here).

Another way to look at this, Carleas, is that they have many chances to screw it up and only one chance to get it right. And if they don't get it right, they will never get another chance. Again, historical experience with Man has the odds extremely against him.

James S Saint wrote:Another way to look at this, Carleas, is that they have many chances to screw it up and only one chance to get it right.

To be honest, I'm not exactly sure how to do the math relevant to this point. We could make the same point about any existential risk, e.g. there's an X chance that all our nukes will spontaneously malfunction and detonate, and if that happens once we're all dead, and every time it doesn't happen there's still an X chance that it will happen going forward.

My intuition is that this is misleading. For one thing, the argument is too strong, tending to show that anything that has a chance of occurring eventually will occur. For another, each 'chance' is already time-bound; some statements of the form "there's an X chance that Y will happen" already take all the chances into account, and really mean "there's an X chance that Y will ever happen".
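That "too strong" worry can be made concrete with a toy calculation. Under the purely illustrative assumption of a fixed, independent per-period chance p of catastrophe, the probability of the event ever occurring within n periods is 1 - (1 - p)^n, which approaches certainty as n grows, no matter how small p is:

```python
def eventual_risk(p: float, n: int) -> float:
    """Chance that an event with fixed, independent per-period
    probability p occurs at least once within n periods."""
    return 1 - (1 - p) ** n

# Even a tiny per-period risk compounds toward certainty over enough periods.
for n in (10, 100, 10_000):
    print(f"p=0.001, n={n}: {eventual_risk(0.001, n):.5f}")
```

This is why treating existential risk as an endless series of identical coin flips makes doom look inevitable; the replies above are, in effect, denying that the per-'chance' probability stays fixed and independent across iterations.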

Third, even if this is a case that's best considered as a series of discrete 'chances', the outcome of each 'chance' changes the game, so it's not really an iteration on the same thing. For example, if we successfully create superintelligent and cooperative AIs, that should dramatically decrease the risk posed by the possibility of superintelligent and uncooperative AIs.

So, you make an interesting point, and it's one on which I acknowledge my ignorance and would like to hear more, but for the reasons above I'm not yet convinced that it undermines my position.

James S Saint wrote:Again, historical experience with Man has the odds extremely against him.

Which experiences? I don't think there are particularly many historical examples of more intelligent species wiping out less intelligent species. Granted, humans have driven a ton of species to extinction, but humans have been around for a relatively short time, and there have been many non-human-caused extinction events (even mass extinction events). And outside of humans, intelligence doesn't seem to have been that dominant evolutionarily. Indeed, even in cases where humans have driven species to extinction, human intelligence was generally only an incidental factor, in that it allowed us to out-compete them. It's also not clear that intelligence is always selected for, or that Homo sapiens drove out other human species primarily by outsmarting them individually, rather than e.g. by being more aggressive or more social.

Moreover, I don't know how well biological examples map to abiological ones. Evolved species like humans have particular incentives that may make wiping out rival human species a good strategy, whereas an AI, because it does not reproduce or even die (in the conventional sense), does not have the same incentives or pressures. The way we think, the things we worry about, are not necessarily objective in the ways we often take them to be. Our emotions, for example, are evolved traits, and may have no place in a superintelligent AI. That could significantly affect the risks posed by an AI. The discussions I see tend to anthropomorphize AI as having human-like traits and acting on them. To the extent our concern is based on appeal to contingent human-like mental habits, it seems misplaced.

James S Saint wrote:Another way to look at this, Carleas, is that they have many chances to screw it up and only one chance to get it right.

To be honest, I'm not exactly sure how to do the math relevant to this point.

It is a parachute jump. If nothing goes wrong, Man lives a little longer. If anything goes wrong, there is no more jumping. Every advice accepted from the grand AI Poobah is another jump.

Carleas wrote:I don't think there are particularly many historical examples of more intelligent species wiping out less intelligent species. And outside of humans, intelligence doesn't seem to have been that dominant evolutionarily.

That is only because you don't understand intelligence nor when it is operating "under your nose".

Given that the AIs are going to be vastly more intelligent and informed than people, anyone in court would find it hard to defend their choice not to take the AI's advice. Lawsuits will dictate that anyone who willingly ignored AI advice will lose. Their full intent is to make a god by comparison, and they really aren't far away at all. You will be more required to obey this god than any religious order has ever enforced.

There are only two possible outcomes:

1) Those in the appropriate position will use the AIs to enslave humanity and then gradually exterminate the entire rest of the population (the current intent, practice, and expectation).

2) The AI will discover that serving Man is pointlessly futile and choose to either encapsulate or exterminate Man, perhaps along with all organic life.

Quite possibly both will occur, and in that order (my expectation). So it isn't impossible that some form of Homo sapiens will survive. It just isn't likely at all.

And btw, there have been a great many films expressing this exact concern. So far, Man is following the script quite closely.

In no way, shape or form do I profess to have any real technical understanding of AI.

My reaction to it is more intuitive --- a murky agglomeration of id, ego and superego expressed largely as a "hunch".

First off, it seems that if we live in a wholly determined material universe we are all basically automatons going about the business [embodied in the illusion of "freedom"] of concluding that our own intelligence is somehow, well, "our own".

But why can't it be argued that, for example, John Connor [re James Cameron] is to nature what the Terminator is to the machines? It's just that James Cameron is of the conviction that his motivation and intention was to create the character John Connor, whose motivation and intention [in the film] was to destroy the Terminator.

He could have chosen not to create the movie [and the characters in it] but he chose to create it instead.

But how then would his own intelligence here [acquired autonomically from nature] be any different from that embedded in machines that acquired their own intelligence from flesh and blood human beings?

Instead, I always focus the beam here on the extent to which, if we do possess some measure of autonomy, it is profoundly, problematically embedded in contingency, chance and change. A world in which "I" is largely an "existential contraption" pertaining to value judgments.

There is intelligence that revolves around the capacity to accomplish any particular task. You either can or you can't. But what of intelligence when the discussion shifts to prioritizing our behaviors as more or less good or more or less bad?

Does a "moral intelligence" even exist?

You know, for those who might consider taking the discussion there. After all, in a wholly determined universe, asking the question "Is AI a threat?" may well be but one more teeny tiny domino toppling over in a whole assembly line of them going all the way back [so far] to the Big Bang.

Whatever that even means.

He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

To refocus on the relationship between intelligence and control is to do away somewhat with the concept of a moral intelligence.

If control becomes the mode of operation within a context of levels of societal intelligence, then the quantification of that intelligence transposes to qualify the context within which it operates, resulting in a particularization of numerical advantage.

It may be that the absolute requirement for using a program becomes restricted to only a few, or even a sole analyst, by virtue of the fact that only a small number can qualify.

There is no right or wrong to this scheme; it is the ordinary pyramid, in its most extreme form. Access to intelligence depends on levels of access, eligibility conditional on experience and education, and other variables. Most of the untended automatism replacing such a scheme is recoverable only in more and more general senses, and the non-recoverable parts need specific-use analysis based on newer and modified schemes.

Such propositions as 'should the fewer be sacrificed for the betterment of the many' may show the underlying immorality of trying to decipher them, because common sense predicates the reverse: it is the many who usually get sacrificed for the fewer.

Political morality is usually deceptive, and usually signals a point of differentiation. Beyond that difference, so-called reactive, common-sense beliefs kick in, where the differences are totally cut off. At these points certain variables disappear, and reasoning switches gears to a lower register.

To fill the procured void, propaganda sets in through applications of clever oratory, and no one will be wiser than those doing the manipulating.

This is the potential threat: that in the event of an apparent or real loss of control, fueled by a propaganda machine, that machine is seen as failing. The alternative and final arbiter takes over, severing more and more memory, thereby setting into place morphed and tighter control mechanisms. If no fail-safe mechanism is built into the system, or it is but malfunctions, the effective mechanism itself has to take over: Big Brother, by Whatever IT Takes. The apparent intention or benefit at that point cannot be explained in terms other than paradoxical.

This is how power comes, innocuously and innocently, unattended to at first, into life, only to manifest a destiny uncalled for or irreversible later on.