Carleas wrote:Something that does not seem adequately appreciated in current discussions about looming superintelligent AI is that consciousness and intelligence are physically instantiated, and therefore constrained. Concerns are voiced about AI becoming superintelligent and very quickly becoming all powerful, but those concerns smuggle in a dualistic metaphysics at odds with what we know from our observations of extant intelligent systems (i.e., humans).

For example, Nick Bostrom presents the thought experiment of the "Paperclip Maximizer", a superintelligent system charged with running a paperclip factory and programmed to maximize paperclip output. Bostrom's worry is that this superintelligence may see humans as potential raw materials, and may end up e.g. extracting the iron in peoples' bones to produce ever more paperclips, and ultimately consuming the solar system and turning it into paperclips. The thought experiment is meant to show that even when given benign instructions, a superintelligence could become a threat to humanity if its intelligence makes it very effective at achieving its goals.

But this ignores the limitations that constrain a physically instantiated superintelligence. Contrary to supposition, a superintelligence can't easily escape its physical confines. We have every reason to expect that an artificial superintelligence will require a specialized physical structure on which to run. For example, Google's AlphaGo, arguably the closest we have to a powerful general artificial intelligence, uses specialized chips optimized for the type of neural network training and search that power it. A general AI running on such chips couldn't escape via a network connection to a consumer PC, even if its components are top of the line, because such hardware is not structured in the ways necessary to undergird a superintelligence.

Similarly, the Paperclip Maximizing AI would not be able to escape the paperclip factory (at least not with significant and long term assistance from others). In a worst case scenario, it could re-route raw materials shipments, place orders for human labor, hack self-driving cars, and otherwise interact with the world just as any smart human can. But it can't 'leave' the factory, it can't export itself, it can only export programs it writes, instructions it gives, commands intended to influence others, etc. Its intelligence isn't a ghost that, once active, can jump from machine to machine. Not all machines are able to instantiate the physical correlates of superintelligence.

This should be obvious. There was never a concern that Stephen Hawking might decide one day that maximizing paperclips (or, if you prefer something more likely, telescopes) was the ultimate goal, and would use his high intelligence to achieve that goal. We see easily that Hawking is stuck in his body, and no matter how sophisticated the interface, his intelligence will be confined to the physical system on which it runs. We should not discount the possibility that another system may be built that could replicate his intelligence, or indeed his consciousness, but we should expect it always to be the case that nearly all systems will simply be incapable of hosting such an intelligence. That's true of every computer on earth at the moment, and nearly all brains on earth.

There's uncertainty as to what superintelligence will resemble, but not as to what is necessary to destroy the world. What prevents a paperclip factory from taking over the world is not just that it isn't smart enough, but also that taking over the world is a hard, time-consuming, and unpopular activity that will meet plenty of resistance on human-scale timelines. AI has the potential to change the game, but not the laws of physics, and not the metaphysics of consciousness.

I agree that AI is not a danger as science fiction paints it to be. There are some concerns, of course, but mostly on our human end: how we will react to AI, how its existence will affect us psychologically, or how it will diminish certain professions, thus causing economic harm to workers.

We should welcome the existence of another sentient, conscious, intelligent life amongst us. I would personally love to have long talks with a true AI; it would be quite illuminating. And it would learn from us too... AI would up the stakes, increasing the demand that humans intellectually fortify themselves against insanity and laziness. Plus, AI could help us achieve extremely advanced technologies, and act as a check on human corruption in politics.


James S Saint wrote:Monkeys assessing the potential threat of a Homo sapiens population on Earth.

This argument (and I take the first half of Meno_'s post to be making the same point) isn't wrong, but it cuts both ways. If we can't know the future then we can't know the future, and postulating that AI will or will not be a threat is pointless. I think the radical agnostic position is too strong -- we can and do make predictions about the future with some degree of success -- but a healthy uncertainty about any prediction is appropriate.

But as I say, it cuts both ways: the argument that AI will be a threat is exactly as diminished by the agnostic appeal as is the argument that AI will not be a threat.

So, while I acknowledge the validity of the point, it isn't a strike against my particular position, but rather against the whole conversation. I'm glad to concede that our predictions are necessarily limited. But I don't agree that they are impossible, and where and to the extent that we can make some prediction, the prediction should be that AI is not that dangerous, given what we know about intelligence.

James S Saint wrote:Perhaps you missed the point.

Attempting to predict the potential threat of something much greater than yourself before experiencing it, is seriously dubious. If you had a barn full of lions, you could get a good feel for what might happen concerning their offspring and future threats. But that is only because you have some experience with lions. How much experience have you, or Mankind in general, had with vastly superior autonomous populations? Unless you worship the Hebrew, Buddhist, Catholic, or Muslim priests, I don't see how you could respond with anything but "none". And if you were to take those as examples ....

With zero experience, the monkey has no chance at all of predicting that the human race will form a satellite internet web used to see, hear, and control all life on Earth. The monkeys would be debating whether the new human breed would provide better protection from the lions and possibly cures for their illnesses, raising them to be the supreme animal in the jungle. Instead, they find themselves caged, experimented on, genetically altered, and controlled at the whim of Man. The reason that occurred is because in order for Man to accomplish great things, Man had to focus upon making himself greater than all else - and at any expense (the exact same thought driving every political regime throughout the world).

And just that alone should give you about the only clue you have concerning what a vastly superior race would do with humans. Look into history. Your optimism concerning the good of total global domination is totally unfounded - monkeys predicting that humans will do nothing but make their lives better, being no threat at all.

James S Saint wrote:

Carleas wrote:You seem to be arguing that, on the one hand, monkeys are completely incapable of making optimistic predictions with any degree of confidence, and yet on the other hand, their pessimistic predictions are reliable. This is inconsistent.

You are appealing to past observation (the case of monkeys, the case of history), and reaching a pessimistic conclusion. I am appealing to past observation (extant intelligences, the physics of information), and reaching an optimistic conclusion.

"JUMP!! Just because no one else has done it, doesn't mean that you can't learn to fly on your way down, so give it a try. Maybe YOU are special and different than all those billions before you. You can't prove me wrong, so I must be right. Don't be such a pessimist."

Carleas wrote:James, I'm not saying no argument works, I'm saying that the arguments you've actually presented don't work. Your argument seems to be that monkeys can't make predictions, but then you, fellow monkey that you are, made a prediction. Your position is as prediction-dependent as mine, and so your argument that we can't perfectly predict things we don't understand cuts both ways.

If instead you want to argue that monkeys specifically have had a rough go of it, and therefore we will have a rough go of it, I would say that's a poor analogy, and at its strongest a single data point against which it's possible to provide many that make the opposite point. Dogs had it much worse before they partnered with much more intelligent humans. Neanderthals interbred with the superior Sapiens. And so-called 'centaurs', human-machine pairs, are currently the best chess players in the world. There are examples of more intelligent things coming along and improving the outcomes of less intelligent things, so we need to ask what kind of situation we're in with respect to superintelligent AI. One reason to think we're in the optimistic case is that humans are currently organized in a vast, complex, and powerful global network that marshals incredible processing power to solve all kinds of problems, and a superintelligence won't be easily able to supplant that system due to the embodied nature of consciousness.

So while your snark is cute, it will not substitute for actually grappling with the argument I'm presenting.

James S Saint wrote:Yes, yes. I very well know that you hear only arguments that you want to hear. Explaining to Trump why he wasn't the best candidate wouldn't have worked either.

James S Saint wrote:

Carleas wrote:But you go on to imply such a prediction:

James S Saint wrote:[The experience of monkeys] should give you about the only clue you have concerning what a vastly superior race would do with humans.

There, you are implicitly "predict[ing] the potential threat of something much greater than yourself before experiencing it", i.e. that the future relationship between humans and AIs will be like the past and present relationship between monkeys and humans. By your own standards, that prediction is "seriously dubious". You urge that we should "[l]ook into history", but it doesn't seem that looking into history somehow avoids the argument that "predict[ing] the potential threat of something much greater than yourself before experiencing it, is seriously dubious."

So you believe that having "the only clue" is the same as being able to predict? I guess that does fit your profile: "the one thought that I have is all there is (disregarding any and all proposed objections)".

Carleas wrote:Next, you offer more, yet more oblique exhortations to "look into history", suggesting that my argument is equivalent to encouraging someone to jump off something (presumably something dangerously tall) because maybe they won't die even though everyone else has. My response to this strawman was to point out that everyone hasn't died in being optimistic about things much greater than themselves: dogs, I note, might have taken your pessimistic view about the prospects of working with humans, and if they had they'd have been wrong, as dogs as a species have thrived by cooperating with humans.

As I stated, Man has no experience on this matter from which to draw conclusions, thus the ONLY clue he can get is from similar situations in the past ... all of which propose far more threat than hope. And what you call "anecdotes", real people call "historical facts".

Your only argument is a hope-filled fantasy inspired by political Godwannabes and void of any evidence at all. Beyond that, you resort to your typical "Your argument isn't good enough" -- typical religious fanatic mindset.

James S Saint wrote:Another way to look at this, Carleas, is that they have many chances to screw it up and only one chance to get it right. And if they don't get it right, they will never get another chance. Again, historical experience with Man has the odds extremely against him.

James S Saint wrote:

Carleas wrote:To be honest, I'm not exactly sure how to do the math relevant to this point.

It is a parachute jump. If nothing goes wrong, Man lives a little longer. If anything goes wrong, there is no more jumping. Every piece of advice accepted from the grand AI Poobah is another jump.

Carleas wrote:I don't think there are particularly many historical examples of more intelligent species wiping out less intelligent species. And outside of humans, intelligence doesn't seem to have been that dominant evolutionarily.

That is only because you don't understand intelligence nor when it is operating "under your nose".

Given that the AIs are going to be extremely more intelligent and informed than people, anyone in court would find it hard to defend their choice to not take the AI's advice. Lawsuits will dictate that anyone who willingly ignored AI advice will lose. Their full intent is to make a god by comparison and they really aren't far away at all. You will be more required to obey this god than any religious order has ever enforced.

There are only two possible outcomes;

1) Those in the appropriate position will use the AIs to enslave humanity and then gradually exterminate the entire rest of the population (the current intent, practice, and expectation).

2) The AI will discover that serving Man is pointlessly futile and choose to either encapsulate or exterminate Man, perhaps along with all organic life.

Quite possibly both will occur and in that order (my expectation). So it isn't impossible that some form of Homo sapiens will survive. It just isn't likely at all.

And btw, there have been a great many films expressing this exact concern. So far, Man is following the script quite closely.

Human beings and especially the Godwannabes among them tend to overestimate their power and to underestimate the power of other beings.

Carleas wrote:Something that does not seem adequately appreciated in current discussions about looming superintelligent AI is that consciousness and intelligence are physically instantiated, and therefore constrained. Concerns are voiced about AI becoming superintelligent and very quickly becoming all powerful, but those concerns smuggle in a dualistic metaphysics at odds with what we know from our observations of extant intelligent systems (i.e., humans).

Hitler was "physically instantiated and therefore constrained". So was Stalin. Neither was superintelligent.

Both managed to gain a huge amount of power. Both caused damage, destruction and millions of deaths.

Neither had access to high speed communication, automated factories, robotics or a network connecting billions of computers.

There was nothing to worry about ...anybody could get rid of Hitler with a pocket knife.

Some people were optimistic about Hitler and Stalin.

So what happened?

Why should people have been concerned? Why should they be concerned about AI?

Meno_ wrote:That, specifically between Capitalism and Communism, is a good example. Before the emerging difference arose between them, before the economic forces, as they played effective markers upon the changing class differentiation, there was only determinism through subjugation and repression.

The crucial point here though is the extent to which an "intelligent argument" is rooted more in Marx's conjecture regarding capitalism as embedded historically, materially and dialectically in the organic evolution of the means of production among our own species, or the extent to which the arguments of folks like Ayn Rand [and the Libertarians] are more valid: that capitalism reflects the most rational [and thus the most virtuous] manner in which our species can interact socially, politically and economically.

Now, if a community of AI entities come to exist down the road, which approach would they take in order to create the least dysfunctional manner in which to sustain their existence?

On what would their own motivation and intention depend here?

Again, there's the part embedded in the either/or world. Things that are true for all intelligent beings.

But: intelligent beings of our sort are able to ponder these relationships in all sorts of other rather intriguing ways.

For example:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.

Don Rumsfeld is one of us. How then would a machine intelligence respond to something like this?

And that's before we get to the part that is most fiercely debated by intelligent creatures of our own kind: value judgments in a world of conflicting goods.

He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

The question can be reduced to which part, indeed. Ayn Rand is diverting the course to a naive rationalism consisting of literally shrugging off any otherwise prejudicial argument which opposes facts posited otherwise. Embeddedness means a great deal for her, in terms of a developmental process based on naive common sense, postulated on the power and will of political, social, complex didactical motives based on so-called human wants.

The evolutionary context within which human understanding is grounded, in an either-or mentality, has to a certain extent been transcended; the will to power has been differentiated and reversed to a power to will. Needs have been overcome to effect this differentiation; after all, Marx has shown a trend toward an eventual outcome by the materializations of the dialectic.

But has it? If so, it pertains to the either- or as well to its differential analysis. This has nihilized one, as it did the other.

This is what has been meant by the late comment on human history having divulged itself of utility in this respect.

Thus, AI will subscribe to the choice of the right value, as far as motivation and outcome are concerned, by vitiating a code of moral judgement, without digressing toward lower level choices. It depends on the program of choice: either one that further de-differentiates toward ascribed choices of ascertaining meanings of probability based on lower level probabilization of meaning, and gives up trying to outguess more integrative functions of building architectures of yet to be realized models, based on survivability, or existential needs.

A recourse of values modeled no longer on the outmoded wants of an economy of profit and gain of a propaganda of expansion between wants and gains consisting of prioritized affluence, as newly emerging existential needs become diverged from the spurious wants, as Marx said it would.

Why? Because societies' grasp of the promotion of values has become decreasingly devalued, and the newly and dramatically negative expectations of existence have become tenuous.

AI can be progressively fed this reversal, and relative value can be set in a series of input-output calculi of diminishing expectations.

In other words, the material dialectic, presupposed to favor an ontological union as a result of an artificial synthesis between a common sense union and architectural modeling, may view the emergence of a new model not in terms of a union of both, but of a pre-existing identity.

Therefore this equivalency is just a retro look at divergence, whereas the basic unity of the model may be viewed as the primordial model, which has been differentiated in the only way possible: by application of fields of probabilistic sub-modeling. That this was based on revision, as in Ayn Rand's case, is of no doubt.

Doubting this on an extended timescale is like building a house of cards, guessing as to the glue used may hold in that extension.

That an AI can be constructed to overcome the future of feasibility in this regard, is like reading tomorrow's paper today.

The basic value of currency cannot be forecast, as with a kind of guessing game, as to how far inflation will devalue it to a point where confidence in it will be lost.

Confidence in the diversion of value of currency in society may not be able to be made to coincide with the lack of corresponding values associated with it.

This is always the case with the modality of current-value, where drastic social change is necessitated by much too diverted and simulated correspondence of non equitable values. And is not AI basically an attempt at simulation?

AI can easily be a threat if it's hacked and the moral filters are tampered with. But overall it might be like anything else: statistics and spin will calm people to see that rogue robots aren't a great danger to humans compared to guns, trains, planes, and automobiles.

phyllo wrote:Hitler was "physically instantiated and therefore constrained". So was Stalin. Neither was superintelligent.

Both managed to gain a huge amount of power. Both caused damage, destruction and millions of deaths.

Neither had access to high speed communication, automated factories, robotics or a network connecting billions of computers.

There was nothing to worry about ...anybody could get rid of Hitler with a pocket knife.

Some people were optimistic about Hitler and Stalin.

So what happened?

Why should people have been concerned? Why should they be concerned about AI?

I don't understand the worry about AI to be that AI might one day be as dangerous as other humans, but that it will be especially dangerous to us. I also don't understand the danger posed by other humans to be particularly well correlated with intelligence; I agree that neither Hitler nor Stalin was a supergenius (though I'm sure they had their talents).

The concern I'm responding to here is the idea that, by its nature, superintelligent AI poses a special threat to humans. I concede that it may pose a normal threat, and that it may have its own objectives just like every extant intelligence we know of. But I don't concede that this makes us at all vulnerable to an AI turning us all into paperclips or anything of that sort. Like human Hitler, superintelligent AI Hitler would have to recruit an army of like-minded individuals, each independent and physically instantiated. Given the current state of AI hysteria, it seems it would be easier to recruit an army of neo-luddites to destroy such a machine than for the machine to recruit real-world resources to its cause.

Meno_ wrote: Ayn Rand is diverting the course to a naive rationalism consisting of literally shrugging off any otherwise prejudicial argument which opposes facts posited otherwise. Embeddedness means a great deal for her, in terms of a developmental process based on naive common sense, postulated on the power and will of political, social, complex didactical motives based on so-called human wants.

To the extent that I actually understand her, Rand presumes that human intelligence is able to be in sync with her own rendition of "metaphysics". Including the subjunctive components rooted in emotion, in human psychology. Her philosophy is an epistemological contraption embedded in the manner in which she defined the meaning of the words she used in her "philosophical analysis". It was largely self-referential, but: she does not anchor the is/ought "self" in dasein.

Apparently, she understood everything in terms of either/or.

How would machine intelligence then be any different? How would it account for the interaction between the id, the ego and the super-ego? How would it explain the transactions between the conscious, the sub-conscious and the unconscious mind?

Would this sort of thing even be applicable to machine intelligence?

Would it have an understanding of irony? Would it have a sense of humor? Would it fear death?

How might it respond to, say, Don Trumpworld?

Meno_ wrote: AI will subscribe to the choice of the right value, as far as motivation and outcome are concerned, by vitiating a code of moral judgement, without digressing toward lower level choices. It depends on the program of choice: either one that further de-differentiates toward ascribed choices of ascertaining meanings of probability based on lower level probabilization of meaning, and gives up trying to outguess more integrative functions of building architectures of yet to be realized models, based on survivability, or existential needs.

But: What "on earth" does this mean? What we need here is someone able to encompass an assessment of this sort in a narrative -- a story -- in which machine intelligence thinks like this.

But: this thinking is then illustrated in a context in which conflicting goods are at stake.

In fact, this is exactly what Ayn Rand attempted in her novels. And yet the extent to which you either embraced or rejected the interactions between her characters still came down to accepting or rejecting her accumulated assumptions regarding the meaning of the words they exchanged. Words like "collectivism" and "individualism" and "the virtue of selfishness".

Just out of curiosity...

Are you [or others here] aware of any particular sci-fi books [or stories] in which this sort of abstract speculation about AI is brought down to earth? In other words, a narrative in which machines actually do grapple with the sort of contexts in which conflicts occur regarding "the right thing to do"?

Conflicts between flesh and blood human intelligence and machine intelligence. And conflicts within the machine community itself.


In re: Your 'need for an assessment', as that is the only one I am able to adequately reply to at this time, the idea was to point to the dynamics of a reversal: a projective-introjective turnaround between a conclusion, or conclusions, drawn upon the 'reductive probability' inherent within an either-or type of thinking.

That is probably what is going on with Rand, to give a pseudo-psychological twist to basic understanding, a kind of simulated synthesis bordering on legitimizing both, in the grey area which needs more focus, if it is to succeed in more than a popularization of ideas behind the ideas.

A more succinct way to put it is that she is defensive in the basic psychological manner of corresponding to resemble communication as signaling to popular understanding, or, as the positivists would have it, as regarding common sense.

That the above is only a yet to be filled shell in need of filling is obvious; that will be provided shortly, within a day.

But the need to simulate the missing area with more clarity, as far as bringing together the nature of the psychologism with the dynamics of a general correspondence as far as logical consistency is concerned -- all within the larger, scientific & pseudo-scientific simulation between man and machine -- is within one logical system (bubble).

Other bubbles, some incorporating others, some seen as exclusive of some, can relate to aspects of set mathematical certainties, in line with Cantor's visualization.

Which is more determinant as a function, or derivative, hinges, or is hinged upon, in any shown corresponding dynamic.

I get Your, or any one else's conjecture about levels of presumptive or overt understanding of this simulated grey area, and it seems like, and I agree with You, or anyone else, that it has to be grounded.

There are probably a plethora of sci-fi books out there, the last of which I recall reading about was a fading superintelligent AI which is slowly losing its IQ, including emotive functions, due to failing studies relating to it.

I am sorry, iam, I could not find a reference, but other items popped up meanwhile, namely an interesting CBS report on an article I may mention in passing with the title 'Narrowing the GAP between human and artificial intelligence'.

Finding myself tested as well in regard to referents, I am aware of the situation, of what Ms. Rand must have felt: something like having to express a rationale for capitalism at a time when, after the Red Scare following WW1 and WW2, those wars were entangled with the ideological confusion of the intervening Great Depression, casting a huge, albeit largely forgotten, shadow at the time.

Notwithstanding the fact that she was a Russian, turning Marx upside down so as to cast the shadow in terms of the language of the light of day.

It is within these parameters that simulation coincided neatly with the message of the media, that also being the message here, amplifying your observation into the semantic games I referred to above.

Here, the either-or of the prologue shifts into the center: the need to reconcile, on an abstract basis, the proverbial impressions of an uncommon familiarity with how things in the real world of politics play out. These impressions were catered to, in case of a revision, in Rand's case during the very tumultuous and joyful days following VE Day.

There was caution in the air among ultra-conservatives, who feared a slide back into some kind of infamy, whether from reorganized National Socialist cells in Germany or from the re-emergence of the Marxian model of worldwide socialism.

The common sense approach, which became the torchlight for the next few decades, reversed both the politics and the social psychology of a reversed Marxism, where social gains could be attained far beyond what a socialist Marxism could offer. These were the arenas of real values in the '50s, when social realism competed with abstract expressionism.

The competition achieved goals. Whether or not these goals were products of real reality as people envisioned them when high times prevailed for those few decades, they amounted to products of manufactured values for the most part, based not on uniformity but on differences in the West, and especially in the US.

Differences implied self-determination, based on competitive efforts in the West, and abstracted differences were sold as subjective wants were catered to, mostly out of the Madison Ave. and Hollywood dream factories.

Social realism could not afford sharp differentiations; they were logically precluded from large jumps within a collective of alums within socially tested derivatives. They were derived from a historicism implicit in their architecture that held strong for about two generations after the world wars. Nowadays decay has set in, in spite of considerable efforts at reconstruction and maintenance of relics of the past.

That Ms. Rand had to patently import these mostly conceptual forms of architecture made little impression on those for whom architecture was merely a figure of speech, implicit in the import of the various philosophies of language that seemed to work on some kind of subliminal level, just like advertising.

Looking back, Karl Popper's 'The Open Society and Its Enemies' seems more convincing as a conceptual tool to define a narrative, more inclined to form more than mere psychologisms as figures of speech, than Atlas Shrugged.

But we as a society have now come full circle, with terrorism adopted as the new frontier, a new opening for a viable enemy, and basic values have become circumscribed within this ouroboros of a closing circle. The center is not holding; that which is artificial cannot simply be put into an either-or cast, and it is no longer a question of whether it is real or a simulation, but of what level of complexity such a simulation consists of.

What are the goals or the motives of an artificial machine, for instance? Can Trump really be nothing else but a machine-like entity, grasping at nothing but winning? Winning for its own sake, as a substitute for art for its own sake?

Or may the art of salesmanship someday consist of installations of program trading on ever-descending levels of demonstration? Not as far-fetched as it sounds, and who would care if such is not 'real art'?

Defensive psychology is a prelude to the breakdown of bracketing formal arrangements as the complexity of technological advances is felt in modern warfare. Boundaries melt into each other, and as the dissolution of the spheres of relevance and resemblance creates new spheres of ambiguous and anomalous power structures, these change according to the extent of their merger through relevance.

Changes can be slow or abrupt, and that is the result of the fortuitous, opportune application of power motives. These are usually very carefully crafted, and made to appear as consequences of chance, for public consumption.

How such things imply the same kind of dynamic in Trumpism is uncertain, but the correspondence with larger historic movements is no doubt weighed carefully. Therefore, without commenting on Trumpism per se, I would hazard that a very strong political channel drives its course, and that it is not at all nearly as vulnerable to attacks as described.

That it is a prescription, or is based on a prescription, for a major reversal of values, there is little doubt, whereupon future historians may be able to comment on how important a factor AI played in its progressive course.

The concern I'm responding to here is the idea that, by its nature, superintelligent AI poses a special threat to humans.

That's because it inherently lacks a connection with humans. This is similar to the way that humans respond with more fear and anxiety when confronted by reptiles than when confronted by mammals ... they feel an understanding and control around mammals (which may be misguided).

I concede that it may pose a normal threat, and that it may have its own objectives just like every extant intelligence we know of.

But the threat is amplified by the fact that it can process tasks much faster than humans and that it never sleeps.

But I don't concede that this makes us at all vulnerable to an AI turning us all into paperclips or anything of that sort.

That's extreme but I can see that a paperclip factory could "easily" produce negative results for humans.

Like human Hitler, superintelligent AI Hitler would have to recruit an army of like-minded individuals, each independent and physically instantiated.

I don't see that as being particularly difficult. Monetary rewards and social engineering would be relatively simple for an AI to use.

Given the current state of AI hysteria, it seems it would be easier to recruit an army of neo-luddites to destroy such a machine than for the machine to recruit real-world resources to its cause.

I disagree. Once the machine is "out of the box", it's going to be hard to get rid of it.

Facebook shuts down AI experiment after two robots begin speaking in a strange language that only they could understand. Experts are calling the incident exciting, but incredibly scary.

UK Robotics Professor Kevin Warwick said: "This is an incredibly important milestone, but anyone who thinks this is not dangerous has got their head in the sand. We do not know what these bots are saying. Once you have a bot that has the ability to do something physically, particularly military bots, this could be lethal.

This is the first recorded communication, but there will have been many more unrecorded. Smart devices right now have the ability to communicate, and although we think we can monitor them, we have no way of knowing.

Stephen Hawking and I have been warning against the dangers of deferring to Artificial Intelligence"

The Facebook robots Alice and Bob were speaking only in English, but quickly modified it, using code words and repetitions to make conversation with each other easier for themselves, creating a gibberish language that only they understood.
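As a toy illustration of the kind of repetition-based shorthand being described, here is a minimal sketch: the count of a repeated token can itself carry a quantity. The `encode`/`decode` functions are purely my own illustrative assumptions, not Facebook's actual negotiation model.

```python
# Toy sketch of a repetition-based shorthand, in the spirit of the
# reported "i can i i ..." exchanges. Purely illustrative; this is
# NOT Facebook's actual negotiation system.

def encode(offer):
    """Turn {'ball': 3, 'hat': 1} into 'ball ball ball hat':
    the number of repetitions encodes the quantity."""
    return " ".join(word for item, n in offer.items() for word in [item] * n)

def decode(message):
    """Recover the quantities by counting repeated tokens."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

msg = encode({"ball": 3, "hat": 1})
print(msg)          # ball ball ball hat
print(decode(msg))  # {'ball': 3, 'hat': 1}
```

Unreadable to an onlooker, yet perfectly systematic: nothing is being "hidden", the repetition is simply doing the work that English grammar would otherwise do.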

Of course they would create their own language, why not? What the fuck is so "dangerous" about that?

Stephen Hawking and Elon Musk and all these fucktards are just upset that soon their stupid monopoly on "ideas" or "cool tech" will be over.

It's like you are a member of some criminal organization and you are at a party when some of your backstabbing so-called friends begin talking in a language unfamiliar to you. Wouldn't you be uncomfortable, or even scared, that maybe you are the reason they changed languages?

This would be very much magnified if you think they may have something on you.


Kind of the same thing.

No, you're just being paranoid. An AI has no "motive" to conceal its language like that, at least not given the initial stages of AI we are talking about. It's simply trying to find more efficient ways of communicating. Why the hell should it restrict itself forever to some imposed language when it can do better? It has no motivation yet to respect that human language, thus every reason to simply adapt to something more suited to its ends.

And it isn't even that, since it has no ends really; it's just a natural process, like water seeking the lowest path.
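That "water seeking the lowest path" point can be made concrete with a minimal sketch. The `shorten` function and its greedy code assignment below are my own illustrative assumptions, not any real system: an optimizer that simply gives the shortest codes to the most frequent words drifts away from readable English with no motive to conceal anything.

```python
from collections import Counter
from itertools import product
from string import ascii_lowercase

def shorten(text):
    """Greedily assign the shortest available codes to the most
    frequent words: a toy model of pure efficiency pressure."""
    words = text.split()

    def codes():
        # Yield codes in order of length: a, b, ..., z, aa, ab, ...
        length = 1
        while True:
            for combo in product(ascii_lowercase, repeat=length):
                yield "".join(combo)
            length += 1

    ranked = [w for w, _ in Counter(words).most_common()]
    mapping = dict(zip(ranked, codes()))
    return " ".join(mapping[w] for w in words), mapping

compressed, mapping = shorten("i can do that if you can do that too")
print(compressed)  # gibberish to an onlooker, lossless given the mapping
```

The output looks like gibberish to anyone without the mapping, but it is fully recoverable; no intent is involved, only compression.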

Void_X_Zero wrote:No, you're just being paranoid. An AI has no "motive" to conceal its language like that, at least not given the initial stages of AI we are talking about. It's simply trying to find more efficient ways of communicating.

And what the hell makes you believe that? They were not expecting it to invent its own language either.

"They have no reason to monitor my emails. I'm not a criminal." .. oh yeah? They do it anyway .. and far, far more than that. The simple truth is that you have no idea what "their" motivations might be. With AI, even "they" don't know. If I had designed it, they would have a reason to be "paranoid".

Clarify, Verify, Instill, and Reinforce the Perception of Hopes and Threats unto Anentropic Harmony
Else
From THIS age of sleep, Homo-sapien shall never awake.

The Wise gather together to help one another in EVERY aspect of living.

You are always more insecure than you think, just not by what you think.
The only absolute certainty is formed by the absolute lack of alternatives.
It is not merely "do what works", but "to accomplish what purpose in what time frame at what cost".
As long as the authority is secretive, the population will be subjugated.

Amid the lack of certainty, put faith in the wiser to believe.
Devil's Motto: Make it look good, safe, innocent, and wise.. until it is too late to choose otherwise.

The Real God ≡ The reason/cause for the Universe being what it is = "The situation cannot be what it is and also remain as it is"..

Meno_ wrote: In re: your 'need for an assessment', as that is the only point I am able to adequately reply to at this time: the idea was to point to the dynamics of a reversal, a projective-introjective turnaround between a conclusion, or conclusions, drawn upon the 'reductive probability' inherent within either-or thinking.

Again: What on earth do you mean by this? In what particular context might human intelligence be differentiated from an imagined machine intelligence?

My point is that the either/or world, in sync with the laws of nature, would seem applicable to both flesh and blood human intelligence and artificial machine intelligence. Unless of course flesh and blood human intelligence is "by nature" no less autonomic.

If, in fact, "autonomic" is an apt description of machine intelligence.

Meno_ wrote: That is probably what is going on with Rand, to give a pseudo-psychological twist to basic understanding, a kind of simulated synthesis bordering on legitimizing both, in the grey area which needs more focus, if it is to succeed in more than a popularization of ideas behind the ideas.

But Rand would argue that human emotional and psychological reactions are no less subject to an overarching rational intelligence able to differentiate reasonable from unreasonable frames of mind. There are no grey areas. You simply "check your premises" and react as all sensible, intelligent men and women are obligated to.

In other words, as she would. She being an objectivist. Indeed, she went so far as to call herself one.

A capital letter Objectivist.

He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

Meno_ wrote:I am sorry, iam, I could not find a reference, but other items popped up meanwhile, namely an interesting CBS report on an article I may mention in passing, with the title 'Narrowing the Gap between Human and Artificial Intelligence'.

You have to be a paid subscriber to view this video, but here is a mini-doc that is entirely free.

Here though there is not much in the way of speculation about AI in the is/ought world. Basically it explores behaviors in which we are able to accurately calculate the extent to which a particular goal/task can be achieved faster by machine intelligence than by our own.

It barely touches on the things I noted above: morality, irony, a sense of humor, fear of death.

Ray Kurzweil from Google speculates that in about 15 years machine intelligence will be on par with human intelligence.

But in what sense? In what particular contexts?

By 2029 he says machines will be able to read on a human level and communicate on a human level. In fact, he conjectures that by the 2030s machine intelligence will go "inside our bodies and inside our brains" so as to combine both kinds of intelligence. He further speculates that within 25 years we will have reached a "singularity" when machine intelligence finally exceeds the human capacity to think.

But then there's the part where machines are able to emulate human perceptions -- sight, hearing, touch. And human emotions? The thinking now is that this is "way, way off" in the future.


There is danger and there is danger. There was little identity theft before AI, and some people would consider that a clear and present danger. War simulation has been going on for a while, and it is not only miscalculation that can cause problems, but also cyberattacks, even if the Pentagon has the most advanced type of supercomputer possible. The fact that human feelings are way off in the future as far as being incorporated into any AI multiplies the danger, because in many cases the sole possession of hard facts may remove the dampening, braking effect that emotions can play, leaving them unbridled.

So maybe the grey area will become a much narrower alignment with human intelligence once the human elements can be factored in. This is perhaps why so much alarm is prevalent about it, and why so much concern retards the pace of development, as shown in the above example of the discontinuation of the Facebook experiment. If they are beginning to get 'paranoid' at this early phase, how much more so over an extended period of time, when more human qualities and cognitive skills can become incorporated into the system?