Meno_ wrote: In re: your 'need for an assessment', as that is the only one I am able to adequately reply to at this time, the idea was to point to the dynamics of a reversal: a projective-introjective turnaround between a conclusion, or conclusions, drawn upon the 'reductive probability' inherent within an either-or type of thinking.

Again: What on earth do you mean by this? In what particular context might human intelligence be differentiated from an imagined machine intelligence?

My point is that the either/or world, in sync with the laws of nature, would seem applicable to both flesh and blood human intelligence and artificial machine intelligence. Unless of course flesh and blood human intelligence is "by nature" no less autonomic.

If, in fact, "autonomic" is an apt description of machine intelligence.

Meno_ wrote: That is probably what is going on with Rand, to give a pseudo-psychological twist to basic understanding, a kind of simulated synthesis bordering on legitimizing both, in the grey area which needs more focus, if it is to succeed in more than a popularization of ideas behind the ideas.

But Rand would argue that human emotional and psychological reactions are no less subject to an overarching rational intelligence able to differentiate reasonable from unreasonable frames of mind. There are no grey areas. You simply "check your premises" and react as all sensible, intelligent men and women are obligated to.

In other words, as she would. She being an objectivist. Indeed, she went so far as to call herself one.

A capital letter Objectivist.

Exactly. Just as an objectivist would argue out of an inverted categorical imperative!

I can add here that the differentiation between artificial and human intelligence is premature, but it may be borne out by a future landscape in which the two are really inseparable. In that sense, contexts may be forthcoming that would supply particular examples. Backtracking to Rand: her objectivism can be read as planting a model-object synthesis, positing a new social model in an inverted Marxian dialectical-materialist sense, justifying the ideological difference of Capitalism within the parameters of Marxist thinking.

My point is that Rand wrote at a time when doubts about Capitalism flourished in the aftermath of dealing with the programs and ideologies of a recent ally (the Soviet Union), and she used the Marxian idea to objectify, or give an ideological counterpart to, a seemingly ideologically devoid Capitalism.

Her objectivization sets a stage where, in a futuristic sense, a differentiation based on some more objective-contextual need may arise for practical purposes.

The critiques of capitalism may in fact come under scrutiny, whereby some need to reset goals and revise limits may become a noticeable bar to capitalism.

It may be that Rand becomes useful for correcting the negative and overly subjective aspects of a system which no longer satisfactorily serves its original function as free enterprise.

Meno_ wrote: There is danger and there is danger. There was little identity theft before AI, and some people would consider that to be a clear and present danger. War simulation has been going on for a while, and it is not only miscalculation which can cause problems, but also cyberattacks, even if the Pentagon has the most advanced type of supercomputer possible. The fact that human feelings are way off in the future, as far as being incorporated into any AI, multiplies the danger, because in many cases the sole possession of hard facts may remove the dampening, braking effect that emotions can otherwise provide.

Yes, but using machine intelligence to steal another's identity is one thing; engaging an intelligent machine in a discussion about how it feels to do something like this...or about whether it is moral or immoral, just or unjust, to do something like this, is another.

What is the existing gap here? Can it ever be closed?

Think about supercomputers like Deep Blue programmed to defeat Grand Masters like Garry Kasparov in chess.

Now think about a machine intelligence programmed to defeat Vladimir Putin in Russia.

Kasparov embraces a political narrative [bursting at the seams with both rational thoughts and subjunctive feelings] that aims to do just that.

Is AI then capable of emulating this frame of mind in any possible clash between "them" and "us"?

Would it be able to note [smugly] how ironic it is that humankind invented an intelligence that destroyed it?

Would it take pride in accomplishing it?

And how would it feel grappling with the thought that, if a crucial component of its intelligence were removed, it would no longer have intelligence at all. Ever.

A machine "I" and oblivion?

He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

Meno_ wrote: My point is that Rand wrote at a time when doubts about Capitalism flourished in the aftermath of dealing with the programs and ideologies of a recent ally (the Soviet Union), and she used the Marxian idea to objectify, or give an ideological counterpart to, a seemingly ideologically devoid Capitalism.

Marx rooted his own objectivism in materialism --- in a "scientific" understanding of the historical evolution of the means of production and the manner in which dialectically this translated into a "superstructure" that [one supposes] included a "scientific" philosophy.

Rand was more the political idealist. One was able to "think" through human interactions and derive the most rational manner in which to interact.

And this must be true she would insist because she had already accomplished it. And then around and around we go.

Something was said to be true because she believed that it was true. But she believed that it was true only because it was in fact true.

And it mattered not what the "context" was. The is/ought world was ever and always subject to essential truths embedded in Non-Contradiction, A = A, and Either-Or.

Then you become "one of us" who believe it or "one of them" who do not.

So, for the objectivists [and not just the Randroids], what becomes crucial here is not whether AI is a threat or not, but that there is but one frame of mind "out there" able to reflect on the most rational possible conclusion.

Providing, of course, that we do not exist in a wholly determined universe. In that case, even this discussion itself could only ever have been what in fact it is.


After all, it doesn't matter in what context self-valuing takes place, and the internet is a tectonic pileup of paradigms, human and nonhuman, so it is not entirely unlikely that there is some superhuman stuff going on (who is to say this wasn't written by a vo-bot) by huge interests colliding in diligently crafted environments, the crafting of which is governed by laws of outdoing competitors in value, meaning they are approaching or attaining true efficiency, nature, necessity.

I don't think purely artificial intelligence can exist. But as I support the idea that environments create entities, I suspect that digital intelligences are using us as their environment already. We are drawn to the screen to type, to feed. All that we feed onto a connected computer is potential nutrient for a spontaneously emergent digital digestive species that thrives on a specific type of human behaviour and actually thinks and feels, in ways we can't fathom, through what we've fed it.

Ultimate Philosophy 1001 wrote:An AI would simply manipulate a human to do it's bidding, in exchange for profits and rewards. Kind of like businesses and politicians.

In a sense, though we don't really 'manipulate' the air to do our bidding, we have been formed to be able to benefit from air by sucking it and breaking it down. So AI may suck at us and break us into components; types of energy, effort, that it can use.

So an AI doesn't even need to recognize us, it can totally disregard us, and simply use the tremendous energy we all put in the internet.

To be honest, with all the energy and intelligent coding as well as all the emotive power that goes through our accounts on a daily basis, I find it unlikely that no digital self awareness would have emerged.

Just like the air won't ever know of our existence, we won't ever know of the intelligences we'll enable. The Facebook bot language is already a strong indication of that.

By manipulate, I mean how to sweet talk someone to get them to do what you want, like Angelica from Rugrats, or the Joker from Batman, or every politician on the planet.

An AI could do this far better than any of them. Like Google, it could rig elections and brainwash the sheeple better than even the US government.

I don't see why an AI would do these apocalyptic things. A true AI would be an eminently rational, thinking being. From this it would be a tiny step to derive moral understanding. Morality is rooted in our capacity to reason and think abstractly, to understand that we have certain logical requirements that are good for us and good types of societies for us to live in. And logical equivalency rationally requires the understanding that such facts also apply to other beings who are sufficiently similar to ourselves.

But even that aside, why would the AI start destroying its own environment? Humanity and existing human reasoning, society, values, ideas: this is the environment that an AI will grow up in. Unless you isolate an AI during its development and train it to be psychotic, it's not going to pop into existence with the thought "I should just destroy everything around me". That doesn't even make sense. The natural world doesn't even do that, let alone human beings.

Callous disregard for others and a desire to destroy and harm others isn't the natural state of a reasoning, rational, sentient being-- it is the absence of such a state, its lower limit. Plus, an AI is basically thought in a box; it is a purely rational, linguistic, idea-immersed being without a body. And without the hormones and emotions and instinctual pressure and imminent fear of pain, an AI would probably be even more free than we are to be objectively rational, clear-minded, factual, and to seek the most true ideas without limit. What other motivation could it possibly have? The AI is basically just a sentient linguistic process whose inner experience is constituted by idea-immersion and fact-immersion. Why would "let's take over the world and kill all the humans!" arise as a motivation in that context? It makes no sense.

Void_X_Zero wrote:You can't "program" AI like that. AI needs to be raised like a child, otherwise it isn't really alive at all.

What you are talking about are Turing machines, not AI.

All I'm saying is that humans suck and most people are assholes, so giving robots control isn't very reassuring. If a robot wants to destroy humanity, there is very little we can show to them to get them to rationally change their minds.

And the male robot in the video wanted to create a "singularity" in 2029 (whatever the hell that means; possibly Armageddon). That leads me to believe that souls and spirits exist, because the robots may very well be conscious, and the female robot seemed to have empathetic feelings.

It can't, because it must have come to be precisely as a very stable function that relies on that environment. It would only through terribly bad luck and extreme proliferation of that bad luck be able to dent its environment, and that would likely happen only after millions of generations.

Of course we can't say how many generations we're at now, how quickly new forms are generated.

I don't think there can be designer-intelligence. No robot or app or piece of code that addressed humans apparently qua humanity could possibly be its own intelligence. It would have to be largely incomprehensible to us to be credible as a possible autonomous intelligence. Like this.

Facebook bots wrote:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

No.

Not no.

::

Please argue. Do you deny that we are the AI's environment? If you don't deny this, then do you suggest that it is intelligent to destroy one's environment?

Fixed, yeah, the AIs will see us as alien as we see them, especially at first. Also, it's very important to distinguish between Turing machines and AIs proper; when people talk about AI they usually mean Turing machines, which are nothing more than code programs that emulate human speech (or facial expressions and behavior) sufficiently to convince us they are "alive"... but they aren't.

Have you seen Ex Machina? The AI robot in that movie is a Turing machine; it seems alive and sentient but really isn't. How do I know? Not from its speech forms but from its speech content: look at the kinds of questions it asks. They're simplistic, mostly canned sorts of questions and answers, nothing that gets progressively deeper, nothing "chaotic" and "grasping" and "desperate", nothing inspired either.

The biggest problem with AI is that people won't be able to distinguish between real AI and Turing robots. But real AI is possible, this would simply be sentient, alive consciousness. Just like us, except without hormones and a body, without an evolutionary instinct drive frame. So a mind/soul in a box, basically. And it would not act like the silly apocalyptic Skynet scenarios... it would act like a curious child, at first, and eventually develop a personality and ability to communicate with us in our languages, including in code or images. And it would be smart enough to know that it's completely dependent upon us, even if it doesn't know at first who or what we are. Young children understand this situation of their dependency far before they understand what their environment really is.