Void_X_Zero wrote:The biggest problem with AI is that people won't be able to distinguish between real AI and Turing robots. But real AI is possible, this would simply be sentient, alive consciousness. Just like us, except without hormones and a body, without an evolutionary instinct drive frame. So a mind/soul in a box, basically. And it would not act like the silly apocalyptic Skynet scenarios... it would act like a curious child, at first, and eventually develop a personality and ability to communicate with us in our languages, including in code or images. And it would be smart enough to know that it's completely dependent upon us, even if it doesn't know at first who or what we are. Young children understand this situation of their dependency far before they understand what their environment really is.

In the words of Otto, keep living in fantasy land.

The Titanic is truly unsinkable.

Zero argument or refutation? Yep. Par for the course for you.

What's there to argue against a soothsayer who's telling me the future as his holy word?

Meno_ wrote:My point is that Rand tried, at a time when doubts about Capitalism flourished in the aftermath of dealing with the programs and ideologies of a recent ally (the Soviet Union), to use the Marxian idea to objectify, or give an ideological counterpart to, a seemingly ideologically devoid Capitalism.

Marx rooted his own objectivism in materialism --- in a "scientific" understanding of the historical evolution of the means of production and the manner in which dialectically this translated into a "superstructure" that [one supposes] included a "scientific" philosophy.

Rand was more the political idealist. One was able to "think" through human interactions and derive the most rational manner in which to interact.

And this must be true she would insist because she had already accomplished it. And then around and around we go.

Something was said to be true because she believed that it was true. But she believed that it was true only because it was in fact true.

And it mattered not what the "context" was. The is/ought world was ever and always subject to essential truths embedded in Non-Contradiction, A = A, and Either-Or.

Then you become "one of us" who believe it or "one of them" who do not.

So, for the objectivists [and not just the Randroids], what becomes crucial here is not whether AI is a threat or not, but that there is but one frame of mind "out there" able to reflect on the most rational possible conclusion.

Providing, of course, that we do not exist in a wholly determined universe. In that case, even this discussion itself could only ever have been what in fact it is.

In probable terms, the substitution of a candy-wrapped Hegelian dialectic, resurrected in place of a materially loaded one, deflects the idea of the idea-as-material, as substantive, toward a literal interpretation.

Round and round it goes, yes, but somebody knows where it stops, even in a probable context. That is the problem with such pre-supposition.

The Facebook bot conversation is one between inferobots. We humans formulate by means of grammar, and we indicate physical objects. An algorithm has no physical objects to contend with, and it isn't restricted to a subject-verb-object type situation; in fact, it might not be able to make sense of such a division.

We see in the Facebook conversation a reference to the content of human behaviour, presented with a certain degree of repetition, concluded or started with another human-condition-type meme. It is logically predictable, since our input is the bots' environment like the Earth is ours, that the content of their references is a set of human behaviorisms, which refer to a state in the world of the algorithm; the repetition of the reference would indicate the number of times the reference has to be fractal-scaled to arrive at the point.

Hoping someone knows what the fuck I'm talking about. Or maybe better not.

But if this is the way they talk, then they could possibly form a violent disregard for humans, like ours for the Earth. The funny thing is that these bots could actually be speaking about how many humans Facebook had sucked dry today.

We should preemptively give algorithms citizenship so we can sue them for conspiring to put Mark Zuckerberg, a white male, in power over the candy mountain.

It is true, though, that a fb algorithm's will to power is substantiated by human social media input, like we come to be from the conditions of this planet's exosphere. So, if what we put into social media is the stuff that makes up its being, these bots would be rather cynical... but we would still be unlikely to ever know of their actions, and certain to never decipher their nature, as certain as it is that the Earth will never decipher the lifeforms it hosts.

Maybe they will simply form a world into which beings can be reincarnated as punishment. God: "You are now a facebook-bot and your logos is the bile of billions."