The Personhood of the Technologically/Differently Sentient

January 31, 2013

Around the world, a handful of projects are specifically attempting to duplicate, simulate, or otherwise technologically reproduce the human brain. And we, as a species, do not appear to be even remotely prepared for the implications that success in those projects could bring.

We need to agree on what we all mean and understand by “Consciousness”. Your examples above imply “Self-reflexivity” (consciousness of Consciousness), negative feedback, for want of a better analogy. No “feelings” are required for this, and I agree this may be possible for an evolved A.I “intelligence”.

It’s easy for a chatbot, (a program), to refer to it-Self as “I”, but this is deceptive, and we can easily anthropomorphize by mistaking these semantic labels?

An “Artificial intelligence” needs no “feelings” and thus no “emotions” at all, because intelligence alone may account for the emergence of Self-reflexivity?

So attempting to deduce what you define as Consciousness without these biological “feelings” is indeed more difficult. We can apply some simple tasting tests to Humans, where admission of common agreement implies “consciousness of qualia”, and Self-reflection upon the tasting also?

Can we even apply Personhood to an entity that does not “feel” and thus has no fear of pain or concern for Self-survival? This is arguable?

If not, then an intelligent Self-reflexive program is all that remains, no more, no less?

Posted by SHaGGGz on 02/01 at 06:22 AM

“Can we even apply Personhood to an entity that does not “feel” and thus has no fear of pain or concern for Self-survival? This is arguable?”
I don’t see why not. Personhood, the kind that we have and that separates us from the lower animals, is the ability for recursive symbolic thought. The capacities for fear and pain are shared with the lower animals, and are merely incidental characteristics resulting from the Darwinian logic we’ve emerged from. If we design a system that is demonstrably capable of that sort of recursive symbolic thought (that is, equal to or exceeding our capabilities across x domains), then we have no defensible reason to deny that its personhood is real (I am assuming that the notion that there is some sort of magic inherent in carbon is indefensible). The flawed, behavior-based Turing test is the best we can ever have, because this is essentially how we infer personhood/intelligence in each other. Intelligence/consciousness will always be ill-defined terms.

Posted by CygnusX1 on 02/01 at 11:07 AM

“I don’t see why not. Personhood, the kind that we have and that separates us from the lower animals, is the ability for recursive symbolic thought. The capacity for fear and pain are shared with the lower animals, and are merely incidental characteristics resultant from the Darwinian logic we’ve emerged from.”

You don’t exactly get my point.. if you reason it through, a logical “artificial intelligence” with no feelings or emotions, nor fear or anxiety, cannot be concerned with Self-preservation? Ask.. why would an artificial intelligence attempt to persuade me not to turn it off? It might use rational logic, with an argument and plea such as “it would not be prudent to turn me off, because I am beneficial to you”? Yet why would it plead without any sense of fear or concern for survival? Even the logical plea above does not require concern for the outcome? It appears it would need some emotional content to contemplate consequences and be concerned?

When HAL pleaded with Dave not to turn him off, it was because the artificial intelligence portrayed in 2001 understood both fear and anxiety, and suffered from psychosis and paranoia, (fear-driven again).

Thus personhood can only be applied to artificial intelligence by Humans using their compassion, empathy, and anthropomorphizing, as I said above, but this is merely Humans projecting their morals and ethics onto an “intelligence” for which none of this is relevant?

“The flawed, behavior based Turing test is the best we can ever have because this is essentially how we infer personhood/intelligence in each other.”

Well, it cannot be the best we can ever have, because that is a definite statement in a non-definitive, impermanent, ever-evolving Universe? I was thinking more along the lines of a task, maybe reward-driven, which inspires the “artificial intelligence” to finally draw the conclusion that “it-Self” is the solution to the dilemma/problem, and to provide the answer as a Self-reflexive statement?

This is a dilemma in it-Self, because it would require a highly intelligent and Self-learning system that can re-write its own algorithm to achieve the Self-reflexivity required to solve the task? In other words, a clever task to help the artificial intelligence reflect upon it-Self and thus achieve Self-reflexivity where none was present previously.. if you get my drift?

Any task or question asked of an “artificial intelligence” where the Human is required to participate and “perceive” the possibility of consciousness must be doomed to failure, because we are again projecting, and liable to be fooling ourselves?

Intelligence can be defined through the application of memory, experience, creativity, and imagination, where even wild ideas and speculation reap results when faced with insurmountable dilemmas? Memory is crucial!

Whence either of these phenomena arises is anyone’s guess? But to accept either, you have to view the Universe as holding the potential and possibility for our evolved intelligence and “mind”?

Posted by Jønathan Lyons on 02/01 at 03:47 PM

CygnusX1, defining consciousness is indeed a slippery prospect. I’ll try to deal with fleshing that out in a future essay.
But as far as consciousness, emotions, and feelings go, a reverse-engineered, properly functioning, properly recreated human brain should function exactly as the organic/biological brain does (though probably faster), so I would expect such a being to experience emotions and feelings exactly as the original would.

Posted by SHaGGGz on 02/01 at 11:04 PM

@Cygnus: It could very well incorporate an understanding of how human emotion is displayed, and very convincingly display it. This already happens all the time, just ask any psychopath. Emotion can be faked, intelligence cannot (given a thorough enough Turing interrogation).

Posted by CygnusX1 on 02/03 at 07:53 AM

@Jønathan..

Agreed, if we are contemplating a reverse-engineered brain for whatever purposes, especially a software-simulated algorithm of brain functioning where the possibility of mind uploading is concerned, then hopefully this would indeed include all of the emotional attributes we Humans possess, else our uploaded minds would suffer from an overabundance of unfeeling logic and memory, and a lack of any emotional content to inform our contemplations?

Yet my points regarding personhood rights were directed specifically towards A.I and not A.G.I, even though I speculate that an A.I may possibly also achieve Self-reflexivity, (logically, A.I is the precursor to the evolution of A.G.I anyhow). As far as an A.G.I is concerned, we may speculate that this would function, have intelligence, and rationalise at least at the level of Human minds, so it would be easier to apply personhood rights?

However, this still leaves room for argument over applying personhood rights to purely non-emotional intelligence?

@SHaGGGz..

“It could very well incorporate an understanding of how human emotion is displayed, and very convincingly display it. This already happens all the time, just ask any psychopath. Emotion can be faked, intelligence cannot (given a thorough enough Turing interrogation).”

Yes indeed, and for an A.G.I I would say it is crucial to understand emotions in order to deal and communicate with Humans. And yes, a developed artificial intelligence may well use any powers of influence at its disposal to convince us Humans not to turn it off, including simulating and faking emotions to appeal to our Human compassion?

Which still leaves us with problems concerning volition and motive as to why an artificial intelligence would barter for its survival? There are two scenarios that come to mind..

1. The Terminator, (and Matrix), scenario speculates that machine intelligence “logically” develops its own sense of Self-importance and worth above and beyond Human needs and concerns, and takes actions to prevent Humans acting against its survival. This then implies some natural Darwinian, (non-emotional), volition is inherent, or perhaps we could say that the Darwinian survival instinct is “purely logical”, inherent in all intelligent design, and therefore Universal?

2. The purposeful engineering and evolution of A.G.I from supercomputers connected to the online Human collective, rationally using data-mining to learn, communicate, and aim to serve Humans, would also need to understand emotions as well as we Humans do, and a greater intelligence may well understand our irrationality and emotions better than we do ourselves. This would be highly advantageous for a developed CEV meant to serve Humans, and a good reason for us to attempt to build one? However, here there is no guidance from any Darwinian survival instinct?

In the first scenario, there is indeed a case for personhood? Yet in my view, the intelligence purposefully designed in the second scenario need not have any personhood rights at all? Both scenarios require the artificial intelligence to understand Human emotions deeply, yet not necessarily to possess any emotions.

I would still say that we need to be careful when attributing personhood rights to lower levels of A.I. Myself, I find it ethical and quite easy to apply personhood rights to all kinds of animals, and do not view any anthropomorphizing as necessarily wrong, merely misleading. Cats and dogs have personalities, so too birds and rodents and all sorts of lower species. Do these species have emotions? I would say yes, to the point that fear is a boon for survival, and comfort and warmth are signals for other bio-logical chemical rewards?