If you create something that has consciousness and feelings, you have opened yourself up to a huge number of ethical responsibilities. Doing this without having figured out what those responsibilities are ahead of time would be quite unethical.

Going from there, what responsibilities would be involved with this? Should I avoid creating an Artificial Consciousness?

Is it any different than the ethical considerations in having a child?
– kbelder Nov 20 '17 at 19:37

That certainly would be a valid perspective on the matter, but do keep in mind that the 'child' in this case is trapped in a computer, in digital format, and is the only one of its kind...
– Onyz Nov 20 '17 at 19:39

1 Answer

Before we go too far down the track of discussing ethics, we have to know whether or not it's even possible for an AI to be alive, or to have feelings.

My PhD topic is actually about building a taxonomy (or possibly an ontology) of terms like the ones you're using here.

Generally speaking (in that taxonomy):

Awareness = The ability to apply an environmental consideration as a factor in one's deliberations

Self-awareness = The ability to consider one's own considerations as a factor in those deliberations (almost meta-consideration)

Consciousness = An 'always on' ability to gather new data and integrate it into the existing pool of information for future considerations and deliberations

Liveness = The ability to direct one's thoughts, deliberations and choices by transcending one's original programming and building an internal ontology of 'meaning'

Note that none of these mention feelings. That's because even if a computer-based AI could ever be considered 'alive' (I don't think it can, for reasons too broad to include in this answer), it wouldn't be human.

Humans are driven by more than their intellect. Evolution has left the earlier, more primitive sections of the brain, like the cerebellum (instinct, hard-wired electrical) and the limbic system (emotion, chemical), in place after adding the cerebral cortex (reason, soft-wired electrical) in mammals, and most especially in humans. (This is all a simplification for the purposes of this answer, but not technically incorrect.)

Computers don't have that baggage. The closest a computer comes to a human brain is a cerebral cortex in isolation, and even that is an approximation. The practical upshot of this is that an AI may be capable of learning new things and providing reasonable answers and judgements within a specific problem domain; it may even be able to simulate emotions and complex social behaviours. What it can't do is feel emotion or be driven by an innate survival instinct the way humans can be.

At best, an AI with human-level intelligence would be very similar to a sociopath who has learned to emulate human social and emotional responses in order to fit in with its society.

Sure, we could program an AI to prefer survival, but before we invoke the Skynet scenario, it should be noted that we could already write a simple program for a computer system attached to an offensive hardware platform that would very effectively wipe us all out today; no need for AI sophistication to do that.

To that end, the creator of an AI has a responsibility to ensure that it's fit for purpose (like any other program) and that it has been designed in a way that doesn't cause harm. In this sense, the writers of a smelter's machinery firmware have the very same responsibility to ensure that the machines don't randomly tip molten metal onto factory workers. The designer of an AI that works on the stock market (for instance) must also be careful, because he or she is responsible for the outputs of their program. The difference is that the firmware writers have a much better idea of HOW those outputs will be generated, and have greater control over them.

The problem here is not the computer; it's us. Because of our empathic 'programming', we have a tendency to anthropomorphise items in our environment. As children we play with dolls, teddy bears, Lego minifigures, etc., pretending that they're human. We give our GPS units names because they talk to us. In the case of AI, we're really creating a far more sophisticated 'doll': one that can talk to us, engage us in a social manner, and give us answers from which we can infer some intelligence. As such, we'll call the AI intelligent and assume therefore that it's alive. That will still be our inference, not the computer's implication.

Practical example: I've built a simple computer program that can play Pick Up Sticks with you. You can set any number of sticks in the pile, any number you can pick up at once, and whether the last stick picked up wins or loses. It's VERY hard to beat. In some configurations, it's impossible to beat. Is that because the computer program is 'smart'? No. It's because Pick Up Sticks is 'math complete' (a solved game) and I've written the program with the mathematical formula for the game. That program isn't aware and can't exceed its own programming, but it is often described as 'very smart', at least until people learn the formula, at which point they become just as hard to beat at the game.
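For the curious, that formula is short enough to sketch. Here's a minimal Python illustration of the winning strategy for a single pile; this is my own reconstruction with assumed parameter names, not the actual program described above:

```python
def optimal_move(sticks, max_take, last_wins=True):
    """Suggest how many sticks to take in a single-pile Pick Up Sticks
    (subtraction) game where each turn removes between 1 and max_take
    sticks, and taking the last stick either wins or loses the game."""
    # Losing positions are multiples of (max_take + 1) when the last
    # stick wins, or one more than a multiple when the last stick loses.
    target = 0 if last_wins else 1
    move = (sticks - target) % (max_take + 1)
    # If we're already in a losing position, no winning move exists;
    # take a single stick and hope the opponent makes a mistake.
    return move if move else 1

# Example: 21 sticks, take up to 3 per turn, last stick wins.
# The program takes 1, leaving 20 (a multiple of 4) for the opponent.
print(optimal_move(21, 3, last_wins=True))  # -> 1
```

Once the program leaves a multiple of (max_take + 1), any reply the opponent makes can be answered to restore that multiple, which is why some starting configurations are literally unbeatable.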

In short, we are far more likely to infer feelings and liveness from a computer algorithm than it could ever actually possess. That doesn't mean we shouldn't be careful; it just means that when creating a powerful program like an AI, a programmer should think more about the impacts of the program's outputs than about the 'welfare' of the program.

But for the sake of argument...

Let's assume that a 'functionally equivalent to human' AI is possible and someone decides to build it (as per the comments):

What you have is a child.

We raise these all the time, and as parents, we're held responsible (firstly) for their welfare, (secondly) for their actions, and (finally) for their ability to join our society as well adjusted members and contributors.

Sure, this is different insofar as the 'child' is in some ways far more powerful than a human child, yet at the same time far more fragile. The care of such a 'creature' enters unknown territory, but to extrapolate: the creator would have a responsibility to ensure that his or her creation has its needs (electricity?) met, has access only to material appropriate to its level of development, has every opportunity to socialise with well-adjusted humans and potentially with others of its kind, and so on.

We wouldn't have to worry about some of our biological hang-ups (like restricting its access to material about human reproduction), but new ones might be introduced; do we really want it to know that it can be terminated by a sustained power outage, for instance?

The single biggest ethical concern would be ensuring that it cannot reproduce on its own. It's one thing to create a child; it's quite another to create a new species. If one were prepared to do that, then one would be responsible to the rest of humanity for any harm done by that species, which would be far harder to control than a single exotic child.

This is a very interesting read, and I especially like the information about taxonomy; I hadn't been sure at all when starting this how to refer to the various pieces. For the sake of the question, however, I'd prefer that answers assume the Artificial Consciousness created would be functionally identical to a human, albeit trapped in a computer. Thank you! :)
– Onyz Nov 20 '17 at 16:08

"What they can't do is feel emotion or be driven by an innate survival instinct the way humans can be." This statement has no foundation in evidence. Our brains are a complex yet mundane biological machine that produces emotion based on chemical processes. There's nothing to suggest that a machine designed by phenomenological beings could not replicate current human behavior. To be clear, i don't think that such a machine would have any resemblance to computers of today.
– Shadetheartist Mar 15 '18 at 22:04