Artificial Intelligence Intervenes in Human Emotions: What Will We Miss in the Future World?

Time：2017-10-18 18:04

As artificial intelligence technology develops at an accelerating pace, more and more of what once seemed impossible is becoming possible. Artificial intelligence has become a broader and deeper presence in human life, and it has even begun to intervene in human emotions.

Musk predicted that by the end of 2017, Tesla's driverless cars would be able to travel safely across the United States without human intervention, and that within a decade social robots would live alongside humans and accomplish many household and care tasks.

It is generally believed that by 2050 we will make progress beyond these specific domains and eventually achieve artificial general intelligence (AGI). AGI is central to the idea of the technological singularity: computers surpass humans at any cognitive task, and human-computer integration becomes commonplace. What happens after that, nobody can say.

Do you have an artificial intelligence strategy? Do you wish you had one?

One intriguing idea is to install computer components in the human body so that people can process data faster. Some in the field of artificial intelligence envision a "neural lace" that would act as an extra cortex outside the brain, connecting us to electronic devices quickly and efficiently. This would be a major step beyond the machine parts that already make some of us "semi-robots," such as implanted pacemakers and titanium alloy joints. Artificial intelligence will also figure prominently in military and defense applications. The concept of fully autonomous weapons is highly controversial: such a weapon system could search for, identify, select, and destroy a target based on algorithms, learning from past security threats, with no human involvement at all. It is a rather terrifying concept.

These ideas about artificial intelligence dominating humanity's future read almost like science-fiction dystopia, reminiscent of scenes from The Terminator.

Accidental discrimination

Humanity may still be a long way from destruction, but warnings about the ethics of AI are already sounding the alarm. Just last month, machine learning algorithms came under fire for actively suggesting bomb-making components to Amazon shoppers, perpetuating gender inequality in job advertisements, and spreading hate messages through social media. Most of the blame for these errors lies in the quality of the data and the nature of machine learning: machines trained on human data will draw less-than-perfect conclusions from it. Today, such results pose serious problems for the governance of algorithms and AI systems in everyday life.

Recently, a young American man with a history of mental illness was denied a job because of his unsatisfactory results on an algorithmic personality test. He believes he was unfairly and illegally discriminated against, but because the company does not understand how the algorithm works, and because labor law does not yet explicitly cover machine decision-making, he has no legal recourse. China's "social credit" plan has raised similar concerns. Last year, the program collected data from social media (including friends' posts) to assess the quality of a person's "citizenship" and used it to make decisions, such as whether to grant that person a loan.

The need for artificial intelligence ethics and law

Establishing a clear ethical system for the operation and supervision of AI is necessary, especially when governments and businesses give priority to certain goals, such as acquiring and maintaining power. Israeli historian Yuval Noah Harari has discussed the paradox that driverless cars pose in the form of the trolley problem. Innovative projects such as MIT's Moral Machine attempt to collect human data on machine ethics.

However, ethics is not the only domain where artificial intelligence touches human well-being. Artificial intelligence already has a major emotional impact on humans. Nevertheless, as a subject of artificial intelligence research, emotion remains largely ignored.

Feel free to browse the 3,542 peer-reviewed articles on artificial intelligence published in the scientific literature database over the past two years. Only 43 of them, or 1.2%, contain the word "emotion." Even fewer actually describe research on emotion in artificial intelligence. If we are contemplating the singularity, emotion should be part of the cognitive architecture of artificial machines. Yet 99% of AI studies seem not to recognize this.

Artificial intelligence understands how humans feel

When we talk about emotion in artificial intelligence, we are referring to several different things. The first is machines that can recognize our emotional state and act accordingly. The field of affective computing is evolving rapidly, using biometric sensors to measure skin responses, brain waves, facial expressions, and other emotional data. Most of the time, the results are accurate.

This technology can be used for good or ill. A company could gather feedback on your emotional reaction to a movie and sell related products to you in real time through your smartphone. Politicians could meticulously craft messages that appeal to specific audiences. A social robot could adjust its responses to better help patients in medical or nursing settings, and a digital assistant could use a song to help lift your mood. Market forces will drive this area forward, expanding its reach and improving its capabilities.

How do we feel about artificial intelligence?

This is the second emotional dimension of artificial intelligence: the human emotional response to it, a field where research has made little progress. Humans seem ready to form connections with artificial intelligence just as we do with most technologies: we attribute human personalities to inanimate objects, ascribe intentions to appliances, and project emotion onto the technology we use, as in "It's angry with me, that's why it isn't working."

This is the so-called Media Equation. It involves a kind of doublethink: intellectually, we understand that machines are not sentient beings, yet we respond to them emotionally as if they had feelings. This may stem from one of our most basic human needs: social relationship and emotional connection. Without these, humans become depressed. This need drives humans to connect with other people and animals, and even with machines. Sensory experience is an important part of this drive-and-reward mechanism, and it is also a source of pleasure.

Surrogate social connection

When our environment offers no real experience of connection and belonging, we replicate that experience through television, movies, music, books, video games, and anything else that can provide an immersive social world. This is the Social Surrogacy Hypothesis, an empirically supported theory in social psychology that is now beginning to be applied to artificial intelligence.

Basic human emotions arise even in the face of disembodied artificial intelligence: happiness at a digital assistant's compliment, anger at the algorithm that rejected a mortgage application, fear when riding in a driverless car, and sadness when an AI refused to verify my Twitter account (I am still sad about that).

Robots

Humans have stronger emotional responses to embodied artificial intelligence, that is, to robots. The more a robot resembles a human, the stronger our emotional response to it. We are drawn to humanoid robots and express positive emotions toward them. When we see them harmed, we feel sympathy and distress, and if they reject us, we may even feel sad.

Interestingly, however, if a robot is almost exactly like a human but falls just short, our assessment of it drops suddenly and we reject it. This is the so-called "uncanny valley" theory. The resulting design principle is to make robots look less than fully human at this stage, unless one day we can make robots indistinguishable from humans.

Gentle touch

Artificial intelligence now uses haptic technology, a touch-based experience, to further deepen the emotional bond between humans and robots. Perhaps the best-known example is Paro, a furry robotic seal that has proved useful in care institutions in several countries.

Social and emotional robots have many potential uses. These include caring for the elderly and helping them live independently, and helping people who are isolated or living with dementia, autism, or disabilities. Touch-based sensory experience is increasingly being integrated into technologies such as virtual reality, and this is part of that trend.

In other areas, artificial intelligence may take on tasks such as day-to-day household chores or teaching. A survey of 750 South Korean children aged 5 to 18 found that although most had no problem accepting lessons taught by an artificial intelligence robot, many expressed concern about the emotional role of an AI teacher: could a robot give students advice or emotional support? Even so, more than 40% approved of replacing teachers with AI robots.

As Harvard University psychologist Steven Pinker has said, experiences of social surrogacy like those described above let us deceive ourselves. We do not truly experience social connection, but we trick our brains into believing we do, so that we feel better. The copy, however, is never as good as the real thing.

Conclusion

Clearly, people can experience real emotions in interactions with artificial intelligence. But in a future of driverless cars, virtual assistants, robot teachers, cleaners, and playmates, a future that is not so far away, will we be missing something?

This scenario recalls Harry Harlow's famous experiments, in which isolated monkeys chose the soft, cloth-covered "mother" that gave no milk over the cold wire-mesh mother that dispensed it. Could we achieve everything we want technologically, only to find that our basic emotional needs and the pleasures of real-world sensory experience go unmet? Will the luxuries of the future be the opposite of mass-produced junk food, namely genuine sensory experience and contact with real people rather than robots?

The answer is, I don't know yet. But the fact that 99% of AI research pays no attention to emotion suggests that if emotion does come to play a larger role in artificial intelligence, it will either be an afterthought or be driven by the fact that emotional data gives AI devices and their owners more power and money. The digital humanism movement may help remind us that as we move toward the singularity and human-machine integration, we should not neglect our old mammalian brains and their need for emotional bonds. The OpenAI project, whose goal is for everyone to share in the benefits of artificial intelligence, is a step in that direction. Let us go further and consider emotional well-being in the field of artificial intelligence. Who knows what that might bring us?