This year’s CeBIT featured the Dronemasters Summit, whose focus was on business applications for flying robots, used for example by energy companies to monitor the condition of overhead power lines and substations. But the crowd favorite at CeBIT had to be “Pepper,” a humanoid robot developed by the French company Aldebaran together with IBM, which can speak 20 languages, recognizes its interlocutor’s emotions from their facial expressions, and is going to be used not only in Japanese temples of consumerism but also, in the near future, on German cruise ships.

Pepper is a robot designed for human interaction; he is intended to make people happy – to enhance people’s lives, facilitate relationships, have fun with people, and connect them to the outside world. He can recognize faces, speak, hear, and move around autonomously. He understands basic human emotions, like happiness and sadness. He can identify speech, inflections, and tones in our voices and use these to determine whether his human is in a good or a bad mood. He can also learn from his interactions, as his 25 sensors and cameras provide detailed information about the environment and the people he interacts with.

“We will see robots and AI do the utterly unexpected, moving effortlessly beyond the limits of human imagination.”

Will Robots Be Our Managers and Bosses?

Sure, there are people who would prefer a robot to a human boss. If a robot can do something a human can do, it is only a matter of time before it does it more cheaply and more efficiently. Robots will develop the capacity to deal with emotions such as stress, fear, and anger, and many will even be programmed to handle self-motivation. Although the technological limitations are disappearing, social, moral, and ethical ones remain – but will they be enough to persuade us to trust artificial intelligence? Or to accept a robot as a team member, or even as a manager? Will we be able to express our emotional concerns to a robot manager?

Perhaps many of us would prefer to have the opportunity to choose: to leave the robotic jobs to the robots and find more fulfilling work for humans to do – and, in any case, never to obey a robot.

But the results of a 2014 experiment carried out by James Young, an assistant professor at the University of Manitoba, and Derek Cormier, a graduate student in Human-Computer Interaction at the University of British Columbia, show that many people will follow robots placed in positions of authority in daily, mundane tasks.

“When a good manager speaks, employees not only listen but act based on what is said. In at least some cases, robots may one day be the ones giving the instructions.”

So we had better be prepared for the rise of the robot bosses. Robots and software may soon be more than capable of doing your boss’s job, but the ultimate question may be whether you would choose to work for them.

Educating Robots as Humans

Robots and AI are often trained using a combination of logic and heuristics, and reinforcement learning. The logic-and-heuristics part has reasonably predictable results: we program the rules of the game or problem into the computer, along with some human-expert guidelines, and then use the computer’s number-crunching power to think further ahead than humans can. This is how the early chess programs worked. They played ugly chess, but it was enough to win.
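The rules-plus-heuristics recipe can be sketched as minimax search: look ahead through the game tree, and fall back on a human-written evaluation heuristic at the depth limit. This is a toy illustration with a made-up game tree, not any particular chess engine:

```python
# Toy illustration of how early chess programs combined hand-coded rules
# with brute-force look-ahead: minimax search over a game tree, falling
# back on a human-written heuristic score when the depth limit is reached.
# The tree and scores below are invented for illustration.

def minimax(node, depth, maximizing, evaluate):
    """Return the best achievable score from `node`, looking `depth` plies ahead."""
    children = node.get("children", [])
    if depth == 0 or not children:          # depth limit or terminal position
        return evaluate(node)               # heuristic stands in for human expertise
    scores = [minimax(c, depth - 1, not maximizing, evaluate) for c in children]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree with heuristic scores at the leaves.
tree = {"children": [
    {"children": [{"score": 3}, {"score": 5}]},   # opponent replies, picks the minimum: 3
    {"children": [{"score": 2}, {"score": 9}]},   # opponent replies, picks the minimum: 2
]}

best = minimax(tree, depth=2, maximizing=True, evaluate=lambda n: n.get("score", 0))
print(best)  # prints 3: the best score the first player can guarantee
```

Real programs add refinements such as alpha-beta pruning, but the core idea – exhaustive look-ahead bounded by a heuristic – is the same.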

The challenge of educating robots as we educate humans is that “anything that is inherently human is always very difficult to translate into a computer.”

How should we educate robots – or should we say program them – so as to integrate robots and artificial intelligences into human society? Three broad approaches stand out:

“Data-driven” machine-learning approaches let the machine learn patterns from data, unconstrained by human experience or expectations.

“Theory-driven” approaches attempt to model mental processes in software. Abstract algorithms can mimic decision-making and other cognitive processes without worrying about how such processing occurs in the brain.

“Reinforcement-learning” approaches involve reward and punishment. Researchers believe the brain employs two distinct types of process in reinforcement-learning situations. One is a simple, rapid, habitual form that predicts the consequences of actions using expectations based on how often an action has been rewarded in the past. The difference between the predicted reward and the one actually obtained is a “reward prediction error,” which can be used to update expectations. The other is a slower, more deliberative form of goal-oriented control, which uses knowledge about the world to think through (often multiple) actions to assess probable consequences. This approach is more reliable, being able to rapidly adapt to changes in the environment, but it is also much more computationally intensive and costly.
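The habitual process described above can be sketched in a few lines: an expectation is nudged by the reward prediction error, the gap between the predicted and the obtained reward. The reward sequence and learning rate here are illustrative assumptions, not data from any study:

```python
# Sketch of the habitual reinforcement-learning process: update an
# expectation by a fraction of the reward prediction error (RPE).
# The learning rate and reward sequence are made up for illustration.

learning_rate = 0.1      # how strongly each surprise updates the expectation
expected_reward = 0.0    # initial prediction for a given action

rewards = [1.0, 1.0, 0.0, 1.0, 1.0]   # hypothetical outcomes of repeating the action

for reward in rewards:
    prediction_error = reward - expected_reward          # the reward prediction error
    expected_reward += learning_rate * prediction_error  # update toward what happened

print(round(expected_reward, 3))  # prints 0.329
```

The expectation creeps toward the average observed reward; the deliberative, goal-oriented process would instead simulate the consequences of candidate actions using a model of the world, which is why it is slower and costlier.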

How to Teach Ethics to Robots

In the next 10 to 20 years, robots will be doing everything from driving our cars to fighting our wars, and will be taking the place of humans in the most intimate of roles. As robots increasingly replace humans in some of the most commonplace functions of everyday life, they will need ethical guidance. Our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as preventing humans from unjustly avoiding responsibility for their actions.

There are at least three things we might mean by “ethics in robotics”: the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots.

The best known prescription for robots is the Three Laws of Robotics formulated by Isaac Asimov (1942):

A robot may not injure a human being, or through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second law.
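The three laws form a strict priority ordering: each law yields to the ones before it. That ordering can be sketched as a rule check – a toy illustration with invented predicates, since detecting “harm” in the real world is the genuinely hard problem the rest of this section is about:

```python
# Toy sketch of Asimov's Three Laws as a strict priority ordering.
# The action attributes ("harms_human", etc.) are invented for
# illustration; deciding whether an action actually harms a human
# is the hard, unsolved part.

def permitted(action):
    """Return (allowed, reason) for a candidate action, checking laws in priority order."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False, "violates First Law"
    # Second Law: obey human orders, unless that conflicts with the First Law.
    if action.get("disobeys_order"):
        return False, "violates Second Law"
    # Third Law: protect own existence, unless that conflicts with Laws 1 or 2.
    if action.get("endangers_self"):
        return False, "violates Third Law"
    return True, "permitted"

print(permitted({"harms_human": True}))     # prints (False, 'violates First Law')
print(permitted({"endangers_self": True}))  # prints (False, 'violates Third Law')
print(permitted({}))                        # prints (True, 'permitted')
```

The code makes the priority structure explicit, and also makes its inadequacy obvious: everything interesting is hidden inside those boolean labels.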

These laws can be considered the first steps of robot ethics, but they were laws for robot slaves. They were about preventing robots from doing harm – harm to people, to themselves, to property, to the environment, to people’s feelings, etc. Today, robots have grown in ability and complexity, and it is necessary to develop more sophisticated safety control systems that prevent the most obvious dangers and potential harms.

As robots become more involved in the business of understanding and interpreting human actions, they will require greater social, emotional, and moral intelligence.

For robots that are capable of engaging in human social activities, and thereby capable of interfering in them, we might expect robots to behave morally towards people–not to lie, cheat or steal, etc.–even if we do not expect people to act morally towards robots.

Ultimately it may be necessary to also treat robots morally, but robots will not suddenly become moral agents. Rather, they will move slowly into jobs in which their actions have moral implications, require them to make moral determinations, and which would be aided by moral reasoning.

If we want robots to behave more like equals, robots will need to behave ethically and morally as we do. Unfortunately, ethics and morality are not reducible to heuristics or rules.

Will robots ever have the empathy and intrinsic morality of human beings?

Robots should not only be able to learn to imitate human empathetic and ethical behavior; they will have to acquire ever greater capabilities and ethical sophistication.

Avoiding the Creation of Psychopathic and Sociopathic Robots

If we do not yet know what powerful psychological forces make good people do bad things, we cannot take the risk that good robots will do bad things.

If, as it seems, the future holds living with robots, and many humans will have a robot as a manager, boss, or leader, we must ensure from now on that companies cannot build sociopathic or psychopathic robots.

Since I am not sure this can be achieved, given the morals and nature of enterprises, companies must have mechanisms to avoid putting robots devoid of morals and ethics at the forefront of processes that could endanger humans and other robots.

It is extremely important to open a debate on developing technology that forces companies to build robots capable of ethical and moral learning. And it is also necessary to design and implement the controls needed to ensure that the most advanced robots are built according to new and more stringent laws of robot ethics and morals.

It is important to include in our universities the field of computational psychiatry (a discipline that brings together bedfellows from disparate departments), which would allow humans to guarantee that robots are working ethically and morally. And we need companies to develop sophisticated tools for computational psychiatrists: every robot psychiatrist will want to know which treatment will work best for a given robot.

Whether you view it as ethical robots, or just as robots with machine-augmented human cognition or human-assisted machine cognition, it comes back to one simple fact: to monitor and control robot behaviour, we will need trained people.