The first code of roboethics could be said to be Isaac Asimov's Three Laws of Robotics. You may recall these from the movie I, Robot, starring Will Smith. They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In January 2004, the first International Symposium on Roboethics was held in Sanremo, Italy. Here, for the first time, the word Roboethics was officially used. The symposium aimed to open a debate among scientists and scholars of the sciences and humanities, with the participation of people of goodwill, about the ethical basis that should inspire the design and development of robots.

During this symposium, it became clear that there are three positions held by people in the robotics community:

1. Those who are not interested in the ethics of robotics. They believe their work is strictly technical and they do not have a moral or social responsibility.

2. Those interested in short-term ethical questions. They want to know whether a given application is good or bad here and now.

3. Those interested in long-term concerns, such as the global ethical questions that robotics will raise over the coming decades.

South Korea is one of the world's most hi-tech societies. Citizens enjoy some of the highest-speed broadband connections in the world and have access to advanced mobile technology long before it hits Western markets. The government is also well known for its commitment to future technology. A recent government report forecast that robots would routinely carry out surgery by 2018. The Ministry of Information and Communication has also predicted that every South Korean household will have a robot by between 2015 and 2020.

It should therefore come as no surprise that South Korea was the first to start drawing up a Robot Ethics Charter, expanding on Asimov's three laws, which is aimed at preventing robots abusing humans and humans abusing robots.

"Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency.

The main focus of the charter appears to be on dealing with social problems, such as human control over robots and humans becoming addicted to robot interaction, just as people have become addicted to the internet and online activities. The document will also deal with legal issues, such as the protection of data acquired by robots and establishing clear identification and traceability of the machines.

In the UK, a government study predicted that in the next 50 years robots could demand the same rights as human beings. The European Robotics Research Network is also drawing up a set of guidelines on the use of robots.

The study's authors reckon that within the next 50 years, robots could take on certain responsibilities, such as voting, the obligation to pay taxes, and perhaps compulsory military service. Society would in turn have a duty of care to its new digital citizens: countries would be obliged to provide social benefits including housing and even "robo-healthcare".

While some experts welcome the introduction of the Robot Ethics Charter and similar proposals, others see the idea as symptomatic of a social-protection mentality that has got out of control.