Milgram asked test subjects to deliver increasingly powerful electric shocks to a stranger each time the stranger made mistakes on a memory test. Despite serious moral qualms, many participants complied, not knowing that the squirming stranger was actually an actor.

Milgram's motivation was the question of whether Nazi war criminals could really 'just be following orders'. His experiments suggested people will do horrendous things if someone in a position of authority tells them to. But further work in this area has been hampered by ethical concerns.

The UCL group had their volunteers don VR helmets to experience a simulated version of the experiment. The setup was designed to match the original, except that the strangers receiving the shocks were computer-animated avatars.

Yet the UCL team conclude their test subjects reacted on "the subjective, behavioural and physiological levels as if it were real in spite of their knowledge that no real events were taking place." Measurements of heart rate and heart rate variability showed they responded as though the situation were real. They were just as aware and worried that they were doing wrong, but shocked the stranger anyway.

Coverage of the work suggests the door is now open to trying what lead researcher Mel Slater calls "situations that are otherwise impossible whether for practical or ethical reasons."

I think it's more interesting to consider this as an insight into how people take their real life morals with them into cyberspace. It is common to read news coverage that paints online worlds, chat rooms and the internet in general to be morally bereft - places where people go to escape the moral shackles of real life.

To an extent this is true, but this experiment shows that people keep their values in what they know to be virtual arenas. I think the next step is to find out more. Here are three things I'd like to know.

1. Do the stronger morals like 'don't torture strangers' translate better than weaker ones like 'don't flirt with strangers' or 'don't tell lies'? Are there some moral values that most people just won't breach in virtual environments?

2. Is it the immersive experience that makes a difference? Will people always adhere more closely to their real-life values when the virtual experience is more realistic? It seems likely; perhaps the internet would be a cleaner place if we all had VR.

3. How much of a difference does being watched make? Perhaps participants in the UCL trial would have been less concerned about hurting a simulated person if they had done it from home, without real-world observers.

Last of all, I am not convinced by Slater's idea that researchers can now use VR to study extreme social situations like "violence associated with football, racial attacks, gang attacks on individuals" with impunity.

Milgram's experiment caused ethical concern because of its use of deception and the fact that experimental subjects were placed in a distressing situation. It seems to me that if we are to be morally consistent, the results from the UCL group should trigger some of the same concerns.

There was no deception. But as the team write, participants responded on "the subjective, behavioural and physiological levels as if it were real". If the same distress was caused, surely the experiment is just as unethical?

It is interesting to hear that people "take their morals with them" into cyberspace. I play World of Warcraft, and I see evidence of this every day. When an item 'drops' from a creature that has been killed by two or more players working together, there is the opportunity to steal (or 'ninja', as it is known) the item. The vast majority of people would never dream of doing this, even if the group is composed of complete strangers. You do get the odd one who will, and they quickly get a bad name for themselves. This leads me to think that "reputation" is perhaps just as important as any innate moral code we may have.

If our "morals" come with us to enable us to function as a group, then those "morals" are also linked with an individual's reputation (or wish to have a good reputation). This could be why most people in these experiments (and indeed in real life) tend to be caught up in "mob rule" to do terrible things that they would not do as an individual. An "authority" lends us their reputation so we can act without damaging ours (and perhaps even threatens our reputation if we do not do as they say).

I think it depends upon how closely the e-world matches reality. My kids used to love loading up my latest vast metropolis in SimCity and destroying it with one disaster after another: tornadoes, earthquakes, fires, alien invaders, etc. I don't think they would be that callous in real life. Certainly, they had enough sense not to save their mayhem to the original file. (Perhaps that was self-preservation?)

Thanks for your comments. I think the three questions you raise are interesting.

I'm not convinced by the ethics argument, though: here no pressure was put on the participants to stay in the experiment, unlike in Stanley Milgram's original, where subjects were told "You have no choice but to continue" and so on. Our participants were told from the start that they could withdraw at any time without giving reasons, and they were also warned in advance that some people might find it stressful (or not).

The fact, though, that their automatic systems tended to respond as if it were real means that this paradigm could be used for study (though see the comments in the section "Speculations on Obedience in Virtual Reality" in the original paper).

I think that being watched does matter. One of the big problems with behavior on the internet is anonymity.

It should also be noted, though, that there seem to be a small number of internet sociopaths: people who would otherwise be perfectly normal in the real world, but who are incapable of perceiving any humanity in the other players in their virtual worlds, and thus behave in an absolutely reprehensible manner.