Researchers have called for radical new legislation protecting people’s thoughts from being stolen and maybe even deleted.

Biomedical ethicists Marcello Ienca and Roberto Andorno believe that while rapid advances in neurotechnology have created opportunities in modern medicine, they also present new challenges for human privacy.

Writing in the journal Life Sciences, Society and Policy, the pair have warned that brain-hacking and “hazardous use of medical neurotechnology” could threaten the integrity of our thoughts.

The ethicists wrote: “We suggest that in response to emerging neurotechnology possibilities, the right to mental integrity should not exclusively guarantee protection from mental illness or traumatic injury but also from unauthorised intrusions into a person’s mental wellbeing performed through the use of neurotechnology, especially if such intrusions result in physical or mental harm.”

The proposal sets out four new human rights laws: the right to mental privacy, mental integrity, cognitive liberty and psychological continuity. It is hoped that, in the future, these laws could be used as safeguards preventing people’s brains from being read or stimulated without their consent.

Fear of cognitive intrusion is not paranoia born of science fiction, they say.

Last year, the US military successfully tested electrical brain stimulation technology aimed at enhancing the performance of soldiers in high-pressure situations.

In 2011, scientists at the University of California, Berkeley used brain scans to reconstruct scenes from movies that participants in the project had already watched.

Recently, Facebook announced it has set up a research division known as Building 8, a project designed to develop technology that would allow the social media giant to read users’ minds.

There are currently no laws governing the collection of brain data, and Ienca and Andorno fear “the indiscriminate leakage of brain data across the infosphere”, in the same way personal information is shared now.

Ienca said: “Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes.

“We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

If you’re like Nick Bostrom, Isaac Asimov, or (not to put myself on their level) me, you probably have a few, nay, many misgivings about the idea of artificial intelligence and the coming “robot revolution.” Asimov, in his typically perspicacious way, explored the ethical and moral dilemmas of artificial intelligences and robots in his sci-fi classic I, Robot, later adapted into a film. There, as we know, VIKI, an artificial-intelligence super-computer, takes over the world’s robots and basically imprisons humanity. For some of us, following the weirdness in financial markets, for example, the “dark pools” and algorithmic trading that now constitute the bulk of commodities and equities trading are tailor-made for all sorts of A.I. trouble. Even the popular American television series (one of my favorites, incidentally) Person of Interest explores not only the dangers of A.I., but of two such artificial intelligences battling it out with each other, with humanity caught in the middle. In one episode, the “evil” A.I. gives a little demonstration of its “powers” when it deliberately crashes the stock markets in mere seconds and then, just as quickly, rectifies the damage. Oxford philosopher Nick Bostrom, for his part, has been sounding warnings about A.I. for many years.

Well, if the following story shared by Mr. A. is any indicator, Bostrom’s and Asimov’s concerns may be entirely justified:

Consider just the disturbing implications about the new robot “Sophia” as outlined in this paragraph:

It is important to note several things that Hanson mentions. Sophia first tells us that she would like to be “an ambassador” to humans, as well as to continue her evolution through formal education, studying art, and eventually creating a business and having a family. Hanson explicitly states that Sophia will become as “conscious, creative, and capable as any human.” This statement is followed by a key mention of her not having the rights of a human. That might seem absurd to the uninitiated, but it is a serious ethical discussion already taking place among “roboethicists.” It is all but guaranteed to gain steam as robots are integrated in autonomous ways, whether on the battlefield, as self-driving vehicles (now programmed to sacrifice some humans over others), or certainly as they become visually and intellectually on a par with human beings. Even the mainstream Boston Globe addressed this more than two years ago, citing a 2012 paper from MIT.

At this juncture, the article goes on to mention the existence of – get this! – a Society for the Prevention of Cruelty to Robots, this in a society that chops up the unborn, sells their parts, harvests human organs, and makes people pay for the whole “privilege.”

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.