The Daily Fail wrote:Would YOU turn your loved one into a robot clone? Swedish scientists are using AI to build androids that are 'fully conscious copies' of dead relatives, report claims

– Scientists are looking for volunteers to offer up their dead relatives for the study
– They would build realistic robot clones based on deceased family and friends
– Using artificial intelligence, the scientists can reconstruct the voices of the dead
– Experts have previously detailed how we may be able to preserve our dead family members in the future, perhaps by uploading their minds to machines

So we'll have 17 generations angrily criticizing us from the mantelpiece? Hope we'll still be able to pull the plug…

That's the plot of an episode of Black Mirror. The episode was called "Be Right Back," I think.

Spoiler:

It's been a while since I saw it, but the short story is that a woman's husband died in an accident or something. She hears about a new service that's supposed to "recreate" dead people based on all the information about them on social media and the like. There's a basic level that just lets you communicate with them over the Internet, but they also have an "advanced" service that actually puts them back in a realistic body.

It was very realistic, but not quite him. Ultimately she found it too creepy and decided to kill him again (or something like that; like I said, it's been a while since I saw it and I can't really remember).

A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare

Well, of course masturbation is already prohibited, but these guys are worried about whether you think of Rachel Rosen as a real woman.

From last year:

A paper released by the Foundation for Responsible Robotics on Wednesday cited work in 2014 by a pair of Islamic scholars in Malaysia which determined that yes, owning or using a sex robot would be illegal under their interpretation of Shariah law, the rules governing the strictest version of the Islamic faith. But just because Christian or Jewish theologists haven’t weighed in doesn’t mean those faiths will be any more receptive to the notion of tasting a synthetic human’s carnal delights.

In the paper, the FRR noted that after aggregating several studies asking heterosexual men whether or not they would purchase a sex robot, an unspecified number of respondents said they would shun an artificial partner on religious grounds. This was interesting, the FRR pointed out, because the study’s authors could find very little theological work that discussed sex robots. The only example was a 2014 paper published by two robotics specialists and Islamic scholars at the International Islamic University of Malaysia. The authors, Yusuff Jelili Amuda and Ismaila B. Tijani, conclude that “having intercourse with robot is [an] unethical, immoral, uncultured, slap to the marriage institution,” and should be punished in much the same way as adultery, with lashes or even being stoned to death.

Google is one of the leading technology companies in artificial intelligence, which landed it a juicy government contract last year to work on “Project Maven.” The goal of this project was to process and catalog drone imagery, and Google’s rank-and-file workers were none too pleased. After a series of protests, Google recently announced it would end work on Maven and release guidelines for the use of artificial intelligence. Now, that document is available. Google lists seven core values for its AI research and lists several applications that are off-limits.

We are still in the very early days of useful artificial intelligence, so there aren’t a lot of specifics in Google’s new guidelines. Google’s general objectives for AI include being socially beneficial, avoiding creating or reinforcing unfair bias, being built and tested for safety, being accountable to people, incorporating privacy design principles, upholding high standards of scientific excellence, and being made available for uses that accord with these principles.

So, what does all that mean? It sounds rather like a fancy way of saying "don't be evil."

No computer has yet shown features of true human-level artificial intelligence, much less conscious awareness. Some experts think we won't see it for a long time to come. And yet academics, ethicists, developers, and policy-makers are already thinking hard about the day when computers become conscious, not to mention worrying about more primitive AI being used in defense projects.

Now consider that biologists have been learning to grow functioning “mini brains” or “brain organoids” from real human cells, and progress has been so fast that researchers are actually worrying about what to do if a piece of tissue in a lab dish suddenly shows signs of having conscious states or reasoning abilities. While we are busy focusing on computer intelligence, AI may arrive in living form first, and bring with it a host of unprecedented ethical challenges.