How Hacked Sex Robots Could Murder People

Sex robots could be hijacked by hackers and used to cause harm or even kill people, a cybersecurity expert has warned. Artificial intelligence researchers have consistently warned of the security risks posed by internet-connected robots, with hundreds recently calling on governments to ban weaponized robots. The latest warning comes from a cybersecurity expert who made the…

The celebrity YouTuber Logan Paul has apologised after sparking outrage by posting a video showing the body of an apparent suicide victim in Japan.

The 22-year-old American, who has 15 million subscribers on YouTube, was labelled “disrespectful” and “disgusting” after he joked with his friends about discovering the body in Aokigahara forest, a notorious suicide spot at the base of Mount Fuji.

The video, which Paul posted on Sunday, received millions of views before it was removed.

Paul and his friends, who were filming from various locations in Japan, reportedly came across the body moments after entering the forest. Their video showed the body of a man, whose identity is unknown, from several angles, with his face blurred.

A member of the group is heard remarking that he “doesn’t feel good”. Paul replies: “What, you never stand next to a dead guy?” and then laughs.

The exchange, and the decision to upload images of the victim, prompted a wave of criticism online.

The Breaking Bad actor Aaron Paul tweeted: “You disgust me. I can’t believe that so many young people look up to you. So sad. Hopefully, this latest video woke them up … Suicide is not a joke. Go rot in hell.”

Fellow YouTube star Kandee Johnson said: “Dear @youtube, after the Logan Paul video where he shows a dead body of a suicide victim, uses that for the title, makes heartless jokes next to the body, there needs to b age restrictions for certain creators. How is this allowed on YT? His followers are children! Horrifying.”

Paul later apologised to his 3.9 million followers on Twitter: “Where do I begin … Let’s start with this – I’m sorry,” he said.

“This is a first for me. I’ve never faced criticism like this before, because I’ve never made a mistake like this before.”

He added: “I didn’t do it for views. I get views. I did it because I thought I could make a positive ripple on the internet, not cause a monsoon of negativity. That’s never the intention. I intended to raise awareness for suicide and suicide prevention.”

But the initial statement was criticised by many, including the Game Of Thrones actor Sophie Turner, who tweeted: “You’re not raising awareness. You’re mocking. I can’t believe how self-praising your ‘apology’ is. You don’t deserve the success (views) you have. I pray to God you never have to experience anything like that man did.”

British Labour MP Melanie Onn, who had tweeted that she bought a Logan Paul hoodie as a Christmas present for her 10-year-old son, said the video was “dreadful”, adding: “I can’t believe he was able to put that up without any checks at all.”

Paul later issued a second statement of apology. “I want to apologise to anyone who has seen the video. I want to apologise to anyone who has been affected or touched by mental illness or depression or suicide, but most importantly I want to apologise to the victim and his family,” he said. “For my fans who are defending my actions, please don’t – they do not deserve to be defended.”

YouTube said Paul’s video violated its policies, but did not respond to calls to suspend him from the site.

“Our hearts go out to the family of the person featured in the video,” a YouTube spokeswoman said. “YouTube prohibits violent or gory content posted in a shocking, sensational or disrespectful manner. If a video is graphic, it can only remain on the site when supported by appropriate educational or documentary information and in some cases it will be age-gated.”

Aokigahara has gained worldwide notoriety as a suicide spot, with a record 105 bodies reportedly discovered there in 2003. Local police have stopped releasing the number of annual deaths in an attempt to reduce the area’s association with suicide.

The forest’s hiking trails are dotted with signs urging those with suicidal thoughts to consider their families and contact a suicide prevention group.

The number of Japanese who kill themselves has fallen in recent years, although the country still has the sixth highest suicide rate in the world.

The number of people who took their own lives dropped to 21,897 in 2016 – the lowest level in 22 years – according to government figures.

The number rose in the late 1990s and remained just above 30,000 for more than 10 years – a rate experts partly attributed to financial pressures caused by the collapse of the bubble economy in 1992 and the end of lifetime employment.

The lack of services for people with mental health problems, as well as debt and serious illness – particularly among elderly people – have also been cited as common causes of suicide in Japan.

The figure has remained below 30,000 a year since 2012.

In the UK, Samaritans can be contacted on 116 123. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. In Australia, the crisis support service Lifeline is 13 11 14. Other international suicide helplines can be found at www.befrienders.org.

Apple has apologised to customers for deliberately slowing the performance of older iPhone models without users’ consent.

The US tech company also announced a $50 (£37) reduction in the cost of iPhone battery replacements, down from $79 to $29, and an iOS (operating system) software update, due in early 2018, giving users more visibility into their iPhone’s battery health.

The apology comes after Apple admitted to slowing down the iPhone 6, 6S, 7 and SE – when their batteries are old, cold or low on charge – to prevent abrupt shutdowns.

Apple said the problem was that ageing lithium batteries delivered power unevenly, which could cause iPhones to shut down unexpectedly – endangering the delicate circuits inside.

A statement on Apple’s website said: “We’ve been hearing feedback from our customers about the way we handle performance for iPhones with older batteries and how we have communicated that process. We know that some of you feel Apple has let you down.

“We apologize. There’s been a lot of misunderstanding about this issue, so we would like to clarify and let you know about some changes we’re making.

“First and foremost, we have never – and would never – do anything to intentionally shorten the life of any Apple product, or degrade the user experience to drive customer upgrades.”

The company said it intentionally slowed the performance of the older iPhones because, when their batteries wear to a certain level, they can no longer supply the peak current demanded by the phones’ processors.

When the processor demands more current than the battery can supply, the phone abruptly shuts down to protect its internal components, as was the case with the iPhone 6S – for which Apple was forced to replace batteries.

How Facebook Teaches Photos to Talk

Facebook’s News Feed is a feast for the eyes, filled with photos, videos and status updates.

That’s not great for visually impaired individuals, so Facebook has turned to artificial intelligence to improve their experience. A blind person can now hear an audio message describing a friend’s photo that shows people dancing or riding bikes.

To do so, Facebook’s algorithm had to be taught what it was seeing.

Artificial intelligence is the secret sauce behind making a project like this possible. It can do everything from translating languages and understanding human speech to identifying diseases. But AI advances aren’t without flaws.

Even as artificial intelligence excels, the human element — which includes biases and oversights by those who train the system — surfaces in alarming ways. For example, a Microsoft bot named Tay once sparked outrage when it tweeted attacks against Jews and feminists.

Dario Garcia, an artificial intelligence engineer at Facebook, is leading the project to identify what is happening in photos and read it out loud for the blind.

“If you get it wrong, the consequences are pretty bad,” Garcia said. “[Our project is] not a self-driving car, where someone will die if you get it wrong. But you can give a very misleading experience to people that most likely don’t have a clear way of knowing the algorithm is wrong.”

Garcia’s team gathered a sample of 130,000 public images that featured people. Staffers, called annotators, wrote a single line description of each photo. The images became examples that showed the AI what a photo of a person riding a bike or a horse looked like.

The team faced tricky questions. If only part of a person’s body appeared in an image, Garcia and the annotators would need to discuss how that influenced the description.

“You become almost obsessed with what the current definition of a person is,” Garcia said.

The conclusions of the group impacted how billions of photos are understood.

Over time, the algorithm learned what was happening in photos and developed its own captions. After caption writing was tested, some images were relabeled to correct mistakes. The AI also learned from those corrections and strengthened its predictions in what Garcia calls a virtuous cycle.
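The feedback loop Garcia describes — predict, collect human corrections, retrain — can be illustrated with a toy Python sketch. All names here are hypothetical, and a lookup table stands in for the real neural captioner so the cycle itself is visible:

```python
# Toy sketch of the predict -> correct -> retrain "virtuous cycle".
# The real system is a neural network; a lookup table stands in here.

def predict(model, image_tags):
    """Return the stored caption for these tags, or a generic fallback."""
    return model.get(frozenset(image_tags), "photo")

def retrain(model, corrections):
    """Fold human corrections back into the model."""
    for tags, fixed_caption in corrections:
        model[frozenset(tags)] = fixed_caption
    return model

# Initial model, trained from annotator-written examples.
model = {frozenset({"person", "bike"}): "a person riding a bike"}

# First pass: the model has never seen a horse, so it falls back
# to a generic caption, and an annotator supplies a correction.
first_guess = predict(model, {"person", "horse"})  # generic "photo"
corrections = [({"person", "horse"}, "a person riding a horse")]
model = retrain(model, corrections)

# Second pass: the correction now informs the prediction.
print(predict(model, {"person", "horse"}))  # a person riding a horse
```

The point of the sketch is only the shape of the loop: human labels seed the model, its mistakes are relabeled, and the relabeled examples feed the next round of training.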

When the system launched in April 2016, it identified only objects and humans, but it has since been updated to identify 12 distinct actions in its captions.

To use the feature, a blind person needs to access Facebook with a screen-reader — software that helps a visually impaired reader by using a speech synthesizer or braille display — and focus on the image.

There’s still room to improve. Given the service’s limitations, the National Federation of the Blind recommends that Facebook users who want blind people to have access to their photos include a detailed caption.

Matt King, a blind engineer at Facebook who contributed to the project, compares today’s AI systems to machines from the 1980s that read books aloud to blind users. Those machines were the size of washing machines, couldn’t handle fancy typefaces, and needed clean pages to work from.

“Artificial intelligence is creating a path to a world where everyone can communicate in ways they feel are most natural and can do so without leaving anyone feeling excluded,” King said.

He says he’s optimistic about Facebook’s progress so far.

Facebook’s advancements have also been helped along by Yann LeCun, the company’s director of AI Research. LeCun, who joined Facebook in 2013 and is also a professor at New York University, is one of the biggest names in the AI field. He’s credited with developing the convolutional neural network, a popular AI technique that has been used for years in banks and ATMs to read the numbers on checks.
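The core operation of a convolutional network is simple enough to show in plain Python: a small filter slides over an image and responds where it matches a local pattern, such as the edge of a handwritten digit. This is only a toy illustration of the basic building block, not LeCun's actual check-reading system:

```python
# Toy 2D convolution: slide a small kernel over a grid of pixel
# values and sum the element-wise products at each position.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector: responds strongly where pixel values
# change from dark (0) to bright (1) moving left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))  # [[0, 2, 0], [0, 2, 0]]
```

A full network stacks many such filters, learned from data rather than hand-written, which is what lets it recognize digit shapes regardless of handwriting style.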

Despite its advancements, LeCun knows there are still limitations with AI. LeCun’s wife, who is French, cannot use voice recognition apps because they struggle to understand her accent.

“There’s not a lot of people speaking English with a French accent,” LeCun explained to CNN Tech. “It’s not that [engineers] don’t like French accented people. It’s just that there’s not much data.”