An Uber-bad way to handle a data breach

WELCOME to Connected Rights, your twist in the tale of digital rights news and analysis.

UBER HAS DEMONSTRATED PRECISELY WHAT NOT TO DO when a major data breach happens. Yesterday, the company confessed that around a year ago it paid off hackers who had stolen the personal information of 57 million Uber users and drivers around the world: http://bit.ly/2zVJSto

The data, which included names, email addresses and mobile phone numbers – plus driver’s license details in the cases of 600,000 Uber drivers in the US – was stored on a third-party cloud service. When Uber found out about the theft, it identified the hackers and “obtained assurances that the downloaded data had been destroyed” in exchange for a reported $100,000.

Uber did not tell regulators, drivers or customers about the breach at the time. It had also not encrypted the sensitive data that it was holding. The coverup, revealed by new-broom CEO Dara Khosrowshahi, probably leaves Uber severely liable in states such as California and in New York, where the state attorney general’s office has opened an investigation. A negligence lawsuit has already been filed: https://bloom.bg/2AmM0ef

Regular readers of this newsletter will no doubt also be aware that, had this happened after May 2018, Uber would have been crucified under the EU General Data Protection Regulation, which provides for fines of up to 4 percent of global annual turnover for security fiascos such as this one.

So, all together now: don’t store personal data with terrible security, and if the worst happens, tell regulators and tell your customers ASAP. Otherwise you deserve what you get.

GOOGLE COLLECTS ANDROID USERS’ LOCATION DATA even if they’ve turned off location services and haven’t even stuck a SIM card in their phone yet, according to a Quartz scoop: http://bit.ly/2AjWcEv

The location comes from nearby cell towers (obviously, if there’s no SIM card, a Wi-Fi connection is necessary to send the towers’ addresses to Google). And although Google may not have done anything untoward with the information, the fact that it was collected and sent could prove a security risk under certain circumstances.

According to the piece: “The cell tower addresses have been included in information sent to the system Google uses to manage push notifications and messages on Android phones for the past 11 months, according to a Google spokesperson. They were never used or stored, the spokesperson said, and the company is now taking steps to end the practice after being contacted by Quartz. By the end of November, the company said, Android phones will no longer send cell-tower location data to Google, at least as part of this particular service, which consumers cannot disable.”

Meanwhile, in an intriguing twist to the tale, it seems Google nemesis Oracle may have had a part in ensuring Google’s transgressions came to light: http://for.tn/2jeOoc0

SKYPE HAS DISAPPEARED FROM APP STORES IN CHINA, and it’s not precisely clear why. Apple said the Ministry of Public Security had told it that “a number of voice over internet protocol apps do not comply with local law”, so it removed them, Skype included. But which local law? http://for.tn/2zpeRyM

Was it the law that demands the ability to bypass encryption? Skype doesn’t do end-to-end encryption, so maybe not. Perhaps it was the law that demands the use of real names in online communications. Either way, these are not good times for non-Chinese online communications companies operating in the country.

MEANWHILE, SKYPE ALSO GOT INTO TROUBLE in Belgium, for not handing over message and call data to investigators in a criminal case: http://reut.rs/2z7KeJN

Skype tried to argue that it wasn’t a telecoms operator, so it didn’t have to submit to a court request under telecoms laws. No, the appeals court said, Skype really is a telecoms operator and it needs to hand over the data and pay its €30,000 fine. This doesn’t change the fact that, at the time of the relevant communications in 2012, Skype used a peer-to-peer model, which basically meant it had no data to hand over. Microsoft may try to appeal the case further.

Want to support this newsletter? If so, thanks so much! Here’s my Patreon page. Many thanks to those who are already contributing.

NET NEUTRALITY MAY SOON BECOME A THING OF THE PAST in the US, thanks to FCC chair Ajit Pai’s steadfast opposition to the principle. So goodbye to competition and hello to cut-down packages of selected websites – this is what you get when the president chooses a cable industry lobbyist to handle such things: http://wapo.st/2A0FjeP

NOT CONTENT WITH TELLING PARENTS TO DESTROY their kids’ My Friend Cayla dolls (connected to the internet; terrible security), the German federal network agency has now ordered parents to stop snooping on their kids using special children’s smartwatches. And to destroy the devices, obviously: http://bit.ly/2B8WRFL

This time the issue isn’t so much the devices’ security, but the way in which they’re used. The smartwatches can essentially be turned into bugs – they have SIM cards in them and can be set to quietly call a number, allowing the person on the other end of the call to monitor what’s going on in the child’s environment.

According to the agency, some parents have even been using these bugs to listen in on their children’s teachers, and all of this is terribly illegal under German privacy law. Also of note: monitoring your kids like this will only help to normalise persistent surveillance. It’s an awful thing to do. Stop doing it, no matter what the law says.

ONLINE PLATFORMS SHOULD NOT HAVE TO INSTALL “CENSORSHIP MACHINES” to monitor everything their users are uploading, the European Parliament’s civil liberties committee has said. The consumer protection committee has already voted the same way, regarding the EU’s new copyright law: http://bit.ly/2mOthT7

However, the culture and industry committees have voted the other way, calling for this surveillance to be built into the new law. That leaves the lead committee on the copyright law – the legal affairs committee – which will vote in late January.

MEANWHILE, PLATFORMS INCLUDING FACEBOOK, GOOGLE AND TWITTER have, along with major media outlets, signed up to a new initiative called The Trust Project, which will see little “trust indicators” stuck on the articles you see online: http://bit.ly/2zX1s0e

The indicators should make it easy for people to see who’s behind the article, with details on the journalist and their expertise, as well as on the news outlet’s funding and mission. It’s all intended to fight back against the “fake news” phenomenon, but – if it works as planned – the project would also give people a heck of a lot more valuable information about media outlets than they got in the pre-social-media age. And it would force media outlets to improve their editorial standards.

Let’s hope it pans out as promised.

If you’d like me to write articles for you about digital rights issues, speak at your event or provide privacy advice for your business, drop me an email at david@dmeyer.eu.

SOME PRETTY MAJOR WEBSITES ARE RUNNING HIDDEN CODE that records every keystroke you make when you visit them, Princeton University researchers have found: http://bit.ly/2B9CLLS

This “session replay” code is meant to help companies understand how people use their sites, but it can scoop up a lot of sensitive information. After the researchers published their findings, some sites using such code, including the pharmacy Walgreens and men’s retailer Bonobos, promised to stop.
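For the curious, here’s a minimal sketch of how that kind of keystroke recording works. The names (SessionRecorder, record, replay) are made up for illustration – this is not any real analytics vendor’s API – but the mechanism is the same: hook the browser’s keyboard events, buffer everything, ship it off later.

```typescript
// Illustrative sketch of a "session replay" keystroke recorder.
// In a real page this would be wired to DOM events, e.g.:
//   document.addEventListener("keydown", e => recorder.record(e.key, "search-box", Date.now()));
// and the buffer would be periodically sent to an analytics server.

type KeyEvent = { key: string; fieldName: string; timestamp: number };

class SessionRecorder {
  private buffer: KeyEvent[] = [];

  // Capture a single keystroke along with which field it was typed into.
  record(key: string, fieldName: string, timestamp: number): void {
    this.buffer.push({ key, fieldName, timestamp });
  }

  // Reassemble what the user typed. Note that without filtering,
  // password and payment fields get captured too – which is exactly
  // the risk the Princeton researchers flagged.
  replay(): string {
    return this.buffer.map(e => e.key).join("");
  }
}
```

The point is how little code it takes: unless the script deliberately excludes sensitive fields, everything you type is fair game.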

PEOPLE ARE SHYING AWAY FROM BUYING SMART-HOME DEVICES because they’re worried about the security and privacy implications. According to a Deloitte survey, nearly 40 percent of respondents were worried about the devices tracking their usage, and even more were concerned that the devices would expose more about them than they’d like. http://read.bi/2zHeaQo

And on top of that, most of the respondents said smart-home device manufacturers weren’t telling them enough about security risks. This is great – it shows people really are paying attention to this stuff, and it suggests that manufacturers can’t get away with their cavalier attitudes towards privacy and security.

IN THE WAKE OF SECURITY CONCERNS OVER THE IPHONE X’s Face ID facial recognition feature (see last week’s CR), Korean banks have decided not to allow it as a security mechanism, for now: http://bit.ly/2hHHZpz

This will be particularly bothersome, as Apple’s latest flagship phone not only uses Face ID for its biometric security functions but also lacks the bank-approved fingerprint reader that served the same purpose in earlier iPhone models.

About the author

I’m David Meyer, a tech journalist with more than a decade’s experience writing about technology. I’ve covered many topics in that time, though I’m most interested in the policy decisions and technological breakthroughs that will shape our world. You can find me on Twitter as @superglaze and on Facebook as @davidmeyerwrites.