Google+ Privacy Snafu Leaves a Cloud Over the Tech Landscape

Google was caught not disclosing a potential data breach — leaving questions as to whether a lack of transparency is the new normal.

In the wake of Google shutting down its Google+ social networking platform after a privacy snafu, questions remain about the responsibility of tech giants when it comes to consumer data and its handling.

Google’s internal security team discovered a software bug this spring in an API for the social site that opened the door for outside developers to access private Google+ profile data. The bug was in play between 2015 and March 2018, when Google found and fixed the issue. It did not, however, make the issue public, nor did it notify users of a potential breach of their accounts — news of which broke Monday in the Wall Street Journal.

The API in question allowed users to give various applications access to their profile data, and the public profile information of their friends. However, the bug also allowed apps to access profile fields that were shared with the user but not marked as public. That data consisted of static, optional Google+ Profile fields including name, email address, occupation, gender and age. This flaw allowed more than 400 apps using the Google+ API to potentially access the personal information of approximately 500,000 users.
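The underlying flaw follows a classic access-control pattern: an API that returns everything the requesting user could see, rather than only what the profile owner marked public. The sketch below is a hypothetical illustration of that bug pattern, not Google’s actual code; the profile data and function names are invented.

```python
# Hypothetical illustration of the bug pattern described above -- not
# Google's actual code. The buggy endpoint hands a third-party app every
# field the *viewing user* could see in the UI, including friends-only
# fields; the fixed endpoint returns only fields marked public.

PROFILE = {
    "name":       {"value": "Alice", "visibility": "public"},
    "email":      {"value": "alice@example.com", "visibility": "friends"},
    "occupation": {"value": "Engineer", "visibility": "friends"},
}

def fields_visible_to(viewer_is_friend: bool) -> set:
    """Fields the viewing user can legitimately see when browsing."""
    allowed = {"public"} | ({"friends"} if viewer_is_friend else set())
    return {k for k, v in PROFILE.items() if v["visibility"] in allowed}

def api_profile_buggy(viewer_is_friend: bool) -> set:
    # BUG: reuses the UI visibility check, so an app acting on a friend's
    # behalf receives non-public fields shared only with that friend.
    return fields_visible_to(viewer_is_friend)

def api_profile_fixed(viewer_is_friend: bool) -> set:
    # FIX: an app acting on the user's behalf gets public fields only.
    return {k for k, v in PROFILE.items() if v["visibility"] == "public"}
```

For a viewer who is a friend, the buggy endpoint leaks the email and occupation fields, while the fixed endpoint returns only the public name field.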

After the story came to light, Google had no choice but to address why it didn’t disclose the problem. Ben Smith, Google fellow and vice president of engineering at the internet giant, took to Google’s blog to try to explain.

“We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused,” he wrote. He also acknowledged that Google has no way of knowing if users were affected or not, but that when deciding whether to issue notifications, “we apply several criteria.”

In this case, “our Privacy & Data Protection Office reviewed this issue, looking at the type of data involved, whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response,” he said. “None of these thresholds were met in this instance.”

However, an internal memo from Google’s legal and policy staff, reviewed by the WSJ, had a decidedly different take on the problem. The memo warned senior executives that disclosure in the environment at the time – in which the Facebook Cambridge Analytica scandal was cresting – would invite “immediate regulatory interest” and cause reputational damage.

“Don’t be Evil mutated into Don’t be Caught,” Colin Bastable, CEO of Lucy Security, said via email – citing Google’s recently retracted unofficial motto. “Google’s understandable desire to hide their embarrassment from regulators and users is the reason why states and the feds impose disclosure requirements – the knock-on effects of security breaches are immense.”

Pravin Kothari, CEO of CipherCloud, added: “It’s not surprising that companies that rely on user data are incented to avoid disclosing to the public that their data may have been compromised, which would impact consumer trust. These are the reasons that the government should and will continue to use in their inexorable march to a unified national data privacy omnibus regulation.”

And indeed, the news comes as Congress and consumer privacy groups are beating an ever-louder drum to increase accountability in data handling on the part of tech giants. Alphabet CEO Sundar Pichai has agreed to testify before Congress on the issue (along with other hot-button topics, such as political bias in search results) in the coming weeks, and it’s certain that the Google+ incident will be part of the mix.

“I know many of us have been out to Silicon Valley and spent time with the technology companies that are changing our world. And to date — I think a lot of that progress has been for the better,” House Majority Leader Kevin McCarthy (R.-Calif.) told USA Today last month. “But as big tech’s business grows, we have not had enough transparency and that has led to an erosion of trust and perhaps worse — harm to consumers.”

Ironically, the Google+ issue came to light as Google escaped a multibillion-dollar fine in the UK. The high court there blocked a mass lawsuit that could have resulted in a £3.3 billion compensation settlement for claims that Google harvested personal data, including political views and health information, from more than four million UK residents without their permission.

Google is alleged to have unlawfully gathered and shared the personal information of millions of iPhone users in the UK by bypassing the default privacy settings on the Apple device between June 2011 and February 2012.

“It’s clear that data privacy for individuals is a growing concern for European governments. It’s worth noting that the alleged infraction is more than five years old in this case,” said Tim Erlin, vice president of product management and strategy at Tripwire, via email. “With the legal landscape around privacy changing at a relatively rapid pace, we should expect the boundaries to be tested. While the court has blocked the case for now, I would expect that we’ll hear more about it, and other cases, as the dust settles on changing data privacy laws.”

No More Google+

As the news broke, Google said that it would be shuttering Google+ for consumers over the next 10 months (it will persist as an enterprise effort). The service launched in 2011 to compete with Facebook, but it never gained the traction or engagement of the social leader – Smith noted that 90 percent of user sessions lasted less than five seconds. And it proved difficult to maintain, as the API bug underscores.

“While our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps,” Smith said. He added, “Our review showed that our Google+ APIs, and the associated controls for consumers, are challenging to develop and maintain.”

Google is not the first tech giant to wrestle with API flaws, of course, and code bugs are far from uncommon.

“First Facebook, now Google. Software problems at huge tech companies continue to expose ‘the product,’ which in the case of advertising-driven tech companies happens to be your data,” Gary McGraw, vice president of security technology at Synopsys, told Threatpost. “Getting software security right is difficult, but not impossible. Just as was the case in the Facebook ‘View As’ design flaw, we see evidence in this Google+ case of just how tricky solid software engineering can be even for tech wizards. Making sure that APIs do not accidentally break security and privacy requirements is super important and is an aspect of design. Design flaws sometimes emerge in the gaps between systems that might otherwise seem fine on their own. The mind-boggling complexity of today’s commercial systems is a major factor here, making systematically uncovering and correcting design flaws when software is being designed and built harder than ever.”

Project Strobe

The sunsetting of the service for consumers is part of a larger privacy effort called Project Strobe, also announced Monday. It’s focused on providing a more stringent framework governing which permissions Google and Android apps can obtain.

For instance, the privacy updates will include more fine-grained control over what Google account data users choose to share with each app.

“Instead of seeing all requested permissions in a single screen, apps will have to show you each requested permission, one at a time, within its own dialog box,” Smith explained. “For example, if a developer requests access to both calendar entries and Drive documents, you will be able to choose to share one but not the other.”
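The change Smith describes amounts to prompting for each scope individually and letting the user grant some while denying others. The sketch below is an illustrative model of that consent flow, not Google’s API; the scope names and the `prompt` callback are invented for the example.

```python
# Illustrative model of per-permission consent, not Google's actual API.
# Instead of one all-or-nothing grant, each requested scope is presented
# to the user individually, and only the approved scopes are returned.

def request_permissions(scopes, prompt):
    """Ask the user about each scope one at a time via `prompt`;
    return the set of scopes the user approved."""
    return {scope for scope in scopes if prompt(scope)}

# Example mirroring Smith's scenario: the user shares calendar entries
# but declines access to Drive documents (decisions are hardcoded here
# in place of a real dialog box).
decisions = {"calendar.readonly": True, "drive.readonly": False}
granted = request_permissions(decisions, lambda scope: decisions[scope])
# `granted` now contains only the calendar scope.
```

An app built against this model must then handle partial grants gracefully, enabling only the features whose scopes were actually approved.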

The search behemoth is also limiting the consumer Gmail API so that only apps directly enhancing email functionality—such as email clients, email backup services and productivity services (e.g., CRM and mail merge services)—will be authorized to access user data. This new policy will go into effect Jan. 9.

And, on the mobile front, Google Play will limit which Android apps can ask for permission to access a user’s phone (including call logs) and SMS data.

“Only an app that you’ve selected as your default app for making calls or text messages will be able to make these requests,” Smith said, adding that there are some exceptions—e.g., voicemail and backup apps.

Despite Google making these changes (and recently addressing a privacy backlash regarding Chrome), the cloud of privacy concerns hanging over tech giants isn’t going to go away anytime soon — with some concerned that a lack of transparency might be the “new normal.”

“The decision not to disclose the discovered vulnerability speaks to a fear of reputational damage and possible legal ramifications or litigation in light of recent Senate hearings and the GDPR,” Jessica Ortega, website security analyst at SiteLock, told Threatpost. “This type of behavior may become more common among tech companies aiming to protect their reputation in the wake of legislation and privacy laws–they may choose not to disclose vulnerabilities that they are not legally required to report in order to avoid scrutiny or fines. Ultimately it will be up to users to proactively monitor how their data is used and what applications have access to that data by using strong passwords and carefully reviewing access requests prior to using an app like Google+.”