Why did Google wait 7 months to disclose a Google+ bug?
- Google is discontinuing Google+ for consumers after the organization found a bug in the Google+ People APIs in March, according to a company post from Ben Smith, Google Fellow and VP of engineering. The company said 438 applications may have used the impacted API, and up to 500,000 Google+ accounts may have been compromised.
- The social network had low engagement, but the company shut it down to avoid further regulatory scrutiny and damage to its reputation, according to The Wall Street Journal, which first reported the news. The bug was discovered and patched in March, and the company believes it arose from the API's interaction with a "subsequent Google+ code change," according to the announcement.
- Google engineers found no evidence that the bug was abused, that developers were aware of the API flaw or that profile data was misused. The company concluded no notification was needed at the time of discovery because the incident did not meet Google's disclosure thresholds: whether it could accurately identify the affected users in order to inform them; whether there was evidence of misuse; and whether there were actions developers or users could take in response.
By Google's admission, Google+ was underused — "90% of Google+ user sessions are less than five seconds," the company said — and that may have contributed to the bug's delayed disclosure.
The review of Google+ "crystallized what we've known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps," according to Smith.
Still, the company waited about seven months before disclosing the bug to the public. By waiting, the company may have invited the very criticism it was trying to avoid. Now more than ever, the public is demanding transparency into how companies handle their data, and breaches could weaken Google's argument against a rigid U.S. privacy law.
Automating the search for threats and vulnerabilities in a system is something most companies strive for. But Google still pairs its automation with human detection. "Ideally, my job should not exist," said Heather Adkins, director of information security at Google, at RSA in San Francisco in April.
Security professionals should have an intuition into how malicious actors work and need to look for "forensic artifacts" left behind from nefarious activity, she said.
At least with Google+, the company strayed from Adkins' advice.
Just a few weeks ago, Facebook disclosed a breach that compromised 50 million accounts. Few tech companies in Silicon Valley rely on consumer data more than Google and Facebook, making the penalties from a potential federal privacy law all the greater.
Google and Facebook rely on consumer data to sell ads and make money; with stricter policies and harsher consequences for mishandling data, both companies could suffer disproportionately.
Every security professional knows that when it comes to breaches, it's not a matter of if, but when. Google's disclosure highlights two important things:
- Even companies as large and well-protected as Google are vulnerable.
- Google waited months to disclose a security incident despite a year of industry backlash over delayed public disclosures.
Follow Samantha Ann Schwartz on Twitter