Today, you can find more online security tips in a few seconds than you could use in a lifetime. While this collection of best practices is rich, it’s not always useful; it can be difficult to know which ones to prioritize, and why.

Questions like ‘Why do people make some security choices (and not others)?’ and ‘How effectively does the security community communicate its best practices?’ are at the heart of a new paper called “‘...no one can hack my mind’: Comparing Expert and Non-Expert Security Practices” that we’ll present this week at the Symposium on Usable Privacy and Security.

This paper outlines the results of two surveys—one with 231 security experts, and another with 294 web-users who aren’t security experts—in which we asked both groups what they do to stay safe online. We wanted to compare and contrast responses from the two groups, and better understand differences and why they may exist.

Experts’ and non-experts’ top 5 security practices

Here are experts’ and non-experts’ top security practices, according to our study. We asked each participant to list three practices.

Common ground: careful password management

Clearly, careful password management is a priority for both groups. But, they differ on their approaches.

Security experts rely heavily on password managers, services that store and protect all of a user’s passwords in one place. Experts reported using password managers for at least some of their accounts three times more frequently than non-experts. As one expert said, “Password managers change the whole calculus because they make it possible to have both strong and unique passwords.”

On the other hand, only 24% of non-experts reported using password managers for at least some of their accounts, compared to 73% of experts. Our findings suggested this was due to a lack of education about the benefits of password managers and/or a lack of trust in these programs. “I try to remember my passwords because no one can hack my mind,” one non-expert told us.

Key differences: software updates and antivirus software

Despite some overlap, experts’ and non-experts’ top answers were remarkably different.

35% of experts and only 2% of non-experts said that installing software updates was one of their top security practices. Experts recognize the benefits of updates—“Patch, patch, patch,” said one expert—while non-experts not only aren’t clear on them, but are concerned about the potential risks of software updates. A non-expert told us: “I don’t know if updating software is always safe. What [if] you download malicious software?” and “Automatic software updates are not safe in my opinion, since it can be abused to update malicious content.”

Meanwhile, 42% of non-experts vs. only 7% of experts said that running antivirus software was one of the top three things they do to stay safe online. Experts acknowledged the benefits of antivirus software, but expressed concern that it might give users a false sense of security since it’s not a bulletproof solution.

More broadly, our findings highlight fundamental misunderstandings about basic online security practices. Software updates, for example, are the seatbelts of online security; they make you safer, period. And yet, many non-experts not only overlook these as a best practice, but also mistakenly worry that software updates are a security risk.

No practice on either list—expert or non-expert—makes users less secure. But, there is clearly room to improve how security best practices are prioritized and communicated to the vast majority of (non-expert) users. We’re looking forward to tackling that challenge.

Posted by Vegard Johnsen, Product Manager, Google Ad Traffic Quality

Today the Trustworthy Accountability Group (TAG) announced a new pilot blacklist to protect advertisers across the industry. This blacklist comprises data-center IP addresses associated with non-human ad requests. We're happy to support this effort along with other industry leaders—Dstillery, Facebook, MediaMath, Quantcast, Rubicon Project, TubeMogul and Yahoo—and contribute our own data-center blacklist. As mentioned to Ad Age and in our recent call to action, we believe that if we work together we can raise the fraud-fighting bar for the whole industry.

Data-center traffic is one of many types of non-human or illegitimate ad traffic. The newly shared blacklist identifies web robots, or “bots”, that are being run in data centers but that avoid detection by the IAB/ABC International Spiders & Bots List. Well-behaved bots announce that they're bots as they surf the web by including a bot identifier in their declared User-Agent strings. The bots filtered by this new blacklist are different: they masquerade as human visitors by using User-Agent strings that are indistinguishable from those of typical web browsers.

In this post, we take a closer look at a few examples of data-center traffic to show why it’s so important to filter this traffic across the industry.

Impact of the data-center blacklist

When observing the traffic generated by the IP addresses in the newly shared blacklist, we found significantly distorted click metrics. In May of 2015, on DoubleClick Campaign Manager alone, we found the blacklist filtered 8.9% of all clicks. Without filtering these clicks from campaign metrics, advertiser click-through rates would have been incorrect, and for some advertisers this error would have been very large.

Below is a plot that shows how much click-through rates in May would have been inflated across the most impacted of DoubleClick Campaign Manager’s larger advertisers.
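To see how filtered clicks distort reporting: if 8.9% of recorded clicks are invalid and impressions are unaffected, the observed click-through rate overstates the true rate by a factor of 1/(1 − 0.089). A minimal sketch of that arithmetic (only the 8.9% figure comes from our data):

```python
def ctr_inflation(invalid_click_fraction: float) -> float:
    """Factor by which observed CTR overstates true CTR when a fraction
    of recorded clicks is invalid and impressions are unaffected."""
    return 1.0 / (1.0 - invalid_click_fraction)

# 8.9% of clicks filtered by the blacklist (May 2015, DoubleClick Campaign Manager)
factor = ctr_inflation(0.089)
print(f"Observed CTR inflated by {(factor - 1) * 100:.1f}%")  # roughly 9.8%
```

This is an average across the platform; as the plot shows, the inflation for the most impacted advertisers was far larger.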

Two examples of bad data-center traffic

There are two distinct types of invalid data-center traffic: traffic where the intent is malicious, and traffic where the impact on advertisers is accidental. In this section we consider two interesting examples where we’ve observed traffic that was likely generated with malicious intent.

Publishers use many different strategies to increase the traffic to their sites. Unfortunately, some are willing to use any means necessary to do so. In our investigations we’ve seen instances where publishers have been running software tools in data centers to intentionally mislead advertisers with fake impressions and fake clicks.

First example

UrlSpirit is just one example of software that some unscrupulous publishers have been using to collaboratively drive automated traffic to their websites. Participating publishers install the UrlSpirit application on Windows machines, and each submits up to three URLs through the application’s interface. Submitted URLs are then distributed to other installed instances of the application, where Internet Explorer is used to automatically visit the list of target URLs. Publishers who have not installed the application can also leverage the network of installations by paying a fee.

At the end of May, more than 82% of UrlSpirit installations were being run on machines in data centers. There were more than 6,500 data-center installations of UrlSpirit, each running in a separate virtual machine. In aggregate, the data-center installations of UrlSpirit were generating a monthly rate of at least half a billion ad requests—an average of 2,500 fraudulent ad requests per installation per day.

Second example

HitLeap is another example of software that some publishers are using to collaboratively drive automated traffic to their websites. The software also runs on Windows machines, but each instance uses the Chromium Embedded Framework, rather than Internet Explorer, to automatically browse the websites of participating publishers.

Before publishers can use the network of installations to drive traffic to their websites, they need browsing minutes. Participating publishers earn browsing minutes by running the application on their computers. Alternatively, they can simply buy browsing minutes, with bundles starting at $9 for 10,000 minutes and ranging up to 1,000,000 minutes for $625.

Publishers can specify as many target URLs as they like. The number of visits they receive from the network of installations is a function of how long they want the network of bots to spend on their sites. For example, ten browsing minutes will get a publisher five visits if the publisher requests two-minute visit durations.

In mid-June, at least 4,800 HitLeap installations were being run in virtual machines in data centers, with a unique IP address associated with each installation. The data-center installations of HitLeap made up 16% of the total HitLeap network—a network substantially larger than UrlSpirit’s. In aggregate, the data-center installations of HitLeap were generating a monthly rate of at least a billion fraudulent ad requests—an average of 1,600 ad requests per installation per day.
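The browsing-minutes arithmetic above (ten minutes at two-minute visits yields five visits) is simply the minute budget divided by the requested visit duration. A quick sketch:

```python
def visits(browsing_minutes: float, visit_duration_minutes: float) -> float:
    """Visits delivered by the bot network for a given browsing-minute budget."""
    return browsing_minutes / visit_duration_minutes

print(visits(10, 2))        # ten minutes at two-minute visits -> 5.0 visits
print(visits(10_000, 2))    # the $9 starter bundle at two-minute visits -> 5000.0 visits
```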

Not only were these publishers collectively responsible for billions of automated ad requests, but their websites were also often extremely deceptive. For example, of the top ten webpages visited by HitLeap bots in June, nine included hidden ad slots—meaning that not only was the traffic fake, but the ads couldn’t have been seen even if the visitors had been legitimate humans.

http://vedgre.com/7/gg.html is illustrative of these nine webpages with hidden ad slots. The webpage has no visible content other than a single 300×250px ad. This visible ad is actually in a 300×250px iframe that includes two ads, the second of which is hidden. In addition, there are twenty-seven 0×0px hidden iframes on the page, each including two ad slots. In total, there are fifty-five hidden ads on this page and one visible ad. Finally, the ads served on http://vedgre.com/7/gg.html appear to advertisers as though they have been served on legitimate websites like indiatimes.com, scotsman.com, autotrader.co.uk, allrecipes.com, dictionary.com and nypost.com, because the tags used on the page to request the ad creatives have been deliberately spoofed.

An example of collateral damage

Unlike the traffic described above, there is also automated data-center traffic that impacts advertising campaigns but that hasn’t been generated for malicious purposes. An interesting example is an advertising competitive intelligence company that generates a large volume of undeclared non-human traffic.

This company uses bots to scrape the web to find out which ad creatives are being served on which websites, and at what scale. The company’s scrapers also click ad creatives to analyse the landing-page destinations.
To provide its clients with the most accurate intelligence possible, this company’s scrapers operate at extraordinary scale, and they do so without including bot identifiers in their User-Agent strings.

While the aim of this company is not to cause advertisers to pay for fake traffic, its scrapers do waste advertiser spend. They not only generate non-human impressions; they also distort the metrics that advertisers use to evaluate campaign performance—in particular, click metrics. Across DoubleClick Campaign Manager, this company’s scrapers were responsible for 65% of the automated data-center clicks recorded in the month of May.

Going forward

Google has always invested in preventing this and other types of invalid traffic from entering our ad platforms. By contributing our data-center blacklist to TAG, we hope to help others in the industry protect themselves.

We’re excited by the collaborative spirit we’ve seen working with other industry leaders on this initiative. This is an important, early step toward tackling fraudulent and illegitimate inventory across the industry, and we look forward to sharing more in the future. By pooling our collective efforts and working with industry bodies, we can create strong defenses against those looking to take advantage of our ecosystem. We look forward to working with the TAG Anti-fraud working group to turn this pilot program into an industry-wide tool.

Posted by Neil Martin, Export Compliance Counsel, Google Legal
and Tim Willis, Hacker Philanthropist, Chrome Security Team

Cross-posted on the Google Public Policy Blog

As the usage and complexity of software grows, the importance of security research has grown with it. It’s through diligent research that we uncover and fix bugs—like Heartbleed and POODLE—that can cause serious security issues for web users around the world.

The time and effort it takes to uncover bugs is significant, and the marketplace for these vulnerabilities is competitive. That’s why we provide cash rewards for quality security research that identifies problems in our own products or proactive improvements to open-source products. We’ve paid more than $4 million to researchers from all around the world; our current Hall of Fame includes researchers from Germany, the U.S., Japan, Brazil, and more than 30 other countries.

Problematic new export controls

With the benefits of security research in mind, there has been some public head scratching and analysis around proposed export control rules put forth by the U.S. Department of Commerce that would negatively affect vulnerability research.

The Commerce Department's proposed rules stem from U.S. membership in the Wassenaar Arrangement, a multilateral export control association. Members of the Wassenaar Arrangement have agreed to control a wide range of goods, software, and information, including technologies relating to "intrusion software" (as they've defined that term).

We believe that these proposed rules, as currently written, would have a significant negative impact on the open security research community. They would also hamper our ability to defend ourselves and our users, and to make the web safer. It would be a disastrous outcome if an export regulation intended to make people more secure resulted in billions of users across the globe becoming persistently less secure.

Google comments on proposed rules

Earlier today, we formally submitted comments on the proposed rules to the United States Commerce Department’s Bureau of Industry and Security (BIS). Our comments are lengthy, but we wanted to share some of the main concerns and questions that we have officially expressed to the U.S. government today:

Rules are dangerously broad and vague. The proposed rules are not feasible and would require Google to request thousands - maybe even tens of thousands - of export licenses. Since Google operates in many different countries, the controls could cover our communications about software vulnerabilities, including: emails, code review systems, bug tracking systems, instant messages - even some in-person conversations! BIS’ own FAQ states that information about a vulnerability, including its causes, wouldn’t be controlled, but we believe that it sometimes actually could be controlled information.

You should never need a license when you report a bug to get it fixed. There should be standing license exceptions for everyone when controlled information is reported back to manufacturers for the purposes of fixing a vulnerability. This would provide protection for security researchers that report vulnerabilities, exploits, or other controlled information to any manufacturer or their agent.

Global companies should be able to share information globally. If we have information about intrusion software, we should be able to share that with our engineers, no matter where they physically sit.

Clarity is crucial. We acknowledge that we have a team of lawyers here to help us out, but navigating these controls shouldn’t be that complex and confusing. If BIS is going to implement the proposed controls, we recommend providing a simple, visual flowchart for everyone to easily understand when they need a license.

These controls should be changed ASAP. The only way to fix the scope of the intrusion software controls is to do it at the annual meeting of Wassenaar Arrangement members in December 2015.

We’re committed to working with BIS to make sure that both white hat security researchers’ interests and Google users’ interests are front of mind. The proposed BIS rule for public comment is available here, and comments can also be sent directly to publiccomments@bis.doc.gov. If BIS publishes another proposed rule on intrusion software, we’ll make sure to come back and update this blog post with details.

In the coming weeks, these detection improvements will become more noticeable in Chrome: users will see more warnings (like the one below) about unwanted software than ever before.

We want to be really clear that Google Safe Browsing’s mandate remains unchanged: we’re exclusively focused on protecting users from malware, phishing, unwanted software, and similar harm. You won’t see Safe Browsing warnings for any other reasons.

Unwanted software is being distributed on websites via a variety of sources, including ad injectors as well as ad networks lacking strict quality guidelines. In many cases, Safe Browsing within your browser is your last line of defense.

Google Safe Browsing has protected users from phishing and malware since 2006, and from unwanted software since 2014. We provide this protection across browsers (Chrome, Firefox, and Safari) and across platforms (Windows, Mac OS X, Linux, and Android). If you want to help us improve the defenses for everyone using a browser that integrates Safe Browsing, please consider checking the box that appears on all of our warning pages:

Safe Browsing’s focus is solely on protecting people and their data from badness. And nothing else.

Since 2010, our security reward programs have helped make Google products safer for everyone. Last year, we paid more than $1.5 million to security researchers who found vulnerabilities in Chrome and other Google products.

Today, we're expanding our program to include researchers who find, fix, and prevent vulnerabilities specifically in Android. Here are some details about the new Android Security Rewards program:

For vulnerabilities affecting Nexus phones and tablets available for sale on Google Play (currently Nexus 6 and Nexus 9), we will pay for each step required to fix a security bug, including patches and tests. This makes Nexus the first major line of mobile devices to offer an ongoing vulnerability rewards program.

In addition to rewards for vulnerabilities, our program offers even larger rewards to security researchers that invest in tests and patches that will make the entire ecosystem stronger.

The largest rewards are available to researchers that demonstrate how to work around Android’s platform security features, like ASLR, NX, and the sandboxing that is designed to prevent exploitation and protect users.

Android will continue to participate in Google’s Patch Rewards Program, which pays for contributions that improve the security of Android (and other open-source projects). We’ve also sponsored Mobile Pwn2Own for the last two years, and we plan to continue supporting this and other competitions to find vulnerabilities in Android.

As we have often said, open security research is a key strength of the Android platform. The more security research that's focused on Android, the stronger it will become.

Posted by Elie Bursztein, Anti-Abuse Research Lead, and Ilan Caron, Software Engineer

What was your first pet’s name? What is your favorite food? What is your mother’s maiden name?

What do these seemingly random questions have in common? They’re all familiar examples of ‘security questions’. Chances are you’ve had to answer one of these before; many online services use them to help users recover access to accounts if they forget their passwords, or as an additional layer of security to protect against suspicious logins.

But despite the prevalence of security questions, their safety and effectiveness have rarely been studied in depth. As part of our constant efforts to improve account security, we analyzed hundreds of millions of secret questions and answers that had been used for millions of account recovery claims at Google. We then worked to measure the likelihood that hackers could guess the answers.

Our findings, summarized in a paper that we recently presented at WWW 2015, led us to conclude that secret questions are neither secure nor reliable enough to be used as a standalone account recovery mechanism. That’s because they suffer from a fundamental flaw: their answers are either somewhat secure or easy to remember—but rarely both.

Click infographic for larger version

Easy Answers Aren’t Secure

Not surprisingly, easy-to-remember answers are less secure. Easy answers often contain commonly known or publicly available information, or come from a small set of possible answers for cultural reasons (e.g., a common family name in certain countries). Here are some specific insights:

With a single guess, an attacker would have a 19.7% chance of guessing English-speaking users’ answers to the question "What is your favorite food?" (it was ‘pizza’, by the way)

With ten guesses, an attacker would have a nearly 24% chance of guessing Arabic-speaking users’ answer to the question "What’s your first teacher’s name?"

With ten guesses, an attacker would have a 21% chance of guessing Spanish-speaking users’ answers to the question, "What is your father’s middle name?"

With ten guesses, an attacker would have a 39% chance of guessing Korean-speaking users’ answers to the question "What is your city of birth?" and a 43% chance of guessing their favorite food.
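The guessing odds above follow directly from the distribution of answers: an attacker who tries the k most popular answers succeeds with probability equal to the sum of their frequencies. A toy sketch with hypothetical frequencies (not our measured data, aside from the 19.7% ‘pizza’ figure):

```python
def top_k_success(answer_freqs: dict[str, float], k: int) -> float:
    """Probability that an attacker's k most popular guesses match a user's
    answer, assuming answers are drawn from the given frequency distribution."""
    return sum(sorted(answer_freqs.values(), reverse=True)[:k])

# Hypothetical head of the distribution for "What is your favorite food?";
# the long tail of rarer answers is omitted, so the values don't sum to 1.
freqs = {"pizza": 0.197, "pasta": 0.041, "chicken": 0.035, "sushi": 0.028}
print(top_k_success(freqs, 1))  # a single guess ("pizza") succeeds ~19.7% of the time
```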

Many different users also had identical answers to secret questions that we’d normally expect to be highly secure, such as "What’s your phone number?" or "What’s your frequent flyer number?". We dug into this further and found that 37% of people intentionally provide false answers to their questions, thinking this will make them harder to guess. However, this ends up backfiring: because people choose the same (false) answers, they actually increase the likelihood that an attacker can break in.

Difficult Answers Aren’t Usable

Surprise, surprise: it’s not easy to remember where your mother went to elementary school, or what your library card number is! Difficult secret questions and answers are often hard to use. Here are some specific findings:

40% of our English-speaking US users couldn’t recall their secret question answers when they needed to. These same users, meanwhile, could recall reset codes sent to them via SMS text message more than 80% of the time and via email nearly 75% of the time.

Some of the potentially safest questions—"What is your library card number?" and "What is your frequent flyer number?"—have only 22% and 9% recall rates, respectively.

For English-speaking users in the US, the easier question, "What is your father’s middle name?", had a success rate of 76%, while the potentially safer question, "What is your first phone number?", had only a 55% success rate.

Why not just add more secret questions?

Of course, it’s harder to guess the right answer to two (or more) questions, as opposed to just one. However, adding questions comes at a price too: the chances that people recover their accounts drops significantly. We did a subsequent analysis to illustrate this idea (Google never actually asks multiple security questions).

According to our data, the ‘easiest’ question and answer is "What city were you born in?"—users recall this answer more than 79% of the time. The second easiest example is "What is your father’s middle name?", remembered by users 74% of the time. If an attacker had ten guesses, they’d have a 6.9% and 14.6% chance of guessing correct answers for these questions, respectively.

But when users had to answer both together, the spread between the security and usability of secret questions became increasingly stark. The probability that an attacker could get both answers in ten guesses is 1%, but users recall both answers only 59% of the time. Piling on more secret questions makes it more difficult for users to recover their accounts, and is not a good solution as a result.

The Next Question: What To Do?

Secret questions have long been a staple of authentication and account recovery online. But given these findings, it’s important for users and site owners to think twice about relying on them.

We strongly encourage Google users to make sure their Google account recovery information is current. You can do this quickly and easily with our Security Checkup. For years, we’ve only used security questions for account recovery as a last resort when SMS text and back-up email addresses don’t work, and we will never use them as stand-alone proof of account ownership.

In parallel, site owners should use other methods of authentication, such as backup codes sent via SMS text or secondary email addresses, to authenticate their users and help them regain access to their accounts. These methods are safer and offer a better user experience.

Posted by Kurt Thomas, Spam & Abuse Research

In March, we outlined the problems with unwanted ad injectors, a common symptom of unwanted software. Ad injectors are programs that insert new ads, or replace existing ones, into the pages you visit while browsing the web. We’ve received more than 100,000 user complaints about them in Chrome since the beginning of 2015—more than any other issue. Unwanted ad injectors are not only annoying, they can pose serious security risks to users as well.

Today, we’re releasing the results of a study performed with the University of California, Berkeley, and the University of California, Santa Barbara, that examines the ad injector ecosystem in depth for the first time. We’ve summarized our key findings below, as well as Google’s broader efforts to protect users from unwanted software. The full report, which you can read here, will be presented later this month at the IEEE Symposium on Security & Privacy.

Ad injectors’ businesses are built on a tangled web of different players in the online advertising economy. This complexity has made it difficult for the industry to understand this issue and help fix it. We hope our findings raise broad awareness of this problem and enable the online advertising industry to work together and tackle it.

How big is the problem?

This is what users might see if their browsers were infected with ad injectors. None of the ads displayed would appear without an ad injector installed.

To pursue this research, we custom-built an ad injection “detector” for Google sites. This tool helped us identify tens of millions of instances of ad injection “in the wild” over the course of several months in 2014, the duration of our study.

More detail is below, but the main point is clear: deceptive ad injection is a significant problem on the web today. We found 5.5% of unique IPs—millions of users—accessing Google sites that included some form of injected ads.

How ad injectors work

The ad injection ecosystem comprises a tangled web of different players. Here is a quick snapshot.

Software: It all starts with software that infects your browser. We discovered more than 50,000 browser extensions and more than 34,000 software applications that took control of users’ browsers and injected ads. Upwards of 30% of these packages were outright malicious, simultaneously stealing account credentials, hijacking search queries, and reporting a user’s activity to third parties for tracking. In total, 5.1% of page views on Windows and 3.4% of page views on Mac showed tell-tale signs of ad injection software.

Distribution: Next, this software is distributed by a network of affiliates that work to drive as many installs as possible via tactics like marketing, bundling applications with popular downloads, outright malware distribution, and large social advertising campaigns. Affiliates are paid a commission whenever a user clicks on an injected ad. We found about 1,000 of these businesses, including Crossrider, Shopper Pro, and Netcrawl, that use at least one of these tactics.

Injection Libraries: Ad injectors source their ads from about 25 businesses that provide ‘injection libraries’. Superfish and Jollywallet are by far the most popular of these, appearing in 3.9% and 2.4% of Google views, respectively. These companies manage advertising relationships with a handful of ad networks and shopping programs and decide which ads to display to users. Whenever a user clicks on an ad or purchases a product, these companies make a profit, a fraction of which they share with affiliates.

Ads: The ad injection ecosystem profits from more than 3,000 victimized advertisers—including major retailers like Sears, Walmart, Target, and eBay—who unwittingly pay for traffic to their sites. Because advertisers are generally only able to measure the final click that drives traffic to their sites, they’re often unaware of many preceding twists and turns, and don’t know they are receiving traffic via unwanted software and malware. Ads originate from ad networks that translate unwanted software installations into profit: 77% of all injected ads go through one of three ad networks—dealtime.com, pricegrabber.com, and bizrate.com. Publishers, meanwhile, aren’t being compensated for these ads.
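As a toy illustration of what a "tell-tale sign" of injection can look like: injected ads typically arrive via script tags loading from injection-library hosts that the publisher never included. A minimal sketch (the hostnames and HTML are hypothetical, and real detection, like the detector we deployed, is far more involved):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical blocklist of known injection-library hostnames.
INJECTOR_HOSTS = {"api.superfish.example", "cdn.jollywallet.example"}

class InjectedScriptFinder(HTMLParser):
    """Collects script src URLs that point at known injector hosts."""
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src", "")
        if urlparse(src).hostname in INJECTOR_HOSTS:
            self.hits.append(src)

page = '<html><script src="https://cdn.jollywallet.example/inject.js"></script></html>'
finder = InjectedScriptFinder()
finder.feed(page)
print(finder.hits)  # ['https://cdn.jollywallet.example/inject.js']
```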

Examples of injected ads ‘in the wild’

How Google fights deceptive ad injectors

We pursued this research to raise awareness about the ad injection economy so that the broader ads ecosystem can better understand this complex issue and work together to tackle it.

Based on our findings, we took the following actions:

Keeping the Chrome Web Store clean: We removed 192 deceptive Chrome extensions that affected 14 million users with ad injection from the Chrome Web Store. These extensions violated Web Store policies requiring that extensions have a narrow and easy-to-understand purpose. We’ve also deployed new safeguards in the Chrome Web Store to help protect users from deceptive ad injection extensions.

Protecting Chrome users: We improved protections in Chrome to flag unwanted software and display familiar red warnings when users are about to download deceptive software. These same protections are broadly available via the Safe Browsing API. We also provide a tool for users already affected by ad injectors and other unwanted software to clean up their Chrome browser.

Most recently, we updated our AdWords policies to make it more difficult for advertisers to promote unwanted software on AdWords. It's still early, but we've already seen encouraging results since making the change: the number of 'Safe Browsing' warnings that users receive in Chrome after clicking AdWords ads has dropped by more than 95%. This suggests it's become much more difficult for users to download unwanted software, and for bad advertisers to promote it. Our blog post from March outlines various policies—for the Chrome Web Store, AdWords, Google Platforms program, and the DoubleClick Ad Exchange (AdX)—that combat unwanted ad injectors, across products.

We’re also constantly improving our Safe Browsing technology, which protects more than one billion Chrome, Safari, and Firefox users across the web from phishing, malware, and unwanted software. Today, Safe Browsing shows people more than 5 million warnings per day for all sorts of malicious sites and unwanted software, and discovers more than 50,000 malware sites and more than 90,000 phishing sites every month.

Considering the tangle of different businesses involved—knowingly, or unknowingly—in the ad injector ecosystem, progress will only be made if we raise our standards, together. We strongly encourage all members of the ads ecosystem to review their policies and practices so we can make real improvement on this issue.

Posted by Drew Hintz, Security Engineer, and Justin Kosslyn, Google Ideas

[Cross-posted on the Official Google Blog]

Would you enter your email address and password on this page?

This looks like a fairly standard login page, but it’s not. It’s what we call a “phishing” page, a site run by people looking to receive and steal your password. If you type your password here, attackers could steal it and gain access to your Google Account—and you may not even know it. This is a common and dangerous trap: the most effective phishing attacks can succeed 45 percent of the time, nearly 2 percent of messages to Gmail are designed to trick people into giving up their passwords, and millions upon millions of phishing emails are sent across the web every day.

To help keep your account safe, today we’re launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you’ve installed it, Password Alert will show you a warning if you type your Google password into a site that isn’t a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.

Here's how it works for consumer accounts. Once you’ve installed and initialized Password Alert, Chrome will remember a “scrambled” version of your Google Account password. It only remembers this information for security purposes and doesn’t share it with anyone. If you type your password into a site that isn't a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you’re at risk of being phished so you can update your password and protect yourself.
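For intuition, the “scrambled version” idea can be sketched as storing only a salted, stretched hash of the password, then comparing it against the hash of the most recently typed characters. This is a hypothetical illustration, not Password Alert’s actual implementation; the class and method names below are invented:

```python
import hashlib
import os

def scramble(text: str, salt: bytes) -> str:
    # Key-stretched hash, so the stored value is hard to reverse.
    return hashlib.pbkdf2_hmac("sha256", text.encode(), salt, 100_000).hex()

class PasswordAlertSketch:
    """Toy model of warning when a saved password is typed elsewhere."""

    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.fingerprint = scramble(password, self.salt)
        self.password_len = len(password)

    def typed_on_non_google_page(self, recent_keystrokes: str) -> bool:
        # Hash the last N typed characters and compare with the stored
        # fingerprint; a match means the user should be warned.
        tail = recent_keystrokes[-self.password_len:]
        return scramble(tail, self.salt) == self.fingerprint

alert = PasswordAlertSketch("hunter2")
assert alert.typed_on_non_google_page("xyzhunter2")      # warn: password typed
assert not alert.typed_on_non_google_page("hunter3abc")  # no warn
```

The point of the scrambling step is that the extension never needs to keep the plaintext password around; only a salted hash is stored.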

Password Alert is also available to Google for Work customers, including Google Apps and Drive for Work. Your administrator can install Password Alert for everyone in the domains they manage, and receive alerts when Password Alert detects a possible problem. This can help spot malicious attackers trying to break into employee accounts and also reduce password reuse. Administrators can find more information in the Help Center.

We work to protect users from phishing attacks in a variety of ways. We’re constantly improving our Safe Browsing technology, which protects more than 1 billion people on Chrome, Safari and Firefox from phishing and other dangerous sites via bright, red warnings. We also offer tools like 2-Step Verification and Security Key that people can use to protect their Google Accounts and stay safe online. And of course, you can also take a Security Checkup at any time to make sure the safety and security information associated with your account is current.

Posted by Niels Provos, Distinguished Engineer, Security Team

To protect users from malicious content, Safe Browsing’s infrastructure analyzes web pages with web browsers running in virtual machines. This allows us to determine if a page contains malicious content, such as Javascript meant to exploit user machines. Machine learning algorithms select which web pages to inspect, and we analyze millions of web pages every day, achieving good coverage of the web in general.

In the middle of March, several sources reported a large Distributed Denial-of-Service attack against the censorship-monitoring organization GreatFire. Researchers have extensively analyzed this DoS attack and found it novel because it was conducted by a network operator that intercepted benign web content to inject malicious Javascript. In this particular case, Javascript and HTML resources hosted on baidu.com were replaced with Javascript that would repeatedly request resources from the attacked domains.

While Safe Browsing does not observe traffic at the network level, it affords good visibility at the HTTP protocol level. As such, our infrastructure picked up this attack, too. Using Safe Browsing data, we can provide a more complete timeline of the attack and shed light on which injections occurred when.

For this blog post, we analyzed data from March 1st to April 15th, 2015. Safe Browsing first noticed injected content against baidu.com domains on March 3rd, 2015. The last time we observed injections during our measurement period was on April 7th, 2015. This is visible in the graph below, which plots the number of injections over time as a percentage of all injections observed:
We noticed that the attack was carried out in multiple phases. The first phase appeared to be a testing stage and was conducted from March 3rd to March 6th. The initial test target was 114.113.156.119:56789, and the number of requests was artificially limited. From March 4th to March 6th, the request limitations were removed.

The next phase was conducted between March 10th and 13th and initially targeted the IP address 203.90.242.126. Passive DNS places hosts under the sinajs.cn domain at this IP address. On March 13th, the attack was extended to include d1gztyvw1gvkdq.cloudfront.net. At first, requests were made over HTTP and then upgraded to HTTPS. On March 14th, the attack began in earnest, targeting d3rkfw22xppori.cloudfront.net via both HTTP and HTTPS. Attacks against this specific host were carried out until March 17th.

On March 18th, the number of hosts under attack was increased to include the following: d117ucqx7my6vj.cloudfront.net, d14qqseh1jha6e.cloudfront.net, d18yee9du95yb4.cloudfront.net, d19r410x06nzy6.cloudfront.net, and d1blw6ybvy6vm2.cloudfront.net. This is also the first time we found truncated injections, in which the Javascript is cut off and non-functional. At some point during this phase of the attack, the cloudfront hosts started serving 302 redirects to greatfire.org as well as other domains. Substitution of Javascript ceased completely on March 20th, but injections into HTML pages continued. Whereas Javascript replacement breaks the functionality of the original content, injection into HTML does not: the HTML is modified to include both a reference to the original content and the attack Javascript, as shown below:

<html><head><meta name="referrer" content="never"/><title> </title></head><body>
<iframe src="http://pan.baidu.com/s/1i3[...]?t=Zmh4cXpXJApHIDFMcjZa" style="position:absolute; left:0; top:0; height:100%; width:100%; border:0px;" scrolling="yes"></iframe>
</body><script type="text/javascript">[... regular attack Javascript ...]

In this technique, the web browser fetches the same HTML page twice but due to the presence of the query parameter t, no injection happens on the second request. The attacked domains also changed and now consisted of: dyzem5oho3umy.cloudfront.net, d25wg9b8djob8m.cloudfront.net and d28d0hakfq6b4n.cloudfront.net. About 10 hours after this new phase started, we see 302 redirects to a different domain served from the targeted servers.
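Because the injector left requests carrying the query parameter t untouched, an observer could in principle surface the modification by fetching a resource twice, with and without the parameter, and diffing the results. The following is a hypothetical probe along those lines; the fetch function and the simulated network are invented for illustration:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_param(url: str, key: str, value: str) -> str:
    # Append (or overwrite) a query parameter on a URL.
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query[key] = value
    return urlunparse(parts._replace(query=urlencode(query)))

def modified_in_transit(fetch, url: str) -> bool:
    # Fetch once normally (may be injected) and once with the injector's
    # skip parameter "t" (passed through unmodified); a difference
    # suggests on-path modification.
    plain = fetch(url)
    skipped = fetch(with_param(url, "t", "probe"))
    return plain != skipped

# Simulated network: this fake injector rewrites responses lacking ?t=...
def fake_fetch(url: str) -> bytes:
    original = b"<html>original content</html>"
    return original if "t=" in url else b"<html><script>/*attack*/</script></html>"

assert modified_in_transit(fake_fetch, "http://example.test/resource")
```

A real probe would of course need to account for legitimate dynamic content that varies between requests.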

The attack against the cloudfront hosts stops on March 25th. Instead, resources hosted on github.com were now under attack. The first new target was github.com/greatfire/wiki/wiki/nyt/ and was quickly followed by github.com/greatfire/ as well as github.com/greatfire/wiki/wiki/dw/.

On March 26th, a packed and obfuscated attack Javascript replaced the plain version and started targeting the following resources: github.com/greatfire/ and github.com/cn-nytimes/. Here we also observed some truncated injections. The attack against github.com appears to have stopped on April 7th, 2015, which marks the last time we saw injections during our measurement period.

From the beginning of March until the attacks stopped in April, we saw 19 unique Javascript replacement payloads as represented by their MD5 sum in the pie chart below.
For the HTML injections, the payloads were unique due to the injected URL so we are not showing their respective MD5 sums. However, the injected Javascript was very similar to the payloads referenced above.

The sizes of the injected Javascript payloads ranged from 995 to 1325 bytes.
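Grouping payloads by digest, as the pie chart does, is straightforward to reproduce. A small sketch with invented payload bytes:

```python
import hashlib
from collections import Counter

def cluster_by_md5(payloads):
    # Count occurrences of each distinct payload by its MD5 digest,
    # mirroring how the report groups the 19 unique variants.
    return Counter(hashlib.md5(p).hexdigest() for p in payloads)

# Invented example payloads, standing in for captured injections.
observed = [b"variant-a", b"variant-a", b"variant-b"]
counts = cluster_by_md5(observed)
assert len(counts) == 2           # two unique payloads
assert max(counts.values()) == 2  # most common variant seen twice
```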

We hope this report helps to round out the overall facts known about this attack. It also demonstrates that collectively there is a lot of visibility into what happens on the web. At the HTTP level seen by Safe Browsing, we cannot confidently attribute this attack to anyone. However, it makes it clear that hiding such attacks from detailed analysis after the fact is difficult.
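As a toy illustration of that visibility (not Safe Browsing's actual pipeline), modification in transit can be flagged by comparing a resource's content hash against a baseline recorded from a trusted vantage point:

```python
import hashlib

# url -> sha256 of the content a trusted vantage point observed
baseline = {}

def record_baseline(url: str, body: bytes) -> None:
    baseline[url] = hashlib.sha256(body).hexdigest()

def looks_injected(url: str, body: bytes) -> bool:
    # Flag a resource when its hash differs from the recorded baseline.
    known = baseline.get(url)
    return known is not None and hashlib.sha256(body).hexdigest() != known

record_baseline("http://example.test/lib.js", b"console.log('benign');")
assert not looks_injected("http://example.test/lib.js", b"console.log('benign');")
assert looks_injected("http://example.test/lib.js", b"/* attack */ fetch('http://x/');")
```

Real systems render pages in instrumented browsers rather than comparing raw bytes, since legitimate content changes over time.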

Had the entire web already moved to encrypted traffic via TLS, such an injection attack would not have been possible. This provides further motivation for transitioning the web to encrypted and integrity-protected communication. Unfortunately, defending against such an attack is not easy for website operators. In this case, the attack Javascript requests web resources sequentially and slowing down responses might have helped with reducing the overall attack traffic. Another hope is that the external visibility of this attack will serve as a deterrent in the future.
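The response-slowing idea can be sketched as a simple per-client throttle: because the attack Javascript fetched resources sequentially, delaying each response stretches out the whole request loop. This is a hypothetical sketch of the mechanism, not a recommendation from the report:

```python
class Throttle:
    """Toy per-client throttle: hold responses so a single client
    cannot complete more than one request per min_interval seconds."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self.last_seen = {}  # client id -> timestamp of previous request

    def delay_for(self, client: str, now: float) -> float:
        # How long to hold this client's response before sending it.
        prev = self.last_seen.get(client)
        self.last_seen[client] = now
        if prev is None:
            return 0.0
        return max(0.0, self.min_interval - (now - prev))

t = Throttle(min_interval=1.0)
assert t.delay_for("203.0.113.7", now=0.0) == 0.0  # first request: no delay
assert t.delay_for("203.0.113.7", now=0.5) == 0.5  # too soon: hold 0.5s
```

Since each injected client runs its requests one after another, even modest per-response delays compound across the whole attack loop.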

Posted by Neal Mohan, VP Product Management, Display and Video Ads, and Jerry Dischler, VP Product Management, AdWords
Since 2008 we’ve been working to make sure all of our services use strong HTTPS encryption by default. That means people using products like Search, Gmail, YouTube, and Drive will automatically have an encrypted connection to Google. In addition to providing a secure connection on our own products, we’ve been big proponents of the idea of “HTTPS Everywhere,” encouraging webmasters to prevent and fix security breaches on their sites, and using HTTPS as a signal in our search ranking algorithm.

This year, we’re working to bring this “HTTPS Everywhere” mission to our ads products as well, to support all of our advertiser and publisher partners. Here are some of the specific initiatives we’re working on:

We’ve moved all YouTube ads to HTTPS as of the end of 2014.

Search on Google.com is already encrypted for the vast majority of users, and we are working towards encrypting search ads across our systems.

By June 30, 2015, the vast majority of mobile, video, and desktop display ads served to the Google Display Network, AdMob, and DoubleClick publishers will be encrypted.

Also by June 30, 2015, advertisers using any of our buying platforms, including AdWords and DoubleClick, will be able to serve HTTPS-encrypted display ads to all HTTPS-enabled inventory.
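One practical constraint behind these dates is mixed content: an ad served into an HTTPS page must load all of its assets over HTTPS, or browsers will block or warn on them. A hypothetical pre-serve check illustrating the constraint (the regex and names are invented; this is not any actual ad-serving code):

```python
import re

# Find asset references in a creative that would load over plain HTTP,
# triggering mixed-content warnings on an HTTPS page.
HTTP_URL = re.compile(r'\b(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']', re.I)

def insecure_asset_urls(creative_html: str):
    return HTTP_URL.findall(creative_html)

ok = '<img src="https://cdn.example.test/ad.png">'
bad = '<img src="http://cdn.example.test/ad.png">'
assert insecure_asset_urls(ok) == []
assert insecure_asset_urls(bad) == ["http://cdn.example.test/ad.png"]
```

A real pipeline would parse the HTML properly and also inspect CSS and script-loaded resources, but the check above captures the basic requirement.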

Of course we’re not alone in this goal. By encrypting ads, the advertising industry can help make the internet a little safer for all users. Recently, the Interactive Advertising Bureau (IAB) published a call to action to adopt HTTPS ads, and many industry players are also working to meet HTTPS requirements. We’re big supporters of these industry-wide efforts to make HTTPS everywhere a reality.

Our HTTPS Everywhere ads initiatives will join some of our other efforts to provide a great ads experience online for our users, like “Why this Ad?”, “Mute This Ad” and TrueView skippable ads. With these security changes to our ads systems, we’re one step closer to ensuring users everywhere are safe and secure every time they choose to watch a video, map out a trip in a new city, or open their favorite app.